Inference by learning: Speeding-up graphical model optimization via a coarse-to-fine cascade of pruning classifiers

Bruno Conejo, Nikos Komodakis, Sebastien Leprince, Jean Philippe Avouac

Research output: Contribution to journal › Conference article › peer-review

7 Citations (Scopus)

Abstract

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning, or IbyL for short, relies on a multi-scale pruning scheme that progressively reduces the solution space via a coarse-to-fine cascade of learnt classifiers. We thoroughly experiment with classic computer vision MRF problems, where our novel framework consistently yields a significant speed-up (with respect to the most efficient inference methods) while obtaining a more accurate solution than directly optimizing the MRF. We make our code available on-line [4].
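The abstract describes a coarse-to-fine cascade of pruning classifiers for MRF optimization. As a rough illustration of that idea only, and not the authors' implementation, here is a minimal Python sketch in which a coarse-level solution drives a simple confidence-style pruning of the fine-level label space; the ICM solver, the window-based pruning rule, and all function names are assumptions made for this example.

```python
# Hypothetical sketch of coarse-to-fine label pruning for grid MRFs.
# solve_mrf, prune_labels, and the heuristic "classifier" are illustrative
# stand-ins, not the method or code released by the paper's authors.
import numpy as np

def solve_mrf(unary, pairwise_weight, active, n_iters=5):
    """Approximate MRF inference on a 4-connected grid via ICM,
    restricted to the per-pixel active label sets."""
    h, w, L = unary.shape
    labels = np.array([[int(np.argmin(np.where(active[i, j], unary[i, j], np.inf)))
                        for j in range(w)] for i in range(h)])
    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        # Linear pairwise cost toward the neighbour's label.
                        costs += pairwise_weight * np.abs(np.arange(L) - labels[ni, nj])
                costs[~active[i, j]] = np.inf  # pruned labels stay unavailable
                labels[i, j] = int(np.argmin(costs))
    return labels

def prune_labels(unary, coarse_labels, margin=2):
    """Stand-in for the learnt pruning classifier: keep only labels within
    a window around the upsampled coarse solution."""
    h, w, L = unary.shape
    active = np.zeros((h, w, L), dtype=bool)
    for i in range(h):
        for j in range(w):
            lo = max(0, coarse_labels[i, j] - margin)
            hi = min(L, coarse_labels[i, j] + margin + 1)
            active[i, j, lo:hi] = True
    return active

# Toy two-level cascade on random unary costs (illustration only).
rng = np.random.default_rng(0)
h, w, L = 16, 16, 32
unary = rng.random((h, w, L))

# Coarse level: average-pool unaries 2x and solve over the full label space.
coarse = unary.reshape(h // 2, 2, w // 2, 2, L).mean(axis=(1, 3))
coarse_labels = solve_mrf(coarse, 0.1, np.ones(coarse.shape, dtype=bool))

# Fine level: upsample the coarse labelling, prune, solve the reduced problem.
upsampled = np.kron(coarse_labels, np.ones((2, 2), dtype=int))
active = prune_labels(unary, upsampled)
fine_labels = solve_mrf(unary, 0.1, active)
print("average kept labels per pixel:", active.sum() / (h * w))
```

The point of the sketch is the shrinking solution space: the fine-level solver only visits the handful of labels the coarse pass left active, which is where the reported speed-up comes from; the paper replaces the hand-set window with learnt classifiers.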

Original language: English
Pages (from-to): 2105-2113
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 3
Issue number: January
Publication status: Published (in print/issue) - 1 Jan 2014
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: 8 Dec 2014 - 13 Dec 2014
