Abstract
We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning or IbyL for short, relies on a multi-scale pruning scheme that progressively reduces the solution space via a coarse-to-fine cascade of learnt classifiers. We experiment thoroughly with classic computer-vision MRF problems, where our novel framework consistently yields a significant speed-up (with respect to the most efficient inference methods) while obtaining a more accurate solution than directly optimizing the MRF. We make our code available on-line [4].
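The core idea of the abstract — progressively shrinking each node's label set before running the expensive fine-scale optimization — can be illustrated with a toy sketch. This is not the authors' code: the learnt pruning classifiers are replaced here by a simple score-margin rule, and the function names, scores, and margins are illustrative assumptions.

```python
# Toy sketch of coarse-to-fine label pruning (illustrative only, not IbyL itself).
# A stand-in "classifier" keeps, per node, only labels scoring within a margin
# of that node's best label; successive passes tighten the margin.

def prune_labels(unary_scores, margin):
    """Keep, per node, only labels within `margin` of the node's best score."""
    kept = []
    for scores in unary_scores:
        best = min(scores.values())  # lower score = better (energy-style)
        kept.append({l for l, s in scores.items() if s <= best + margin})
    return kept

def coarse_to_fine(unary_scores, margins):
    """Run the pruning cascade: each pass restricts scores to the surviving
    labels, then prunes again with a tighter margin (coarse -> fine)."""
    active = [set(s) for s in unary_scores]
    for margin in margins:
        restricted = [
            {l: s for l, s in scores.items() if l in act}
            for scores, act in zip(unary_scores, active)
        ]
        active = prune_labels(restricted, margin)
    return active

# Two nodes, three labels each; scores play the role of unary potentials.
scores = [{0: 0.1, 1: 0.5, 2: 2.0}, {0: 1.5, 1: 0.2, 2: 0.3}]
active = coarse_to_fine(scores, margins=[1.0, 0.2])
print(active)  # the per-node label sets shrink as the margin tightens
```

In the actual framework the pruning decision at each scale is made by a trained classifier rather than a fixed threshold, and the surviving labels define a smaller MRF that is cheaper to optimize at the next, finer scale.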
| Original language | English |
|---|---|
| Pages (from-to) | 2105-2113 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 3 |
| Issue number | January |
| Publication status | Published (in print/issue) - 1 Jan 2014 |
| Event | 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014, Montreal, Canada, 8 Dec 2014 – 13 Dec 2014 |
Title: Inference by learning: Speeding-up graphical model optimization via a coarse-to-fine cascade of pruning classifiers