Semidefinite Programming by Perceptron Learning

Thore Graepel, Ralf Herbrich, Andriy Kharechko, John Shawe-Taylor

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

1 Citation (Scopus)


We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods.
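Observation (iii) above is the classical perceptron convergence result: given a finite set of linear constraints that admits a strictly feasible point, repeatedly adding the normal vector of any violated constraint reaches a feasible point in finitely many updates. The following is a minimal sketch of that feasibility step, not the paper's full SDP solver (the rescaling step is omitted); the data are hypothetical and constructed so a feasible direction exists.

```python
import numpy as np

def perceptron_feasibility(A, max_updates=1_000_000):
    """Seek w with A @ w > 0 (homogeneous linear feasibility) by
    perceptron updates: add a violated constraint's normal to w."""
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(max_updates):
        violated = np.flatnonzero(A @ w <= 0)
        if violated.size == 0:
            return w  # every constraint strictly satisfied
        w += A[violated[0]]  # perceptron update on one violated row
    raise RuntimeError("no strictly feasible point within the update budget")

# Hypothetical example: random constraints flipped so that the
# direction w* = (1, 0, 0) is strictly feasible with some margin.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
A[A @ np.array([1.0, 0.0, 0.0]) <= 0] *= -1
w = perceptron_feasibility(A)
print(np.all(A @ w > 0))
```

By the Block–Novikoff bound, the number of updates is at most (R/γ)², where R bounds the constraint norms and γ is the margin of the best feasible direction; the rescaling step in the paper is what keeps γ from becoming exponentially small when reducing an SDP to such feasibility problems.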
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 16
Editors: Sebastian Thrun, Lawrence Saul, Bernhard Schölkopf
Publisher: MIT Press
Number of pages: 8
Publication status: Published (in print/issue) - 2004
Event: International Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 9 Dec 2003 - 14 Dec 2003
Conference number: 16


Conference: International Conference on Neural Information Processing Systems
Abbreviated title: NIPS


  • semidefinite programming
  • perceptron learning
  • optimization
  • probabilistic algorithm


