Learning with errors

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Hrushikesh Tilak (talk | contribs) at 18:15, 4 August 2009 (short intro, reference list). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Learning with errors (LWE) is a problem in machine learning. A generalization of the parity learning problem, it has recently[1][2] been used to create public-key cryptosystems based on the worst-case hardness of some lattice problems. The problem was introduced[1] by Oded Regev in 2005.

Given some samples (x, y) where x ∈ Z_q^n and y ∈ Z_q, with assurance that, for some fixed linear function f : Z_q^n → Z_q, y = f(x) with high probability and deviates from it according to some known noise model, the algorithm must be able to recreate f or some close approximation of it.
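The generation of such samples can be sketched as follows. This is a minimal illustration, not the construction from Regev's paper: the dimension n, modulus q, and the small bounded noise distribution are arbitrary choices made here for readability, and the linear function f is represented by a secret vector s with f(x) = ⟨x, s⟩ mod q.

```python
import random

def lwe_samples(n=8, q=97, m=20, noise=(-2, -1, 0, 1, 2), seed=0):
    """Generate m samples (a, b) with b = <a, s> + e (mod q).

    The secret s plays the role of the fixed linear function f;
    e is drawn from a small, known noise set. All parameters are
    illustrative, not cryptographically meaningful.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]        # secret vector in Z_q^n
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]    # uniform a in Z_q^n
        e = rng.choice(noise)                       # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_samples()
# Each b deviates from the noiseless value f(a) = <a, s> mod q
# only by a small error e:
for a, b in samples:
    err = (b - sum(ai * si for ai, si in zip(a, s))) % 97
    assert err in (0, 1, 2, 95, 96)  # e mod 97 for e in {-2, ..., 2}
```

The learner's task is the inverse: given only the list of pairs (a, b), recover s (or a close approximation). Without the noise term e the problem reduces to solving a linear system over Z_q; the noise is what makes it hard.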

References

  1. ^ a b Oded Regev, “On lattices, learning with errors, random linear codes, and cryptography,” in Proceedings of the thirty-seventh annual ACM symposium on Theory of computing (Baltimore, MD, USA: ACM, 2005), 84-93, http://portal.acm.org/citation.cfm?id=1060590.1060603.
  2. ^ Chris Peikert, “Public-key cryptosystems from the worst-case shortest vector problem: extended abstract,” in Proceedings of the 41st annual ACM symposium on Theory of computing (Bethesda, MD, USA: ACM, 2009), 333-342, http://portal.acm.org/citation.cfm?id=1536414.1536461.