
Pseudorandom generator theorem

From Wikipedia, the free encyclopedia

Pseudorandomness

In computational complexity theory, a distribution is considered pseudorandom if no efficient computation can distinguish it from the true uniform distribution with non-negligible advantage. Formally, a family of distributions D_n is pseudorandom if for any polynomial-size circuit C, and any ε inversely polynomial in n,

| Prob_{x∈U}[C(x)=1] − Prob_{x∈D}[C(x)=1] | ≤ ε
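
This advantage can be estimated empirically. The following is a minimal Python sketch, not part of the formal definition, in which the circuit C is modelled as a Boolean test function and sample_D is a hypothetical sampler for the candidate distribution D_n:

    import random

    def advantage(test, sample_D, n, trials=100000):
        """Estimate | Prob_{x in U_n}[test(x)=1] - Prob_{x in D_n}[test(x)=1] | by sampling."""
        hits_U = sum(test(tuple(random.getrandbits(1) for _ in range(n))) for _ in range(trials))
        hits_D = sum(test(sample_D(n)) for _ in range(trials))
        return abs(hits_U - hits_D) / trials

    # Example: a sampler that always forces the last bit to 0 is told apart by the
    # test that simply reads that bit; its estimated advantage is close to 1/2.
    biased = lambda n: tuple(random.getrandbits(1) for _ in range(n - 1)) + (0,)
    last_bit_is_zero = lambda x: x[-1] == 0
    # advantage(last_bit_is_zero, biased, 16)  ->  roughly 0.5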

Pseudorandom Generator

A function G_l: {0,1}^l → {0,1}^m, where l < m, is a pseudorandom generator if:

  • G_l can be computed in time polynomial in l
  • G_l(x) is pseudorandom when x is drawn from the uniform distribution U_l

Pseudorandom Generator Theorem

It can be shown that if there is a pseudorandom generator G_l: {0,1}^l → {0,1}^(l+1), i.e. a generator that adds only one pseudorandom bit, then for any m = poly(l), there is a pseudorandom generator G'_l: {0,1}^l → {0,1}^m.

The idea of the proof is as follows: first, l bits are picked from the uniform distribution U_l and used as the seed for the first instance of G_l, which is known to be a pseudorandom generator. Next, the output of the first instance of G_l is divided into two parts: the first l bits are fed into the second instance of G_l as a seed, while the last bit becomes the first bit of the output. Repeating this process m times yields an output of m pseudorandom bits.
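
A minimal Python sketch of this construction (an illustration only); g stands for a hypothetical black-box implementation of G_l mapping a tuple of l bits to a tuple of l+1 bits:

    def stretch(g, seed, m):
        """Expand an l-bit seed into m output bits by iterating g, which maps l bits to l+1 bits."""
        out = []
        state = tuple(seed)
        for _ in range(m):
            expanded = g(state)        # l+1 pseudorandom bits
            state = expanded[:-1]      # the first l bits seed the next instance of g
            out.append(expanded[-1])   # the last bit becomes the next output bit
        return tuple(out)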

It can be shown that such a G'_l, consisting of m instances of G_l, is indeed a pseudorandom generator, using a hybrid argument and proof by contradiction, as follows:

Let's consider m+1 intermediate distributions H_i, 0 ≤ i ≤ m, where the first i bits are chosen from the uniform distribution and the last m−i bits are chosen from the output of G'_l. Thus, H_0 is the full output of G'_l and H_m is the true uniform distribution U_m. The distributions H_i and H_{i+1} differ only in how bit number i+1 is generated.
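
As an illustration, a hybrid H_i can be sampled with a short Python sketch, reusing the hypothetical stretch helper from above and Python's random module:

    def sample_hybrid(g, i, l, m):
        """Sample from H_i: the first i bits are uniform, the remaining m-i bits are
        the output of the iterated generator started from a fresh uniform l-bit seed."""
        prefix = tuple(random.getrandbits(1) for _ in range(i))
        seed = tuple(random.getrandbits(1) for _ in range(l))
        return prefix + stretch(g, seed, m - i)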

Now, assume that G'_l is not a pseudorandom generator; that is, there exists some circuit C that can distinguish between G'_l and U_m with an advantage ε = 1/poly(l). In other words, this circuit can distinguish between H_0 and H_m. Therefore, there exists some i such that the circuit C can distinguish between H_i and H_{i+1} with advantage at least ε/m. Note that since m is polynomial in l, ε/m is also inversely polynomial in l and is still a non-negligible advantage.
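
The existence of such an i follows from the triangle inequality applied to the telescoping sum over the hybrids:

ε ≤ | Prob[C(H_0)=1] − Prob[C(H_m)=1] | ≤ Σ_{i=0}^{m−1} | Prob[C(H_i)=1] − Prob[C(H_{i+1})=1] |,

so at least one of the m terms on the right-hand side must be at least ε/m.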

Now, assume we are given l+1 bits that are either the output of G_l or drawn from the uniform distribution. Let's reuse the approach of building large pseudorandom generators out of instances of G_l and construct a string of m−i−1 pseudorandom bits in the same way G'_l was constructed above, using the first l given bits as the seed. Then, let's create a string consisting of i bits drawn from uniform, the last one of the given bits, and the constructed m−i−1 bits. The resulting string is distributed as H_i if the given bits are the output of G_l, and as H_{i+1} if they are uniform. Since by assumption we can distinguish between H_i and H_{i+1} with non-negligible advantage, we can distinguish between U_{l+1} and G_l, which implies that G_l is not a pseudorandom generator, a contradiction of the hypothesis. Q.E.D.
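
A Python sketch of this reduction, again reusing the hypothetical stretch helper and modelling the circuit C as a Boolean test function (the index i is the one guaranteed by the averaging step above):

    def distinguish_g(C, g, challenge, i, l, m):
        """Turn a distinguisher C for H_i versus H_{i+1} into a distinguisher for g.

        challenge is a tuple of l+1 bits that is either g applied to a uniform seed
        or a uniform string. The assembled m-bit string is distributed as H_i when
        the challenge is pseudorandom and as H_{i+1} when it is uniform, so C's
        verdict carries the same distinguishing advantage over to g."""
        prefix = tuple(random.getrandbits(1) for _ in range(i))   # i uniform bits
        state, bit = tuple(challenge[:-1]), challenge[-1]         # split the given l+1 bits
        suffix = stretch(g, state, m - i - 1)                     # the remaining m-i-1 bits
        return C(prefix + (bit,) + suffix)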

One-way functions

Intuitively, one-way functions are functions that are easy to compute and hard to invert. In other words, the complexity (or circuit size) of computing the function is much smaller than that of computing its inverse. Formally: a function ƒ: {0,1}^n → {0,1}^n is (S,ε) one-way if for any circuit C of size ≤ S,

Prob_{x∈U_n}[ƒ(C(ƒ(x))) = ƒ(x)] ≤ ε.

Moreover, ƒ is a one-way function if

  • ƒ can be computed in polynomial time
  • ƒ is (poly(n), 1/poly(n)) one-way
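
As an illustration of this definition, the success probability of a candidate inverter can be estimated with a short Python sketch; f and invert are hypothetical placeholders for the function and the adversary:

    import random

    def inversion_success_rate(f, invert, n, trials=10000):
        """Estimate Prob[f(invert(f(x))) = f(x)] for x drawn uniformly from {0,1}^n."""
        wins = 0
        for _ in range(trials):
            x = tuple(random.getrandbits(1) for _ in range(n))
            y = f(x)
            if f(invert(y)) == y:   # the adversary only needs to find *some* preimage of y
                wins += 1
        return wins / trials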