
Fixed-point computation

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Erel Segal (talk | contribs) at 15:13, 14 April 2023 (Algorithms for contraction mappings). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Fixed-point computation refers to the process of computing an exact or approximate fixed point of a given function.[1] In its most common form, we are given a function f that satisfies the conditions of the Brouwer fixed-point theorem, that is: f is continuous and maps the unit d-cube to itself. The Brouwer fixed-point theorem guarantees that f has a fixed point, but the proof is not constructive. Various algorithms have been devised for computing an approximate fixed point. Such algorithms are used in economics for computing a market equilibrium, in game theory for computing a Nash equilibrium, and more.

Definitions

The unit interval is denoted by E := [0,1], and the unit d-dimensional cube is denoted by Ed. A continuous function f is defined on Ed (from Ed to itself). Often, it is assumed that f is not only continuous but also Lipschitz continuous, that is, for some constant L, |f(x) − f(y)| ≤ L·|x − y| for all x, y in Ed.

A fixed point of f is a point x in Ed such that f(x) = x. Given an approximation parameter ε > 0, an ε-fixed-point of f is a point x in Ed such that |f(x) − x| ≤ ε, where here |.| denotes the maximum norm. That is, all d coordinates of the difference f(x) − x should be at most ε in absolute value.[2]: 4 

An alternative possible definition of an approximate fixed point uses an approximation parameter δ > 0. A δ-near fixed-point of f is a point x in Ed such that |x − x*| ≤ δ, where x* is any fixed-point of f (this criterion is also called the absolute criterion).[3] Note that, if f is Lipschitz-continuous with constant L, then |x − x*| ≤ δ implies |f(x) − f(x*)| ≤ L·δ. Since x* is a fixed-point of f, this implies |f(x) − x*| ≤ L·δ, so |f(x) − x| ≤ |f(x) − x*| + |x* − x| ≤ (L+1)·δ. Therefore, a δ-near fixed-point is also an ε-fixed-point with ε = (L+1)·δ.
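The relation between the two criteria can be checked numerically. The sketch below uses a hypothetical one-dimensional map f(x) = 0.5·cos(x) (chosen only because its Lipschitz constant L = 0.5 is easy to verify); it confirms that a point at distance δ from the fixed point has residual at most (L+1)·δ:

```python
import math

# Hypothetical contraction on [0,1] with Lipschitz constant L = 0.5,
# since |f'(x)| = |0.5*sin(x)| <= 0.5 on the interval.
def f(x):
    return 0.5 * math.cos(x)

L = 0.5

# Locate the fixed point x* of f numerically (f is a contraction,
# so plain iteration converges).
xstar = 0.0
for _ in range(200):
    xstar = f(xstar)

delta = 1e-3
x = xstar + delta           # a delta-near fixed point: |x - x*| = delta
residual = abs(f(x) - x)    # the epsilon-fixed-point residual
assert residual <= (1 + L) * delta  # bound from the text: eps = (L+1)*delta
```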

Algorithms based on function evaluations

The most general algorithms use only function evaluations: they receive as an input a black-box that, for any x, returns the value of f(x). They do not use any other information on the function f (e.g. its derivative).

One dimension

For a 1-dimensional function (d=1), a δ-near fixed-point can be found using O(log(1/δ)) queries using the bisection method: start with the interval E := [0,1]; at each iteration, let x be the center of the current interval, and compute f(x); if f(x) > x then recurse on the sub-interval to the right of x; otherwise, recurse on the interval to the left of x. Note that the current interval always contains a fixed point, so after O(log(1/δ)) queries, any point in the remaining interval is a δ-near fixed-point of f. Setting δ := ε/(L+1), where L is the Lipschitz constant, gives an ε-fixed-point, using O(log(L/ε)) queries.[2]
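The bisection procedure above can be sketched as follows (the map f(x) = (x+1)/3 is a hypothetical example; its unique fixed point is x = 0.5):

```python
def bisection_fixed_point(f, delta):
    """Bisection sketch for a continuous f: [0,1] -> [0,1].
    Returns a point within delta of some fixed point of f."""
    lo, hi = 0.0, 1.0
    # Invariant: f(lo) >= lo and f(hi) <= hi, so by the intermediate
    # value theorem [lo, hi] always contains a fixed point.
    while hi - lo > delta:
        mid = (lo + hi) / 2
        if f(mid) > mid:
            lo = mid   # a fixed point lies to the right of mid
        else:
            hi = mid   # a fixed point lies to the left of mid (or at mid)
    return (lo + hi) / 2

# Example: f(x) = (x + 1) / 3 maps [0,1] into itself, fixed point at 0.5.
x = bisection_fixed_point(lambda x: (x + 1) / 3, delta=1e-6)
```

Each loop iteration halves the interval, so the query count is about log2(1/δ), matching the O(log(1/δ)) bound.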

Two or more dimensions: algorithms

For functions in two or more dimensions, the problem is much more challenging. Several algorithms based on function evaluations have been developed.

  • The first algorithm to approximate a fixed point was developed by Herbert Scarf in 1967.[4][5] Scarf's algorithm finds an approximate fixed point by finding a fully-labelled "primitive set", in a construction similar to Sperner's lemma. A subtle aspect of Scarf's algorithm is that it finds an ε-fixed-point of f, that is, a point that is almost fixed by a function f, but in general cannot find a δ-near fixed-point of f, that is, a point that is close to an actual fixed point.
  • A later algorithm by Harold Kuhn[6] used simplices and simplicial partitions instead of primitive sets.
  • Developing the simplicial approach further, Orin Harrison Merrill[7] presented the restart algorithm.
  • B. Curtis Eaves[8] presented the homotopy algorithm. The algorithm works by starting with an affine function that approximates f, and deforming it towards f, while following the fixed point. A book by Michael Todd[1] surveys various algorithms developed until 1976.
  • David Gale[9] showed that computing a fixed point of a d-dimensional function (on the unit d-dimensional cube) is equivalent to deciding the winner in a d-dimensional game of Hex (a game with d players, each of whom needs to connect two opposite faces of a d-cube). Given the desired accuracy ε:
    • Construct a d-dimensional Hex board with side length k, where k > 1/ε. Each vertex z corresponds to a point z/k in the unit d-cube.
    • Compute the difference f(z/k) − z/k; note that the difference is a d-vector.
    • Label the vertex z by a label in 1,...,d, denoting the largest coordinate in the difference vector.
    • The resulting labeling corresponds to a possible play of the d-dimensional Hex game among d players. This game must have a winner, and Gale presents an algorithm for constructing the winning path.
    • In the winning path, there must be a point in which the i-th coordinate of f(z/k) − z/k is positive, and an adjacent point in which it is negative. By continuity, this means that there is an approximate fixed point of f near these two points.
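The labeling step of Gale's reduction can be sketched as follows (a hedged illustration: the constant map f and the board size k = 4 are arbitrary choices, and the sketch covers only the labeling, not the construction of the winning path):

```python
import itertools

def hex_labels(f, d, k):
    """Label each vertex z of the (k+1)^d grid by the 1-based index of
    the largest coordinate of the difference f(z/k) - z/k, as in the
    labeling step of Gale's reduction."""
    labels = {}
    for z in itertools.product(range(k + 1), repeat=d):
        x = tuple(zi / k for zi in z)
        diff = tuple(fx - xi for fx, xi in zip(f(x), x))
        labels[z] = max(range(d), key=lambda i: diff[i]) + 1
    return labels

# Hypothetical 2-D map whose unique fixed point is (0.5, 0.5):
labels = hex_labels(lambda x: (0.5, 0.5), d=2, k=4)
```

For instance, the corner vertex (4, 0) maps to x = (1, 0) with difference (−0.5, 0.5), so it gets label 2, while (0, 4) gets label 1.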

In the worst case, the number of function evaluations required by all these algorithms is exponential in the dimension d, that is, of the order (1/ε)^d.

Two or more dimensions: query complexity

Hirsch, Papadimitriou and Vavasis proved that[2] any algorithm based on function evaluations, that finds an ε-fixed-point of f, requires Ω(L′/ε) function evaluations, where L′ is the Lipschitz constant of the function f(x) − x (note that L − 1 ≤ L′ ≤ L + 1). More precisely:

  • For a 2-dimensional function (d=2), they prove a tight bound Θ(L′/ε).
  • For any d ≥ 3, finding an ε-fixed-point of a d-dimensional function requires Ω((L′/ε)^(d−2)) queries and O((L′/ε)^d) queries.

The latter result leaves a gap in the exponent. Chen and Deng[10] closed the gap. They proved that, for any d ≥ 2, when ε is sufficiently small relative to L and d, the number of queries required for computing an ε-fixed-point is in Θ((L/ε)^(d−1)).

Algorithms for contraction mappings

A contraction mapping (also called a contractive Lipschitz function) is a Lipschitz-continuous function with constant L<1. When the Lipschitz constant is smaller than 1, a fixed point of a contraction mapping can be found much faster than in the general case. Some algorithms for this case are:

  • The fixed-point iteration algorithm of Banach. The Banach fixed-point theorem implies that, when fixed-point iteration is applied to a contraction mapping, the error after t iterations is in O(L^t). Therefore, the number of evaluations required for accuracy ε is in O(log(1/ε)/log(1/L)).
  • Newton's method, which requires knowledge not only of the function f but also of its derivative.[11][12][13]
  • The interior ellipsoid algorithm.[14]
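Banach's fixed-point iteration can be sketched as follows (a hedged illustration: the stopping rule uses the standard contraction estimate |x − x*| ≤ |f(x) − x|/(1 − L), and the example map f(t) = 0.5·cos(t) with L = 0.5 is an arbitrary choice):

```python
import math

def banach_iteration(f, x0, eps, L):
    """Fixed-point iteration for a contraction with constant L < 1.
    Stops once the residual |f(x) - x| is at most eps*(1 - L), which
    guarantees |x - x*| <= eps for the true fixed point x*."""
    x = x0
    while abs(f(x) - x) > eps * (1 - L):
        x = f(x)
    return x

# Example: f(t) = 0.5*cos(t) is a contraction on [0,1] with L = 0.5.
f = lambda t: 0.5 * math.cos(t)
x = banach_iteration(f, x0=0.0, eps=1e-8, L=0.5)
```

Since the error shrinks by a factor L per step, the loop runs for about log(1/ε)/log(1/L) iterations, matching the bound in the text.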

The special case in which the Lipschitz constant is exactly 1 is not covered by the above results, but there are specialized algorithms for this case:

  • Shellman and Sikorski[15] presented an algorithm called BEFix (Bisection Envelope Fixed-point) for computing an ε-fixed-point of a two-dimensional function with Lipschitz constant L=1, using only 2⌈log2(1/ε)⌉ + 1 queries. They later[16] presented an improvement called BEDFix (Bisection Envelope Deep-cut Fixed-point), with the same worst-case guarantee but better empirical performance. When L<1, BEDFix can also compute a δ-near fixed-point, using O(log(1/δ) + log(1/(1−L))) queries.
  • Shellman and Sikorski[3] presented an algorithm called PFix for computing an ε-fixed-point of a d-dimensional function with Lipschitz constant L=1, using O((log(1/ε))^d) queries. When L<1, PFix can also compute a δ-near fixed-point, using O((log(1/((1−L)·δ)))^d) queries (that is: a similar complexity, but with ε replaced by (1−L)·δ). They also prove that, when L>1, finding a δ-near fixed-point might require infinitely many evaluation queries.

Discrete fixed-point computation

A discrete function is a function defined on a subset of Zd (the d-dimensional integer grid). There are several discrete fixed-point theorems, stating conditions under which a discrete function has a fixed point. For example, the Iimura-Murota-Tamura theorem states that (in particular) if f is a function from a rectangular subset of Zd to itself, and f is hypercubic direction-preserving, then f has a fixed point.

Let f be a direction-preserving function from the integer cube {1,...,n}d to itself. Chen and Deng[10] prove that, for any d ≥ 2 and n > 48d, computing such a fixed point requires Θ(n^(d−1)) function evaluations.
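For intuition, a naive exhaustive search finds a discrete fixed point with up to n^d evaluations; the Chen-Deng bound shows that no black-box algorithm can do fundamentally better than exponential in d. A hedged sketch, with a hypothetical direction-preserving map that moves every coordinate one step toward 3:

```python
import itertools

def brute_force_fixed_point(f, n, d):
    """Exhaustive search for a fixed point of a discrete map on
    {1,...,n}^d, using up to n**d function evaluations."""
    for x in itertools.product(range(1, n + 1), repeat=d):
        if f(x) == x:
            return x
    return None  # cannot happen for a direction-preserving map on a cube

# Hypothetical map: each coordinate moves one step toward 3.
def toward_center(x):
    return tuple(xi + (xi < 3) - (xi > 3) for xi in x)

p = brute_force_fixed_point(toward_center, n=5, d=2)  # the point (3, 3)
```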

Relation between fixed-point computation and root-finding algorithms

Given a function g from Ed to R, a root of g is a point x in Ed such that g(x) = 0. An ε-root of g is a point x in Ed such that |g(x)| ≤ ε.

Fixed-point computation is a special case of root-finding: given a function f on Ed, define g(x) := x − f(x). Clearly, x is a fixed-point of f if and only if x is a root of g, and x is an ε-fixed-point of f if and only if x is an ε-root of g. Therefore, any root-finding algorithm (an algorithm that computes an approximate root of a function) can be used to find an approximate fixed-point.
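The reduction is a one-liner. In this sketch the map f(x) = (x+1)/3 is a hypothetical example with fixed point 0.5, which is therefore a root of g:

```python
def fixed_point_to_root(f):
    """The reduction from the text: g(x) := x - f(x), so x is a
    fixed point of f exactly when x is a root of g."""
    return lambda x: x - f(x)

f = lambda x: (x + 1) / 3   # unique fixed point at x = 0.5
g = fixed_point_to_root(f)
root_value = g(0.5)          # evaluates to 0.0
```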

The opposite is not true: finding an approximate root of a general function may be harder than finding an approximate fixed point. In particular, Sikorski[17] proved that finding an ε-root requires Ω((1/ε)^d) function evaluations. This gives an exponential lower bound even for a one-dimensional function (in contrast, an ε-fixed-point of a one-dimensional function can be found using O(log(1/ε)) queries using the bisection method). Here is a proof sketch.[2]: 35  Construct a function g that is slightly larger than ε everywhere in Ed except in some small cube around some point x0, where x0 is the unique root of g. If g is Lipschitz continuous with constant L, then the cube around x0 can have a side-length of ε/L. Any algorithm that finds an ε-root of g must check a set of cubes that covers the entire Ed; the number of such cubes is at least (L/ε)^d.

However, there are classes of functions for which finding an approximate root is equivalent to finding an approximate fixed point. One example[10] is the class of functions g such that g(x) + x maps Ed to itself (that is: g(x) + x is in Ed for all x in Ed). This is because, for every such function, the function f(x) := g(x) + x satisfies the conditions of Brouwer's fixed-point theorem. Clearly, x is a fixed-point of f if and only if x is a root of g, and x is an ε-fixed-point of f if and only if x is an ε-root of g. Chen and Deng[10] show that the discrete variants of these problems are computationally equivalent: both problems require Θ(n^(d−1)) function evaluations.

Communication complexity

Roughgarden and Weinstein[18] studied the communication complexity of computing an approximate fixed-point. In their model, there are two agents: one of them knows a function f and the other knows a function g. Both functions are Lipschitz continuous and satisfy Brouwer's conditions. The goal is to compute an approximate fixed point of the composite function g ∘ f. They show that the deterministic communication complexity is in Ω(2^d).

References

  1. ^ a b Todd, Michael J. (1976). The Computation of Fixed Points and Applications. Springer. doi:10.1007/978-3-642-50327-6.
  2. ^ a b c d Hirsch, Michael D; Papadimitriou, Christos H; Vavasis, Stephen A (1989-12-01). "Exponential lower bounds for finding Brouwer fix points". Journal of Complexity. 5 (4): 379–416. doi:10.1016/0885-064X(89)90017-4. ISSN 0885-064X.
  3. ^ a b Shellman, Spencer; Sikorski, K. (2003-12-01). "A recursive algorithm for the infinity-norm fixed point problem". Journal of Complexity. 19 (6): 799–834. doi:10.1016/j.jco.2003.06.001. ISSN 0885-064X.
  4. ^ Scarf, Herbert (1967-09-01). "The Approximation of Fixed Points of a Continuous Mapping". SIAM Journal on Applied Mathematics. 15 (5): 1328–1343. doi:10.1137/0115116. ISSN 0036-1399.
  5. ^ H. Scarf found the first algorithmic proof: Voitsekhovskii, M.I. (2001) [1994], "Brouwer theorem", Encyclopedia of Mathematics, EMS Press, ISBN 1-4020-0609-8.
  6. ^ Kuhn, Harold W. (1968). "Simplicial Approximation of Fixed Points". Proceedings of the National Academy of Sciences of the United States of America. 61 (4): 1238–1242. ISSN 0027-8424.
  7. ^ Merrill, Orin Harrison (1972). "Applications and Extensions of an Algorithm that Computes Fixed Points of Certain Upper Semi-Continuous Point to Set Mappings". ProQuest. Retrieved 2023-04-13.
  8. ^ Eaves, B. Curtis (1972-12-01). "Homotopies for computation of fixed points". Mathematical Programming. 3 (1): 1–22. doi:10.1007/BF01584975. ISSN 1436-4646.
  9. ^ David Gale (1979). "The Game of Hex and Brouwer Fixed-Point Theorem". The American Mathematical Monthly. 86 (10): 818–827. doi:10.2307/2320146. JSTOR 2320146.
  10. ^ a b c d Chen, Xi; Deng, Xiaotie (2005-05-22). "On algorithms for discrete and approximate brouwer fixed points". Proceedings of the thirty-seventh annual ACM symposium on Theory of computing. STOC '05. New York, NY, USA: Association for Computing Machinery: 323–330. doi:10.1145/1060590.1060638. ISBN 978-1-58113-960-0.
  11. ^ Ortega, J. M.; Rheinboldt, W. C. (1970). Iterative Solution of Nonlinear Equations in Several Variables. Retrieved 2023-04-13.
  12. ^ Kellogg, R. B.; Li, T. Y.; Yorke, J. (September 1976). "A Constructive Proof of the Brouwer Fixed-Point Theorem and Computational Results". SIAM Journal on Numerical Analysis. 13 (4): 473–483. doi:10.1137/0713041. ISSN 0036-1429.
  13. ^ Smale, Steve (1976-07-01). "A convergent process of price adjustment and global newton methods". Journal of Mathematical Economics. 3 (2): 107–120. doi:10.1016/0304-4068(76)90019-7. ISSN 0304-4068.
  14. ^ Huang, Z; Khachiyan, L; Sikorski, K (1999-06-01). "Approximating Fixed Points of Weakly Contracting Mappings". Journal of Complexity. 15 (2): 200–213. doi:10.1006/jcom.1999.0504. ISSN 0885-064X.
  15. ^ Shellman, Spencer; Sikorski, K. (2002-06-01). "A Two-Dimensional Bisection Envelope Algorithm for Fixed Points". Journal of Complexity. 18 (2): 641–659. doi:10.1006/jcom.2001.0625. ISSN 0885-064X.
  16. ^ Shellman, Spencer; Sikorski, K. (2003-09-01). "Algorithm 825: A deep-cut bisection envelope algorithm for fixed points". ACM Transactions on Mathematical Software. 29 (3): 309–325. doi:10.1145/838250.838255. ISSN 0098-3500.
  17. ^ Sikorski, K. (1984-06-01). "Optimal solution of nonlinear equations satisfying a Lipschitz condition". Numerische Mathematik. 43 (2): 225–240. doi:10.1007/BF01390124. ISSN 0945-3245.
  18. ^ Roughgarden, Tim; Weinstein, Omri (October 2016). "On the Communication Complexity of Approximate Fixed Points". 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS): 229–238. doi:10.1109/FOCS.2016.32.