Simplex algorithm
In mathematical optimization theory, the simplex algorithm of George Dantzig is the fundamental technique for the numerical solution of the linear programming problem. That is, given a collection of linear inequalities in a number n of real variables, it provides a practical way to find a solution which is optimal with respect to a fixed linear functional. Some further details are given on the page for linear programming.
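In symbols, such a problem is commonly written in the following standard form, where c and b are given vectors, A is a given matrix, the unknown is the vector x, and the inequalities are understood componentwise (this normalisation is one common convention; others are equivalent to it after introducing slack variables):

```latex
\begin{aligned}
\text{maximise}\quad & c^{\mathsf{T}} x \\
\text{subject to}\quad & A x \le b, \\
& x \ge 0 .
\end{aligned}
```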
In geometric terms we are considering a closed convex polytope P, defined by intersecting a number of half-spaces in n-dimensional Euclidean space; each half-space consists of the points lying to one side of a hyperplane. The objective is to maximise a linear functional L; if we consider the hyperplanes H(c) defined by L(x) = c as c increases, these form a parallel family. If the problem is well-posed, we want to find the largest value of c such that H(c) intersects P (if there is no such largest value of c, this isn't a reasonable question for optimization as it stands). In this case we can show that the optimum value of c is attained, on the boundary of P. Methods for finding this optimum point on P have a choice of improving a possible point by moving through the interior of P (so-called interior methods), or starting and remaining on the boundary.
The simplex algorithm falls into the latter class of method. The idea is to move along the facets of P in search of the optimum, from vertex to vertex. Note that the optimum cannot occur only at a mid-point, for example in the middle of an edge lying as a line segment on the boundary of P, unless the whole relevant 'face' is parallel to H(c); in that case the same optimal value is also attained at the vertices of that face. Therefore it is possible to look solely at moves along the edges of P, from one vertex to an adjacent vertex at which L is larger, ignoring the interior. The algorithm specifies how this is to be done.
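As an illustration only, here is a minimal tableau-based sketch of the method in Python. It assumes the problem is given as: maximise c·x subject to Ax ≤ b with x ≥ 0 and b ≥ 0, so that the origin is a feasible starting vertex and no preliminary 'phase I' is needed; the function name and pivoting rule are choices made for this sketch, not part of any standard library.

```python
def simplex(c, A, b):
    """Sketch of the tableau simplex method for: maximise c.x subject to
    A x <= b, x >= 0, assuming b >= 0 (origin feasible, no phase I)."""
    m, n = len(A), len(c)
    # Build the tableau: one slack variable per row turns each
    # inequality into an equality; last column holds b.
    tab = [A[i][:] + [1 if j == i else 0 for j in range(m)] + [b[i]]
           for i in range(m)]
    # Objective row holds negated costs, so optimality is reached
    # when no entry is negative.
    tab.append([-ci for ci in c] + [0] * m + [0])
    basis = list(range(n, n + m))          # the slacks start in the basis
    while True:
        # Entering column: most negative reduced cost (Dantzig's rule).
        col = min(range(n + m), key=lambda j: tab[-1][j])
        if tab[-1][col] >= -1e-9:
            break                          # optimal: no improving edge
        # Ratio test chooses the leaving row: the first constraint
        # that becomes binding as we move along the chosen edge.
        ratios = [(tab[i][-1] / tab[i][col], i)
                  for i in range(m) if tab[i][col] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded on P")
        _, row = min(ratios)
        basis[row] = col
        # Pivot: normalise the pivot row, eliminate the column elsewhere.
        piv = tab[row][col]
        tab[row] = [v / piv for v in tab[row]]
        for i in range(m + 1):
            if i != row and abs(tab[i][col]) > 1e-12:
                f = tab[i][col]
                tab[i] = [v - f * r for v, r in zip(tab[i], tab[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = tab[i][-1]
    return x, tab[-1][-1]

# Example: maximise 3x + 2y subject to x + y <= 4, x + 3y <= 6.
# The optimum is attained at the vertex (4, 0), with value 12.
point, value = simplex([3, 2], [[1, 1], [1, 3]], [4, 6])
```

The entering-variable rule used here can cycle on degenerate problems; practical implementations guard against this, for instance with Bland's rule.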
A rival, interior method for linear programming is Karmarkar's algorithm. The earlier ellipsoid method of Khachiyan, although it was the first polynomial-time algorithm for linear programming, is primarily of theoretical interest.