Solving quadratic equations with continued fractions
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is
ax² + bx + c = 0,
where a ≠ 0.
Students and teachers all over the world are familiar with the quadratic formula that can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are often expressed in a form that involves a quadratic irrational number, which can only be evaluated as a fraction or as a decimal fraction by applying an additional root extraction algorithm.
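For reference, completing the square gives the familiar formula
x = (−b ± √(b² − 4ac)) / (2a),
and whenever the coefficients are rational and the discriminant b² − 4ac is positive but not the square of a rational number, the two roots are quadratic irrationals.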
There is another way to solve the general quadratic equation. This old technique obtains an excellent rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions.
A simple example
Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. Let's begin with the equation
x² = 2
and manipulate it directly. Subtracting one from both sides we obtain
x² − 1 = 1.
This is easily factored into
(x − 1)(x + 1) = 1,
from which we obtain
x − 1 = 1/(1 + x)
and finally
x = 1 + 1/(1 + x).
Now comes the crucial step. Let's substitute this expression for x back into itself, recursively, to obtain
x = 1 + 1/(1 + (1 + 1/(1 + x))) = 1 + 1/(2 + 1/(1 + x)).
But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite continued fraction
x = 1 + 1/(2 + 1/(2 + 1/(2 + ⋯))) = √2.
By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ...: since every partial denominator after the first is 2, both the numerators and the denominators obey the recurrence tₙ = 2tₙ₋₁ + tₙ₋₂. Equivalently, each new denominator is the sum of the preceding numerator and denominator, and each new numerator is the new denominator plus the preceding denominator. The sequence of denominators 1, 2, 5, 12, 29, 70, 169, ... is a particular Lucas sequence known as the Pell numbers.
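The recurrence is easy to run mechanically. The short Python sketch below (the function name and layout are ours, chosen purely for illustration) generates the convergents from the rule tₙ = 2tₙ₋₁ + tₙ₋₂ and compares each one with √2.

```python
import math

def sqrt2_convergents(count):
    """Convergents of the continued fraction [1; 2, 2, 2, ...] for sqrt(2).

    Numerators and denominators both satisfy t(n) = 2*t(n-1) + t(n-2);
    the denominators 1, 2, 5, 12, 29, ... are the Pell numbers.
    """
    h_prev, h = 1, 3        # numerators:   1, 3, 7, 17, ...
    k_prev, k = 1, 2        # denominators: 1, 2, 5, 12, ...
    yield h_prev, k_prev
    for _ in range(count - 1):
        yield h, k
        h, h_prev = 2 * h + h_prev, h
        k, k_prev = 2 * k + k_prev, k

for p, q in sqrt2_convergents(7):
    print(f"{p}/{q} = {p / q:.10f}   error = {p / q - math.sqrt(2):+.2e}")
```

The printed errors alternate in sign and shrink by a factor of roughly (1 + √2)² ≈ 5.83 per step, a rate that the next section explains in terms of the number ω = √2 − 1.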
An algebraic explanation
We can gain further insight into this simple example by considering the successive powers of
ω = √2 − 1.
That sequence of successive powers is given by
ω² = 3 − 2√2,  ω³ = 5√2 − 7,  ω⁴ = 17 − 12√2,  ω⁵ = 29√2 − 41,  ω⁶ = 99 − 70√2,
and so forth. Notice how the fractions derived as successive approximants to √2 also pop out of this geometric progression.
Since 0 < ω < 1, the sequence {ωⁿ} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit.
We can also find these numerators and denominators popping out of the successive powers of
ω⁻¹ = 1/(√2 − 1) = √2 + 1.
Interestingly, the sequence of successive powers {ω⁻ⁿ} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example.
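Both observations are easy to check with exact integer arithmetic. In the Python sketch below (our own illustration, not a library routine) a number a + b√2 is stored as the pair (a, b), and two pairs are multiplied using (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2; the powers of ω shrink toward zero while the powers of ω⁻¹ grow, and both carry the numerators and denominators of the convergents as their coefficients.

```python
import math

def mul(x, y):
    """Multiply a + b*sqrt(2) by c + d*sqrt(2), both kept as integer pairs (a, b)."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def powers(base, count):
    """First `count` powers of `base`, a pair (a, b) representing a + b*sqrt(2)."""
    result, current = [], base
    for _ in range(count):
        result.append(current)
        current = mul(current, base)
    return result

omega = (-1, 1)      # omega     = sqrt(2) - 1  (about 0.414)
omega_inv = (1, 1)   # 1 / omega = sqrt(2) + 1  (about 2.414)

for n, ((a, b), (c, d)) in enumerate(zip(powers(omega, 6), powers(omega_inv, 6)), start=1):
    value = a + b * math.sqrt(2)
    print(f"omega^{n} = {a:+d} {b:+d}*sqrt(2) = {value:.6f}    "
          f"omega^-{n} = {c} + {d}*sqrt(2)")
```

In fact |ωⁿ| equals |pₙ − qₙ√2| for the nth convergent pₙ/qₙ, so dividing by qₙ gives exactly the error of that convergent; this is the substance of the convergence argument sketched above.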
Notice also that the set obtained by forming all the combinations a + b√2, where a and b are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain.
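Concretely, the reason ω is a unit is the identity
(√2 − 1)(√2 + 1) = 2 − 1 = 1,
so ω and ω⁻¹ = √2 + 1 are mutually inverse elements of that integral domain, and every power of either one is again of the form a + b√2 with integer a and b.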
The general quadratic equation
Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial
x² + bx + c = 0,
which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that
x² = −bx − c
x = −b − c/x,
where the division by x is harmless because x = 0 can only be a root when c = 0. But now we can apply the last equation to itself recursively to obtain
x = −b − c/(−b − c/(−b − c/(−b − ⋯))).
If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial x² + bx + c = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula. If b and c are real numbers and the discriminant b² − 4c is negative, then both roots of the quadratic equation have nonzero imaginary parts. But all the convergents of this continued fraction "solution" are real numbers, so they cannot possibly converge to a root of the form u + iv with v ≠ 0, which does not lie on the real number line.
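As a concrete check, the short Python sketch below iterates the map x ↦ −b − c/x, which is what evaluating deeper and deeper truncations of this continued fraction amounts to. The two test equations are our own choices: x² − 3x + 2 = 0 (discriminant 1, roots 2 and 1), whose iterates settle on the larger root 2, and x² + x + 2 = 0 (discriminant −7), whose purely real iterates wander without converging.

```python
def cf_iterates(b, c, steps):
    """Evaluate deeper and deeper truncations of x = -b - c/(-b - c/(-b - ...))
    by iterating x -> -b - c/x, starting from the shallowest truncation x = -b."""
    x = -b
    values = [x]
    for _ in range(steps):
        x = -b - c / x
        values.append(x)
    return values

# x^2 - 3x + 2 = 0 has real roots 2 and 1: the iterates settle on the larger root 2.
print([round(v, 6) for v in cf_iterates(b=-3.0, c=2.0, steps=8)])

# x^2 + x + 2 = 0 has complex roots: the real iterates oscillate without a limit.
print([round(v, 6) for v in cf_iterates(b=1.0, c=2.0, steps=8)])
```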
A general theorem
By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the monic quadratic equation
x² + bx + c = 0
converges or diverges depending on the value of the discriminant, b² − 4c.
- If the discriminant is negative, the fraction diverges by oscillation, which means that its convergents wander around in a regular or even chaotic fashion, never approaching a finite limit.
- If the discriminant is zero the fraction converges to the single root of multiplicity two.
- If the discriminant is positive the equation has two distinct real roots, and the continued fraction converges to the larger (in absolute value) of these, provided the two roots differ in absolute value; in the exceptional case b = 0 the two real roots are negatives of one another and the fraction diverges. The rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges, as the sketch after this list illustrates.
- An analogous statement holds for the general monic quadratic equation with complex coefficients, phrased in terms of the roots themselves: the continued fraction converges to the root of larger absolute value when the two roots have distinct absolute values, converges to the repeated root when the discriminant is zero, and diverges by oscillation when the roots are distinct but equal in absolute value.
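The effect of the root ratio on the speed of convergence is easy to see numerically. In the Python sketch below the two equations are chosen purely for illustration: x² − 3x + 2 = 0 has roots 2 and 1 (ratio 2), while x² − 11x + 10 = 0 has roots 10 and 1 (ratio 10), and the truncation errors of the second shrink far more quickly.

```python
def truncation_error(b, c, root, steps):
    """Absolute error of the n-step truncation of x = -b - c/(-b - ...) against `root`."""
    x = -b
    for _ in range(steps):
        x = -b - c / x
    return abs(x - root)

# Roots 2 and 1 (ratio 2) versus roots 10 and 1 (ratio 10): the second
# continued fraction closes in on its larger root far more quickly.
for b, c, root in [(-3.0, 2.0, 2.0), (-11.0, 10.0, 10.0)]:
    errors = [truncation_error(b, c, root, n) for n in (2, 4, 8, 16)]
    print(f"x^2 + ({b:g})x + ({c:g}) = 0:", ["%.2e" % e for e in errors])
```

Roughly speaking, the error of the n-step truncation decays like the reciprocal of the root ratio raised to the nth power.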