Tridiagonal matrix algorithm/Derivation

The derivation of the Tridiagonal matrix algorithm amounts to carrying out a specialized Gaussian elimination by hand on a generic tridiagonal system of equations.

Suppose that the unknowns are $x_1, \ldots, x_n$, and that the equations to be solved are:

$b_1 x_1 + c_1 x_2 = d_1$

$a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i \,, \quad i = 2, \ldots, n-1$

$a_n x_{n-1} + b_n x_n = d_n$
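For reference, the same system can be written compactly in matrix form:

$\begin{bmatrix}
b_1 & c_1 &        &        & 0       \\
a_2 & b_2 & c_2    &        &         \\
    & a_3 & b_3    & \ddots &         \\
    &     & \ddots & \ddots & c_{n-1} \\
0   &     &        & a_n    & b_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{bmatrix}$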
Consider modifying the second ($i = 2$) equation with the first equation as follows:

$(\text{equation 2}) \cdot b_1 - (\text{equation 1}) \cdot a_2$
which would give:

$(a_2 x_1 + b_2 x_2 + c_2 x_3)\, b_1 - (b_1 x_1 + c_1 x_2)\, a_2 = d_2 b_1 - d_1 a_2$

$(b_2 b_1 - c_1 a_2)\, x_2 + c_2 b_1 x_3 = d_2 b_1 - d_1 a_2$
and the effect is that $x_1$ has been eliminated from the second equation. Using a similar tactic with the modified second equation on the third equation yields:

$(a_3 x_2 + b_3 x_3 + c_3 x_4)(b_2 b_1 - c_1 a_2) - \left( (b_2 b_1 - c_1 a_2) x_2 + c_2 b_1 x_3 \right) a_3 = d_3 (b_2 b_1 - c_1 a_2) - (d_2 b_1 - d_1 a_2)\, a_3$

$\left( b_3 (b_2 b_1 - c_1 a_2) - c_2 b_1 a_3 \right) x_3 + c_3 (b_2 b_1 - c_1 a_2)\, x_4 = d_3 (b_2 b_1 - c_1 a_2) - (d_2 b_1 - d_1 a_2)\, a_3$
This time $x_2$ was eliminated. If this procedure is repeated until the $n$th row, the (modified) $n$th equation will involve only one unknown, $x_n$. This may be solved for and then used to solve the $(n-1)$th equation, and so on until all of the unknowns are solved for.

Clearly, the coefficients on the modified equations get more and more complicated if stated explicitly. By examining the procedure, the modified coefficients (notated with tildes) may instead be defined recursively:

$\tilde b_1 = b_1 \,, \qquad \tilde b_i = b_i \tilde b_{i-1} - \tilde c_{i-1} a_i$

$\tilde c_1 = c_1 \,, \qquad \tilde c_i = c_i \tilde b_{i-1}$

$\tilde d_1 = d_1 \,, \qquad \tilde d_i = d_i \tilde b_{i-1} - \tilde d_{i-1} a_i$

for $i = 2, \ldots, n$, so that the $i$th modified equation reads $\tilde b_i x_i + \tilde c_i x_{i+1} = \tilde d_i$.
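As a check against the explicit computation above, the recursion gives

$\tilde b_3 = b_3 \tilde b_2 - \tilde c_2 a_3 = b_3 (b_2 b_1 - c_1 a_2) - c_2 b_1 a_3 \,,$

which is exactly the coefficient of $x_3$ in the modified third equation.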
To further hasten the solution process, $\tilde b_i$ may be divided out (if there is no risk of division by zero); the newer modified coefficients, notated with an asterisk, will be:

$c^*_1 = \frac{c_1}{b_1} \,, \qquad c^*_i = \frac{c_i}{b_i - c^*_{i-1} a_i}$

$d^*_1 = \frac{d_1}{b_1} \,, \qquad d^*_i = \frac{d_i - d^*_{i-1} a_i}{b_i - c^*_{i-1} a_i}$

for $i = 2, \ldots, n$.
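These follow from dividing the $i$th modified equation through by $\tilde b_i$, so that $c^*_i = \tilde c_i / \tilde b_i$ and $d^*_i = \tilde d_i / \tilde b_i$; for instance,

$c^*_i = \frac{\tilde c_i}{\tilde b_i} = \frac{c_i \tilde b_{i-1}}{b_i \tilde b_{i-1} - \tilde c_{i-1} a_i} = \frac{c_i}{b_i - c^*_{i-1} a_i} \,.$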
This gives the following system, with the same unknowns and with coefficients defined in terms of the original ones above:

$x_i + c^*_i x_{i+1} = d^*_i \,, \quad i = 1, \ldots, n-1$

$x_n = d^*_n$
The last equation involves only one unknown. Solving it in turn reduces the next-to-last equation to one unknown, so that this backward substitution can be used to find all of the unknowns:

$x_n = d^*_n$

$x_i = d^*_i - c^*_i x_{i+1} \,, \quad i = n-1, n-2, \ldots, 1$
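The forward sweep and backward substitution above translate directly into a short routine. The following is a minimal sketch in Python (chosen here only for illustration), assuming the coefficients are passed as 0-based lists a, b, c, d with a[0] and c[n-1] unused, and assuming no denominator $b_i - c^*_{i-1} a_i$ vanishes; the name thomas_solve is arbitrary.

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system via the recurrences derived above.

    a: sub-diagonal   (a[0] is unused)
    b: main diagonal
    c: super-diagonal (c[n-1] is unused)
    d: right-hand side
    Returns the solution as a list x of length n = len(d).
    """
    n = len(d)
    c_star = [0.0] * n
    d_star = [0.0] * n

    # Forward sweep: compute the asterisked coefficients.
    c_star[0] = c[0] / b[0]
    d_star[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - c_star[i - 1] * a[i]   # assumed nonzero
        c_star[i] = c[i] / denom if i < n - 1 else 0.0
        d_star[i] = (d[i] - d_star[i - 1] * a[i]) / denom

    # Backward substitution: x_n = d*_n, then x_i = d*_i - c*_i x_{i+1}.
    x = [0.0] * n
    x[n - 1] = d_star[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = d_star[i] - c_star[i] * x[i + 1]
    return x

For example, thomas_solve([0, 1, 1], [4, 4, 4], [1, 1, 0], [5, 6, 5]) returns [1.0, 1.0, 1.0] (up to rounding), since $x = (1, 1, 1)$ satisfies that system.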