The conjugate residual method is used to solve linear equations of the form

$$\mathbf{A}\mathbf{x} = \mathbf{b},$$

where $\mathbf{A}$ is an invertible and Hermitian matrix, and $\mathbf{b}$ is nonzero.
The conjugate residual method differs from the closely related conjugate gradient method primarily in that it involves more numerical operations and requires more storage, but the system matrix need only be semi-definite. In the semi-definite case the conjugate residual method computes the least-squares solution of the linear problem.
Given an (arbitrary) initial estimate of the solution $\mathbf{x}_0$, the method is outlined below:

$$\begin{align}
\mathbf{x}_0 &:= \text{some initial guess} \\
\mathbf{r}_0 &:= \mathbf{b} - \mathbf{A}\mathbf{x}_0 \\
\mathbf{p}_0 &:= \mathbf{r}_0 \\
&\text{Iterate, with } k \text{ starting at } 0\text{:} \\
\alpha_k &:= \frac{\mathbf{r}_k^\mathsf{T}\mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathsf{T}(\mathbf{A}\mathbf{p}_k)} \\
\mathbf{x}_{k+1} &:= \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
\mathbf{r}_{k+1} &:= \mathbf{r}_k - \alpha_k \mathbf{A}\mathbf{p}_k \\
\beta_k &:= \frac{\mathbf{r}_{k+1}^\mathsf{T}\mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^\mathsf{T}\mathbf{A}\mathbf{r}_k} \\
\mathbf{p}_{k+1} &:= \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
\mathbf{A}\mathbf{p}_{k+1} &:= \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k
\end{align}$$
The iteration may be stopped once $\mathbf{x}_k$ has been deemed converged. Note that the only difference between this and the conjugate gradient method is the calculation of $\alpha_k$ and $\beta_k$ (plus the optional incremental calculation of $\mathbf{A}\mathbf{p}_{k+1}$ at the end).
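The iteration above can be sketched in code. The following is a minimal NumPy implementation for real symmetric $\mathbf{A}$ (the function name and parameters are illustrative, not from the cited references); note the incremental update of $\mathbf{A}\mathbf{p}$, which keeps the cost at one matrix-vector product per iteration:

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Conjugate residual method for A x = b, with A real symmetric.

    Follows the recurrences in the outline above; A p is updated
    incrementally so each iteration needs only one product with A.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar             # r^T A r
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break            # x deemed converged
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        Ar = A @ r           # the single matrix-vector product per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap  # incremental calculation of A p
    return x
```

For a complex Hermitian system the transposes above would become conjugate transposes (e.g. `np.vdot`); the real symmetric case is shown for brevity.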
Preconditioning
By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method:

$$\begin{align}
\mathbf{x}_0 &:= \text{some initial guess} \\
\mathbf{r}_0 &:= \mathbf{M}^{-1}(\mathbf{b} - \mathbf{A}\mathbf{x}_0) \\
\mathbf{p}_0 &:= \mathbf{r}_0 \\
&\text{Iterate, with } k \text{ starting at } 0\text{:} \\
\alpha_k &:= \frac{\mathbf{r}_k^\mathsf{T}\mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathsf{T}\mathbf{M}^{-1}(\mathbf{A}\mathbf{p}_k)} \\
\mathbf{x}_{k+1} &:= \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
\mathbf{r}_{k+1} &:= \mathbf{r}_k - \alpha_k \mathbf{M}^{-1}\mathbf{A}\mathbf{p}_k \\
\beta_k &:= \frac{\mathbf{r}_{k+1}^\mathsf{T}\mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^\mathsf{T}\mathbf{A}\mathbf{r}_k} \\
\mathbf{p}_{k+1} &:= \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
\mathbf{A}\mathbf{p}_{k+1} &:= \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k
\end{align}$$
The preconditioner $\mathbf{M}^{-1}$ must be symmetric. Note that the residual vector here is different from the residual vector without preconditioning.
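The preconditioned recurrences can be sketched the same way. In the following NumPy sketch (again with illustrative names), `M_inv` is any function applying the symmetric preconditioner $\mathbf{M}^{-1}$ to a vector; the Jacobi (diagonal) preconditioner used in the usage example is one common, inexpensive choice:

```python
import numpy as np

def preconditioned_cr(A, b, M_inv, x0=None, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate residual method for A x = b.

    M_inv(v) applies the symmetric preconditioner M^{-1} to v.
    The vector r here is the *preconditioned* residual M^{-1}(b - A x).
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = M_inv(b - A @ x)     # preconditioned residual
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        MAp = M_inv(Ap)
        alpha = rAr / (Ap @ MAp)
        x = x + alpha * p
        r = r - alpha * MAp
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap  # incremental calculation of A p
    return x

# Example usage with a Jacobi (diagonal) preconditioner:
# M_inv = lambda v: v / np.diag(A)
# x = preconditioned_cr(A, b, M_inv)
```

With `M_inv = lambda v: v` this reduces to the unpreconditioned method above.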
References
Yousef Saad, Iterative Methods for Sparse Linear Systems (2nd ed.), SIAM, p. 194. ISBN 978-0-89871-534-7.
Jonathan Richard Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain (edition ), pp. 39–40.