This method is used to solve linear equations of the form

$$\mathbf{A}\mathbf{x} = \mathbf{b}$$

where $\mathbf{A}$ is an invertible and Hermitian matrix, and $\mathbf{b}$ is nonzero.
The conjugate residual method differs from the closely related conjugate gradient method primarily in that it involves somewhat more computation per iteration, but is applicable to problems that are not positive definite; in fact, the only requirement is that $\mathbf{A}$ be Hermitian (or, for real matrices, symmetric).[citation needed] This makes the conjugate residual method applicable to problems that intuitively involve finding saddle points rather than minima, such as numeric optimization with Lagrange multiplier constraints, as the example below illustrates.
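For instance (a standard illustration, not drawn from the cited reference), minimizing a quadratic subject to linear equality constraints via Lagrange multipliers leads to the first-order (KKT) conditions

$$
\min_{\mathbf{x}}\ \tfrac{1}{2}\mathbf{x}^\mathrm{T}\mathbf{H}\mathbf{x} - \mathbf{c}^\mathrm{T}\mathbf{x}
\quad \text{subject to} \quad \mathbf{B}\mathbf{x} = \mathbf{d}
\qquad\Longrightarrow\qquad
\begin{bmatrix} \mathbf{H} & \mathbf{B}^\mathrm{T} \\ \mathbf{B} & \mathbf{0} \end{bmatrix}
\begin{bmatrix} \mathbf{x} \\ \boldsymbol{\lambda} \end{bmatrix}
=
\begin{bmatrix} \mathbf{c} \\ \mathbf{d} \end{bmatrix},
$$

whose coefficient matrix is symmetric but, because of the zero block, generally indefinite: its solution is a saddle point of the Lagrangian rather than a minimum.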
Given an (arbitrary) initial estimate of the solution $\mathbf{x}_0$, the method is outlined below:

$$
\begin{aligned}
& \mathbf{x}_0 := \text{some initial guess} \\
& \mathbf{r}_0 := \mathbf{b} - \mathbf{A}\mathbf{x}_0 \\
& \mathbf{p}_0 := \mathbf{r}_0 \\
& \text{Iterate, with } k \text{ starting at } 0\text{:} \\
& \qquad \alpha_k := \frac{\mathbf{r}_k^\mathrm{T}\,\mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathrm{T}\,\mathbf{A}\mathbf{p}_k} \\
& \qquad \mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
& \qquad \mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{A}\mathbf{p}_k \\
& \qquad \beta_k := \frac{\mathbf{r}_{k+1}^\mathrm{T}\,\mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^\mathrm{T}\,\mathbf{A}\mathbf{r}_k} \\
& \qquad \mathbf{p}_{k+1} := \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
& \qquad \mathbf{A}\mathbf{p}_{k+1} := \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k
\end{aligned}
$$
The iteration may be stopped once $\mathbf{x}_{k+1}$ has been deemed converged. Note that the only difference between this and the conjugate gradient method is the calculation of $\alpha_k$ and $\beta_k$ (plus the optional recursive calculation of $\mathbf{A}\mathbf{p}_{k+1}$ at the end).
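As a concrete companion to the outline above, here is a minimal NumPy sketch of the iteration. The function name conjugate_residual, the tolerance, and the iteration cap are illustrative choices rather than anything from the cited reference, and the sketch assumes real symmetric A (for complex Hermitian matrices the inner products would need conjugation):

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve A x = b for a Hermitian (here: real symmetric) matrix A,
    which need not be positive definite."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x            # residual r_0
    p = r.copy()             # search direction p_0 = r_0
    Ar = A @ r
    Ap = Ar.copy()           # A p_0 equals A r_0 because p_0 = r_0
    rAr = r @ Ar             # inner product r_k^T A r_k, reused below
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:  # simple residual-norm stopping test
            break
        Ar = A @ r                   # the one matrix-vector product per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap          # recursive update of A p_{k+1}, no extra product
        rAr = rAr_new
    return x

# A symmetric but indefinite 2x2 system as a small usage example.
A = np.array([[4.0, 1.0],
              [1.0, -3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b)
print(x, np.linalg.norm(A @ x - b))  # residual norm should be near zero
```

The small test system is symmetric but indefinite (its eigenvalues have opposite signs), which is exactly the regime where the conjugate gradient method's guarantees fail but the conjugate residual method still applies.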
References
Yousef Saad, Iterative Methods for Sparse Linear Systems (2nd ed.), SIAM, 2003, pp. 181–182. ISBN 978-0898715347.