The conjugate residual method is used to solve linear equations of the form

\mathbf{A}\mathbf{x} = \mathbf{b}

where \mathbf{A} is an invertible and Hermitian matrix, and \mathbf{b} is nonzero.
The conjugate residual method differs from the closely related conjugate gradient method primarily in that it involves more numerical operations and requires more storage, but the system matrix only needs to be Hermitian, not Hermitian positive definite.
Given an (arbitrary) initial estimate of the solution \mathbf{x}_0, the method is outlined below:

\begin{align}
\mathbf{x}_0 &:= \text{some initial guess} \\
\mathbf{r}_0 &:= \mathbf{b} - \mathbf{A}\mathbf{x}_0 \\
\mathbf{p}_0 &:= \mathbf{r}_0 \\
\text{iterate } k &= 0, 1, 2, \ldots \\
\alpha_k &:= \frac{\mathbf{r}_k^\mathsf{T} \mathbf{A} \mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathsf{T} \mathbf{A}\mathbf{p}_k} \\
\mathbf{x}_{k+1} &:= \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
\mathbf{r}_{k+1} &:= \mathbf{r}_k - \alpha_k \mathbf{A}\mathbf{p}_k \\
\beta_k &:= \frac{\mathbf{r}_{k+1}^\mathsf{T} \mathbf{A} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathsf{T} \mathbf{A} \mathbf{r}_k} \\
\mathbf{p}_{k+1} &:= \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
\mathbf{A}\mathbf{p}_{k+1} &:= \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k
\end{align}
The iteration may be stopped once \mathbf{x}_k has been deemed converged. The only difference between this and the conjugate gradient method is the calculation of \alpha_k and \beta_k (plus the optional incremental calculation of \mathbf{A}\mathbf{p}_{k+1} at the end).
Note: the above algorithm needs only one symmetric matrix–vector multiplication in each iteration, since \mathbf{A}\mathbf{p}_{k+1} is obtained from the incremental update in the last line and only \mathbf{A}\mathbf{r}_{k+1} must be computed explicitly.
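To make the steps concrete, here is a minimal sketch in Python/NumPy, assuming a real symmetric matrix (the real Hermitian case) stored as a NumPy array; the function name, tolerance, and iteration cap are illustrative choices, not part of the article. It follows the form above, updating \mathbf{A}\mathbf{p}_{k+1} incrementally so each iteration performs a single matrix–vector product.

import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-8, max_iter=1000):
    # Solve A x = b for an invertible, real symmetric A using the
    # conjugate residual method; names and defaults are illustrative.
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x            # residual r_0 = b - A x_0
    p = r.copy()             # search direction p_0 = r_0
    Ar = A @ r               # A r_k
    Ap = Ar.copy()           # A p_k (p_0 = r_0, so A p_0 = A r_0)
    rAr = r @ Ar             # r_k^T A r_k
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)          # alpha_k = r^T A r / (A p)^T (A p)
        x = x + alpha * p                # x_{k+1}
        r = r - alpha * Ap               # r_{k+1}
        if np.linalg.norm(r) < tol:      # stop once deemed converged
            break
        Ar = A @ r                       # the single matrix-vector product
        rAr_next = r @ Ar
        beta = rAr_next / rAr            # beta_k
        rAr = rAr_next
        p = r + beta * p                 # p_{k+1}
        Ap = Ar + beta * Ap              # incremental A p_{k+1}, no extra product
    return x

In finite-precision arithmetic the incrementally updated \mathbf{A}\mathbf{p}_k can drift from the true product, so practical implementations sometimes recompute it at the cost of a second multiplication per iteration.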
Preconditioning
By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method:

\begin{align}
\mathbf{x}_0 &:= \text{some initial guess} \\
\mathbf{r}_0 &:= \mathbf{M}^{-1}(\mathbf{b} - \mathbf{A}\mathbf{x}_0) \\
\mathbf{p}_0 &:= \mathbf{r}_0 \\
\text{iterate } k &= 0, 1, 2, \ldots \\
\alpha_k &:= \frac{\mathbf{r}_k^\mathsf{T} \mathbf{A} \mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathsf{T} \mathbf{M}^{-1} \mathbf{A}\mathbf{p}_k} \\
\mathbf{x}_{k+1} &:= \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
\mathbf{r}_{k+1} &:= \mathbf{r}_k - \alpha_k \mathbf{M}^{-1} \mathbf{A}\mathbf{p}_k \\
\beta_k &:= \frac{\mathbf{r}_{k+1}^\mathsf{T} \mathbf{A} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathsf{T} \mathbf{A} \mathbf{r}_k} \\
\mathbf{p}_{k+1} &:= \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
\mathbf{A}\mathbf{p}_{k+1} &:= \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k
\end{align}
The preconditioner \mathbf{M}^{-1} must be symmetric positive definite. Note that the residual vector here is different from the residual vector without preconditioning.
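As a sketch of how the substitutions change the computation, the following Python/NumPy variant accepts a routine M_solve that applies \mathbf{M}^{-1} (for example, a diagonal/Jacobi solve); the name M_solve, the tolerance, and the stopping rule on the preconditioned residual are assumptions for illustration.

import numpy as np

def preconditioned_conjugate_residual(A, b, M_solve, x0=None, tol=1e-8, max_iter=1000):
    # Preconditioned conjugate residual sketch; M_solve(v) must apply
    # M^{-1} to v for a symmetric positive-definite preconditioner M.
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = M_solve(b - A @ x)   # preconditioned residual r_0 = M^{-1}(b - A x_0)
    p = r.copy()             # p_0 = r_0
    Ar = A @ r               # A r_k
    Ap = Ar.copy()           # A p_k
    rAr = r @ Ar             # r_k^T A r_k
    for _ in range(max_iter):
        MAp = M_solve(Ap)                # M^{-1} A p_k
        alpha = rAr / (Ap @ MAp)         # alpha_k = r^T A r / (A p)^T M^{-1} A p
        x = x + alpha * p                # x_{k+1}
        r = r - alpha * MAp              # r_{k+1} (preconditioned residual)
        if np.linalg.norm(r) < tol:      # stop on the preconditioned residual
            break
        Ar = A @ r
        rAr_next = r @ Ar
        beta = rAr_next / rAr            # beta_k
        rAr = rAr_next
        p = r + beta * p                 # p_{k+1}
        Ap = Ar + beta * Ap              # incremental A p_{k+1}
    return x

For example, passing M_solve = lambda v: v / np.diag(A) would apply a Jacobi preconditioner when the diagonal of \mathbf{A} is positive.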