
Conjugate gradient squared method

In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form $Ax = b$, particularly in cases where computing the transpose $A^T$ is impractical.[1] The CGS method was developed as an improvement to the biconjugate gradient method.[2][3][4]
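
Implementations of CGS are available in standard numerical libraries; MATLAB provides it as cgs,[2] and SciPy as scipy.sparse.linalg.cgs. The following is a minimal usage sketch with the SciPy routine; the small test system is an illustrative assumption, not taken from the cited sources:

  import numpy as np
  from scipy.sparse.linalg import cgs

  # A small nonsymmetric system Ax = b (illustrative values)
  A = np.array([[4.0, 1.0],
                [2.0, 3.0]])
  b = np.array([1.0, 2.0])

  x, info = cgs(A, b)   # info == 0 indicates convergence
  print(x, info)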

Background

A system of linear equations $Ax = b$ consists of a known matrix $A$ and a known vector $b$. To solve the system is to find the value of the unknown vector $x$.[3][5] A direct method for solving a system of linear equations is to take the inverse of the matrix $A$, then calculate $x = A^{-1}b$. However, computing the inverse is computationally expensive. Hence, iterative methods are commonly used. Iterative methods begin with a guess $x^{(0)}$, and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution.[6][7]
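
To make this pattern concrete, the following is a minimal Python sketch of the iterative idea just described, using Jacobi iteration (a simpler method than CGS) and stopping once successive guesses agree to a tolerance; the function name, test matrix, and tolerance are illustrative choices, not taken from the cited sources:

  import numpy as np

  def jacobi(A, b, tol=1e-10, max_iter=500):
      D = np.diag(A)                # diagonal of A
      R = A - np.diagflat(D)        # off-diagonal part of A
      x = np.zeros_like(b)          # initial guess x(0)
      for _ in range(max_iter):
          x_new = (b - R @ x) / D   # improved guess
          if np.linalg.norm(x_new - x) < tol:  # successive guesses close
              return x_new
          x = x_new
      return x

  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  b = np.array([1.0, 2.0])
  print(jacobi(A, b), np.linalg.solve(A, b))  # compare with direct solve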

As with the conjugate gradient method, biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems, such as power-flow analysis, hyperparameter optimisation, and facial recognition.[8]

The algorithm

The algorithm, which incorporates a preconditioner $M$ (take $M = I$ for the unpreconditioned method), is as follows; a Python sketch of the iteration appears after the list:[9]

  1. Choose an initial guess $x^{(0)}$
  2. Compute the residual $r^{(0)} = b - Ax^{(0)}$
  3. Choose $\tilde{r} = r^{(0)}$
  4. For $i = 1, 2, 3, \ldots$ do:
    1. Compute $\rho^{(i-1)} = \tilde{r}^T r^{(i-1)}$. If $\rho^{(i-1)} = 0$, the method fails.
    2. If $i = 1$: set $u^{(1)} = r^{(0)}$ and $p^{(1)} = u^{(1)}$.
    3. Else: set $\beta^{(i-1)} = \rho^{(i-1)}/\rho^{(i-2)}$, $u^{(i)} = r^{(i-1)} + \beta^{(i-1)} q^{(i-1)}$, and $p^{(i)} = u^{(i)} + \beta^{(i-1)}\left(q^{(i-1)} + \beta^{(i-1)} p^{(i-1)}\right)$.
    4. Solve $M\hat{p} = p^{(i)}$, where $M$ is a preconditioner; then compute $\hat{v} = A\hat{p}$, $\alpha^{(i)} = \rho^{(i-1)}/(\tilde{r}^T \hat{v})$, and $q^{(i)} = u^{(i)} - \alpha^{(i)}\hat{v}$.
    5. Solve $M\hat{u} = u^{(i)} + q^{(i)}$; then update $x^{(i)} = x^{(i-1)} + \alpha^{(i)}\hat{u}$ and $r^{(i)} = r^{(i-1)} - \alpha^{(i)} A\hat{u}$.
    6. Check for convergence: if the residual $r^{(i)}$ is sufficiently small, end the loop and return $x^{(i)}$.
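
The following is a minimal NumPy sketch of the preconditioned iteration above, written directly from the listed steps; the function and parameter names (cgs_solve, M_solve, tol, max_iter) are illustrative assumptions, and the default preconditioner is the identity (no preconditioning). It is a sketch under those assumptions, not a reference implementation:

  import numpy as np

  def cgs_solve(A, b, x0=None, tol=1e-8, max_iter=1000, M_solve=lambda v: v):
      b = np.asarray(b, dtype=float)
      x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
      r = b - A @ x                  # step 2: initial residual r(0)
      r_tilde = r.copy()             # step 3: shadow residual r~ = r(0)
      q = p = np.zeros_like(b)
      rho_prev = 1.0
      for i in range(1, max_iter + 1):
          rho = r_tilde @ r          # step 4.1
          if rho == 0.0:
              raise RuntimeError("CGS breakdown: rho = 0")
          if i == 1:                 # step 4.2
              u = r.copy()
              p = u.copy()
          else:                      # step 4.3
              beta = rho / rho_prev
              u = r + beta * q
              p = u + beta * (q + beta * p)
          p_hat = M_solve(p)         # step 4.4: apply the preconditioner
          v_hat = A @ p_hat
          alpha = rho / (r_tilde @ v_hat)   # (may also break down if zero)
          q = u - alpha * v_hat
          u_hat = M_solve(u + q)     # step 4.5
          x = x + alpha * u_hat
          r = r - alpha * (A @ u_hat)
          rho_prev = rho
          # step 4.6: stop when the relative residual is small enough
          if np.linalg.norm(r) <= tol * np.linalg.norm(b):
              return x, i
      return x, max_iter

  # Example: a small nonsymmetric system (illustrative values)
  A = np.array([[4.0, 1.0, 0.0],
                [2.0, 5.0, 1.0],
                [0.0, 1.0, 3.0]])
  b = np.array([1.0, 2.0, 3.0])
  x, iters = cgs_solve(A, b)
  print(x, iters, np.allclose(A @ x, b))

Note that each iteration needs two matrix-vector products with $A$ but, unlike the biconjugate gradient method, none with $A^T$, which is what makes CGS attractive when the transpose is impractical to compute.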

References

  1. ^ Noel Black; Shirley Moore. "Conjugate Gradient Squared Method". Wolfram MathWorld.
  2. ^ MathWorks. "cgs". MATLAB documentation.
  3. ^ a b Henk van der Vorst (2003). "Bi-Conjugate Gradients". Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. ISBN 0-521-81828-1.
  4. ^ Peter Sonneveld (1989). "CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems". SIAM Journal on Scientific and Statistical Computing. 10 (1): 36–52. doi:10.1137/0910004. ProQuest 921988114.
  5. ^ "Linear equations" (PDF), Matrix Analysis and Applied Linear Algebra (PDF), 3600 Market Street, 6th Floor Philadelphia, PA 19104-2688: SIAM, pp. 1–40, doi:10.1137/1.9780898719512.ch1, retrieved 2023-12-18{{citation}}: CS1 maint: location (link)
  6. ^ "Iterative Methods for Linear Systems". Mathworks.
  7. ^ Jean Gallier. "Iterative Methods for Solving Linear Systems" (PDF). University of Pennsylvania.
  8. ^ Alexandra Roberts; Anye Shi; Yue Sun. "Conjugate gradient methods". Cornell University. Retrieved 2023-12-26.
  9. ^ R. Barrett; M. Berry; T. F. Chan; J. Demmel; J. Donato; J. Dongarra; V. Eijkhout; R. Pozo; C. Romine; H. Van der Vorst (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition. SIAM.