
Preconditioning.


According to the proposition ( Convergence of conjugate gradient method ), the procedure of the summary ( Conjugate gradients ) converges significantly faster when $\lambda_{\min}$ and $\lambda_{\max}$ are close, that is, when the condition number $\lambda_{\max}/\lambda_{\min}$ is close to $1$. For this reason one may attempt to consider $B^{-1}Ax=B^{-1}b$ instead of $Ax=b$ for some matrix $B^{-1}$ that almost inverts $A$. The matrix $B^{-1}A$ does not have to be symmetric or positive definite with respect to the standard scalar product $\left\langle \cdot,\cdot\right\rangle $. However, for symmetric positive definite $B$ it is self-adjoint and positive definite with respect to $\left\langle \cdot,\cdot\right\rangle _{B}\equiv\left\langle B\cdot,\cdot\right\rangle $: $$\left\langle B^{-1}Ax,y\right\rangle _{B}=\left\langle Ax,y\right\rangle =\left\langle x,Ay\right\rangle =\left\langle Bx,B^{-1}Ay\right\rangle =\left\langle x,B^{-1}Ay\right\rangle _{B},$$ $$\left\langle B^{-1}Ax,x\right\rangle _{B}=\left\langle Ax,x\right\rangle >0\quad\text{for }x\neq0.$$ Therefore, it has all the necessary spectral properties and we can still apply the procedure ( Conjugate gradients ).
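
To make the role of $B^{-1}$ concrete, here is a minimal sketch of the preconditioned conjugate gradient loop in Python/NumPy. The routine name `pcg`, the callback `solve_B` (standing for whatever fast application of $B^{-1}$ is available; a Jacobi solve in the usage lines), and the tolerance parameters are illustrative assumptions, not notation from the text.

```python
import numpy as np

def pcg(A, b, solve_B, x0=None, tol=1e-10, max_iter=500):
    """Conjugate gradients for A x = b with a preconditioner B, supplied through
    solve_B(r), which returns (an approximation of) B^{-1} r.
    A and B are assumed symmetric positive definite."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                  # residual of the original equation
    z = solve_B(r)                 # preconditioned residual z = B^{-1} r
    p = z.copy()                   # first search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = solve_B(r)
        rz_new = r @ z
        beta = rz_new / rz         # keeps successive directions A-conjugate
        p = z + beta * p
        rz = rz_new
    return x

# Usage: an ill-conditioned SPD matrix where the Jacobi preconditioner B = diag(A)
# brings lambda_min and lambda_max of B^{-1} A much closer together.
n = 200
scales = np.logspace(0, 6, n)                        # widely varying diagonal
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.diag(scales) + T
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, solve_B=lambda r: r / d)               # apply B^{-1} for diagonal B
print(np.linalg.norm(A @ x - b))                     # residual of A x = b
```

Note that $B^{-1}$ only ever enters the loop through `solve_B(r)`, which is why an approximate inverse that is cheap to apply, and never formed explicitly, is enough.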

Another possibility is to try the factorization $B=E^{T}E$. Such a decomposition always exists (but is not unique) if $B$ is symmetric positive definite. We have $Ax=b$. We make the change $x=E^{-1}\hat{x}$, multiply by $E^{-T}$ and arrive at $$E^{-T}AE^{-1}\hat{x}=E^{-T}b.\qquad\left( \#\right) $$ Note that (by a similar calculation) $$\left\langle E^{-T}AE^{-1}x,y\right\rangle =\left\langle AE^{-1}x,E^{-1}y\right\rangle =\left\langle E^{-1}x,AE^{-1}y\right\rangle =\left\langle x,E^{-T}AE^{-1}y\right\rangle .$$ The matrix $E^{-T}AE^{-1}$ is symmetric positive-definite in $\left\langle \cdot,\cdot\right\rangle $. The procedure ( Conjugate gradients ) can then be adapted to the equation $\left( \#\right) $ with the usual tricks to keep down the number of matrix multiplications.
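
The following sketch (again Python/NumPy, an assumed illustration rather than part of the text) checks the claims about the factorized form numerically: `numpy.linalg.cholesky` supplies one admissible $E$ with $B=E^{T}E$, the transformed matrix $E^{-T}AE^{-1}$ comes out symmetric positive definite in the standard scalar product and similar to $B^{-1}A$, and solving the transformed equation $\left( \#\right) $ and changing variables back recovers the solution of $Ax=b$. The particular matrices are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A symmetric positive definite matrix A and an example preconditioner B.
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)
B = np.diag(np.diag(A))                 # e.g. the diagonal part of A

# One admissible factorization B = E^T E: take E = L^T from the Cholesky B = L L^T.
L = np.linalg.cholesky(B)
E = L.T
E_inv = np.linalg.inv(E)

# The transformed matrix E^{-T} A E^{-1} is symmetric positive definite in <.,.>.
M = E_inv.T @ A @ E_inv
assert np.allclose(M, M.T)
assert np.all(np.linalg.eigvalsh(M) > 0)

# It is similar to B^{-1} A, so both have the same spectrum.
spec_M = np.sort(np.linalg.eigvalsh(M))
spec_BA = np.sort(np.linalg.eigvals(np.linalg.inv(B) @ A).real)
assert np.allclose(spec_M, spec_BA)

# Solving the transformed equation (#) and changing variables back, x = E^{-1} x_hat,
# reproduces the solution of A x = b.
b = rng.standard_normal(n)
x_hat = np.linalg.solve(M, E_inv.T @ b)
x = E_inv @ x_hat
assert np.allclose(A @ x, b)
```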




