
I. Wavelet calculations.
II. Calculation of approximation spaces in one dimension.
III. Calculation of approximation spaces in one dimension II.
IV. One dimensional problems.
V. Stochastic optimization in one dimension.
1. Review of variational inequalities in maximization case.
2. Penalized problem for mean reverting equation.
3. Impossibility of backward induction.
4. Stochastic optimization over wavelet basis.
A. Choosing probing functions.
B. Time discretization of penalty term.
C. Implicit formulation of penalty term.
D. Smooth version of penalty term.
E. Solving equation with implicit penalty term.
F. Removing stiffness from penalized equation.
G. Mix of backward induction and penalty term approaches I.
H. Mix of backward induction and penalty term approaches I. Implementation and results.
I. Mix of backward induction and penalty term approaches II.
J. Mix of backward induction and penalty term approaches II. Implementation and results.
K. Review. How does it extend to multiple dimensions?
VI. Scalar product in N-dimensions.
VII. Wavelet transform of payoff function in N-dimensions.
VIII. Solving N-dimensional PDEs.
Downloads. Index. Contents.

Implicit formulation of penalty term.


Consider the equation ( Evolution with penalty term ) involving explicit time discretization of first order: MATH MATH The explicit formulation is selected for computational convenience. At the initial time step $n=N-1$ , the column MATH satisfies the condition $\Omega=0$ . After evolving for one step, it falls into $\Omega\not =0$ . Under the explicit formulation, it takes at least one more time step for the penalty term to take effect, and the solution MATH has already deviated from $\Omega=0$ . Clearly, we would have to take very small time steps to keep the deviation small. Furthermore, we pick up a discrepancy at every step and accumulate it.
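The equations themselves are not reproduced above, but the stiffness issue they describe can be illustrated on a hypothetical scalar stand-in for a penalized evolution. Everything below (the names `a`, `rho`, `g`, the model equation itself) is illustrative, not taken from the text:

```python
# Hypothetical scalar model of a penalized evolution:
#     du/dt = -a*u + rho*max(g - u, 0)
# The penalty strength rho is large; the penalty pushes u back above g.
a, rho, g, u0 = 1.0, 1e3, 1.0, 2.0

def explicit_path(dt, n):
    """Forward Euler with the penalty term treated explicitly."""
    u, path = u0, []
    for _ in range(n):
        u = u + dt * (-a * u + rho * max(g - u, 0.0))
        path.append(u)
    return path

def implicit_penalty_path(dt, n):
    """Penalty term treated implicitly; solved by case analysis."""
    u, path = u0, []
    for _ in range(n):
        cand = u + dt * (-a * u)        # assume the penalty is inactive
        if cand >= g:
            u = cand
        else:                           # penalty active: solve the linear case
            u = (cand + dt * rho * g) / (1.0 + dt * rho)
        path.append(u)
    return path

# With dt*rho = 10, the explicit scheme chatters around the constraint
# indefinitely, while the implicit one settles at the steady state
# rho*g/(a + rho) regardless of the step size.
exp_tail = explicit_path(0.01, 2000)[-200:]
imp_last = implicit_penalty_path(0.01, 2000)[-1]
```

The point mirrors the text: with an explicit penalty the time step must shrink like $1/\rho$ to keep the deviation small, whereas the implicit treatment is stable for any step.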

Therefore, we arrive at the implicit formulation: MATH MATH The $\left( n,n\right) $ and MATH superscripts over the $\Omega\,$ term mark the discrepancy of grids. The MATH is constructed with respect to an adaptive basis selection MATH : MATH and MATH , MATH are connected similarly via $K_{n}$ . The transformation MATH adapts MATH to the grid $K_{n}$ . Hence, before performing the operation MATH , we apply such a transformation and remove grid-dependent details from consideration:

MATH (Same grid reduction)
MATH MATH MATH where MATH is calculated by projecting the final payoff on MATH : MATH

To see that such a transformation is correct, put MATH .
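Since the equations are elided here, the grid-reduction step can only be sketched in spirit. In the sketch below, two random matrices stand in for the adaptive basis selections (hypothetical proxies for $K_{n+1}$ and $K_{n}$), and the reduction is modeled as a least-squares projection onto the step-$n$ basis. The final check echoes the remark above: when the two selections coincide, the transformation reduces to the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50                                # sample points on the common grid
B_next = rng.standard_normal((m, 8))  # stand-in for the basis selected at step n+1
B_n = rng.standard_normal((m, 5))     # stand-in for the basis selected at step n

c_next = rng.standard_normal(8)       # coefficients of the solution at step n+1
u = B_next @ c_next                   # the function itself, sampled on the grid

def reduce_to(B, f):
    """Least-squares projection of f onto span(B); returns coefficients."""
    c, *_ = np.linalg.lstsq(B, f, rcond=None)
    return c

# Reduce u to the step-n basis before applying grid-dependent operations.
c_n = reduce_to(B_n, u)
u_reduced = B_n @ c_n

# The discarded part is orthogonal to span(B_n) ...
resid_ok = np.allclose(B_n.T @ (u - u_reduced), 0.0, atol=1e-8)

# ... and when the two selections coincide, the reduction is the identity.
identity_ok = np.allclose(B_next @ reduce_to(B_next, u), u)
```

This is only a model of the idea: the actual transformation in the text acts between adaptive wavelet selections, not random matrices, but the projection structure is the same.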





Copyright 2007