
I. Wavelet calculations.
II. Calculation of approximation spaces in one dimension.
III. Calculation of approximation spaces in one dimension II.
IV. One dimensional problems.
V. Stochastic optimization in one dimension.
1. Review of variational inequalities in maximization case.
2. Penalized problem for mean reverting equation.
3. Impossibility of backward induction.
4. Stochastic optimization over wavelet basis.
A. Choosing probing functions.
B. Time discretization of penalty term.
C. Implicit formulation of penalty term.
D. Smooth version of penalty term.
E. Solving equation with implicit penalty term.
F. Removing stiffness from penalized equation.
G. Mix of backward induction and penalty term approaches I.
H. Mix of backward induction and penalty term approaches I. Implementation and results.
I. Mix of backward induction and penalty term approaches II.
J. Mix of backward induction and penalty term approaches II. Implementation and results.
K. Review. How does it extend to multiple dimensions?
VI. Scalar product in N-dimensions.
VII. Wavelet transform of payoff function in N-dimensions.
VIII. Solving N-dimensional PDEs.

Time discretization of penalty term.


In this section we consider time discretization of the expression MATH The term $\Omega$ depends on a function MATH that comes from the equation in which $\Omega$ participates. The theory of numerical methods for ODEs faces a similar problem and offers an effective solution. We review that theory.
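For orientation only: penalty terms of this kind commonly take the form
$$\Omega(u)=\frac{1}{\varepsilon}\,(g-u)_{+},$$
where $g$ is an obstacle (payoff-type) function and $\varepsilon$ is a small penalty parameter; this concrete form is an assumption made here for illustration and is not a reproduction of the expression above. The operation $(\cdot)_{+}$ in such a term is the source of the non-smoothness discussed at the end of this section.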

Consider the ODE MATH The time evolution runs backwards from $T$ to $0$. The Taylor decomposition applies to $u$ and MATH in all arguments. The terminal value $u_{T}$ is an input.
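For definiteness, and as an assumption consistent with the references to $f$ and $u_{T}$ below, the reviewed ODE may be written in the standard form
$$\frac{du}{dt}\left(t\right)=f\left(t,u\left(t\right)\right),\quad u\left(T\right)=u_{T},\quad 0\leq t\leq T,$$
with the solution recovered by stepping from $t=T$ down to $t=0$.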

We introduce a time mesh MATH and a mesh function $y_{n}$ defined recursively by MATH Thus MATH We proceed to estimate the magnitude of the difference MATH for all $n$. We calculate MATH We subtract and use MATH: MATH
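A minimal sketch of such a mesh and recursion, assuming the simplest (explicit Euler-type) discretization consistent with the first-order estimate derived next:
$$0=t_{0}<t_{1}<\dots<t_{N}=T,\quad \Delta t_{n}=t_{n+1}-t_{n},$$
$$y_{N}=u_{T},\quad y_{n}=y_{n+1}-\Delta t_{n}\,f\left(t_{n+1},y_{n+1}\right),\quad n=N-1,\dots,0.$$
Subtracting this from the Taylor expansion of $u\left(t_{n}\right)$ around $t_{n+1}$ leaves a local error of order $\Delta t_{n}^{2}$ per step.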

For a general time step we calculate MATH MATH MATH thus MATH Then MATH assuming that all the $\Delta t_{n}$ are of roughly the same magnitude.
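A compact form of the standard accumulation argument, written here with an assumed Lipschitz constant $L$ of $f$ in $u$ and the notation $e_{n}=u\left(t_{n}\right)-y_{n}$:
$$\left\vert e_{n}\right\vert \leq\left(1+L\,\Delta t_{n}\right)\left\vert e_{n+1}\right\vert +C\,\Delta t_{n}^{2},\quad e_{N}=0,$$
so over the $N\sim T/\Delta t$ steps the local errors of order $\Delta t^{2}$ accumulate into a global error of order $\Delta t$.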

Following the recipes of the Runge-Kutta technique, we introduce a better approximation $y$ as follows: MATH We seek parameters MATH that deliver the smallest difference MATH.
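A hedged sketch of a two-stage step of the kind meant here, with the parameters $\alpha,\beta,\gamma$ introduced purely for illustration:
$$k_{1}=f\left(t_{n+1},y_{n+1}\right),\quad k_{2}=f\left(t_{n+1}-\gamma\,\Delta t_{n},\;y_{n+1}-\gamma\,\Delta t_{n}\,k_{1}\right),$$
$$y_{n}=y_{n+1}-\Delta t_{n}\left(\alpha\,k_{1}+\beta\,k_{2}\right).$$
The parameters are then fixed by matching the Taylor expansions of $y_{n}$ and $u\left(t_{n}\right)$ to second order, which is the calculation carried out next.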

We calculate the evolution equation for $y$: MATH Therefore

MATH (Evolution of y)
We use the Taylor expansion for $u$: MATH substitute the time derivatives using the defining equation for $u$: MATH and put everything together:
MATH (Evolution of u)
By comparing the formulas (Evolution of y) and (Evolution of u), we require MATH We set MATH and then MATH MATH
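For the two-stage family sketched above, matching the expansions to second order yields the familiar order conditions (the parameter values actually chosen in the text may differ):
$$\alpha+\beta=1,\quad\beta\,\gamma=\tfrac{1}{2},$$
so that, for example, $\alpha=\beta=\tfrac{1}{2}$, $\gamma=1$ (Heun's scheme) or $\alpha=0$, $\beta=1$, $\gamma=\tfrac{1}{2}$ (the midpoint scheme) both deliver a local error of order $\Delta t_{n}^{3}$ and a global error of order $\Delta t^{2}$.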

At the initial time step $n=N$ we get MATH and the higher order then propagates through the recursion.

The crucial requirement is smoothness of $f$. In our case (see the expression for $\Omega$) the function $F$ would jump. Hence, the expression MATH is not small for those values of $u$ and $y_{n}$ that lie on opposite sides of the jump. Given the nature of the problem, such values are typical.

One might try to replace the operation $x_{+}$ within the function $\Omega$ with a smooth function having similar properties: MATH The function $\omega$ must be zero where $x_{+}$ is zero. But then, to be smooth, it must increase gently on the other side of the MATH area. One could argue that this would hurt the purpose of the penalty term, dampening the convergence of the procedure and requiring a higher $\varepsilon$. But one could also argue that this is exactly what we want: we are not certain about the desired strength of the penalty term, and thus we would like to employ a gradual procedure of high order.
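One concrete smoothing of this kind, given only as an illustration (the text does not fix a particular choice), replaces $x_{+}$ by a function that is identically zero for $x\leq0$, quadratic on a small interval $\left(0,\delta\right)$, and linear with slope one beyond $\delta$; it is continuously differentiable and converges to $x_{+}$ as $\delta\rightarrow0$. A short Python sketch, where the name smooth_plus and the width parameter delta are hypothetical:

    import numpy as np

    def smooth_plus(x, delta=1e-2):
        # Illustrative C^1 replacement for x_+ :
        # zero for x <= 0, quadratic x^2/(2*delta) on (0, delta],
        # linear x - delta/2 beyond delta.  The width delta is a free
        # smoothing parameter, not something prescribed by the text.
        x = np.asarray(x, dtype=float)
        return np.where(x <= 0.0, 0.0,
                        np.where(x <= delta, x * x / (2.0 * delta), x - delta / 2.0))

The value and the first derivative match at $x=0$ and at $x=\delta$, so the resulting $\Omega$ is once continuously differentiable while remaining zero wherever $x_{+}$ is zero.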

In the following chapter (Smooth version of penalty term) we will point out a reason why, in fact, we must make such a modification.




