
Wavelet estimates in Sobolev spaces.


We proceed to extend the results of the sections (Vanishing moments for biorthogonal wavelets) and (Vanishing moments of wavelet) to the Sobolev spaces MATH, see the chapter (Sobolev spaces). The section (Construction of approximation spaces) is an important prerequisite.

Proposition

(Jackson inequality for wavelets) Assume the condition (Sparse tensor product setup). Then MATH for $m=0,1,...,n$.
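
For orientation only: a commonly stated Jackson-type estimate for an MRA $\left\{ V_{d}\right\} $ of approximation order $n$ reads $\inf_{f_{d}\in V_{d}}\left\Vert f-f_{d}\right\Vert _{H^{m}}\leq C2^{-d\left( n-m\right) }\left\Vert f\right\Vert _{H^{n}}$, $f\in H^{n}$, $m=0,1,...,n$. This particular form and the constant $C$ are an assumption made here for illustration; they are not a restatement of the condition (Sparse tensor product setup).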

Proof

According to the proposition (Bramble-Hilbert lemma), MATH for some choice of $\rho$ and $x_{0}$. The function MATH is a polynomial of degree $m-1$ in $z$. Thus, it is contained in MATH.

Because MATH have finite support, if we increase the scale $d$ then, perhaps starting from some scale $d_{0}$, we gain enough freedom to replicate polynomials on the subdivisions $\Delta_{d-d_{0},k}$ separately. Hence, the result $\left( \#\right) $ applies with MATH, MATH for some constant $C$. Substituting $\kappa=0$ into $\left( \#\right) $, we obtain $\left( \&\right) $.
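
As a purely illustrative numerical check of the Jackson-type decay (this is not part of the argument above; the Haar system on $[0,1]$, the test function and the helper name haar_projection_error are assumptions chosen only for this sketch), the following Python code projects a smooth function onto the piecewise-constant spaces $V_{d}$ and observes the $L^{2}$ error decaying roughly like $2^{-d}$, which corresponds to the $n=1$ case of the estimate.

import numpy as np

def haar_projection_error(f, d, n_quad=2**14):
    # L2 error of the orthogonal projection of f onto the Haar space V_d:
    # piecewise constants on the 2**d dyadic subintervals of [0, 1].
    x = (np.arange(n_quad) + 0.5) / n_quad            # midpoint quadrature nodes
    fx = f(x)
    cells = np.floor(x * 2**d).astype(int)            # dyadic cell of each node
    counts = np.bincount(cells, minlength=2**d)
    means = np.bincount(cells, weights=fx, minlength=2**d) / counts
    return np.sqrt(np.mean((fx - means[cells])**2))   # quadrature of ||f - P_d f||_{L2}

f = lambda x: np.sin(2.0 * np.pi * x)                 # smooth test function
for d in range(1, 8):
    e = haar_projection_error(f, d)
    print(f"d={d}  error={e:.3e}  error*2**d={e * 2**d:.3f}")  # last column levels off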

Corollary

(Jackson inequality for wavelets 2) Assume the condition (Sparse tensor product setup). Then MATH for any $w\in W_{d}$ and $m=0,1,...,n$.

Proof

For any $f_{d+1}\in V_{d+1}$ we have MATH for some MATH. Thus the proposition (Jackson inequality for wavelets) in this context reads MATH and we have freedom of choice for the pair $f_{d+1},f_{d}$. Hence the above must hold for any $f_{d}$, so MATH

Proposition

(Vanishing moments vs approximation 2) For $n\geq1$ assume that

1. MATH,

2. the derivative MATH is bounded on $\mathcal{R}$: MATH for some $C_{0}=const$,

3. a function $\psi$ has compact support,

4. MATH, $k=0,1,...,n-2$,

5. MATH

then there exists a constant MATH such that MATH

Proof

Observe that MATH by compactness of the support of $\psi$, and for $k=1,...,n-1$ we have (again by compactness of support) MATH The proposition (Vanishing moments vs approximation) then applies with the substitutions MATH, MATH and $n:=n-1$.
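
To see the mechanism in a model situation (an illustration under the assumed normalization $\psi_{d,k}\left( x\right) =2^{d/2}\psi\left( 2^{d}x-k\right) $, not a restatement of the proposition): if $\psi$ has compact support and $\int z^{j}\psi\left( z\right) dz=0$ for $j=0,...,n-1$, then every polynomial of degree at most $n-1$ pairs to zero with $\psi_{d,k}$, in particular the Taylor polynomial $T_{n-1}f$ of a smooth $f$ around a point of the support of $\psi_{d,k}$. Hence $\left\vert \int f\psi_{d,k}dx\right\vert =\left\vert \int\left( f-T_{n-1}f\right) \psi_{d,k}dx\right\vert \leq C\left\Vert f^{(n)}\right\Vert _{\infty}2^{-dn}\left\Vert \psi_{d,k}\right\Vert _{L^{1}}\leq C^{\prime}\left\Vert f^{(n)}\right\Vert _{\infty}2^{-d\left( n+1/2\right) }$, because $\left\vert f-T_{n-1}f\right\vert \leq C\left\Vert f^{(n)}\right\Vert _{\infty}2^{-dn}$ on the $2^{-d}$-sized support of $\psi_{d,k}$ and $\left\Vert \psi_{d,k}\right\Vert _{L^{1}}=2^{-d/2}\left\Vert \psi\right\Vert _{L^{1}}$.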

Proposition

(Vanishing moments vs approximation 3) For $n\geq1$ and $m=0,1,...,n-1$ assume that

1. a function MATH,

2. the derivative MATH is bounded on $\mathcal{R}$: MATH for some $C_{0}=const$,

3. a function $\psi$ has compact support,

4. MATH, $k=0,1,...,n-1-m$,

5. MATH

then there exists a constant MATH such that MATH

Proof

The proof is a simple extension of the proof of the previous proposition (Vanishing moments vs approximation 2).

Note that MATH

MATH (Derivative vs scale)
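
For reference, under the assumed normalization $\psi_{d,k}\left( x\right) =2^{d/2}\psi\left( 2^{d}x-k\right) $ (a convention taken here for concreteness) one has $\frac{d^{m}}{dx^{m}}\psi_{d,k}\left( x\right) =2^{dm}2^{d/2}\psi^{\left( m\right) }\left( 2^{d}x-k\right) $ and consequently $\left\Vert \frac{d^{m}}{dx^{m}}\psi_{d,k}\right\Vert _{L^{2}}=2^{dm}\left\Vert \psi^{\left( m\right) }\right\Vert _{L^{2}}$: each differentiation costs a factor $2^{d}$ while the $L^{2}$ normalization is preserved by the change of variable $z=2^{d}x-k$.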

Proposition

(Bernstein inequality for wavelets) Assume the condition (Sparse tensor product setup). Then for any $v\in V_{d}$ MATH for $m=0,1,...,r$.
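
For orientation only: a Bernstein-type estimate of this kind is commonly written as $\left\Vert v\right\Vert _{H^{m}}\leq C2^{dm}\left\Vert v\right\Vert _{L^{2}}$ for $v\in V_{d}$, $m=0,1,...,r$. This particular form is an assumption made for illustration, not a restatement of the proposition.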

Proof

First, we prove the result for $\psi$ on $\mathcal{R}$: MATH

The method of the proof is to assume that MATH does not hold and to arrive at a contradiction using the proposition (Vanishing moments vs approximation 3). If MATH does not hold then there exists an increasing sequence MATH, MATH such that MATH

Note that the scale operation does not alter the $L^{2}$-norm, see the formula (Property of scale and transport 2); hence we only need to estimate the numerator of MATH to show that, in fact, the LHS of MATH cannot blow up.

In the context of the proposition (Vanishing moments vs approximation 3) we take the sequence MATH, MATH then MATH Note that MATH Thus, for $f_{d}$ to $H^{m}$-approximate $\psi_{d,0}$, the $C^{r}$ max-norm has to grow like MATH: MATH We also use the formula (Derivative vs scale): MATH or MATH This estimate is in contradiction with MATH; thus, MATH is proven.

We extend the estimate to $W_{d}$ as follows. Let MATH; then MATH We apply MATH. MATH The MATH is a constant with respect to $d,k$: MATH. MATH We use the proposition (Frame property 2). MATH

Next, we extend the result to $V_{d}$. Let $v\in V_{d}$, MATH; then MATH

We have proven the estimate in the case of $\mathcal{R}$.

To extend the result to $\Delta$ we note that the procedure of the section (Construction of MRA and wavelets on half line or an interval) amounts to taking finite linear combinations within $V_{d}$.
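
As a companion numerical illustration of the Bernstein-type growth (again not part of the proof; the continuous piecewise-linear space on the dyadic grid of $[0,1]$ is used here as a stand-in for $V_{d}$, and the random sampling and the helper name bernstein_ratio are assumptions of this sketch), the ratio $\left\Vert v^{\prime}\right\Vert _{L^{2}}/\left\Vert v\right\Vert _{L^{2}}$ for a random element of the space grows essentially like $2^{d}$:

import numpy as np

def bernstein_ratio(d, rng, n_quad=2**15):
    # ||v'||_L2 / ||v||_L2 for a random continuous piecewise-linear v
    # on the dyadic grid of scale d over [0, 1].
    nodes = np.linspace(0.0, 1.0, 2**d + 1)
    vals = rng.standard_normal(2**d + 1)              # random nodal values
    x = (np.arange(n_quad) + 0.5) / n_quad            # midpoint quadrature nodes
    v = np.interp(x, nodes, vals)                     # the piecewise-linear function
    slopes = np.diff(vals) * 2**d                     # v' is constant on each cell
    dv = slopes[np.floor(x * 2**d).astype(int)]
    return np.sqrt(np.mean(dv**2)) / np.sqrt(np.mean(v**2))

rng = np.random.default_rng(0)
for d in range(2, 9):
    r = bernstein_ratio(d, rng)
    print(f"d={d}  ||v'||/||v||={r:.1f}  ratio/2**d={r / 2**d:.3f}")  # last column stays bounded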




