Contents

List of Tables
List of Figures

1 Theoretical Aspects of Implicit Finite Difference Methods
  1.1 Derivation of a Fully Implicit Scheme
    1.1.1 Exemplary Boundary Conditions for a European Put
    1.1.2 The Fully Implicit Scheme in Matrix Notation
  1.2 Solving Tridiagonal Systems of Linear Equations
  1.3 Derivation of the Crank-Nicolson Scheme
    1.3.1 Exemplary Boundary Conditions for a European Put
    1.3.2 The Crank-Nicolson Scheme in Matrix Notation
2 Numerical Application for a European Put
  2.1 Results and Interpretations
  2.2 Technical Implementation
3 Truncation Error and Rate of Convergence
  3.1 Richardson Extrapolation for the Fully Implicit Scheme
  3.2 Richardson Extrapolation for the Crank-Nicolson Scheme
  3.3 Preliminary Efficiency Analysis
  3.4 Technical Implementation
Appendix
  Spurious Oscillations for Crank-Nicolson
  Problems of Crank-Nicolson for Non-smooth Boundaries
  Continued Efficiency Analysis
References
List of Tables

1 Results for Richardson Extrapolation Implicit Scheme
2 Results for Richardson Extrapolation CN Scheme

List of Figures

1 Errors of numerical solutions (U - u_BSM(S, τ = T)) depending on S
2 3D Plot for Error of Numerical Solutions depending on S and τ
3 Errors of numerical solutions (U - u_BSM(S, τ = T)) depending on S (zoom)
4 Approximation Error for Implicit Scheme and Richardson Extrapolation
5 Comparison of Approximation Error for Implicit Scheme
6 Approximation Error for CN Scheme and Richardson Extrapolation
7 Comparison of Approximation Error for CN Scheme
8 Comparison of Approximation Error for both Schemes with RE
9 Influence of Total Number of Grid Points using relation N = (1/72)M²
10 Maximum Absolute Error as a Function of Computational Time with N = (1/72)M²
11 Influence of Total Number of Grid Points using relation N = (1/1.2)M
12 Maximum Absolute Error as a Function of Computational Time with N = (1/1.2)M
13 Comparison of Maximum Absolute Error as a Function of Computational Time
14 Comparison of the Effect of Varying Grid Points on Approximation Error
15 Comparison of Maximum Absolute Error for a fixed M = 120
16 Spurious Oscillation of CN Method for M = 2000 and N = 50
17 Spurious Oscillation of CN Method for M = 2000 and N = 50 (3D plot)
18 Right-hand-Boundary Problem of CN Method for M = 500 and N = 1000 (3D plot)
19 Approximation Error for fixed N and varying M
20 Convergence of the Approximation Error for N = 100 and varying M
21 Approximation Error for fixed M and varying N
1 Theoretical Aspects of Implicit Finite Difference Methods

Using an explicit scheme for an application of finite difference methods may lead to stability issues. If one wants to increase the accuracy by raising the number of spatial grid points, the number of time intervals has to be increased to a certain extent in order to sustain convergent behavior. As quite accurate results require an extremely large number of grid points in time, the practical use of the explicit scheme is rather limited due to the high computational effort. Implicit finite difference methods are designed to overcome these stability limitations imposed by the already mentioned convergence restrictions. Since such methods are unconditionally stable, both accuracy and limited computational effort can be combined.
1.1 Derivation of a Fully Implicit Scheme

As a starting point, assume that the price of an asset follows a geometric Brownian motion:

\[ dS_t = (r - q)S_t\,dt + \sigma S_t\,dW_t . \tag{1} \]

According to Hirsa (2013, p. 115), the value of a derivative on the financial asset specified in (1) must satisfy the Black-Scholes-Merton PDE subject to a given terminal condition at every point in time:

\[ \frac{1}{2}\sigma^2 S^2 F_{SS}(S, t) + (r - q)S F_S(S, t) - rF(S, t) + F_t(S, t) = 0 \quad \text{s.t. } F(S, T) = f(S, T) . \tag{2} \]

In the PDE, σ is the so-called diffusion coefficient of the underlying Brownian motion, S the price of the underlying asset, i.e. a stock, r the risk-free interest rate and q the continuously paid dividend rate.

In order to obtain a numerical solution for the above PDE, the initial variables should be transformed. Furthermore, the continuous pricing problem has to be converted into a discrete one by dividing the feasible time and space domain into a rectangular grid and substituting the partial derivatives with difference quotients. Following this idea, a new set of independent variables is introduced:

\[ x = S = ih, \quad i \in [0, M] \qquad \tau = T - t = nk, \quad n \in [0, N] \]

where i and n are index numbers for the discretized grid points, and h and k are the step sizes in space and time direction, respectively. Consequently, the rectangular grid reads:

\[ x \in [0, x_{\max}] \quad \text{with } x_{\max} = M \cdot h \qquad \tau \in [0, T] \quad \text{with } T = N \cdot k . \]

Subsequently, the financial derivative and the terminal condition are also transformed¹:

\[ F(S, t) = u(x, \tau) \qquad f(S, T) = f(x, 0) . \]

¹ By the definition of τ, the terminal condition becomes an initial condition.
As a final step, the PDE as given in (2) is discretized with the newly introduced variables² x = ih and τ = nk:

\[ a(i, n)u_{xx}(i, n) + b(i, n)u_x(i, n) + c(i, n)u(i, n) - u_\tau(i, n) = 0 \quad \text{s.t. } u(x, 0) = f(x, 0) \tag{3} \]

By comparing (2) and (3) one may simply define³:

\[ a(i, n) = a(i) = 0.5\,\sigma^2 i^2 h^2 \qquad b(i, n) = b(i) = (r - q)ih \qquad c(i, n) = c = -r \tag{4} \]
Since the functional form of the underlying partial derivatives in (3) is not known, they are approximated by differences using Taylor series expansions. For the implementation of the fully implicit scheme, the following approximations around x = ih and τ = (n + 1)k are used:

\[ u_{xx}(i, n+1) = \frac{1}{h^2}\left(U^{n+1}_{i+1} - 2U^{n+1}_i + U^{n+1}_{i-1}\right) + O(h^2) \]

\[ u_x(i, n+1) = \frac{1}{2h}\left(U^{n+1}_{i+1} - U^{n+1}_{i-1}\right) + O(h^2) \]

\[ u_\tau(i, n+1) = \frac{1}{k}\left(U^{n+1}_i - U^n_i\right) + O(k) \]

\[ u(i, n+1) = U^{n+1}_i + O(k) \]
In contrast to the explicit scheme, the fully implicit scheme employs a backward approximation for the partial time derivative. Plugging these approximations into (3) and dropping the error terms leads to

\[ \frac{a}{h^2}\left(U^{n+1}_{i+1} - 2U^{n+1}_i + U^{n+1}_{i-1}\right) + \frac{b}{2h}\left(U^{n+1}_{i+1} - U^{n+1}_{i-1}\right) + cU^{n+1}_i - \frac{1}{k}\left(U^{n+1}_i - U^n_i\right) = 0 , \]
which can be rearranged for an implicit solution of the values at the next time level:

\[ d_1(i, n+1)U^{n+1}_{i-1} + d_2(i, n+1)U^{n+1}_i + d_3(i, n+1)U^{n+1}_{i+1} = d_4(i, n+1) = U^n_i \tag{5} \]

where

\[ d_1(i, n+1) = d_1(i) = -a\frac{k}{h^2} + b\frac{k}{2h} = -\frac{1}{2}\sigma^2 i^2 k + \frac{1}{2}(r - q)ik \]

\[ d_2(i, n+1) = d_2(i) = a\frac{2k}{h^2} - ck + 1 = 1 + rk + \sigma^2 i^2 k \]

\[ d_3(i, n+1) = d_3(i) = -a\frac{k}{h^2} - b\frac{k}{2h} = -\frac{1}{2}\sigma^2 i^2 k - \frac{1}{2}(r - q)ik \]
In contrast to an explicit scheme, it is not possible to compute U^{n+1}_i directly from the already known values because three unknown values are linked to only one known value. However, since (5) has to hold for all i = 1, ..., M - 1, a system of M - 1 equations emerges for the M + 1 unknown values. Appropriate boundary conditions yield the two missing values for each time step and the terminal condition gives the values in the first time layer. Thus, after adding two additional equations (von-Neumann boundaries) or fixing two values in an appropriate manner (Dirichlet boundary), the N linear equation systems can be solved recursively in order to calculate the corresponding derivative price at τ = T.

² In the functional formulation, the step sizes h and k will be omitted as they are the same for all values. The index numbers i and n determine the underlying dynamics.
³ Note that all newly introduced coefficients are independent of the time index n.
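Since the coefficients in (5) depend only on the spatial index i, they can be precomputed once for all time layers. The following Python sketch (the paper's own implementation uses MATLAB; the parameter values here are purely illustrative) computes them and checks the row-sum identity d_1 + d_2 + d_3 = 1 + rk, which follows directly from the definitions because the σ-dependent parts cancel:

```python
def implicit_coefficients(i, sigma, r, q, k):
    """Time-invariant coefficients d1, d2, d3 of the fully implicit
    scheme in (5); they depend only on the spatial index i."""
    d1 = -0.5 * sigma**2 * i**2 * k + 0.5 * (r - q) * i * k
    d2 = 1.0 + r * k + sigma**2 * i**2 * k
    d3 = -0.5 * sigma**2 * i**2 * k - 0.5 * (r - q) * i * k
    return d1, d2, d3

# the sigma-dependent parts cancel in the row sum, leaving 1 + r*k
d1, d2, d3 = implicit_coefficients(i=10, sigma=0.2, r=0.05, q=0.0, k=0.01)
print(abs(d1 + d2 + d3 - (1 + 0.05 * 0.01)) < 1e-12)  # row-sum identity holds
```

The identity is a convenient sanity check for an implementation: if a typo sneaks into one of the three formulas, the row sums almost certainly stop equaling 1 + rk.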
1.1.1 Exemplary Boundary Conditions for a European Put

The choice of boundary conditions can be crucial for ensuring the accuracy of the pricing algorithm given in (5), as any error on a boundary is propagated through the ongoing finite difference scheme. This makes the entire scheme quite sensitive to the boundary conditions (Hirsa, 2013, p. 121). Hence, a sensible and complete formulation of these conditions at i = 0 and i = M is the key to a successful implementation of the underlying pricing problem.

If a European put is the underlying derivative for the recursive pricing algorithm given in (5), one can simply specify the starting point for the implicit scheme by taking the terminal values of a plain-vanilla European put:

\[ f(S, T) = u(x, 0) = \max(E - S, 0) \;\Rightarrow\; U^0_i = \max(E - ih, 0) \quad i \in [0, M] . \tag{6} \]

For the left-sided boundary, i.e. when the stock price is 0, a Dirichlet boundary can be used since it can be assumed that the option value just equals the discounted strike price:

\[ U^n_0 = E e^{-r(nk)} \quad n \in [1, N] . \tag{7} \]
Hence for i = 1, the equation system in (5) reads

\[ d_2(1, n+1)U^{n+1}_1 + d_3(1, n+1)U^{n+1}_2 = \underbrace{d_4(1, n+1) - d_1(1, n+1)E e^{-r(n+1)k}}_{\tilde d_4(1, n+1)} . \tag{8} \]
As there is no "natural" bound on the right side of the grid, an artificial one needs to be introduced. It can be assumed that for very high stock prices the option value does not change if the price of the underlying stock changes slightly, i.e. the Delta of the European put is equal to 0 at i = M. Thus, the central approximation for the first derivative around i = M can be used and set equal to zero:

\[ \frac{\partial U^{n+1}_M}{\partial x} = \frac{1}{2h}\left(U^{n+1}_{M+1} - U^{n+1}_{M-1}\right) \overset{!}{=} 0 . \tag{9} \]
Additionally, (5) is also valid at i = M:

\[ d_1(M, n+1)U^{n+1}_{M-1} + d_2(M, n+1)U^{n+1}_M + d_3(M, n+1)U^{n+1}_{M+1} = d_4(M, n+1) . \tag{10} \]
Solving (9) for U^{n+1}_{M+1} = U^{n+1}_{M-1} gives the possibility to eliminate U^{n+1}_{M+1} in (10) in order to obtain

\[ \underbrace{\left(d_1(M, n+1) + d_3(M, n+1)\right)}_{\tilde d_1(M, n+1)} U^{n+1}_{M-1} + d_2(M, n+1)U^{n+1}_M = U^n_M = d_4(M, n+1) . \tag{11} \]

The expression in (11) adds an additional equation to the linear system of equations such that there are now M equations for M unknowns at each time layer n.⁴ At first glance, it seems that the implicit scheme requires much more computational effort compared to an explicit scheme, as a system of equations needs to be solved for every n. However, if the implicit scheme is written in matrix notation, it exhibits a quite nice structure. As shown below, the matrix is tridiagonal, which simplifies the solution of the system dramatically.

⁴ Note that the value U^{n+1}_0 is fixed via a Dirichlet boundary and is therefore not an unknown anymore.
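The initial layer (6) and the left Dirichlet boundary (7) are straightforward to set up. A minimal Python sketch (illustrative only; the function and parameter names E, h, M are assumptions, not taken from the paper's code):

```python
import math

def initial_layer(E, h, M):
    """Terminal put payoff mapped to the first time layer, cf. (6)."""
    return [max(E - i * h, 0.0) for i in range(M + 1)]

def left_boundary(E, r, k, n):
    """Discounted strike at i = 0, cf. (7)."""
    return E * math.exp(-r * n * k)

U0 = initial_layer(E=100.0, h=10.0, M=20)
print(U0[0], U0[10], U0[20])  # full strike at S = 0; zero at and beyond the strike
```

Note that the right-sided Neumann condition (9) needs no explicit value at all: it is absorbed into the last matrix row via (11).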
1.1.2 The Fully Implicit Scheme in Matrix Notation

The M equations given in (5), (8) and (11) can be written in compact form using matrix notation:

\[ \underset{M \times M}{A} \times \underset{M \times 1}{U^{n+1}} = \underset{M \times 1}{d_4} \tag{12} \]

\[ \begin{pmatrix} d_2(1) & d_3(1) & & & \\ d_1(2) & d_2(2) & d_3(2) & & 0 \\ & & \ddots & & \\ 0 & & d_1(M-1) & d_2(M-1) & d_3(M-1) \\ & & & d_1(M) & d_2(M) \end{pmatrix} \times \begin{pmatrix} U^{n+1}_1 \\ U^{n+1}_2 \\ \vdots \\ U^{n+1}_{M-1} \\ U^{n+1}_M \end{pmatrix} = \begin{pmatrix} d_4(1) \\ d_4(2) \\ \vdots \\ d_4(M-1) \\ d_4(M) \end{pmatrix} \]

The straightforward method to solve (12) for U^{n+1} involves taking the inverse of A. However, in practice there are far more efficient solution techniques than matrix inversion. The matrix A has the property that it is tridiagonal, which means that only the diagonal, the superdiagonal and the subdiagonal elements are non-zero. For such special systems of equations, two efficient methods are outlined in the next section.
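To make the structure of (12) concrete, the following Python sketch assembles A row by row from the coefficients in (5), with the fold-in d_1(M) + d_3(M) from (11) in the last row. A dense list-of-lists representation is used only for illustration; in practice one would store just the three diagonals (the paper's implementation is in MATLAB, and all parameter values here are assumptions):

```python
def assemble_tridiagonal(M, sigma, r, q, k):
    """Dense M x M matrix A of (12) for the fully implicit scheme."""
    def coeffs(i):
        d1 = -0.5 * sigma**2 * i**2 * k + 0.5 * (r - q) * i * k
        d2 = 1.0 + r * k + sigma**2 * i**2 * k
        d3 = -0.5 * sigma**2 * i**2 * k - 0.5 * (r - q) * i * k
        return d1, d2, d3

    A = [[0.0] * M for _ in range(M)]
    for row, i in enumerate(range(1, M + 1)):  # rows correspond to i = 1..M
        d1, d2, d3 = coeffs(i)
        A[row][row] = d2
        if row > 0:
            A[row][row - 1] = d1
        if row < M - 1:
            A[row][row + 1] = d3
        else:
            # right Neumann boundary folds d3(M) into the subdiagonal, cf. (11)
            A[row][row - 1] = d1 + d3
    return A

A = assemble_tridiagonal(M=5, sigma=0.2, r=0.05, q=0.0, k=0.01)
# only the diagonal, super- and subdiagonal entries are non-zero
print(all(A[i][j] == 0.0 for i in range(5) for j in range(5) if abs(i - j) > 1))
```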
1.2 Solving Tridiagonal Systems of Linear Equations

In order to solve (12) for U^{n+1}, the tridiagonal structure of the underlying system can be exploited. The first proposed method involves a matrix decomposition into an upper and a lower triangular matrix, which makes it possible to subdivide the given problem into two minor problems. Assume A × U^{n+1} = d_4 is decomposed into

\[ A \times U^{n+1} = (L \times R)U^{n+1} = L(R \times U^{n+1}) = L \times y = d_4 . \tag{13} \]

This means that the first step employs an LU-type decomposition⁵ according to A = L × R:
\[ \underbrace{\begin{pmatrix} d_2 & d_3 & & & \\ d_1 & d_2 & d_3 & & \\ & & \ddots & & \\ & & d_1 & d_2 & d_3 \\ & & & d_1 & d_2 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} \lambda_1 & & & & \\ l_1 & \lambda_2 & & & \\ & l_2 & \lambda_3 & & \\ & & & \ddots & \\ & & & l_{M-1} & \lambda_M \end{pmatrix}}_{L} \times \underbrace{\begin{pmatrix} 1 & r_1 & & & \\ & 1 & r_2 & & \\ & & & \ddots & \\ & & & 1 & r_{M-1} \\ & & & & 1 \end{pmatrix}}_{R} . \tag{14} \]

From (14) it follows for the first values of λ and r

\[ d_2(1) = \lambda_1 \;\Rightarrow\; \lambda_1 = d_2(1) \tag{15} \]

\[ d_3(1) = \lambda_1 r_1 \;\Rightarrow\; r_1 = \frac{d_3(1)}{\lambda_1} = \frac{d_3(1)}{d_2(1)} \]

⁵ In order to keep it well-arranged, the spatial indices are omitted. However, it should be kept in mind that the values for d_1, d_2 and d_3 are different in every row of A.
and then in general

\[ r_{i-1} = \frac{d_3(i-1)}{\lambda_{i-1}} \quad i = 2, \ldots, M \]

\[ \lambda_i = d_2(i) - d_1(i)r_{i-1} \quad i = 2, \ldots, M \tag{16} \]

\[ l_i = d_1(i) \quad i = 2, \ldots, M . \]

Due to the fact that all values in A are time-invariant, this decomposition only needs to be done once, as it is the same for every time step⁶. Subsequently, L × y = d_4 can be solved by forward iteration. As d_4 is time-dependent, this step has to be done for every n.
\[ \underbrace{\begin{pmatrix} \lambda_1 & & & \\ l_1 & \lambda_2 & & \\ & \ddots & \ddots & \\ & & l_{M-1} & \lambda_M \end{pmatrix}}_{L} \times \underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{M-1} \\ y_M \end{pmatrix}}_{y} = \underbrace{\begin{pmatrix} d_4(1) \\ d_4(2) \\ \vdots \\ d_4(M-1) \\ d_4(M) \end{pmatrix}}_{d_4} \tag{17} \]

According to (17), it follows directly that

\[ \lambda_1 y_1 = d_4(1) \;\Rightarrow\; y_1 = \frac{d_4(1)}{\lambda_1} \tag{18} \]

\[ y_i = \frac{1}{\lambda_i}\left(d_4(i) - d_1(i)y_{i-1}\right) \quad i = 2, \ldots, M \tag{19} \]
which can be used for a backward iterative solution of R × U^{n+1} = y:

\[ \underbrace{\begin{pmatrix} 1 & r_1 & & & \\ & 1 & r_2 & & \\ & & & \ddots & \\ & & & 1 & r_{M-1} \\ & & & & 1 \end{pmatrix}}_{R} \times \underbrace{\begin{pmatrix} U^{n+1}_1 \\ U^{n+1}_2 \\ \vdots \\ U^{n+1}_{M-1} \\ U^{n+1}_M \end{pmatrix}}_{U^{n+1}} = \underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{M-1} \\ y_M \end{pmatrix}}_{y} . \tag{20} \]
Using (20), one can finally solve for the unknown values U^{n+1}:

\[ U^{n+1}_M = y_M \tag{21} \]

\[ U^{n+1}_i = y_i - r_i U^{n+1}_{i+1} = y_i - \frac{d_3(i)}{\lambda_i}U^{n+1}_{i+1} \quad i = M-1, \ldots, 1 . \tag{22} \]

Although this workaround seems quite time-consuming, solving a tridiagonal system of equations with the above algorithm requires only a little more computational effort compared to an explicit scheme, to be exact twice as many operations per time step (Wilmott et al., 1995, p. 144). Yet it can be shown that the implicit scheme is unconditionally stable, which means that it is not necessary to increase N to really large numbers for a finer spatial grid⁷. This more than compensates for the time lost in solving the above linear system. Thus, implicit methods can be considered superior to explicit methods.

⁶ It should be noted that actually only (15) and (16) are needed for later steps. Hence, only these values must be stored.
⁷ A proof of the unconditional stability of the implicit scheme is given by Brandimarte (2006, p. 311).
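The decomposition (15)-(16) followed by the forward sweep (18)-(19) and the backward sweep (21)-(22) fits into a few lines of Python (a sketch of the algorithm above; the paper's actual implementation is in MATLAB). As noted in footnote 6, only the λ and r values need to be stored between time steps:

```python
def lu_factor_tridiagonal(d1, d2, d3):
    """Compute lam and r of (15)-(16) for bands d1 (sub), d2 (diag), d3 (super).
    d1[0] and d3[-1] are unused."""
    M = len(d2)
    lam = [0.0] * M
    r = [0.0] * (M - 1)
    lam[0] = d2[0]
    for i in range(1, M):
        r[i - 1] = d3[i - 1] / lam[i - 1]
        lam[i] = d2[i] - d1[i] * r[i - 1]
    return lam, r

def lu_solve_tridiagonal(d1, lam, r, d4):
    """Forward sweep (18)-(19), then backward sweep (21)-(22)."""
    M = len(lam)
    y = [0.0] * M
    y[0] = d4[0] / lam[0]
    for i in range(1, M):
        y[i] = (d4[i] - d1[i] * y[i - 1]) / lam[i]
    U = [0.0] * M
    U[-1] = y[-1]
    for i in range(M - 2, -1, -1):
        U[i] = y[i] - r[i] * U[i + 1]
    return U

# quick check on a small system with exact solution (1, 1, 1)
d1, d2, d3 = [0.0, -1.0, -1.0], [4.0, 4.0, 4.0], [-1.0, -1.0, 0.0]
lam, r = lu_factor_tridiagonal(d1, d2, d3)
U = lu_solve_tridiagonal(d1, lam, r, [3.0, 2.0, 3.0])
print(U)  # solves 4x1 - x2 = 3, -x1 + 4x2 - x3 = 2, -x2 + 4x3 = 3
```

The factorization is called once before the time loop; only `lu_solve_tridiagonal` runs on every layer, which is where the O(M)-per-step cost claim above comes from.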
For the sake of completeness, an algorithm for a Gaussian elimination is given as a second method in the following. As for the first method, there are some calculations which do not involve d_4 and are therefore time-invariant:

\[ d_1(i) := \frac{d_1(i)}{d_2(i-1)} \quad i = 2, \ldots, M \tag{23} \]

\[ d_2(i) := d_2(i) - d_1(i)d_3(i-1) \quad i = 2, \ldots, M . \tag{24} \]

As a next step, one needs to do a forward iteration according to

\[ d_4(i) := d_4(i) - d_1(i)d_4(i-1) \quad i = 2, \ldots, M \tag{25} \]

and finally a backward iteration which results in

\[ U^{n+1}_M = \frac{d_4(M)}{d_2(M)} \tag{26} \]

\[ U^{n+1}_i = \frac{d_4(i) - d_3(i)U^{n+1}_{i+1}}{d_2(i)} \quad i = M-1, \ldots, 1 . \tag{27} \]

As for the first method, the forward and backward iterations have to be done on each time layer n. The implementation of both methods using MATLAB and a comparison with respect to computational time are given below.
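Putting the pieces together, a compact Python sketch prices a European put with the fully implicit scheme, using the elimination steps (23)-(27), the initial layer (6), the Dirichlet boundary (7) via (8), and the Neumann fold-in (11). This is an illustrative translation, not the paper's MATLAB code, and the market parameters at the bottom are assumptions:

```python
import math

def implicit_put(E, S_max, T, sigma, r, q, M, N):
    """Fully implicit scheme for a European put, cf. (5)-(11) and (23)-(27)."""
    h, k = S_max / M, T / N
    # bands for rows i = 1..M
    d1 = [-0.5 * sigma**2 * i**2 * k + 0.5 * (r - q) * i * k for i in range(1, M + 1)]
    d2 = [1.0 + r * k + sigma**2 * i**2 * k for i in range(1, M + 1)]
    d3 = [-0.5 * sigma**2 * i**2 * k - 0.5 * (r - q) * i * k for i in range(1, M + 1)]
    d1[-1] += d3[-1]                       # Neumann fold-in (11): d1(M) + d3(M)
    for i in range(1, M):                  # time-invariant eliminations (23)-(24)
        d1[i] /= d2[i - 1]
        d2[i] -= d1[i] * d3[i - 1]
    U = [max(E - i * h, 0.0) for i in range(M + 1)]   # initial layer (6)
    for n in range(1, N + 1):
        left = E * math.exp(-r * n * k)               # Dirichlet boundary (7)
        d4 = U[1:]                                    # right-hand side, rows i = 1..M
        d4[0] -= d1[0] * left                         # boundary adjustment (8)
        for i in range(1, M):                         # forward iteration (25)
            d4[i] -= d1[i] * d4[i - 1]
        x = [0.0] * M                                 # backward iteration (26)-(27)
        x[-1] = d4[-1] / d2[-1]
        for i in range(M - 2, -1, -1):
            x[i] = (d4[i] - d3[i] * x[i + 1]) / d2[i]
        U = [left] + x
    return U  # option values at tau = T on the grid S = i*h

U = implicit_put(E=100.0, S_max=200.0, T=1.0, sigma=0.2, r=0.05, q=0.0, M=100, N=100)
print(0.0 < U[50] < 100.0, U[-1] < 0.1)  # ATM value is positive; far-right value near zero
```

Note that d1[0] is never touched by the elimination loop, so it still holds the original d_1(1) needed for the boundary adjustment in (8).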
1.3 Derivation of the Crank-Nicolson Scheme

So far, the explicit and the implicit scheme were derived using a forward or a backward approximation in time, respectively. As a central approximation for the first derivative in space was proven to be more accurate than a forward or backward approximation, one could follow the same line of argumentation and simply combine them into a central approximation with respect to time. However, such approximations with step size k in each direction usually result in very unstable solutions, which makes them defective for practical use (Wilmott et al., 1995, p. 138).

This gave rise to the Crank-Nicolson (CN) method by Crank & Nicolson (1947), who managed to combine a forward and backward approximation in a feasible way in order to make use of a more accurate central approximation in time. The basic idea is to consider a grid point which is actually not on the underlying grid but positioned halfway between two time layers⁸. This means that (2) is discretized around x = ih and τ = (n + 1/2)k. For a discretization around this point, there is no other possibility than using a central approximation, since a forward or backward approximation would involve a difference quotient with a grid point outside the actual grid. For a derivation of this central approximation, one needs to consider a Taylor series expansion in time direction using an appropriate time step. Obviously the appropriate time step is k/2:
\[ u\left(x, \tau \pm \frac{k}{2}\right) = u(x, \tau) \pm \frac{k}{2}u_\tau + \frac{1}{2}\left(\frac{k}{2}\right)^2 u_{\tau\tau} \pm \frac{1}{6}\left(\frac{k}{2}\right)^3 u_{\tau\tau\tau} + \ldots \tag{28} \]

⁸ For the original CN method the grid point is halfway between n and n + 1. However, there are also other methods which vary the positioning between two time layers.
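Subtracting the two expansions in (28) cancels the even-order terms, which is what gives the CN scheme its second-order accuracy in time: u_τ(x, τ) = [u(x, τ + k/2) - u(x, τ - k/2)]/k + O(k²). A small numerical check confirms the order (f(t) = eᵗ serves as a stand-in smooth function, purely for illustration): halving k should reduce the error by a factor of about four.

```python
import math

def central_time_diff(f, t, k):
    """Central difference over one full step k, evaluated at the half-steps,
    as obtained by subtracting the two expansions in (28)."""
    return (f(t + k / 2) - f(t - k / 2)) / k

# exact derivative of exp at t = 0 is 1; the error should scale like k^2
err = [abs(central_time_diff(math.exp, 0.0, k) - 1.0) for k in (0.1, 0.05, 0.025)]
ratios = [err[i] / err[i + 1] for i in range(2)]
print(all(3.5 < rho < 4.5 for rho in ratios))  # factor ~4 per halving => O(k^2)
```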
Excerpt out of 32 pages.
Pascal Sturm (Author), 2016, Finite difference methods with an implicit scheme, Munich, GRIN Verlag, https://www.grin.com/document/373359