Saturday, October 8, 2011

Implicit methods.


We've already done a pretty simple example of a finite difference method. Remember that the main idea was to replace $y'(x_0)$ with
\begin{align}
\frac{y(x_0 + h) - y(x_0)}{h}
\end{align}
for some step size $h$. This is equivalent to saying, "the slope at point $x_0$ can be approximated by looking forward $h$ units." This was nice and gave us pretty good looking results. But... well... it doesn't always work.

Let's think about this really simple initial value problem.
\begin{align}
y'(x) &= -5 y(x) \\
y(0)&=1
\end{align}
This has a solution of $y(x) = e^{-5x}$, which you can verify.
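The verification is quick: differentiating gives
\begin{align}
y'(x) = \frac{d}{dx} e^{-5x} = -5 e^{-5x} = -5 y(x),
\end{align}
and $y(0) = e^{0} = 1$, so both conditions hold.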

Let's throw that earlier finite difference method at this problem. Substituting the forward difference for $y'$ gives
\begin{align}
\frac{y_{i+1} - y_i}{h} = -5 y_i,
\end{align}
which rearranges to
\begin{align}
y_{i+1} = (1 - 5h)\, y_i
\end{align}
with $y_0 = 1$.
Here's the Python code to do this, with a deliberately large step size $h = 1$:

import numpy as np
import matplotlib.pyplot as p

h = 1.0

x = np.arange(0, 12 + h, h)
y = np.empty(len(x))

y[0] = 1

# Forward Euler: y_{i+1} = (1 - 5h) y_i
for i in range(len(x) - 1):
    y[i+1] = (1 - 5*h)*y[i]

p.plot(x, y)
p.savefig("bad.png")

The code generates a graph that looks like this.
Note that the y-axis goes as high as $10^7$. This is not a graph of $y(x) = e^{-5x}$, like we wanted. Just look at that recursion relation from before:
\begin{align}
y_{i+1} = (1 - 5h)\, y_i.
\end{align}
We could write the general form of this as
\begin{align}
y_i = (1 - 5h)^i.
\end{align}
With $h = 1$, every step multiplies by $-4$, so the iterates oscillate and explode instead of decaying.
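As a numerical sanity check (a quick sketch, taking $h = 1$ and $y_0 = 1$), the loop really does follow this geometric growth:

```python
h = 1.0
y = 1.0
for i in range(12):
    y = (1 - 5*h)*y   # one Forward Euler step for y' = -5y

# After 12 steps the iterate equals (1 - 5h)^12 = (-4)^12
print(y)              # about 1.7e7, nothing like e^{-5x}
```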

The reason this doesn't work very well is that the finite difference method we used, known as the Forward Euler Method, an explicit method, is unstable here: for this equation it only decays when $|1 - 5h| < 1$, i.e. when $h < 2/5$. Now, there is another finite difference method called the Backward Euler Method, an implicit method, which is stable for any step size. It works in a similar fashion to the Forward Euler Method. We just replace $y'(x_0)$ with this instead:
\begin{align}
\frac{y(x_0) - y(x_0 - h)}{h}.
\end{align}
 This is just like saying "the slope at $x_0$ can be approximated by looking backwards by $h$ units."

Let's make the replacement:
\begin{align}
\frac{y(x_0) - y(x_0 - h)}{h} = -5y(x_0)
\end{align}
which, writing $y_i = y(x_0)$ and $y_{i-1} = y(x_0 - h)$, can be solved for $y_i$ to say
\begin{align}
y_i = \frac{y_{i-1}}{1 + 5h}.
\end{align}
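The contrast between the two schemes is easiest to see in the per-step amplification factor: Forward Euler multiplies the previous value by $(1 - 5h)$, while Backward Euler multiplies it by $1/(1 + 5h)$, whose magnitude is below 1 for every $h > 0$. A quick sketch:

```python
# Per-step amplification factors for y' = -5y
for h in [0.01, 0.1, 0.5, 1.0]:
    forward  = 1 - 5*h        # Forward Euler: unstable once |1 - 5h| > 1
    backward = 1/(1 + 5*h)    # Backward Euler: always in (0, 1)
    print(h, forward, backward)
```

Forward Euler misbehaves once $h > 2/5$; Backward Euler never does, which is the payoff for solving an implicit equation at each step.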

The new code looks essentially the same (with $h = 0.1$ on $[0, 1]$) except for the iteration:

for i in range(len(x) - 1):
    y[i+1] = y[i]/(1 + 5*h)

So this gives us a new graph.

Well, that's looking much better. For comparison, let's plot this next to the actual solution.

Approximation with $h = 0.1$.
So we're certainly closer than before. What if we refine $h$ to 1/100?

Approximation with $h = 0.01$.
Hey, that's excellent! Just like before, we seem to be getting convergence by reducing $h$.
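We can put a number on "excellent": compare the Backward Euler iterates against $e^{-5x}$ at both step sizes. Backward Euler is a first-order method, so shrinking $h$ tenfold should shrink the error by roughly a factor of ten. A sketch:

```python
import numpy as np

def backward_euler_error(h):
    # Max error of Backward Euler for y' = -5y, y(0) = 1, on [0, 1]
    x = np.arange(0, 1 + h, h)
    y = np.empty(len(x))
    y[0] = 1
    for i in range(len(x) - 1):
        y[i+1] = y[i]/(1 + 5*h)
    return np.max(np.abs(y - np.exp(-5*x)))

print(backward_euler_error(0.1))    # around 0.08
print(backward_euler_error(0.01))   # roughly ten times smaller
```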

Remember how easy our problem was? It was a first-order linear ODE. What if it were nonlinear? How would we isolate that $y_{i+1}$ term? Consider this initial value problem:
\begin{align}
y'(x) &= \log(y(x)) \\
y(0) &= 1
\end{align}
We'll make the Backwards Euler Method substitution
\begin{align}
y_{i+1} = y_i + h \log(y_{i+1}).
\end{align}
Uh, oh. We don't know $\log(y_{i+1})$ because we don't know $y_{i+1}$. This is where the term implicit comes up. We need $y_{i+1}$ in order to calculate $y_{i+1}$? That seems impossible! Well, as it turns out, it's not. Let's call $u = y_{i+1}$ and make the replacement.
\begin{align}
u = y_i + h\log(u)
\end{align}
or equivalently
\begin{align}
0 = y_i + h\log(u) - u.
\end{align}
Let's define the function
\begin{align}
f(u) = y_i + h\log(u) - u.
\end{align}
Now, if we can find a $u$ that makes $f(u) = 0$, then we've found $y_{i+1}$! Since this function is nonlinear, we'll use a numerical method to find that $u$. Let's use Newton's Method. The Python code for Newton's Method is short and sweet because we can treat functions just like objects!

def newton(x0, f, fprime, tol = 1e-10):
    x = x0
    while abs(f(x)) > tol:
        x = x - f(x) / fprime(x)
    return x
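As a quick sanity check (an illustrative root-finding problem, not part of our ODE), we can point this at $f(x) = x^2 - 2$ and expect $\sqrt{2}$ back. The routine is repeated here so the snippet runs on its own:

```python
def newton(x0, f, fprime, tol = 1e-10):
    # Newton's Method, repeated so this example is self-contained
    x = x0
    while abs(f(x)) > tol:
        x = x - f(x) / fprime(x)
    return x

f      = lambda x: x**2 - 2
fprime = lambda x: 2*x

print(newton(1.0, f, fprime))   # about 1.4142135623
```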


We just have to define the functions $f(x)$ and $f'(x)$ and give this an initial guess, and it will find the zero for us. Now, since $f(x)$ depends on $y_i$, we need to redefine it at every iteration. We'll use lambda functions, Python's inline function definitions, to do that.

Here's what the iteration section of our code looks like.

for i in range(len(x) - 1):
    f      = lambda x: y[i] - x + h*np.log(x)
    fprime = lambda x: -1 + h/x
    y[i+1] = newton(y[i], f, fprime)

And here's our graph.
It looks pretty reasonable, but is it close to the actual solution? With $y(0) = 1$ it should be: since $\log(1) = 0$, the exact solution is just the constant $y(x) = 1$. For other initial values the solution is pretty complicated, so I'll let you look at the Wolfram Alpha solution.
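To watch a single implicit step in isolation, take an illustrative value $y_i = 2$ (not from our initial condition) and $h = 0.1$: Backward Euler wants the $u$ satisfying $u = y_i + h\log(u)$, and Newton's Method finds it. A sketch, with the routine repeated so it runs on its own:

```python
import numpy as np

def newton(x0, f, fprime, tol = 1e-10):
    # Newton's Method, repeated so this example is self-contained
    x = x0
    while abs(f(x)) > tol:
        x = x - f(x) / fprime(x)
    return x

yi, h = 2.0, 0.1

f      = lambda u: yi - u + h*np.log(u)   # zero exactly when u = yi + h log(u)
fprime = lambda u: -1 + h/u

u = newton(yi, f, fprime)
print(u)                        # the accepted y_{i+1}, a bit above 2
print(yi + h*np.log(u) - u)     # residual, essentially zero
```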
