Saturday, October 8, 2011

Implicit methods.


We've already done a pretty simple example of a finite difference method. Remember that the main idea was to replace $y'(x_0)$ with
\begin{align}
\frac{y(x_0 + h) - y(x_0)}{h}
\end{align}
for some step size $h$. This is equivalent to saying, "the slope at point $x_0$ can be approximated by looking forward $h$ units." This was nice and gave us pretty good looking results. But... well... it doesn't always work.

Let's think about this really simple initial value problem.
\begin{align}
y'(x) &= -5 y(x) \\
y(0)&=1
\end{align}
This has a solution of $y(x) = e^{-5x}$, which you can verify.
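In fact, the verification is one line: differentiate and check the initial condition,
\begin{align}
\frac{d}{dx}\, e^{-5x} = -5\, e^{-5x} = -5\, y(x), \qquad y(0) = e^{0} = 1.
\end{align}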

Let's throw that earlier finite difference method at this problem. Substituting the forward difference for $y'$ gives
\begin{align}
\frac{y_{i+1} - y_i}{h} = -5 y_i,
\end{align}
which rearranges to the recursion
\begin{align}
y_{i+1} = (1 - 5h)\, y_i
\end{align}
with $y_0 = 1$. Here's the Python code to do this:

import numpy as np
import matplotlib.pylab as p

# step size
h = 0.5

x = np.arange(0,5+h,h)
y = np.empty(len(x))

# initial condition
y[0] = 1

# Forward Euler iteration
for i in range(0,len(x) - 1):
    y[i+1] = (1 - 5*h)*y[i]

p.plot(x,y)
p.savefig("bad.png")

The code generates a graph that looks like this.
Note the y-axis: instead of decaying from $1$ toward $0$ like $e^{-5x}$, the approximation oscillates in sign and blows up. This is not a graph of $y(x) = e^{-5x}$, like we wanted. Just look at that recursion relation from before:
\begin{align}
y_{i+1} = (1 - 5h)\, y_i.
\end{align}
We could write the general form of this as
\begin{align}
y_i = (1 - 5h)^i,
\end{align}
and with our $h = 0.5$ that's $y_i = (-1.5)^i$: a sequence whose magnitude grows exponentially.

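In fact, we can read a stability condition right off that general form: the iterates decay, like the true solution does, only when the growth factor has magnitude less than one,
\begin{align}
|1 - 5h| < 1 \quad\Longleftrightarrow\quad 0 < h < \frac{2}{5}.
\end{align}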
Our $h = 0.5$ violates that condition, which is why the computation exploded. A finite difference method that misbehaves like this is what's known as unstable. The method we've been using is called the Forward Euler Method, an explicit method; for this problem it is stable only for sufficiently small $h$. Now, there is another finite difference method called the Backwards Euler Method, an implicit method, which is stable here no matter how large the step size. It works in a similar fashion to the Forward Euler Method. We just replace $y'(x_0)$ with this instead:
\begin{align}
\frac{y(x_0) - y(x_0 - h)}{h}.
\end{align}
 This is just like saying "the slope at $x_0$ can be approximated by looking backwards by $h$ units."

Let's make the replacement:
\begin{align}
\frac{y(x_0) - y(x_0 - h)}{h} = -5y(x_0).
\end{align}
In subscript notation this reads $y_i - y_{i-1} = -5h\, y_i$; collecting the $y_i$ terms, it simplifies to
\begin{align}
y_i = \frac{y_{i-1}}{1 + 5h}.
\end{align}

The new code looks essentially the same except for the iteration (and since this method doesn't blow up, we can go back to $h = 0.1$ on $[0, 1]$).

for i in range(0,len(x) - 1):
    y[i+1] = y[i]/(1 + 5*h)

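For reference, here's the whole Backwards Euler script in one piece (same setup as the forward version, with $h = 0.1$ on $[0, 1]$; the output filename is my own choice):

import numpy as np
import matplotlib.pylab as p

h = 0.1

x = np.arange(0,1+h,h)
y = np.empty(len(x))

# initial condition
y[0] = 1

# Backwards Euler iteration: y_{i+1} = y_i / (1 + 5h)
for i in range(0,len(x) - 1):
    y[i+1] = y[i]/(1 + 5*h)

p.plot(x,y)
p.savefig("good.png")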
So this gives us a new graph.

Well, that's looking much better. For comparison, let's plot this next to the actual solution.

Approximation with $h = 0.1$.
So we're certainly closer than before. What if we refine $h$ to 1/100?

Approximation with $h = 0.01$.
Hey, that's excellent! Just like before, we seem to be getting convergence by reducing $h$.
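If you'd rather see the convergence numerically than eyeball graphs, you can measure the worst-case error against the exact solution $e^{-5x}$ for a few step sizes. This little test harness is my own addition, not part of the original script:

import numpy as np

def backwards_euler_error(h):
    # max error of Backwards Euler for y' = -5y, y(0) = 1, on [0, 1]
    x = np.arange(0,1+h,h)
    y = np.empty(len(x))
    y[0] = 1
    for i in range(0,len(x) - 1):
        y[i+1] = y[i]/(1 + 5*h)
    return np.max(np.abs(y - np.exp(-5*x)))

for h in [0.1, 0.01, 0.001]:
    print(h, backwards_euler_error(h))

Since Backwards Euler is a first-order method, the error should shrink roughly in proportion to $h$.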

Remember how easy our problem was? It was a first-order linear ODE. What if it were nonlinear? How would we isolate that $y_{i+1}$ term? Consider this initial value problem:
\begin{align}
y'(x) &= \log(y(x)) \\
y(0) &= 2
\end{align}
(We take $y(0) = 2$; starting at $y(0) = 1$ would be boring, since $\log(1) = 0$ makes the constant function $y \equiv 1$ the exact solution.)
We'll make the Backwards Euler Method substitution
\begin{align}
y_{i+1} = y_i + h \log(y_{i+1}).
\end{align}
Uh, oh. We don't know $\log(y_{i+1})$ because we don't know $y_{i+1}$. This is where the term implicit comes up. We need $y_{i+1}$ in order to calculate $y_{i+1}$? That seems impossible! Well, as it turns out, it's not. Let's call $u = y_{i+1}$ and make the replacement.
\begin{align}
u = y_i + h\log(u)
\end{align}
or equivalently
\begin{align}
0 = y_i + h\log(u) - u.
\end{align}
Let's define a function
\begin{align}
f(u) = y_i + h\log(u) - u.
\end{align}
Now, if we can find $u$ to make $f(u) = 0$, then we've found $y_{i+1}$! Since this function is nonlinear, we'll use some numerical method to find $u$. Let's use Newton's Method. The Python code for Newton's Method is short and sweet because we can treat functions just like objects!

def newton(x0, f, fprime, tol = 1e-10):
    x = x0
    # iterate until the residual is small enough
    while abs(f(x)) > tol:
        x = x - f(x) / fprime(x)
    return x

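As a quick sanity check (my own example, not part of the solver), ask it for the positive zero of $u^2 - 2$:

print(newton(1.0, lambda u: u*u - 2, lambda u: 2*u))   # roughly 1.414213562
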

We just have to define the functions $f$ and $f'$, hand it an initial guess, and it will find the zero for us. Now, since $f$ depends on $y_i$, we need to rebuild it at every iteration. We'll use lambda functions, Python's inline function declarations, to define our functions.

Here's what the iteration section of our code looks like.

for i in range(0,len(x) - 1):
    # f and its derivative, rebuilt with the current y[i]
    f      = lambda u: y[i] + h*np.log(u) - u
    fprime = lambda u: h/u - 1
    y[i+1] = newton(y[i], f, fprime)

And here's our graph.
It looks pretty reasonable, but is it close to the actual solution? Well, the actual solution is pretty complicated, so I'll let you look at the Wolfram Alpha solution.
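For completeness, here's the whole implicit solver in one piece. Treat it as a sketch: the step size $h = 0.1$ and the interval $[0, 5]$ are my choices for illustration.

import numpy as np
import matplotlib.pylab as p

def newton(x0, f, fprime, tol = 1e-10):
    x = x0
    while abs(f(x)) > tol:
        x = x - f(x) / fprime(x)
    return x

# step size and grid
h = 0.1
x = np.arange(0,5+h,h)
y = np.empty(len(x))

# initial condition
y[0] = 2

# Backwards Euler: at each step, solve 0 = y_i + h*log(u) - u for u = y_{i+1}
for i in range(0,len(x) - 1):
    f      = lambda u: y[i] + h*np.log(u) - u
    fprime = lambda u: h/u - 1
    y[i+1] = newton(y[i], f, fprime)

p.plot(x,y)
p.show()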

Thursday, October 6, 2011

A simple example to get us started.

Python is great. It's an incredibly handy, easy to use programming language. What's more, it's loaded with functionality. It's used by web servers, YouTube, NASA, even in high performance computing.

I picked it up a few years back, and found it quickly replacing all of my other once-beloved programming languages. It's intuitive, easy to read, powerful, and has "batteries included". I even found it replacing my beloved Octave / MATLAB shell.

But this isn't about Python. This is about finite difference methods. Well, it's also about Python. We'll be implementing some ordinary differential equation solvers in Python. So what's a finite difference method?

Suppose we want to solve the initial value problem
\begin{align} \label{simple}
y'(x) - x y(x) &= e^x \\
y(0) &= 1
\end{align}
This is a nonhomogeneous first-order linear ordinary differential equation.
If you need to freshen up, I suggest reviewing the Paul's Notes section on this. Since this has non-constant coefficients, this problem is more difficult to solve than it appears at first glance. To solve it analytically, we could certainly break out a pen and pad of paper. But time is precious and CPU cycles are cheap!

How do we compute this when we don't know $y(x)$ and we don't know $y'(x)$? Well, perhaps we can estimate $y'(x)$ in terms of $y(x)$. Indeed, at some point $x_0$, we can estimate $y'(x_0)$ by looking at the nearby value $y(x_0 + h)$, as in the following graph.

A linear approximation of the function $f(x) = x\ln(x) - x$.

We've taken the function $x\ln(x) - x$ at $x_0$ and just looked ahead $h$ units. Then we drew a line through those two points and decided, "this probably looks like the slope at $x_0$". That line is the linear approximation underlying a finite difference method.

This specific method approximates $y'(x_0)$ by
\begin{align} \label{fd}
y'(x_0) \approx \frac{y(x_0 + h) - y(x_0)}{h}.
\end{align}
In this example, even with a pretty large $h$, we still have a decent approximation of the slope at $x_0$. Now, this is only one very simple finite difference method. There are many others which produce even more accurate results.
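It's worth a line to see why the forward difference works: Taylor expand $y$ around $x_0$,
\begin{align}
y(x_0 + h) = y(x_0) + h\, y'(x_0) + O(h^2)
\quad\Longrightarrow\quad
\frac{y(x_0 + h) - y(x_0)}{h} = y'(x_0) + O(h),
\end{align}
so the approximation is first-order accurate, and its error shrinks proportionally to $h$.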

Let's go back to our original problem. We can replace the $y'(x)$ in the problem we want to solve with the simple finite difference quotient! Then, at a point $x_0$, we have this new equation
\begin{align} \label{fdm}
\frac{y(x_0 + h) - y(x_0)}{h} - x_0\, y(x_0) = e^{x_0}.
\end{align}
Let's write $y_i$ for the approximation to $y(x_i)$, where $x_i = x_0 + ih$. This is just so we can clean our notation up into
\begin{align}
\frac{y_{i+1} - y_i}{h} - x_i y_i = e^{x_i}.
\end{align}
Multiplying through by $h$ and solving for $y_{i+1}$ gives the iteration
\begin{align}
y_{i+1} = he^{x_i} + (hx_i + 1)y_i.
\end{align}

This is remarkably simple to implement in Python. I'll be using my two favorite Python modules, NumPy and matplotlib.
import numpy as np
import matplotlib.pylab as p

Now we want to set our step size $h$ to be something sort of "small", right? How about 1/10?
h = 0.1

Now I need to create the $x$ line (we'll go between 0 and 5)
x = np.arange(0,5+h,h)

So we step in sizes of $h$ along $x$ between 0 and 5. Great! Finally, I'll set up the $y$ vector, where we'll store our solution.
y = np.empty(len(x))

Don't forget our initial condition!
y[0] = 1

Now for the interesting part. We'll iterate through every $x$ step (remember, it's partitioned in sizes of $h$).
for i in range(0,len(x) - 1):
    y[i+1] = h*np.exp(x[i]) +(h*x[i] + 1)*y[i]

There you have it! Let it run, and plot it with
p.plot(x,y)
p.show()

Here's what we've got.

Approximation with step size $h = 0.1$.

I'm still amazed at how easy that was. But is this the right solution? For comparison, the function $y(x)$ that we are trying to approximate is
\begin{align}
y(x) = ce^{x^2/2}+\sqrt{\frac{\pi}{2}}\, e^{(x^2+1)/2}\, \text{erf}\left(\frac{x-1}{\sqrt{2}}\right)
\end{align}
where erf($x$) is the error function (a pretty complicated function) and with our initial condition, $c \approx 2.41$. Let's graph our approximation next to the actual function.
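If you want to check that constant yourself: plugging the initial condition $y(0) = 1$ into the formula gives $c = 1 - \sqrt{\pi/2}\, e^{1/2}\, \text{erf}(-1/\sqrt{2})$, which SciPy can evaluate (assuming you have scipy.special available):

import numpy as np
from scipy.special import erf

# solve y(0) = 1 for c in the exact solution
c = 1 - np.sqrt(np.pi/2) * np.exp(0.5) * erf(-1/np.sqrt(2))
print(c)   # roughly 2.41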

Approximation with $h = 0.1$.

It's close when we start off, but it doesn't seem to grow fast enough after $x = 4$ or so. What if we decreased the step size $h$ to something like 1/100?

Approximation using step size $h = 0.01$.

Wow, that's way closer! Reducing the step size gave a much closer approximation. In fact, you'll find that as you decrease the step size, the numerical approximations will usually be closer to the actual solution. But this is not always the case! We'll talk about convergence and stability sometime in the future.

That code in full is right here.
import numpy as np
import matplotlib.pylab as p

# set the step size 
h = 0.01

# create our initial vectors
x = np.arange(0,5+h,h)
y = np.empty(len(x))

# initial condition
y[0] = 1

# iterate using the finite difference method
for i in range(0,len(x) - 1):
    y[i+1] = h*np.exp(x[i]) + (h*x[i] + 1)*y[i]

# plot the result
# plot the result
p.plot(x, y, 'r', label="Approximation")
p.legend(loc="best")
p.savefig("fig.png")