I picked it up a few years back, and found it quickly replacing all of my other once-beloved programming languages. It's intuitive, easy to read, powerful, and has "batteries included". I even found it replacing my beloved Octave / MATLAB shell.
But this isn't about Python. This is about finite difference methods. Well, it's also about Python. We'll be implementing some ordinary differential equation solvers in Python. So what's a finite difference method?
Suppose we want to solve the initial value problem
\begin{align} \label{simple}
y'(x) - x y(x) &= e^x \\
y(0) &= 1
\end{align}
This is a nonhomogeneous first-order linear ordinary differential equation.
If you need a refresher, I suggest reviewing the relevant section of Paul's Notes. Since this equation has non-constant coefficients, it's more difficult to solve than it appears at first glance. We could certainly solve it analytically with pen and paper, but time is precious and CPU cycles are cheap!
How do we compute this when we don't know $y(x)$ and we don't know $y'(x)$? Well, perhaps we can estimate $y'(x)$ in terms of $y(x)$. Indeed, at some point $x_0$, we can estimate $y'(x_0)$ by looking at the nearby value $y(x_0 + h)$, as in the following graph.
*A linear approximation of the function $f(x) = x\ln(x) - x$.*
We've taken the function $x\ln(x) - x$ at $x_0$ and just looked ahead $h$ units. Then we drew a line through those two points and decided, "this probably looks like the slope at $x_0$". This is a finite difference approximation of the derivative (specifically, a forward difference).
This specific method approximates $y'(x)$ by doing
\begin{align} \label{fd}
y'(x_0) \approx \frac{y(x_0 + h) - y(x_0)}{h}.
\end{align}
In this example, even with a pretty large $h$, we still have a decent approximation of the slope at $x_0$. Now, this is only one very simple finite difference method. There are many others which produce even more accurate results.
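To see this behavior numerically, here's a quick sketch of the forward difference applied to the figure's function $f(x) = x\ln(x) - x$, whose exact derivative is $\ln(x)$. The point $x_0 = 2$ and the step sizes here are illustrative choices of mine, not from the figure:

```python
import numpy as np

def forward_difference(f, x0, h):
    """Estimate f'(x0) using (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

# The function from the figure; its exact derivative is ln(x).
f = lambda x: x * np.log(x) - x
x0 = 2.0
exact = np.log(x0)

for h in (0.5, 0.1, 0.01):
    approx = forward_difference(f, x0, h)
    print(f"h={h}: approx={approx:.4f}, error={abs(approx - exact):.4f}")
```

As $h$ shrinks, the estimate closes in on $\ln(2) \approx 0.6931$, which is exactly the behavior we'll rely on below.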
Let's go back to our original problem. We can replace the $y'(x)$ in the equation with this simple finite difference approximation! Then we have this new equation
\begin{align} \label{fdm}
\frac{y(x_0 + h) - y(x_0)}{h} - x_0 y(x_0) = e^{x_0}
\end{align}
Let $y_i$ denote the approximation of $y(x_i)$, where $x_i = x_0 + ih$. This lets us clean our notation up into
\begin{align}
\frac{y_{i+1} - y_i}{h} - x_i y_i = e^{x_i}
\end{align}
Multiplying through by $h$ and solving for $y_{i+1}$, we can rewrite this as an iteration:
\begin{align}
y_{i+1} = h e^{x_i} + (h x_i + 1) y_i
\end{align}
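For concreteness, with $h = 0.1$, $x_0 = 0$, and $y_0 = 1$ (our initial condition), the first couple of steps work out to
\begin{align}
y_1 &= h e^{x_0} + (h x_0 + 1) y_0 = 0.1 \cdot 1 + (0 + 1) \cdot 1 = 1.1 \\
y_2 &= h e^{x_1} + (h x_1 + 1) y_1 = 0.1\, e^{0.1} + (0.01 + 1)(1.1) \approx 1.2215
\end{align}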
This scheme (known as the explicit Euler method) is remarkably simple to implement in Python. I'll be using my two favorite Python modules, NumPy and matplotlib.
import numpy as np
import matplotlib.pyplot as p
Now we want to set our step size $h$ to be something sort of "small", right? How about 1/10?
h = 0.1
Now I need to create the $x$ grid (we'll go between 0 and 5)
x = np.arange(0,5+h,h)
So we step in sizes of $h$ along $x$ between 0 and 5. Great! Finally, I'll set up the $y$ vector, where we'll store our solution.
y = np.empty(len(x))
Don't forget our initial condition!
y[0] = 1
Now for the interesting part. We'll iterate through every $x$ step (remember, it's partitioned in sizes of $h$).
for i in range(len(x) - 1):
    y[i+1] = h*np.exp(x[i]) + (h*x[i] + 1)*y[i]
There you have it! Let it run, and plot it with
p.plot(x, y)
p.show()
Here's what we've got.
*Approximation with step size $h = 0.1$.*
I'm still amazed at how easy that was. But is this the right solution? For comparison, the function $y(x)$ that we are trying to approximate is
\begin{align}
y(x) = c e^{x^2/2} + \sqrt{\frac{\pi}{2}}\, e^{(x^2+1)/2}\, \text{erf}\left(\frac{x-1}{\sqrt{2}}\right)
\end{align}
where $\text{erf}(x)$ is the error function (a pretty complicated function) and, with our initial condition, $c \approx 2.41$. Let's graph our approximation next to the actual function.
*Approximation with $h = 0.1$ plotted against the exact solution.*
It's close when we start off, but it doesn't seem to grow fast enough after $x = 4$ or so. What if we decreased the step size $h$ to something like 1/100?
*Approximation using step size $h = 0.01$.*
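To quantify how much the smaller step helps, here's a sketch comparing the iteration against the closed-form solution, using `erf` from Python's standard `math` module. The helper names (`exact`, `euler`) are my own, not from the post:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def exact(x):
    """Closed-form solution of y'(x) - x y(x) = e^x with y(0) = 1."""
    c = 1 + sqrt(pi / 2) * exp(0.5) * erf(1 / sqrt(2))  # y(0) = 1 gives c ~ 2.41
    return c * exp(x**2 / 2) + sqrt(pi / 2) * exp((x**2 + 1) / 2) * erf((x - 1) / sqrt(2))

def euler(h, x_end=5.0):
    """Run the forward-difference iteration with step size h."""
    x = np.arange(0, x_end + h, h)
    y = np.empty(len(x))
    y[0] = 1
    for i in range(len(x) - 1):
        y[i + 1] = h * np.exp(x[i]) + (h * x[i] + 1) * y[i]
    return x, y

for h in (0.1, 0.01):
    x, y = euler(h)
    print(f"h={h}: error at x={x[-1]:.1f} is {abs(y[-1] - exact(x[-1])):.1f}")
```

Shrinking $h$ by a factor of ten shrinks the final error considerably, consistent with the method's first-order accuracy.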
Here's the code in full.
import numpy as np
import matplotlib.pyplot as p
# set the step size
h = 0.01
# create our initial vectors
x = np.arange(0,5+h,h)
y = np.empty(len(x))
# initial condition
y[0] = 1
# iterate using the finite difference method
for i in range(len(x) - 1):
    y[i+1] = h*np.exp(x[i]) + (h*x[i] + 1)*y[i]
# plot the result
p.plot(x,y, 'r', label="Approximation")
p.legend(loc="best")
p.savefig("fig.png")