Chapter 7: Numerical Approximations for ODEs

March 5, 2020

In [3]:
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

Content Summary

So we want to be able to solve a first order ODE (apparently these methods extend to higher order ODEs by rewriting them as a system of first order equations - but this part of the derivation didn't totally make sense to me . . go back to explore eventually). To start with a general ODE, let's say we want to solve for $y(x)$ given $$ \frac{dy}{dx} = f(x, y)$$ The general numerical approach is to solve for $y(x)$ by finding some relationship between $y(x)$ and $y(x+h)$, the function updated a discrete step forward by $h$, given some starting point or boundary conditions. Assuming $y(x)$ is differentiable at $x$ for all $x$, we can use a Taylor series to expand $y(x+h)$ in the step size $h$ $$ y(x+h) = y(x) + h\frac{dy}{dx} + \frac{h^2}{2}\frac{d^2y}{dx^2} + \frac{h^3}{6}\frac{d^3y}{dx^3} + \mathcal{O}(h^4) $$ then noting that our differential equation gives the slope of $y$ at $x$, $y(x+h)$ can be written as $$ y(x+h) = y(x) + hf(x, y) + \frac{h^2}{2}\left(\frac{\partial f}{\partial x} + f(x, y)\frac{\partial f}{\partial y}\right) + \mathcal{O}(h^3) $$ The solution can be approximated by $\tilde y$; the simplest approximation keeps only the first order term and is known as Euler's method $$ \tilde y(x + h) = y(x) + hf(x, y) $$ But this accumulates error quickly and blows up easily.

TODO: explore plotting some examples (a minimal sketch is below).
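As a quick illustration (my own sketch, not from the text), here is Euler's method applied to the toy ODE $dy/dx = y$ with $y(0) = 1$, whose exact solution is $e^x$; the plotted global error shrinks as the step size $h$ is reduced. The helper name euler_path is just for this example.

import numpy as np
import matplotlib.pyplot as plt

def euler_path(f, y0, x_max, h):
    # march dy/dx = f(x, y) forward with Euler steps of size h
    xs = np.arange(0, x_max + h, h)
    ys = np.empty_like(xs)
    ys[0] = y0
    for i in range(len(xs) - 1):
        ys[i + 1] = ys[i] + h*f(xs[i], ys[i])
    return xs, ys

f = lambda x, y: y            # dy/dx = y, exact solution exp(x)
for h in [0.5, 0.1, 0.01]:
    xs, ys = euler_path(f, 1.0, 2.0, h)
    plt.plot(xs, ys - np.exp(xs), label="h = %s" % h)   # global error
plt.legend(); plt.xlabel("x"); plt.ylabel("Euler error")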

This method can be improved upon by noting that we have information about the slope of the function, $f(x, y)$, so the estimate for $y(x+h)$ can be updated with more "recent" info than $f(x,y)$ by using another Euler step to find $f\left(x+\frac{h}{2},\ y(x)+\frac{h}{2}f(x, y(x))\right)$, the slope halfway between the two steps. {{ there's a cool plot that could be drawn here to show the relationship of the slope at the actual point }}

We can do another Taylor expansion with respect to $h$ to find a function for the midpoint slope $$ f\!\left(x+\tfrac{h}{2},\, y(x)+\tfrac{h}{2}f(x, y)\right) = f(x, y) + \frac{h}{2}\left(\frac{\partial f}{\partial x} + f(x, y)\frac{\partial f}{\partial y}\right) + \mathcal{O}(h^2) $$ so the midpoint update $\tilde y(x+h) = y(x) + hf\!\left(x+\tfrac{h}{2},\, y(x)+\tfrac{h}{2}f(x, y)\right)$ matches the true Taylor expansion through second order.
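A minimal sketch (again my own, not from the text) of this midpoint update on the same toy problem $dy/dx = y$; the helper names euler_step and midpoint_step are just illustrative.

import numpy as np

f = lambda x, y: y            # toy ODE dy/dx = y, exact solution exp(x)

def euler_step(x, y, h):
    return y + h*f(x, y)

def midpoint_step(x, y, h):
    # use a half Euler step to estimate the slope at the midpoint,
    # then take the full step with that midpoint slope
    y_mid = y + 0.5*h*f(x, y)
    return y + h*f(x + 0.5*h, y_mid)

h = 0.1
print(abs(euler_step(0.0, 1.0, h) - np.exp(h)))      # ~5e-3, O(h^2) local error
print(abs(midpoint_step(0.0, 1.0, h) - np.exp(h)))   # ~2e-4, O(h^3) local error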

7.1

What is the second-order approximation error of the Heun method, which averages the slope at the beginning and the end of the interval? $$ \tilde y(x + h) = y(x) + \frac{h}{2}\{f(x, y(x)) + f[x+h, y(x)+hf(x, y(x))]\} $$

Heun's method is a simple predictor-corrector algorithm - it uses Euler's method, $\tilde y(x+h) = y(x) + hf(x, y)$, as the "predictor" step to initially determine $\tilde y(x+h)$ and then the trapezoid rule, $\tilde y(x+h) = y(x) + \frac{1}{2}h\{f(x, y) + f(x+h, y(x+h))\}$, as the "corrector" to adjust the result.

We can find the second-order approximation error by Taylor expanding the corrector slope, $f[x+h, y(x)+hf(x, y(x))]$, in $h$.

$$ f(x+h, y(x) + hf(x, y)) = f(x, y) + h \left.\frac{d}{dh}\left[f(x+h, y(x) + hf(x, y))\right]\right|_{h=0} + \mathcal{O}(h^2) $$
$$ = f(x, y) + h\left(\frac{\partial f}{\partial x} + f(x, y)\frac{\partial f}{\partial y}\right) + \mathcal{O}(h^2) $$

Then plugging this into the Heun method $$ \tilde y(x + h) = y(x) + \frac{h}{2}\left[2f(x, y) + h\left(\frac{\partial f}{\partial x} + f(x, y)\frac{\partial f}{\partial y}\right) + \mathcal{O}(h^2)\right] $$ $$ \tilde y(x+h) = y(x) + hf(x, y) + \frac{h^2}{2}\left(\frac{\partial f}{\partial x} + f(x, y)\frac{\partial f}{\partial y}\right) + \mathcal{O}(h^3) $$ The approximation error is the difference between the Heun method approximation, $\tilde y$, and the true value given by the Taylor expansion, $y$. The first- and second-order terms match exactly, so $$ \tilde y(x+h) - y(x+h) = \mathcal{O}(h^3) $$ i.e. the Heun method has no second-order error; its local error is third order in $h$.
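As a quick sanity check (my own sketch, not part of the assignment), the local error of a single Heun step on $dy/dx = y$ should shrink by roughly a factor of 8 each time $h$ is halved, consistent with $\mathcal{O}(h^3)$. The helper name heun_step is just for this example.

import numpy as np

f = lambda x, y: y            # dy/dx = y, exact solution exp(x)

def heun_step(x, y, h):
    # predictor: Euler step; corrector: trapezoid rule with the predicted endpoint
    y_pred = y + h*f(x, y)
    return y + 0.5*h*(f(x, y) + f(x + h, y_pred))

for h in [0.2, 0.1, 0.05]:
    err = abs(heun_step(0.0, 1.0, h) - np.exp(h))
    print(h, err)             # each halving of h cuts the error by roughly 8x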

7.2

For a simple harmonic oscillator $\ddot y+y=0$, with initial conditions $y(0)=1$, $\dot y(0)=0$, find $y(t)$ from $t=0$ to $100\pi$. Use an Euler method and a fixed-step fourth-order Runge-Kutta method. For each method, check how the average local error and the error in the final value and slope depend on the step size.

In [150]:
def euler(y, x, func, h):
    return y + h*func(x, y)
    
def runge_kutta(y, x, func, h):
    # y : starting state
    # x : starting point
    # func : function giving dy/dx = func(x, y)
    # h : step size
    k1 = h*func(x, y)
    k2 = h*func(x + 0.5*h, y + 0.5*k1)
    k3 = h*func(x + 0.5*h, y + 0.5*k2)
    k4 = h*func(x + h, y + k3)
    y = y + (1/3)*(0.5*k1 + k2 + k3 + 0.5*k4)
    return y
In [176]:
# SIMPLE HARMONIC MOTION PLOTS

def shm(x, y):
    # state y = [y, dy/dt] for the simple harmonic oscillator y'' + y = 0
    return np.array([y[1], -y[0]])

def plot_euler(Y, step, t):
    # integrate with fixed-step Euler; estimate the local error at each step
    # by comparing one full step against two half steps
    # (shm is autonomous, so the x argument is just passed as 0)
    y = np.array([Y[0]])
    err = np.array([])
    for i in range(len(t) - 1):
        half_1 = euler(Y, 0, shm, step/2)
        half_2 = euler(half_1, 0, shm, step/2)
        Y = euler(Y, 0, shm, step)
        y = np.append(y, Y[0])
        err = np.append(err, Y[0] - half_2[0])
    return y, err

        
def plot_rk(Y, step, t):
    # integrate with fixed-step RK4; estimate the local error at each step
    # by comparing one full step against two half steps
    y = np.array([Y[0]])
    err = np.array([])
    for i in range(len(t) - 1):
        half_1 = runge_kutta(Y, 0, shm, step/2)
        half_2 = runge_kutta(half_1, 0, shm, step/2)
        Y = runge_kutta(Y, 0, shm, step)
        y = np.append(y, Y[0])
        err = np.append(err, Y[0] - half_2[0])
    return y, err
In [187]:
# define initial conditions
y_0 = 1
dy_0 = 0
step = 0.1
max_t = 10

t = np.arange(0, max_t, step)

Y = np.array([y_0, dy_0])
y_euler, eul_err = plot_euler(Y, step, t)

Y = np.array([y_0, dy_0])
y_rk, rk_err = plot_rk(Y, step, t)

## true analytical solution
if y_0 == 1:
    y_true = np.cos(t)
if y_0 == 0:
    y_true = np.sin(t)

    
fig = plt.figure(figsize=(20, 20))
# plot estimates of y
ax1 = fig.add_subplot(311)
ax1.plot(t, y_true, label="true $y(t)$")
ax1.plot(t, y_euler, 'co', label="euler method")
ax1.plot(t, y_rk, 'mo', label="runge-kutta method")
# format ax1
ax1.set_title("approximations of $y(t)$")
ax1.legend()
ax1.set_xlabel("t")
ax1.set_ylabel("y(t)")
ax1.text(0, 0, "step size: %s " % step)

# plot euler errors
ax2 = fig.add_subplot(312)
ax2.plot(t, y_euler - y_true, 'c', label="Euler cumulative error")
#format ax2
ax2.set_title("Euler Cumulative error")
ax2.set_xlabel("t")
ax2.set_ylabel("$y_{eul}(t) - y_{true}(t)$")

# plot runge-kutta errors
ax3 = fig.add_subplot(313)
ax3.plot(t, y_rk - y_true, 'm', label="Runge-Kutta cumulative error")
#format ax3
ax3.set_title("Runge-Kutta Cumulative error")
ax3.set_xlabel("t")
ax3.set_ylabel("$y_{rk}(t) - y_{true}(t)$")
[figure: true $y(t)$ vs the Euler and Runge-Kutta approximations, with the Euler and Runge-Kutta cumulative errors in the lower panels]
In [191]:
# define initial conditions
y_0 = 1
dy_0 = 0
step = 0.1
max_t = 100*np.pi

t = np.arange(0, max_t, step)
Y = np.array([y_0, dy_0])
y_euler, eul_err = plot_euler(Y, step, t)
    
## true analytical solution
if y_0 == 1:
    y_true = np.cos(t)
if y_0 == 0:
    y_true = np.sin(t)
   

fig = plt.figure(figsize=(20, 20))
# plot estimates of y
ax1 = fig.add_subplot(211)
ax1.plot(t, y_true, label="true $y(t)$")
ax1.plot(t, y_euler, 'co', label="euler method")
# format ax1
ax1.set_title("approximations of $y(t)$")
ax1.legend()
ax1.set_xlabel("t")
ax1.set_ylabel("y(t)")
ax1.text(0, -3000000, "step size: %s " % step)

# plot euler errors
ax2 = fig.add_subplot(212)
ax2.plot(t, y_euler - y_true, 'c', label="Euler cumulative error")
#format ax2
ax2.set_title("Euler Cumulative error")
ax2.set_xlabel("t")
ax2.set_ylabel("$y_{eul}(t) - y_{true}(t)$")
[figure: Euler approximation of $y(t)$ over $t = 0$ to $100\pi$ and its cumulative error]
In [193]:
# define initial conditions
y_0 = 1
dy_0 = 0
step = 0.1
max_t = 100*np.pi

t = np.arange(0, max_t, step)
Y = np.array([y_0, dy_0])
y_rk, rk_err = plot_rk(Y, step, t)

## true analytical solution
if y_0 == 1:
    y_true = np.cos(t)
if y_0 == 0:
    y_true = np.sin(t)

    
fig = plt.figure(figsize=(20, 20))
# plot estimates of y
ax1 = fig.add_subplot(211)
ax1.plot(t, y_true, label="true $y(t)$")
ax1.plot(t, y_rk, 'mo', label="runge-kutta method")
# format ax1
ax1.set_title("approximations of $y(t)$")
ax1.legend()
ax1.set_xlabel("t")
ax1.set_ylabel("y(t)")
ax1.text(0, 1.2, "step size: %s " % step)

# plot runge-kutta errors
ax3 = fig.add_subplot(212)
ax3.plot(t, y_rk - y_true, 'm', label="Runge-Kutta cumulative error")
#format ax3
ax3.set_title("Runge-Kutta Cumulative error")
ax3.set_xlabel("t")
ax3.set_ylabel("$y_{rk}(t) - y_{true}(t)$")
[figure: Runge-Kutta approximation of $y(t)$ over $t = 0$ to $100\pi$ and its cumulative error]

7.3

Write a fourth-order Runge-Kutta adaptive stepper for the preceding problem, and check how the average step size that it finds depends on the desired local error.

For a constant step size, how does the predicted error change as a function of t?

In [196]:
# define initial conditions
y_0 = 1
dy_0 = 0
step = 0.1
max_t = 10*np.pi

t = np.arange(0, max_t, step)
Y = np.array([y_0, dy_0])
y_rk, rk_err = plot_rk(Y, step, t)

## true analytical solution
if y_0 == 1:
    y_true = np.cos(t)
if y_0 == 0:
    y_true = np.sin(t)

    
fig = plt.figure(figsize=(20, 20))
# plot estimates of y
ax1 = fig.add_subplot(211)
ax1.plot(t, y_true, label="true $y(t)$")
ax1.plot(t, y_rk, 'mo', label="runge-kutta method")
# format ax1
ax1.set_title("approximations of $y(t)$")
ax1.legend()
ax1.set_xlabel("t")
ax1.set_ylabel("y(t)")
ax1.text(0, 1.2, "step size: %s " % step)

# plot runge-kutta errors
ax3 = fig.add_subplot(212)
ax3.plot(t[:-1], rk_err, 'm', label="Runge-Kutta")
#format ax3
ax3.set_title("Runge-Kutta Estimated Local Error")
ax3.set_xlabel("t")
ax3.set_ylabel("$local error$")
[figure: Runge-Kutta approximation of $y(t)$ and its estimated local error for a fixed step size]

Adaptive stepper

In [204]:
def plot_rk_adapt(Y, t, max_err, min_err, init_step):
    # adaptive RK4: estimate the local error by comparing one full step
    # against two half steps, then shrink the step when the error is too
    # large and grow it when the error is smaller than needed
    # (the number of steps is fixed by len(t), so the t grid is only
    # nominal once the step size drifts away from the plotting step)
    y = np.array([Y[0]])
    step = init_step
    errs = np.array([])
    for i in range(len(t) - 1):
        half_1 = runge_kutta(Y, 0, shm, step/2)
        half_2 = runge_kutta(half_1, 0, shm, step/2)
        full = runge_kutta(Y, 0, shm, step)
        err = full[0] - half_2[0]
        if np.absolute(err) > max_err:
            # error too large: shrink the step and retake it
            step = step*0.9
            full = runge_kutta(Y, 0, shm, step)
        elif np.absolute(err) < min_err:
            # error comfortably small: grow the step for the next iteration
            step = step*1.1
        Y = full
        y = np.append(y, Y[0])
        errs = np.append(errs, err)
    return y, errs


# define initial conditions
y_0 = 1
dy_0 = 0
max_t = 10*np.pi

init_step = 0.1
max_err = 0.00004
min_err = 0.000001

t = np.arange(0, max_t, init_step)
Y = np.array([y_0, dy_0])
y_rk, rk_err = plot_rk_adapt(Y, t, max_err, min_err, init_step)

## true analytical solution
if y_0 == 1:
    y_true = np.cos(t)
if y_0 == 0:
    y_true = np.sin(t)

    
fig = plt.figure(figsize=(20, 20))
# plot estimates of y
ax1 = fig.add_subplot(311)
ax1.plot(t, y_true, label="true $y(t)$")
ax1.plot(t, y_rk, 'mo', label="runge-kutta method")
# format ax1
ax1.set_title("approximations of $y(t)$")
ax1.legend()
ax1.set_xlabel("t")
ax1.set_ylabel("y(t)")
ax1.text(0, 1.2, "initial step size: %s " % init_step)

# plot runge-kutta errors
ax3 = fig.add_subplot(312)
ax3.plot(t[:-1], rk_err, 'm', label="Runge-Kutta local error prediction")
#format ax3
ax3.set_title("Runge-Kutta Estimated Local Error")
ax3.set_xlabel("t")
ax3.set_ylabel("local error")

# plot runge-kutta errors
ax3 = fig.add_subplot(313)
ax3.plot(t, y_rk - y_true, 'm', label="Runge-Kutta cumulative error")
#format ax3
ax3.set_title("Runge-Kutta Cumulative error")
ax3.set_xlabel("t")
ax3.set_ylabel("$y_{rk}(t) - y_{true}(t)$")
[figure: adaptive Runge-Kutta approximation of $y(t)$, its estimated local error, and its cumulative error]
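To get at the first part of 7.3 (how the average step size depends on the desired local error), here is a sketch, under my own assumptions, that sweeps the error target and reports the mean step size the adaptive stepper settles on. It uses the same adaptation rule as plot_rk_adapt above but also records the step sizes; the helper name plot_rk_adapt_steps is invented for this sketch.

def plot_rk_adapt_steps(Y, n_steps, max_err, min_err, init_step):
    # same adaptation rule as plot_rk_adapt, but record the step sizes used
    step = init_step
    steps = np.array([])
    for i in range(n_steps):
        half_1 = runge_kutta(Y, 0, shm, step/2)
        half_2 = runge_kutta(half_1, 0, shm, step/2)
        full = runge_kutta(Y, 0, shm, step)
        err = full[0] - half_2[0]
        if np.absolute(err) > max_err:
            step = step*0.9
            full = runge_kutta(Y, 0, shm, step)
        elif np.absolute(err) < min_err:
            step = step*1.1
        Y = full
        steps = np.append(steps, step)
    return steps

for target in [1e-3, 1e-4, 1e-5, 1e-6]:
    steps = plot_rk_adapt_steps(np.array([1, 0]), 1000, target, target/10, 0.1)
    print("target local error %g -> average step size %.4f" % (target, steps.mean()))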