(7.1)

What is the second-order approximation error of the Heun method, which averages the slope at the beginning and the end of the interval?

$y(x+h) = y(x) + \frac{h}{2}\{f(x,y(x))+f[x+h,y(x)+hf(x,y(x))]\}$

$y(x+h) = y(x) + \frac{h}{2}f(x,y(x)) + \frac{h}{2}f[x+h, y(x) + hf(x,y(x))]$

$f[x+h, y(x) + hf(x,y(x))] = f(x,y(x)) + h \frac{d}{dh} f[x+h, y(x)+hf(x,y(x))]\big|_{h=0} + \mathcal{O}(h^2)$

$= f(x,y(x)) + h\left[\frac{\partial f}{\partial x} + f(x, y(x)) \frac{\partial f}{\partial y}\right] + \mathcal{O}(h^2)$

$\implies y(x+h) = y(x) + \frac{h}{2}f(x,y(x)) + \frac{h}{2}\{f(x,y(x)) + h[\frac{\partial f}{\partial x} + f(x,y(x))\frac{\partial f}{\partial y}] + \mathcal{O}(h^2)\}$

$y(x+h) = y(x) + hf(x,y(x)) + \frac{h^2}{2}[\frac{\partial f}{\partial x} + f(x,y(x))\frac{\partial f}{\partial y}] + \mathcal{O}(h^3)$

This matches the Taylor expansion of $y(x+h)$ term by term through second order, since $\frac{dy}{dx} = f$ and $\frac{d^2y}{dx^2} = \frac{\partial f}{\partial x} + f\frac{\partial f}{\partial y}$. The two expansions first disagree in the $h^3$ term, so the local approximation error of the Heun method is $\mathcal{O}(h^3)$.
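As a quick numerical sanity check (not part of the problem statement), a single Heun step on a test equation with a known solution should show the one-step error shrinking by roughly a factor of eight each time $h$ is halved. A minimal sketch, using the assumed test problem $y'=y$, $y(0)=1$:

In [ ]:
# Hypothetical check of the O(h^3) local error of a single Heun step
# on y' = y, whose exact solution is y = exp(x).
import numpy as np

f = lambda x, y: y   # test ODE: dy/dx = y

for h in [0.1, 0.05, 0.025, 0.0125]:
    y0 = 1.0
    predictor = y0 + h*f(0.0, y0)                         # Euler predictor
    heun = y0 + (h/2.0)*(f(0.0, y0) + f(h, predictor))    # averaged-slope corrector
    print("h = %7.4f  one-step error = %e" % (h, abs(np.exp(h) - heun)))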

(7.2)

For a simple harmonic oscillator $\ddot{y}+y=0$, with initial conditions $y(0)=1,\ \dot{y}(0)=0$, find $y(t)$ from $t=0$ to $100\pi$. Use an Euler method and a fixed-step fourth-order Runge-Kutta method. For each method check how the average local error, and the error in the final value and slope, depend on the step size.

In [26]:
# Adapted from Neil's euler.c code

import numpy as np

steps = [1, 0.1, 0.01, 0.001, 0.0001, 0.00001]

for h in steps:
    t_max = 100*np.pi
    t = 0
    N = int(t_max / h)
    y0 = 1
    y1 = 0
    error = 0

    while (t < t_max):
        # forward Euler update for the first-order system y0' = y1, y1' = -y0
        new_y0 = y0 + h*y1
        new_y1 = y1 - h*y0
        y0 = new_y0
        y1 = new_y1
        t = t + h
        # accumulate the error against the exact solution y(t) = cos(t)
        error = error + np.absolute(np.cos(t) - y0)

    error = error/N
    final_err = np.absolute(np.cos(t) - y0)
    
    print("Step Size: %f Error: %e Final Error: %e Slope: %e\n" % (h, error, final_err, y1))
Step Size: 1.000000 Error: 1.551488e+45 Final Error: 1.826877e+47 Slope: -1.826877e+47

Step Size: 0.100000 Error: 2.470401e+05 Final Error: 3.321023e+06 Slope: 5.176277e+06

Step Size: 0.010000 Error: 9.076946e-01 Final Error: 3.809890e+00 Slope: 4.683408e-02

Step Size: 0.001000 Error: 5.272470e-02 Final Error: 1.700892e-01 Slope: -7.370638e-04

Step Size: 0.000100 Error: 5.026287e-03 Final Error: 1.583198e-02 Slope: -3.412568e-05

Step Size: 0.000010 Error: 5.002620e-04 Final Error: 1.572031e-03 Slope: -4.637828e-06
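The blow-up at the largest step sizes is expected: writing the Euler update in complex form, $u = y_0 + i y_1$ maps to $u_{new} = (1 - ih)\,u$, so every step multiplies the oscillator amplitude by $\sqrt{1+h^2}$, and over $N = 100\pi/h$ steps the solution grows by $(1+h^2)^{N/2}$, consistent with the $\sim 1.8\times10^{47}$ final error above for $h = 1$. A quick check of that growth factor (not part of the original code):

In [ ]:
# Predicted amplitude growth of forward Euler on the harmonic oscillator:
# each step scales the amplitude by sqrt(1 + h^2).
import numpy as np

for h in [1, 0.1, 0.01]:
    N = int(100*np.pi / h)
    print("h = %g  predicted growth over 100*pi: %e" % (h, (1 + h**2)**(N/2.0)))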

In [27]:
# Adapted from Neil's rk.c code

import numpy as np

steps = [1, 0.1, 0.01, 0.001, 0.0001, 0.00001]

for h in steps:
    t_max = 100*np.pi
    t = 0
    N = int(t_max / h)
    y0 = 1
    y1 = 0
    error = 0
    
    while (t < t_max):
        # fourth-order Runge-Kutta stages for the system y0' = y1, y1' = -y0
        A0 = h*y1
        A1 = -h*y0
        B0 = h*(y1 + A1/2.0)
        B1 = -h*(y0 + A0/2.0)
        C0 = h*(y1 + B1/2.0)
        C1 = -h*(y0 + B0/2.0)
        D0 = h*(y1 + C1)
        D1 = -h*(y0 + C0)
        y0 = y0 + (A0 + 2.0*B0 + 2.0*C0 + D0)/6.0
        y1 = y1 + (A1 + 2.0*B1 + 2.0*C1 + D1)/6.0
        t = t + h
        # accumulate the error against the exact solution y(t) = cos(t)
        error = error + np.absolute(np.cos(t) - y0)
        
    error = error / N
    final_err = np.absolute(np.cos(t) - y0)
    
    print("Step Size: %f Error: %e Final Error: %e Slope: %e\n" % (h, error, final_err, y1))
Step Size: 1.000000 Error: 4.583837e-01 Final Error: 5.782189e-01 Slope: 1.156681e-01

Step Size: 0.100000 Error: 8.338424e-05 Final Error: 1.118341e-05 Slope: -4.046181e-02

Step Size: 0.010000 Error: 8.314404e-09 Final Error: 1.990389e-10 Slope: -7.346148e-04

Step Size: 0.001000 Error: 2.024684e-10 Final Error: 3.756995e-13 Slope: -7.346410e-04

Step Size: 0.000100 Error: 2.092032e-09 Final Error: 2.479128e-13 Slope: -3.464102e-05

Step Size: 0.000010 Error: 2.003457e-08 Final Error: 1.837419e-13 Slope: -4.641021e-06
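As a cross-check on the orders of accuracy (not part of the original code), the slope of $\log(\text{error})$ versus $\log(h)$ should come out near 1 for Euler and near 4 for fixed-step fourth-order Runge-Kutta. A minimal sketch over a single period, reusing the same update rules:

In [ ]:
# Fit the convergence order of each method from the final-value error over
# one period of the oscillator (exact solution y = cos(t)).
import numpy as np

def final_error(h, t_max, rk4):
    y0, y1, t = 1.0, 0.0, 0.0
    while t < t_max:
        if rk4:
            A0, A1 = h*y1, -h*y0
            B0, B1 = h*(y1 + A1/2.0), -h*(y0 + A0/2.0)
            C0, C1 = h*(y1 + B1/2.0), -h*(y0 + B0/2.0)
            D0, D1 = h*(y1 + C1), -h*(y0 + C0)
            y0 = y0 + (A0 + 2.0*B0 + 2.0*C0 + D0)/6.0
            y1 = y1 + (A1 + 2.0*B1 + 2.0*C1 + D1)/6.0
        else:
            y0, y1 = y0 + h*y1, y1 - h*y0
        t = t + h
    return abs(np.cos(t) - y0)

hs = np.array([0.02, 0.01, 0.005])
for name, rk4 in [("Euler", False), ("RK4", True)]:
    errs = np.array([final_error(h, 2*np.pi, rk4) for h in hs])
    slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
    print("%s: fitted order = %.2f" % (name, slope))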

(7.3)

Write a fourth-order Runge-Kutta adaptive stepper for the preceding problem, and check how the average step size that it finds depends on the desired local error.

In [25]:
# Adapted from Neil's rkstep.c code

import numpy as np

def step(y0, y1, h):
    # one fourth-order Runge-Kutta step of size h for the system y0' = y1, y1' = -y0
    A0 = h*y1
    A1 = -h*y0
    B0 = h*(y1 + A1/2.0)
    B1 = -h*(y0 + A0/2.0)
    C0 = h*(y1 + B1/2.0)
    C1 = -h*(y0 + B0/2.0)
    D0 = h*(y1 + C1)
    D1 = -h*(y0 + C0)
    y0_final = y0 + (A0 + 2.0*B0 + 2.0*C0 + D0)/6.0
    y1_final = y1 + (A1 + 2.0*B1 + 2.0*C1 + D1)/6.0
    return y0_final, y1_final
    
target_error = [1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9, 1e-10]

for target in target_error:
    t_max = 100*np.pi
    t = 0
    N = 0
    y0 = 1
    y1 = 0
    error = 0
    err_avg = 0
    h = 0.1
    h_avg = 0
    Y1_0 = 0
    Y1_1 = 0
    Y2_0 = 0
    Y2_1 = 0
    Y3_0 = 0
    Y3_1 = 0
    
    while (t < t_max):
        N = N + 1
        # step doubling: take two half steps and one full step, and use the
        # difference between the results as the local error estimate
        Y1_0, Y1_1 = step(y0, y1, (h/2.0))
        Y2_0, Y2_1 = step(Y1_0, Y1_1, (h/2.0))
        Y3_0, Y3_1 = step(y0, y1, h)
        local_err = Y3_0 - Y2_0
        if (np.absolute(local_err) > target):
            # error too large: shrink the step and retry
            h = h / 1.3
            continue
        # accept the two-half-step result
        y0 = Y2_0
        y1 = Y2_1
        t = t + h
        err_avg = err_avg + np.absolute(local_err)
        error = error + np.absolute(np.cos(t) - y0)
        h_avg = h_avg + h
        if (np.absolute(local_err) < (target/10.0)):
            # error comfortably below target: grow the step
            h = 1.2 * h
            
    error = error / N
    h_avg = h_avg / N
    err_avg = err_avg / N
    final_err = np.absolute(np.cos(t) - y0)
    
    print("Target Error: %e Avg. Step Size: %f Avg. Error: %e \n Final Error: %e Avg. Local Error: %e\n" % (target, h_avg, error, final_err, err_avg))
Target Error: 1.000000e+00 Avg. Step Size: 3.022392 Avg. Error: 4.157922e-01 
 Final Error: 9.961353e-01 Avg. Local Error: 2.004020e-01

Target Error: 1.000000e-01 Avg. Step Size: 1.276114 Avg. Error: 1.847182e-01 
 Final Error: 2.959546e-02 Avg. Local Error: 2.643274e-02

Target Error: 1.000000e-02 Avg. Step Size: 0.930514 Avg. Error: 4.379828e-02 
 Final Error: 2.261834e-02 Avg. Local Error: 4.560705e-03

Target Error: 1.000000e-03 Avg. Step Size: 0.577158 Avg. Error: 7.156474e-03 
 Final Error: 2.871883e-03 Avg. Local Error: 4.250934e-04

Target Error: 1.000000e-04 Avg. Step Size: 0.357798 Avg. Error: 1.140976e-03 
 Final Error: 6.769975e-04 Avg. Local Error: 3.970704e-05

Target Error: 1.000000e-05 Avg. Step Size: 0.224816 Avg. Error: 1.862271e-04 
 Final Error: 1.660416e-05 Avg. Local Error: 3.749813e-06

Target Error: 1.000000e-06 Avg. Step Size: 0.147316 Avg. Error: 3.345782e-05 
 Final Error: 4.099719e-07 Avg. Local Error: 4.203484e-07

Target Error: 1.000000e-07 Avg. Step Size: 0.091916 Avg. Error: 5.441627e-06 
 Final Error: 6.130512e-07 Avg. Local Error: 3.893344e-08

Target Error: 1.000000e-08 Avg. Step Size: 0.059073 Avg. Error: 9.443277e-07 
 Final Error: 6.648633e-08 Avg. Local Error: 3.935519e-09

Target Error: 1.000000e-09 Avg. Step Size: 0.038424 Avg. Error: 1.624655e-07 
 Final Error: 7.123475e-09 Avg. Local Error: 4.359935e-10

Target Error: 1.000000e-10 Avg. Step Size: 0.025163 Avg. Error: 2.945216e-08 
 Final Error: 1.614550e-09 Avg. Local Error: 5.213435e-11

(7.4)

Numerically solve the differential equation found in Problem 5.3:

$l\ddot{\theta}+(g+\ddot{z})\sin\theta=0$

Take the motion of the platform to be periodic, and interactively explore the dynamics of the pendulum as a function of the amplitude and frequency of the excitation.

See Neil's pendulum.html for an interactive example of this in motion.
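A minimal non-interactive sketch of the integration (with assumed, illustrative parameter values): take the platform motion to be $z(t) = A\cos(\omega t)$, so $\ddot{z} = -A\omega^2\cos(\omega t)$, and advance $\ddot{\theta} = -\frac{g + \ddot{z}}{l}\sin\theta$ with the same fixed-step fourth-order Runge-Kutta update used above.

In [ ]:
# Driven pendulum  l*theta'' + (g + z'')*sin(theta) = 0  with periodic platform
# motion z(t) = A*cos(w*t).  Parameter values here are illustrative assumptions.
import numpy as np

l, g = 1.0, 9.8      # pendulum length [m], gravity [m/s^2]
A, w = 0.2, 5.0      # drive amplitude [m] and angular frequency [rad/s]

def deriv(t, theta, omega):
    # first-order system: theta' = omega, omega' = -((g + z'')/l) * sin(theta)
    zdd = -A*w**2*np.cos(w*t)
    return omega, -((g + zdd)/l)*np.sin(theta)

def rk4_step(t, theta, omega, h):
    k1t, k1o = deriv(t, theta, omega)
    k2t, k2o = deriv(t + h/2.0, theta + h*k1t/2.0, omega + h*k1o/2.0)
    k3t, k3o = deriv(t + h/2.0, theta + h*k2t/2.0, omega + h*k2o/2.0)
    k4t, k4o = deriv(t + h, theta + h*k3t, omega + h*k3o)
    theta = theta + h*(k1t + 2.0*k2t + 2.0*k3t + k4t)/6.0
    omega = omega + h*(k1o + 2.0*k2o + 2.0*k3o + k4o)/6.0
    return theta, omega

h, t = 0.001, 0.0
theta, omega = 0.1, 0.0   # start slightly displaced, at rest
while t < 20.0:
    theta, omega = rk4_step(t, theta, omega, h)
    t = t + h
print("theta(20) = %f  dtheta/dt(20) = %f" % (theta, omega))

Sweeping $A$ and $\omega$ from here (for instance driving near twice the natural frequency $\sqrt{g/l}$, where parametric resonance appears) gives the interactive exploration the problem asks for.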