NMM 2020 Jake Read does PSET 4

7.2

For a simple harmonic oscillator $\ddot{y} + y = 0$ with initial conditions $y(0) = 1, \dot{y}(0) = 0$, find $y(t)$ from $t = 0$ to $100\pi$

Using the Euler method

From (7.8)
$y(x + h) = (1 + hA)y(x)$
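
where for our oscillator the state is $(y, \dot{y})$ and

$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad \frac{d}{dt}\begin{pmatrix} y \\ \dot{y} \end{pmatrix} = A \begin{pmatrix} y \\ \dot{y} \end{pmatrix}$$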

Best to understand this physically...
$\ddot{y} + y = 0$
$\ddot{y} = - y$
i.e. acceleration $\ddot{y}$ is just the negative of position $y$.

At startup,
$y = \text{position} = 1$
$\dot{y} = \text{velocity} = 0$

To increment (we have position and velocity at the start, so, per step $h$):
$\text{accel} = - \text{position}$
$\text{velocity} \mathrel{+}= \text{accel} \cdot h$
$\text{position} \mathrel{+}= \text{velocity} \cdot h$
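
One step cranked by hand with $h = 0.4$, to check the recipe: $\text{accel} = -1$, then $\text{velocity} = 0 + (-1)(0.4) = -0.4$, then $\text{position} = 1 + (-0.4)(0.4) = 0.84$, versus the exact $\cos(0.4) \approx 0.921$, so the single-step error is already visible at this step size.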

In [2]:
import numpy as np
import math
import matplotlib.pyplot as plt
In [4]:
# ok, to do errors I want this formatted more like:

def runEuler(stepSize):
    steps = np.arange(0, 100*math.pi, stepSize) 
    posns = np.zeros(len(steps))
    vels = np.zeros(len(steps))
    accels = np.zeros(len(steps))
    posns[0] = 1 
    vels[0] = 0 
    
    for i in range(1, len(steps)):
        # in order, get accel from last position, 
        accels[i] = -posns[i - 1] 
        # increment next vel by last and accel, 
        vels[i] = vels[i-1] + accels[i]*stepSize 
        # increment next pos by last and vel, 
        posns[i] = posns[i - 1] + vels[i]*stepSize 
    
    return(steps, posns)

euSet = runEuler(0.4)
plt.plot(euSet[0], euSet[1])
Out[4]:
[plot: Euler-integrated y(t) over 0..100π, h = 0.4]

The Complete Solution

The diffy-q $\ddot{y} + y = 0$ with these initial conditions resolves to $y = \cos(t)$ (check: $\frac{d^2}{dt^2}\cos t = -\cos t$, $\cos 0 = 1$, and $\frac{d}{dt}\cos t \rvert_0 = 0$)

In [8]:
# want a similarly shaped fn to retrieve the correct vals per step size 

def runActual(stepSize):
    steps = np.arange(0,100*math.pi, stepSize)
    actuals = np.zeros(len(steps))
    for i in range(0, len(steps)):
        actuals[i] = math.cos(steps[i])
        
    return(steps, actuals)

actualSet = runActual(0.4)
plt.plot(actualSet[0], actualSet[1] - euSet[1]) ## bb's first np-arr maths, bless up 
Out[8]:
[plot: cos(t) minus the Euler solution, h = 0.4]

Runge-Kutta Fourth Order

Above is the error w/r/t the Euler method.
Now for the Fourth-Order Runge-Kutta (RK4): instead of one slope sample per step, it blends four of them, which buys fourth-order accuracy per step.

for RK4,
$y_{i+1} = y_i + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$

$k_1 = hf(x_i, y_i)$ the slope at $x_i$, times $h$ - just Euler's method
$k_2 = hf(x_i + \frac{1}{2}h, y_i + \frac{1}{2}k_1)$ slope at the half step, using $k_1$
$k_3 = hf(x_i + \frac{1}{2}h, y_i + \frac{1}{2}k_2)$ slope at the half step, using $k_2$
$k_4 = hf(x_i + h, y_i + k_3)$ slope at the full step, using $k_3$

where $f$ returns, for us, the pair (velocity, acceleration) given a current position and velocity, so each $k$ carries a position increment and a velocity increment.

In [11]:
# now a fn for rk4, 

def runRK4(stepSize):
    steps = np.arange(0, 100*math.pi, stepSize)
    posns = np.zeros(len(steps))
    vels = np.zeros(len(steps))
    posns[0] = 1 
    vels[0] = 0 
    # state derivative: f(position, velocity) -> (velocity, acceleration)
    def f(y, v):
        return np.array([v, -y])
    for i in range(1, len(steps)):
        # four slope samples, each pre-scaled by the step size 
        k1 = stepSize*f(posns[i-1], vels[i-1])
        k2 = stepSize*f(posns[i-1] + k1[0]/2, vels[i-1] + k1[1]/2)
        k3 = stepSize*f(posns[i-1] + k2[0]/2, vels[i-1] + k2[1]/2)
        k4 = stepSize*f(posns[i-1] + k3[0], vels[i-1] + k3[1])
        # weighted blend, per the update rule above 
        posns[i] = posns[i-1] + (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        vels[i] = vels[i-1] + (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    
    return(steps, posns)

rkResults = runRK4(0.4)
plt.plot(rkResults[0], rkResults[1])
plt.plot(rkResults[0], actualSet[1] - rkResults[1])
Out[11]:
[plot: RK4 y(t), and cos(t) minus RK4, h = 0.4]

Comparing Results, 0.4 Step

In [13]:
plt.plot(actualSet[0], actualSet[1] - rkResults[1])
plt.plot(actualSet[0], actualSet[1] - euSet[1])
Out[13]:
[plot: RK4 error and Euler error vs. t, h = 0.4]

Differences in Errors

I find that w/ a timestep around 0.1 (my original choice) the differences between RK4 and Euler were v. small. W/ a timestep near 0.4, RK4 is much better, as noted above. I've reformatted the code into functions of step size, so I can plot errs over a band of step sizes...

In [50]:
stepRange = np.arange(0.025, 2, 0.025) 
euErrAvgs = np.zeros(len(stepRange))
euFinalErrs = np.zeros(len(stepRange))
rkErrAvgs = np.zeros(len(stepRange))
rkFinalErrs = np.zeros(len(stepRange))

for i in range(0, len(stepRange)):
    aRes = runActual(stepRange[i])
    euRes = runEuler(stepRange[i])
    rkRes = runRK4(stepRange[i])
    # compute the final errors, just last elements:
    euFinalErrs[i] = aRes[1][-1] - euRes[1][-1]
    rkFinalErrs[i] = aRes[1][-1] - rkRes[1][-1]
    # compute the average errors, 
    euErrSum = 0 
    rkErrSum = 0 
    for j in range(0, len(aRes[0])):
        euErrSum += abs(aRes[1][j] - euRes[1][j])
        rkErrSum += abs(aRes[1][j] - rkRes[1][j])
    euErrAvgs[i] = euErrSum / len(aRes[0])
    rkErrAvgs[i] = rkErrSum / len(aRes[0])

Final Errors

In [51]:
plt.plot(stepRange, euFinalErrs)
plt.plot(stepRange, rkFinalErrs)
Out[51]:
[plot: final errors vs. step size, Euler and RK4]

Average Errors

In [52]:
plt.plot(stepRange, euErrAvgs)
plt.plot(stepRange, rkErrAvgs)
Out[52]:
[plot: average errors vs. step size, Euler and RK4]
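
One check worth adding (a sketch, not run here): replot the average errors on log-log axes; the slope approximates each method's global order, roughly 1 for Euler and 4 for RK4, until the steps get too coarse to track the oscillator at all.

# convergence check on the sweep above: slope on log-log axes ~ method order
plt.loglog(stepRange, euErrAvgs, label='euler')
plt.loglog(stepRange, rkErrAvgs, label='rk4')
plt.legend()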

7.3

Write a fourth-order Runge-Kutta adaptive stepper, and check how the average step size that it finds depends on local error.

The difference between a single step and two half-steps can be used to guide an automatic stepper routine.
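
Below I end up using a cheaper RK4-vs-Euler difference, but for reference, a minimal sketch of that half-step estimate (rk4_step and half_step_error are names I'm making up here; not run in this notebook):

def rk4_step(f, y, h):
    # one classical RK4 step for y' = f(y), y an np.array state
    k1 = h*f(y)
    k2 = h*f(y + k1/2)
    k3 = h*f(y + k2/2)
    k4 = h*f(y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6

def half_step_error(f, y, h):
    # the difference between one full step and two half steps
    # estimates the local truncation error of the full step
    full = rk4_step(f, y, h)
    half = rk4_step(f, rk4_step(f, y, h/2), h/2)
    return half, np.linalg.norm(full - half)

# e.g. for the oscillator: f = lambda y: np.array([y[1], -y[0]])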

In [58]:
# I'm going to re-write this,

def runRK4LE(stepSize):
    steps = np.arange(0, 10*math.pi, stepSize)
    posns = np.zeros(len(steps))
    lerrs = np.zeros(len(steps))
    vels = np.zeros(len(steps))
    posns[0] = 1 
    vels[0] = 0 
    def f(y, v):
        return np.array([v, -y])
    for i in range(1, len(steps)):
        k1 = stepSize*f(posns[i-1], vels[i-1])
        k2 = stepSize*f(posns[i-1] + k1[0]/2, vels[i-1] + k1[1]/2)
        k3 = stepSize*f(posns[i-1] + k2[0]/2, vels[i-1] + k2[1]/2)
        k4 = stepSize*f(posns[i-1] + k3[0], vels[i-1] + k3[1])
        posns[i] = posns[i-1] + (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        # local err is the RK4 estimate (assumed to be better) 
        # against the K1-alone estimate (assumed to be worse)
        lerrs[i] = posns[i] - (posns[i-1] + k1[0])
        vels[i] = vels[i-1] + (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    
    return(steps, posns, lerrs)

rkLErrRes = runRK4LE(0.2)

plt.plot(rkLErrRes[0], rkLErrRes[1])
plt.plot(rkLErrRes[0], rkLErrRes[2])
Out[58]:
[plot: RK4 y(t) and local-error estimate, h = 0.2]

Neat

What I expected was that the local error would be higher near the peaks of the oscillator, and this looks more or less correct... the orange plot is my local error calc, blue is the RK4 output. To do better I would measure this again against the actual $\cos(t)$.
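
That comparison is a one-liner with the arrays already returned (a sketch, not run):

# true error of the h = 0.2 RK4 run, via numpy's vectorized cos
plt.plot(rkLErrRes[0], np.cos(rkLErrRes[0]) - rkLErrRes[1])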

In [80]:
# Now I should try to make a stepping rule, and pick an error threshold.
# lower bound is where errors are small enough, step size can increase 
# upper bound is where errors are too large, step size should decrease 

def runRK4AS(lBnd, uBnd, iterLen):
    # pre-allocate hella zeros, 
    steps = np.zeros(iterLen)
    stepSizes = np.zeros(iterLen)
    posns = np.zeros(iterLen)
    vels = np.zeros(iterLen)
    posns[0] = 1 
    vels[0] = 0 
    # begin w/ 
    stepSize = 0.2 
    t = 0 
    def f(y, v):
        return np.array([v, -y])
    for i in range(1, iterLen):
        def take():
            k1 = stepSize*f(posns[i-1], vels[i-1])
            k2 = stepSize*f(posns[i-1] + k1[0]/2, vels[i-1] + k1[1]/2)
            k3 = stepSize*f(posns[i-1] + k2[0]/2, vels[i-1] + k2[1]/2)
            k4 = stepSize*f(posns[i-1] + k3[0], vels[i-1] + k3[1])
            posns[i] = posns[i-1] + (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
            # local err is the RK4 estimate (assumed to be better) 
            # against the K1-alone estimate (assumed to be worse)
            vels[i] = vels[i-1] + (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
            # return local err for this step, 
            return (posns[i] - (posns[i-1] + k1[0]))
        # test; the bounds are on the error magnitude, so compare abs(lerr) 
        lerr = abs(take())
        # too good? increase step size, 
        if lerr < lBnd:
            # small steps up, 
            stepSize += 0.005
            lerr = abs(take()) 
        elif lerr > uBnd:
            # big steps back, 
            stepSize -= 0.01 # these should probably be set w/r/t uBnd, lBnd vals 
            lerr = abs(take()) 
        # assuming we passed those guards, we r gucci 2 conn 
        t += stepSize 
        stepSizes[i] = stepSize
        steps[i] = t 
    return(steps, posns, stepSizes)

rkASRes = runRK4AS(0.01, 0.05, 1000)
print('fin')
plt.plot(rkASRes[0], rkASRes[1])
plt.plot(rkASRes[0], rkASRes[2])
fin
Out[80]:
[plot: adaptive-stepper y(t) and step sizes vs. t]

Neat

I'm pretty sure this is blind luck, but this looks a lot like it works:

In [81]:
cut = 100
plt.plot(rkASRes[0][0:cut], rkASRes[1][0:cut])
plt.plot(rkASRes[0][0:cut], rkASRes[2][0:cut])
Out[81]:
[plot: first 100 adaptive steps, y and step size]
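
To check it properly, and to answer the actual question (how the average step size depends on the local-error band), something like this sketch would do; the band values here are just guesses:

# true error at the accumulated step times
plt.plot(rkASRes[0], np.cos(rkASRes[0]) - rkASRes[1])

# sweep the error band, record the mean step size the stepper settles on
# (skip index 0, which is never written)
for lBnd, uBnd in [(0.001, 0.005), (0.005, 0.025), (0.01, 0.05), (0.05, 0.25)]:
    res = runRK4AS(lBnd, uBnd, 1000)
    print(lBnd, uBnd, np.mean(res[2][1:]))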

7.4

Numerically solve: $l\ddot{\theta} + (g + \ddot{z})\sin\theta = 0$
With Euler, I think I can just take $\ddot{z}$ as an input, assume $\ddot{z}, \dot{z}$ and $z$ to be $0$ at startup, take some nonzero $\theta$ at startup (at $\theta = 0$, $\sin\theta = 0$ and nothing ever moves), and then step with:
$\ddot{\theta} = -(g + \ddot{z})\sin\theta / l$

In [93]:
# z, vertical pendulum-pivot position, is driving, so we can do:
tInc = 0.01 
tSteps = np.arange(0, 20, tInc) # 20 s, 0.01 steps (euler bad, be careful)
def writeZddot(times, amp, freq):
    zddot = np.zeros(len(times))
    for t in range(0, len(times)):
        zddot[t] = amp*math.sin(freq*times[t])
    return zddot 

zddot = writeZddot(tSteps, 2.2, 1)
plt.plot(tSteps, zddot)

# start with
tddot = np.zeros(len(tSteps))
tdot = np.zeros(len(tSteps))
theta = np.zeros(len(tSteps))
tddot[0] = 0 # angular accel zero, 
tdot[0] = 0 # angular vel zero 
theta[0] = math.pi / 3 # some angular position, 

for i in range(1, len(tSteps)):
    ## calc each from last; g = 9.8, l = 1.1, sign per the equation above 
    tddot[i] = (-(9.8 + zddot[i-1])*math.sin(theta[i-1]))/1.1
    tdot[i] = tdot[i-1] + tddot[i-1]*tInc 
    theta[i] = theta[i-1] + tdot[i-1]*tInc 
    
plt.plot(tSteps, theta)
Out[93]:
[plot: zddot drive and theta(t)]

Neat

OK, this looks like it qualitatively does stuff (an important scientific metric), so I'm going to tool around w/ it in JS. Of note: that runaway condition is not actually trouble, it's the pendulum swinging up due to properly timed excitation. Neato.
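
If I wanted to back up that swing-up reading before porting, a quick sketch (assuming the arrays above, energy per unit mass, and treating gravity as constant even though the pivot accelerates):

# pendulum energy per unit mass: rotational kinetic + gravitational
# a sustained upward trend says the drive is pumping energy in
l, g = 1.1, 9.8
energy = 0.5*(l*tdot)**2 - g*l*np.cos(theta)
plt.plot(tSteps, energy)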
