Chapter 6 - Random systems¶

(6.1) (a) Work out the first three cumulants $C_1, C_2, C_3$¶

Expanding the exponential on the RHS of $(6.25)$, and writing $y = \sum_{n=1}^{\infty}\frac{(ik)^n}{n!}C_n$ for the cumulant series, we have $$1 + y + \frac{y^2}{2!} + \frac{y^3}{3!} + \dots = 1 + \sum_{n=1}^{\infty}\frac{(ik)^n}{n!}\langle x^n \rangle$$

or $$\big(ikC_1 + \frac{(ik)^2}{2!}C_2 + \frac{(ik)^3}{3!}C_3 + \dots \big) + \frac{1}{2!}\big( ikC_1 + \frac{(ik)^2}{2!}C_2 +\dots \big)^2 + \dots = ik\langle x\rangle + \frac{(ik)^2}{2!}\langle x^2 \rangle + \frac{(ik)^3}{3!}\langle x^3 \rangle +\dots$$

or (grouping terms by order of $k$) $$(ik)[C_1] + \frac{(ik)^2}{2!}[C_2 + C_1^2] + \frac{(ik)^3}{3!}[C_3 + 3C_1C_2 + C_1^3] + \dots = ik\langle x\rangle + \frac{(ik)^2}{2!}\langle x^2 \rangle + \frac{(ik)^3}{3!}\langle x^3 \rangle +\dots$$

Asserting that the terms with the same power of $k$ are equal on the left and right sides of the equation, we obtain \begin{align} C_1 &= \langle x \rangle \\ C_2 &= \langle x^2 \rangle - C_1^2 = \langle x^2 \rangle - \langle x \rangle^2 \\ C_3 &= \langle x^3 \rangle - 3C_1C_2 - C_1^3 =\langle x^3 \rangle - 3\langle x \rangle \langle x^2 \rangle + 2\langle x \rangle^3 \end{align}
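The coefficient matching above can be reproduced symbolically; a small SymPy sketch (not part of the original derivation) that truncates the cumulant expansion at third order and reads off the moments:

```python
import sympy as sp

k, C1, C2, C3 = sp.symbols('k C1 C2 C3')
# characteristic function with the cumulant series truncated at third order
phi = sp.exp(sp.I*k*C1 + (sp.I*k)**2/2*C2 + (sp.I*k)**3/6*C3)
poly = phi.series(k, 0, 4).removeO().expand()
# <x^n> is the coefficient of (ik)^n / n!
m1 = sp.simplify(poly.coeff(k, 1) * sp.factorial(1) / sp.I**1)  # C1
m2 = sp.simplify(poly.coeff(k, 2) * sp.factorial(2) / sp.I**2)  # C1^2 + C2
m3 = sp.simplify(poly.coeff(k, 3) * sp.factorial(3) / sp.I**3)  # C1^3 + 3 C1 C2 + C3
print(m1, m2, m3)
```

Inverting these three relations reproduces the cumulant formulas above.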

(b) Evaluate the first three cumulants for a Gaussian distribution¶

For a Gaussian distribution with mean $\mu$ and variance $\sigma^2$ we have $p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-(x-\mu)^2/2\sigma^2}$. Then, \begin{align} \langle x \rangle &= \int_{-\infty}^{+\infty}xp(x)dx = \dots = \mu \\ \langle x^2 \rangle &= \int_{-\infty}^{+\infty}x^2p(x)dx = \dots = \mu^2 + \sigma^2 \\ \langle x^3 \rangle &= \int_{-\infty}^{+\infty}x^3p(x)dx = \dots =\mu^3 + 3\mu\sigma^2. \end{align}

Plugging the above to the cumulant equations calculated in (a) \begin{align} C_1 &= \mu \\ C_2 &= (\mu^2 + \sigma^2) - \mu^2 = \sigma^2 \\ C_3 &= (\mu^3 +3\mu\sigma^2) - 3\mu\sigma^2 - \mu^3 = 0 \end{align}
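The moment integrals elided with dots above can also be checked symbolically; a SymPy sketch:

```python
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
sigma = sp.symbols('sigma', positive=True)
# normalized Gaussian with mean mu and variance sigma^2
p = sp.exp(-(x - mu)**2 / (2*sigma**2)) / sp.sqrt(2*sp.pi*sigma**2)
m1 = sp.integrate(x * p, (x, -sp.oo, sp.oo)).simplify()     # mu
m2 = sp.integrate(x**2 * p, (x, -sp.oo, sp.oo)).simplify()  # mu^2 + sigma^2
m3 = sp.integrate(x**3 * p, (x, -sp.oo, sp.oo)).simplify()  # mu^3 + 3 mu sigma^2
# third cumulant from the relations in (a)
C3 = sp.simplify(m3 - 3*m1*(m2 - m1**2) - m1**3)
print(C3)  # 0
```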

TBD

TBD

(6.3) (a) For an order 4 maximal LFSR write down the bit sequence.¶

For an order $4$ maximal LFSR, we see from Table (6.1) that the taps are $1$ and $4$, and thus the recursion is $$x_n = x_{n-1} + x_{n-4} \pmod 2.$$

The following Python script calculates the bit sequence.

In [11]:
from tabulate import tabulate

# x_prev = [0, 0, 0, 0]  # the all-zeros state is forbidden: the LFSR would never leave it
x_prev = [0, 0, 0, 1]
x = 0
code = []
while True:
    x = (x_prev[0] + x_prev[3]) % 2   # x_n = x_{n-1} + x_{n-4} (mod 2)
    code.append([x] + x_prev)
    x_prev.pop()
    x_prev.insert(0, x)

    if x_prev == [0, 0, 0, 1]:        # back at the seed state: full period reached
        break
code.append(code[0])
print("Bit sequence for order 4 maximal LFSR:")
print(tabulate(code, headers=["n", "n-1", "n-2", "n-3", "n-4"],
               tablefmt="rst", numalign="center", stralign="center"))

Bit sequence for order 4 maximal LFSR:
===  =====  =====  =====  =====
n    n-1    n-2    n-3    n-4
===  =====  =====  =====  =====
1     0      0      0      1
1     1      0      0      0
1     1      1      0      0
1     1      1      1      0
0     1      1      1      1
1     0      1      1      1
0     1      0      1      1
1     0      1      0      1
1     1      0      1      0
0     1      1      0      1
0     0      1      1      0
1     0      0      1      1
0     1      0      0      1
0     0      1      0      0
0     0      0      1      0
1     0      0      0      1
===  =====  =====  =====  =====
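As a quick extra check (a sketch, not part of the assigned script): a maximal LFSR of order $m$ must visit all $2^m - 1$ nonzero states before repeating, so the order-4 register should have period $15$, matching the table above.

```python
# count the distinct states of the order-4 LFSR x_n = x_{n-1} + x_{n-4} (mod 2)
state = [0, 0, 0, 1]
seen = set()
while tuple(state) not in seen:
    seen.add(tuple(state))
    state = [(state[0] + state[3]) % 2] + state[:3]
print(len(seen))  # 15 = 2**4 - 1
```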


(b) If an LFSR has a clock rate of $1 GHz$, how long must the register be for the time between repeats to be the age of the universe ($∼10^{10}$ years)?¶

The number of steps before a repeat for a register with $L$ stages is $2^L-1$. Assuming each update takes exactly $1$ clock cycle (e.g. in a RISC architecture), then we want \begin{align} (2^L-1)\cdot 10^{-9}\text{ seconds} &\simeq 10^{10}\text{ years} \cdot 365 \,\frac{\text{days}}{\text{year}}\cdot 24 \,\frac{\text{hours}}{\text{day}} \cdot 3600\,\frac{\text{seconds}}{\text{hour}} \simeq 3.2\cdot 10^{17}\text{ seconds} \\ 2^L &\simeq 3.2\cdot 10^{26} = 2^5\cdot 10^{25} \sim 2^5 \cdot 2^{83} = 2^{88} \Rightarrow \\ L &\simeq 88 \end{align}
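The order-of-magnitude estimate can be checked directly:

```python
import math

seconds = 10**10 * 365 * 24 * 3600   # age of the universe in seconds, ~3.2e17
cycles = seconds * 10**9             # clock ticks at 1 GHz
L = math.log2(cycles)                # register length with 2^L - 1 ~ cycles
print(round(L, 2))                   # ~88, so an ~88-stage register suffices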

(6.4) (a) Use a Fourier transform to solve the diffusion equation (6.57) (assume that the initial condition is a normalized delta function at the origin).¶

For the Fourier transform (and the inverse) of a function $p(x,t)$ we know \begin{align} \mathcal{F}\{p(x,t)\} = \int_{-\infty}^{+\infty}e^{i\omega x}p(x,t)dx = P(\omega, t) \\ \mathcal{F}^{-1}\{P(\omega,t)\} = \frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-i\omega x}P(\omega,t)d\omega = p(x, t). \end{align}

Applying the Fourier transform in equation (6.57) yields

\begin{align} \int_{-\infty}^{+\infty}e^{i\omega x}\frac{\partial p(x,t)}{\partial t}dx &= \int_{-\infty}^{+\infty}e^{i\omega x}D \frac{\partial^2 p(x,t)}{\partial x^2}dx \\ \frac{\partial P(\omega,t)}{\partial t} &= D \,\mathcal{F}\Big\{\frac{\partial^2 p(x,t)}{\partial x^2}\Big\} = D (-\omega^2) P(\omega,t) \\ \frac{\partial P}{\partial t} + \omega^2DP &= 0 \end{align}

This is a first-order linear differential equation in $t$, easily solved by plugging in the Ansatz $P = Ae^{-\lambda t}$

\begin{align} -\lambda A e^{-\lambda t} + \omega^2DAe^{-\lambda t} &=0 \\ Ae^{-\lambda t}[-\lambda + \omega^2 D] &= 0 \Rightarrow (\text{as } A, e^{-\lambda t} \neq 0)\\ \lambda &= \omega^2 D \end{align}

Thus, $P = Ae^{-\omega^2 Dt}$. To find the value of the arbitrary constant $A$ we will use the initial condition. We have,

$$p(x,0) = \delta(x) \Rightarrow P(\omega, 0) = \int_{-\infty}^{+\infty}e^{i\omega x}\delta(x)dx = 1 \Rightarrow A = 1,$$

which results in $P(\omega, t) = e^{-\omega^2Dt}$.

To switch back to the spatial domain, we will use the Fourier transform pair of a normalized Gaussian from Derpanis, 2005, i.e. $$G(\omega) = e^{\frac{-\omega^2\sigma^2}{2}} \to g(x) = \frac{1}{\sigma\sqrt{2\pi}}e^\frac{-x^2}{2\sigma^2}$$

In our case, comparing $G(\omega)$ with $P(\omega,t)$, we have $\sigma^2 = 2Dt$.

Conclusively, substituting in the pair above, the solution of the differential equation is $$p(x,t) = \frac{1}{\sqrt{4\pi Dt}}e^{\frac{-x^2}{4Dt}}$$
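One can confirm with SymPy that this $p(x,t)$ is normalized and satisfies the diffusion equation (a sketch):

```python
import sympy as sp

x = sp.symbols('x', real=True)
t, D = sp.symbols('t D', positive=True)
# candidate solution p(x, t) from above
p = sp.exp(-x**2 / (4*D*t)) / sp.sqrt(4*sp.pi*D*t)
norm = sp.integrate(p, (x, -sp.oo, sp.oo))                    # should be 1
residual = sp.simplify(sp.diff(p, t) - D * sp.diff(p, x, 2))  # should vanish
print(norm, residual)  # 1 0
```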

(b) What is the variance as a function of time?¶

As calculated in (a), the variance is $\sigma^2 = 2Dt$.

From equation (6.71) we can connect the mean squared position with the viscosity $\mu$: $$\langle x^2 \rangle = \frac{kT}{3\pi \mu a}t$$

Computing the mean square from the solution found in (a), using the result of (6.1b) and the fact that the mean is zero, we have \begin{align} \langle x^2 \rangle = \langle x \rangle^2 + \sigma^2 = 2Dt \end{align}

Equating the two expressions for $\langle x^2 \rangle$ gives $$D = \frac{kT}{6\pi\mu a}$$
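As an illustration of the scale this relation implies (the numbers below are assumed example values, not from the text): for a bead of radius $0.5\,\mu\text{m}$ in water at room temperature,

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K (assumed)
mu = 1.0e-3         # viscosity of water, Pa*s (assumed)
a = 0.5e-6          # bead radius, m (assumed)
D = kB * T / (6 * math.pi * mu * a)
print(f"D = {D:.2e} m^2/s")  # ~4.4e-13 m^2/s
```

i.e. such a bead diffuses about a micron in a couple of seconds.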

(d) Write a program (including the random number generator) to plot the position as a function of time of a random walker in $1D$ that at each time step has an equal probability of making a step of $\pm 1$. Plot an ensemble of $10$ trajectories, each $1000$ points long, and overlay error bars of width $3\sigma(t)$ on the plot.¶

To calculate the width of the error bars we need the standard deviation of the walker's position as a function of time, given the step distribution.

From (6.57) and the definition of the diffusion coefficient, we have $$D = \frac{\langle\delta^2\rangle}{2\tau}.$$

In our case, $\delta$ takes the values $\pm 1$ with probability $1/2$ each, and thus $\langle\delta^2\rangle = 1^2\cdot \frac{1}{2} + (-1)^2 \cdot \frac{1}{2} = 1$. Therefore, since $\tau =1$, we have $D=\frac{1}{2}$.

Now, from (a), we know that $\sigma^2 = 2Dt$, which, plugging in $D$ from above, gives a standard deviation of $\sigma = \sqrt{t}$.

Thus, we have to plot error bars of total width $3\sqrt{t}$, i.e. from $-1.5\sqrt{t}$ to $+1.5\sqrt{t}$.

In [49]:
import matplotlib.pyplot as plt
%matplotlib inline
import random
import time
import numpy as np

N_TRAJEC = 10
N_POINTS = 1000

random.seed(time.time())  # seed once; re-seeding on every call would correlate the draws

def random_sign():
    # return +1 or -1 with equal probability
    if random.uniform(1, 1000000) > 500000:
        return -1
    else:
        return 1

ax = plt.axes()
for i in range(N_TRAJEC):
    x = [0]
    for j in range(N_POINTS):
        x.append(x[-1] + random_sign())

    ax.plot(x)

sigma = np.sqrt(range(N_POINTS + 1))
ax.errorbar(range(N_POINTS + 1), np.zeros_like(x), yerr=1.5*sigma, alpha=.2)
fig = plt.gcf()
_ = fig.suptitle("10 random walks")


(e) What fraction of the trajectories should be contained in the error bars?¶

We can find the fraction of the distribution inside the error bars by integrating the distribution between their edges: $$\int_{-1.5\sqrt{t}}^{1.5\sqrt{t}}\frac{1}{\sqrt{2\pi t}}e^{\frac{-x^2}{2t}}dx = \text{ (computed in Python below) } = 0.866385597462284$$

In [81]:
# Computing the integral above using SymPy
import sympy as sm
x, t = sm.symbols('x t', positive=True, real=True)
# declaring the symbols positive and real lets SymPy simplify the square roots
p = 1/sm.sqrt(2*sm.pi*t)*sm.exp((-x**2)/(2*t))
sm.simplify(sm.integrate(p, (x, -1.5*sm.sqrt(t), 1.5*sm.sqrt(t)))).evalf()

Out[81]:
0.866385597462284
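The same number follows in closed form from the error function, since for a zero-mean Gaussian $\int_{-c\sigma}^{+c\sigma} p(x)\,dx = \mathrm{erf}(c/\sqrt{2})$ with $c = 1.5$ here; a quick check:

```python
import math

# fraction of a Gaussian within +/- 1.5 standard deviations
frac = math.erf(1.5 / math.sqrt(2))
print(frac)  # ~0.8664, matching the SymPy result above
```

So roughly $87\%$ of the trajectories should stay inside the error bars at any given time.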