(14.1)

Find the first five diagonal Pade approximants $[1/1],\dots,[5/5]$ to $e^x$ around the origin. Remember that the numerator and denominator can be multiplied by a constant to make the numbers as convenient as possible. Evaluate the approximations at $x = 1$ and compare with the correct value of $e = 2.718281828459045$. How is the error improving with the order? How does that compare to the polynomial error?
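A sketch of the construction used in the cell below: write the $[N/M]$ approximant as a ratio of polynomials and require its Taylor expansion to match that of $e^x$ (coefficients $c_n = 1/n!$) through order $N+M$,

$$\frac{\sum_{n=0}^{N} a_n x^n}{1 + \sum_{m=1}^{M} b_m x^m} = \sum_{n=0}^{N+M} c_n x^n + O\left(x^{N+M+1}\right).$$

Matching the powers $x^{N+1},\dots,x^{N+M}$ gives a linear system for the denominator coefficients $b_m$ (the matrix $C$ in the code), after which the numerator follows directly as $a_n = c_n + \sum_{m=1}^{n} b_m c_{n-m}$.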

In [2]:
# From Neil's pade.py code

import numpy as np
np.set_printoptions(precision=1, suppress=True)

for N in range(1, 6):
    M = N
    # Taylor coefficients of e^x: c[n] = 1/n!
    c = 1/np.hstack([1, np.cumprod(np.arange(1, N + M + 1))])
    # assemble the M x M system that makes the terms x^(N+1)..x^(N+M) vanish
    C = c[N + 1 - 1:N + 1 - M - 1:-1].reshape(1, M)
    for l in range(N + 2, N + M + 1):
        C = np.vstack([C, c[l - 1:l - M - 1:-1]])
        
    # denominator coefficients, with b[0] fixed at 1
    b = -np.linalg.inv(C)@c[N + 1:N + M + 1]
    b = np.hstack([1, b])
    
    # numerator coefficients from matching the terms x^0..x^N
    a = np.zeros(N + 1)
    a[0] = c[0]
    for n in range(1, N + 1):
        a[n] = c[n]
        for m in range(1, n + 1):
            a[n] += b[m]*c[n-m]
            
    # evaluate the order N+M Taylor polynomial and the [N/N] Pade approximant at x = 1
    x = 1
    xp = x**(np.arange(N + M + 1))
    y = np.dot(c, xp)
    poly_error = np.exp(x) - y
    xp = x**(np.arange(N + 1))
    y = np.dot(a, xp)/np.dot(b, xp)
    pade_error = np.exp(x) - y
    
    print("Order: ", N)
    print("  Polynomial Error: %.3g" % poly_error)
    print("  Pade Error: %.3g" % pade_error)
    print("  a: ", a/np.min(np.abs(a)))
    print("  b: ", b/np.min(np.abs(a)))
Order:  1
  Polynomial Error: 0.218
  Pade Error: -0.282
  a:  [2. 1.]
  b:  [ 2. -1.]
Order:  2
  Polynomial Error: 0.00995
  Pade Error: 0.004
  a:  [12.  6.  1.]
  b:  [12. -6.  1.]
Order:  3
  Polynomial Error: 0.000226
  Pade Error: -2.8e-05
  a:  [120.  60.  12.   1.]
  b:  [120. -60.  12.  -1.]
Order:  4
  Polynomial Error: 3.06e-06
  Pade Error: 1.1e-07
  a:  [1680.  840.  180.   20.    1.]
  b:  [1680. -840.  180.  -20.    1.]
Order:  5
  Polynomial Error: 2.73e-08
  Pade Error: -2.77e-10
  a:  [30240. 15120.  3360.   420.    30.     1.]
  b:  [ 30240. -15120.   3360.   -420.     30.     -1.]

Both errors fall faster than exponentially with the order, but the Padé error improves faster still: by $N = 5$ it is roughly a hundred times smaller than the error of the Taylor polynomial of the same total order $N + M$.
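As a check, the diagonal Padé approximants to $e^x$ have the known closed form

$$a_n = \frac{(2N - n)!\,N!}{(2N)!\,n!\,(N - n)!}, \qquad b_n = (-1)^n a_n,$$

which, after normalizing by the smallest coefficient as in the printout, reproduces the integer vectors above (e.g. $[120, 60, 12, 1]$ for $N = 3$).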

(14.2)

Take as a data set $x = \{-10, -9, \dots, 9, 10\}$, and $y(x) = 0$ if $x \le 0$ and $y(x) = 1$ if $x \gt 0$.

(a) Fit the data with a polynomial with 5, 10, and 15 terms, using a pseudo-inverse of the Vandermonde matrix.

In [11]:
# From Neil's fit.py code

import matplotlib.pyplot as plt
import numpy as np

fit_range = [5, 10, 15]
x_fit = np.arange(-10, 11).reshape((-1, 1))
y_fit = 1*(x_fit > 0)
# half-integer points for the out-of-sample test in part (c)
x_cross = np.arange(-10.5, 11).reshape((-1, 1))
y_cross = 1*(x_cross > 0)
# dense grid for plotting the fitted curves
x_test = np.arange(-10.5, 10.6, 0.1).reshape((-1, 1))
plt.ion()

poly_error = []
plt.plot(x_fit, y_fit, '.-', markersize=10)

for n_fit in fit_range:
    # Vandermonde design matrix on the fit points: column i holds x**i
    A = np.ones(x_fit.shape)
    for i in range(1, n_fit):
        A = np.hstack([A, x_fit**i])
    coeff = np.linalg.pinv(A)@y_fit
    
    # out-of-sample error at the half-integer cross-validation points
    A = np.ones(x_cross.shape)
    for i in range(1, n_fit):
        A = np.hstack([A, x_cross**i])
    poly_error.append(np.std(y_cross - A@coeff))
    
    A = np.ones(x_test.shape)
    for i in range(1, n_fit):
        A = np.hstack([A, x_test**i])
        
    plt.plot(x_test, A@coeff, markersize=10, label=str(n_fit)+' terms')
    
plt.xlim(-11, 11)
plt.ylim(-0.5, 1.5)
plt.title("Polynomial Fit")
plt.legend()
plt.show()
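As an aside, the column-by-column loops above build a Vandermonde matrix, which is badly conditioned at high order; that is why the SVD-based pseudo-inverse `np.linalg.pinv` is used rather than the normal equations. The same matrix can also be built in one call (a sketch of the equivalence, assuming NumPy's `increasing=True` option; not used for the results here):

# column i holds x**i for i = 0..n_fit-1, matching the hstack loop above
A = np.vander(x_fit.ravel(), n_fit, increasing=True)
coeff = np.linalg.pinv(A)@y_fit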

(b) Fit the data with 5, 10, and 15 $r^3$ RBFs uniformly distributed between $x = -10$ and $x = 10$.
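Here the model is $f(x) = \sum_i a_i\,|x - c_i|^3$ with the centers $c_i$ uniformly spaced on $[-10, 10]$; since $|r|^3 = (r^2)^{3/2}$, the code builds each column of the design matrix from the squared distance.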

In [12]:
rbf_error = []
plt.plot(x_fit, y_fit, '.-', markersize=10)

for n_fit in fit_range:
    centers = -10 + 20*np.arange(n_fit)/(n_fit - 1)
    
    r2 = (x_fit - centers[0])**2
    A = (r2**(3/2)).reshape(-1, 1)  # first column is also r^3, like the rest
    for i in range(1, n_fit):
        r2 = (x_fit - centers[i])**2
        A = np.hstack([A, r2**(3/2)])
    coeff = np.linalg.pinv(A)@y_fit
    
    r2 = (x_cross - centers[0])**2
    A = (r2**(3/2)).reshape(-1, 1)
    for i in range(1, n_fit):
        r2 = (x_cross - centers[i])**2
        A = np.hstack([A, r2**(3/2)])
    rbf_error.append(np.std(y_cross - A@coeff))
    
    r2 = (x_test - centers[0])**2
    A = (r2**(3/2)).reshape(-1, 1)
    for i in range(1, n_fit):
        r2 = (x_test - centers[i])**2
        A = np.hstack([A, r2**(3/2)])
        
    plt.plot(x_test, A@coeff, markersize=10, label=str(n_fit)+' terms')
    
plt.xlim(-11, 11)
plt.ylim(-0.5, 1.5)
plt.title("RBF Fit")
plt.legend()
plt.show()

(c) Using the coefficients found for these six fits, evaluate the total out-of-sample error at $x = \{-10.5, -9.5, \dots, 9.5, 10.5 \}$.

In [13]:
plt.plot(fit_range, poly_error, '.-', markersize=10, label='polynomial')
plt.plot(fit_range, rbf_error, '.-', markersize=10, label='RBF')
plt.title("Out-of-Sample Error")
plt.legend()
plt.show()

(d) Using a 15th-order polynomial, fit the data with the curvature regularizer in equation (14.53), plot the fits, and compare the out-of-sample errors for $\lambda = 0, 0.01, 0.1, 1$ (this part is harder than the others).
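Sketching the algebra that the cell below implements: with $f(x) = \sum_{m=0}^{M-1} a_m x^m$ on the rescaled interval $x \in [-1, 1]$, minimizing the regularized cost

$$I = \sum_n \left[y_n - f(x_n)\right]^2 + \lambda \int_{-1}^{1} \left(\frac{d^2 f}{dx^2}\right)^2 dx$$

by setting $\partial I / \partial a_l = 0$ gives the normal equations

$$\sum_n y_n x_n^l = \sum_m a_m \left[\sum_n x_n^{l+m} + \lambda\, l(l-1)\,m(m-1)\,\frac{1 - (-1)^{l+m-3}}{l+m-3}\right],$$

i.e. $(B + C)\,\vec{a} = \vec{A}$ in the notation of the code, with the curvature term vanishing for $l, m < 2$.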

In [27]:
M = 15
lambd = [0, 0.01, 0.1, 1]

plt.plot(x_fit, y_fit, '.-', markersize=10)

for i in range(len(lambd)):
    # A[l] = sum_n y_n x_n^l and B[l,m] = sum_n x_n^(l+m), with x rescaled to [-1, 1]
    A = np.zeros((M, 1))
    for l in range(M):
        A[l] = np.sum(y_fit*(x_fit/10)**l)
        
    B = np.zeros((M, M))
    for l in range(M):
        for m in range(M):
            B[l,m] = np.sum((x_fit/10)**(m + l))
            
    # curvature penalty: lambda times the integral over [-1, 1] of the products
    # of second derivatives of the basis functions
    C = np.zeros((M, M))
    for l in range(2, M):
        for m in range(2, M):
            C[l,m] = lambd[i]*l*(l - 1)*m*(m - 1)*(1 - (-1)**(m + l - 3))/(m + l - 3)
            
    coeff = np.linalg.inv(B + C)@A  # the regularizer adds to the normal equations
    y_out = np.zeros(x_cross.shape)
    for m in range(M):
        y_out += coeff[m]*(x_cross/10)**m
        
    error = np.std(y_out - y_cross)
    y_pred = np.zeros(x_test.shape)
    for m in range(M):
        y_pred += coeff[m]*(x_test/10)**m
        
    plt.plot(x_test, y_pred, label='lambda '+str(lambd[i])+' error %.2f'%error)

plt.xlim(-11, 11)
plt.ylim(-0.5, 1.5)
plt.title("Regularized Fit")
plt.legend()
plt.show()

(14.3)

Train a neural network on the output from an order 4 maximal LFSR (Problem 6.3) and learn to reproduce it. How do the results depend on the network depth and architecture? For extra credit, try learning a longer sequence (Table 6.1).
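With taps $[1, 4]$ the register obeys $x_t = x_{t-1} \oplus x_{t-4}$, so the training set below asks the network to predict a parity (XOR) of its inputs from all $2^4$ windows of the sequence. XOR is not linearly separable, which is why at least one hidden layer is needed; the depth and width are set by the `network` list.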

In [32]:
# From Neil's lfsrdnn.py

import numpy as np

M = 4                     # order of the LFSR
taps = [1, 4]             # feedback taps for a maximal order-4 LFSR
network = [4, 20, 20, 1]  # layer sizes: M inputs, two hidden layers, one output

alpha = 0.01              # gradient-descent learning rate
n_steps = 100             # number of training epochs

lfsr = np.ones(M + 1)

def lfsr_step():
    # shift the register and set the new bit to the mod-2 sum of the taps
    global lfsr
    lfsr[1:] = lfsr[0:-1]
    lfsr[0] = 0
    for j in taps:
        lfsr[0] += lfsr[j]
    lfsr[0] = lfsr[0]%2
    
# training set: each of the 2^M successive register states supplies the M
# previous bits as input and the new bit as the target output
n_pts = 2**M
inputs = np.zeros((n_pts, M))
outputs = np.zeros((n_pts, 1))
for i in range(2**M):
    lfsr_step()
    inputs[i][:] = lfsr[1:]
    outputs[i][0] = lfsr[0]
    
def g(x):
    return np.tanh(x)

def dg(x):
    # derivative of tanh; equivalently 1 - tanh(x)**2
    return 1/(np.cosh(x)**2)

def init():
    global x, y, network, weights
    
    x = []
    y = []
    for layer in range(len(network)):
        x.append(np.zeros(network[layer]))
        y.append(np.zeros(network[layer]))
        
    weights = [[]]
    for layer in range(1, len(network)):
        weights.append(2*np.random.rand(network[layer], network[layer-1]) - 1)
        
def forward(n):
    # propagate training sample n through the network
    global x, y, network, weights, inputs
    
    x[0][:] = inputs[n]
    for layer in range(1, len(network)):
        y[layer][:] = np.zeros(network[layer])
        for i in range(network[layer]):
            y[layer][i] = np.sum(weights[layer][i][:]*x[layer-1][:])
        x[layer][:] = g(y[layer][:])
        
def backward(n):
    # backpropagate the squared output error and update weights (per-sample SGD)
    global outputs
    
    layer = len(network) - 1
    delta = np.zeros(network[layer])
    for i in range(network[layer]):
        delta[i] = 2*(x[layer][i] - outputs[n][i])*dg(y[layer][i])
        weights[layer][i][:] -= alpha*delta[i]*x[layer-1][:]
        
    for layer in range(len(network) - 2, 0, -1):
        last_delta = np.copy(delta)
        delta = np.zeros(network[layer])
        for j in range(network[layer]):
            for i in range(network[layer+1]):
                delta[j] += weights[layer+1][i][j]*last_delta[i]
            delta[j] *= dg(y[layer][j])
            weights[layer][j][:] -= alpha*delta[j]*x[layer-1][:]
            
def update():
    for n in range(n_pts):
        forward(n)
        backward(n)
        
def error():
    global outputs
    
    err = 0
    for n in range(n_pts):
        forward(n)
        err += np.abs(np.sum(x[-1] - outputs[n]))
        print(f"{x[-1][0]:.2f}:{outputs[n][0]:.0f}")
    return err/n_pts

init()
print("output:")
print(f"average error: {error():.3f}")
for i in range(n_steps):
    update()
    print("output:")
    print(f"average error: {error():.3f}")
output:
-0.99:0
-0.54:1
-0.96:0
-0.97:1
-0.20:1
-1.00:0
0.80:0
-0.87:1
-1.00:0
0.28:0
0.66:0
-1.00:1
-0.96:1
-0.98:1
-0.78:1
-0.99:0
average error: 1.312
(per-sample outputs for epochs 1-99 elided; the average error after each epoch was:
1.147, 0.934, 0.802, 0.789, 0.746, 0.680, 0.616, 0.598, 0.576,
0.555, 0.532, 0.503, 0.471, 0.444, 0.421, 0.399, 0.379, 0.359, 0.341,
0.324, 0.309, 0.296, 0.284, 0.273, 0.262, 0.252, 0.242, 0.233, 0.224,
0.215, 0.207, 0.199, 0.192, 0.185, 0.179, 0.173, 0.168, 0.163, 0.158,
0.154, 0.149, 0.145, 0.141, 0.138, 0.134, 0.131, 0.128, 0.126, 0.123,
0.121, 0.118, 0.116, 0.114, 0.112, 0.110, 0.107, 0.105, 0.104, 0.102,
0.100, 0.098, 0.096, 0.095, 0.093, 0.091, 0.090, 0.088, 0.087, 0.086,
0.084, 0.083, 0.082, 0.080, 0.079, 0.078, 0.077, 0.076, 0.075, 0.074,
0.073, 0.072, 0.071, 0.070, 0.069, 0.068, 0.067, 0.066, 0.066, 0.065,
0.064, 0.063, 0.063, 0.062, 0.061, 0.061, 0.060, 0.059, 0.059, 0.058)

Final epoch:
output:
-0.00:0
0.86:1
0.06:0
0.92:1
0.89:1
0.04:0
-0.01:0
0.98:1
0.02:0
0.02:0
0.01:0
0.89:1
0.84:1
0.92:1
0.94:1
-0.00:0
average error: 0.057
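How the results depend on depth and architecture is controlled by the `network` list. A minimal sketch for exploring this, reusing `init`, `update`, and `error` from the cell above (note that `error()` also prints the per-sample outputs, and no random seed is set, so errors vary from run to run):

# hypothetical sweep over architectures, from a bare single unit to a
# deeper net; compare how far each drives down the average error
for network in ([4, 1], [4, 20, 1], [4, 20, 20, 1], [4, 40, 40, 40, 1]):
    init()
    for step in range(n_steps):
        update()
    print("network:", network, f"final average error: {error():.3f}")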