Week 9 Transforms

Yada Pruksachatkun

Notes

For a complex matrix the adjoint is the complex conjugate of the transpose if the adjoint is the inverse then the matrix M is unitary.

Rows and columns are orthonormal. If M is a unitary matrix, if you transform a vecot rx with M, the norm of the vector is rotected. . It rotates a data point to a new location, but doesn’t change its distance from the origin. This means that it can rearrange the points but not do something as nasty as make some of them disappear.

Fourier transforms are dissecting sine waves into a constant and the sum of a series, that consist of a cosine and sin with various frequencies.

Wavelets are great for when you want to find the temporal and sptial ordering. This certainly need not be true, and doesn’t even apply to a set of measurements that have no particular temporal or spatial ordering

With wavelet, you get tree of coefficient, starting with good time and bad frequency and ending with vice ersa. It will give you what you get for the space-time transform. What are the wavelet visualization?

Prove that the DFT is unitary.

In [54]:
from IPython.display import Image
from IPython.display import FileLink, FileLinks
Image(filename='files/wk91a.jpg')
Out[54]:
In [55]:
Image(filename='files/wk91b.jpg')
Out[55]:

Consider a measurement of a three-component vector ⃗x, with x1 and x2 being drawn independently from a Gaussian distribution with zero mean and unit vari- ance, and x3 = x1 + x2.

In [ ]:
import numpy as np
In [25]:
def gen_vector():
    mu, sigma = 0, 0.1 # mean and standard deviation
    x1 = np.random.normal(mu, sigma, 1000)
    x2 = np.random.normal(mu, sigma, 1000)
    x3 = x1 + x2
    return np.array([x1, x2, x3])
    
#!/usr/bin/env python
from __future__ import division
from numpy import *
from numpy.random import randn
import numpy.linalg as la


N = 1000000
x = randn(2,N)
x = vstack((x,x[0:1]+x[1:]))
print("now")
print(x)
Cx = np.cov(x)
w,v = la.eig(Cx)
print('Cx\n',Cx)
print('Cx evals\n',w)
print('Cx evecs\n',v)
now
[[ 1.76061252  0.49142166  1.16595048 ..., -0.69747121  1.37271216
   0.24718334]
 [ 0.66191783  0.95323639 -1.13456773 ..., -0.03420168 -0.32579999
  -0.29999921]
 [ 2.42253035  1.44465805  0.03138275 ..., -0.73167289  1.04691217
  -0.05281588]]
Cx
 [[  9.99479536e-01   6.59954445e-04   1.00013949e+00]
 [  6.59954445e-04   9.99858371e-01   1.00051833e+00]
 [  1.00013949e+00   1.00051833e+00   2.00065782e+00]]
Cx evals
 [  3.00098678e+00   9.99008945e-01  -9.23789361e-16]
Cx evecs
 [[ -4.08132405e-01  -7.07173675e-01  -5.77350269e-01]
 [ -4.08364165e-01   7.07039869e-01  -5.77350269e-01]
 [ -8.16496570e-01  -1.33806216e-04   5.77350269e-01]]

2) Consider a measurement of a three-component vector ⃗x, with x1 and x2 being

drawn independently from a Gaussian distribution with zero mean and unit vari- ance, and x3 = x1 + x2. (a) Analytically calculate the covariance matrix of ⃗x. (b) What are the eigenvalues? (c) Numerically verify these results by drawing a data set from the distribution and computing the covariance matrix and eigenvalues.

In [84]:
import numpy as np

def gen_C(x):
    d = (x*x).reshape(-1,1,N)
    res = np.array([ [sum(d[0])/N,0, 0],[0, sum(d[0])/N,0], [0,0, sum(d[0])/N]])
    # kfigure uot cov(x,y) andother than the diagonal. 
    return res

N = 1000000
x = randn(2,N)
x = vstack((x,x[0:1]+x[1:]))
print(x)
print(x.shape)
Cx = np.cov(x) # covariance 
print(Cx.shape)
print(Cx)
w,v = np.linalg.eig(Cx)
print("c") 
print(Cx) # eigenvalues
[[-0.03583061  0.37604084 -0.14603525 ..., -0.98719307  0.15343735
  -0.61251388]
 [ 0.95923716  1.23331501 -0.74079371 ...,  0.66995878  0.33621834
   0.43890469]
 [ 0.92340655  1.60935585 -0.88682896 ..., -0.31723428  0.48965569
  -0.17360919]]
(3, 1000000)
(3, 3)
[[  9.98266619e-01  -1.57321403e-03   9.96693405e-01]
 [ -1.57321403e-03   1.00038935e+00   9.98816136e-01]
 [  9.96693405e-01   9.98816136e-01   1.99550954e+00]]
c
[[  9.98266619e-01  -1.57321403e-03   9.96693405e-01]
 [ -1.57321403e-03   1.00038935e+00   9.98816136e-01]
 [  9.96693405e-01   9.98816136e-01   1.99550954e+00]]
In [59]:
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
In [61]:
N = 1000
dplot = 5
s1 = np.random.rand(2, N)
plt.figure()
plt.title("Initial Uniform Random variables")
plt.plot(s1[0,::dplot],s1[1,::dplot],linestyle='',marker='.')
plt.show()
In [62]:
A = np.array([[1,2],[3,1]])
x = np.dot(A,s1)
plt.figure()
plt.title("After mixing with A")
plt.plot(x[0,::dplot],x[1,::dplot],linestyle='',marker='.')
plt.show()
In [82]:
def make_mean_zero(x):
    mean = sum(x[0,])/N
    x[0, ] = x[0, ] - mean
    mean = sum(x[1,])/N
    x[1, ] = x[1, ] - mean
    return x


x = make_mean_zero(x) #make zero mean
w,v = np.linalg.eig(np.cov(x))
M = (v/sqrt(w)).T
unit_x = dot(M,x) #diagonalize with unit variance
print(y)
plt.figure()
plt.title("After mixing with A")
plt.plot(unit_x[0,::dplot],unit_x[1,::dplot],linestyle='',marker='.')
plt.gca().set_aspect(1.)


#ICA
[[ 3.59241289  0.61697889  1.30595983 ..., -3.72081094  1.94049573
   1.16015576]
 [ 0.38286615  1.00591137  0.23708407 ...,  0.07924328 -0.40933364
   1.98436068]]
In [83]:
#ICA
def gen_expect(x):
    mean1 = sum(x[0,])/N
    mean2 = sum(x[1,])/N
    return np.array([[mean1],[mean2]])

df = lambda x: tanh(x)
ddf = lambda x: 1/cosh(x)**2
def ica(w):
    i=0
    while True:
        i+=1  
        g1 = gen_expect(df(dot(w,unit_x))*unit_x)
        g2 = gen_expect(ddf(dot(w,unit_x)))
        g1 = np.array([g1[0, 0], g1[1,0]])
        g2 = np.array([g2[0, 0], g2[1,0]])
        w_new = g1 - g2
        w_new /= np.linalg.norm(w_new)
        if abs(np.linalg.norm(w_new-w))<0.000001: 
            break
        w = w_new
    return w

w1 = ica(np.random.rand(2))
w2 = array([-w1[1],w1[0]]) # orthog

print("Least-Gaussian mixture ")
print(w1, w2)

plt.title("After decomposing with ICA")
plt.plot([0,w1[0]],[0,w1[1]],marker='.',lw=4,c='r')
plt.plot([0,w2[0]],[0,w2[1]],marker='.',lw=4,c='r')
plt.gca().set_aspect(1.)

plt.show()
Least-Gaussian mixture 
[ 0.83603278  0.5486795 ] [-0.5486795   0.83603278]

Important intuition

ICA - creating the axis in which it is indpeendnet. PCA says "is it uncorrelated?" but iCA says - this is how it's uncorrelated. There, using central limit theorem to se nongaussianity to find indpenendet components. the search to find the trnasform that maximizes the data look interesting. least random. Look into ICAs in EEGs. Give me a sepearable repesentation. This is linear ICA, but nonlinear ICA is deep learning. PCA is used for dimensionality reduciton - http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/

In [ ]: