Yada Pruksachatkun
For a complex matrix, the adjoint is the complex conjugate of the transpose; if the adjoint is also the inverse, then the matrix M is unitary.
Its rows and columns are orthonormal. If M is a unitary matrix and you transform a vector x with M, the norm of the vector is preserved. It rotates a data point to a new location, but doesn't change its distance from the origin. This means it can rearrange the points but not do something as nasty as make some of them disappear.
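A quick numerical sketch of this (using a real rotation matrix, which is the real-valued special case of a unitary matrix; the angle and vector are made up for illustration):
import numpy as np

# A 2D rotation matrix is orthogonal/unitary: M^T M = I, so ||M x|| = ||x||.
theta = 0.7
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.random.randn(2)
print(np.allclose(np.dot(M.T, M), np.eye(2)))           # True: the adjoint is the inverse
print(np.linalg.norm(x), np.linalg.norm(np.dot(M, x)))  # same norm before and after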
Fourier transforms dissect a signal into a constant term plus a sum of cosines and sines at various frequencies.
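A minimal numpy sketch of that decomposition (the specific test signal here is made up for illustration): the FFT recovers the constant term and the cosine/sine amplitudes.
import numpy as np

# Signal = constant + one cosine + one sine; the FFT picks out the DC term
# and the two frequencies exactly because they fit an integer number of periods.
n = 256
t = np.arange(n)
signal = 1.0 + 2.0 * np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
F = np.fft.rfft(signal)
print(F[0].real / n)         # ~1.0, the constant term
print(2 * F[5].real / n)     # ~2.0, cosine amplitude at frequency 5
print(-2 * F[12].imag / n)   # ~0.5, sine amplitude at frequency 12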
Wavelets are great when you want to exploit temporal or spatial ordering. That ordering certainly need not exist, and wavelets don't apply to a set of measurements that have no particular temporal or spatial ordering.
With wavelets you get a tree of coefficients, starting with good time resolution and poor frequency resolution and ending with the reverse, which gives you a time-frequency (space-scale) transform. What do the wavelet visualizations look like?
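A minimal sketch of that coefficient tree, assuming the PyWavelets package (pywt) is available; the signal is made up for illustration. The first entry is the coarse approximation (good frequency, poor time localization), later entries are detail coefficients at progressively finer time scales.
import numpy as np
import pywt  # assumes PyWavelets is installed

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 8 * t) + (t > 0.5) * np.sin(2 * np.pi * 64 * t)
coeffs = pywt.wavedec(signal, 'db4', level=4)  # multi-level discrete wavelet transform
for i, c in enumerate(coeffs):
    print(i, c.shape)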
from IPython.display import Image
from IPython.display import FileLink, FileLinks
Image(filename='files/wk91a.jpg')
Image(filename='files/wk91b.jpg')
import numpy as np

def gen_vector():
    # x1, x2 drawn independently from a Gaussian; x3 is their sum
    mu, sigma = 0, 0.1  # mean and standard deviation
    x1 = np.random.normal(mu, sigma, 1000)
    x2 = np.random.normal(mu, sigma, 1000)
    x3 = x1 + x2
    return np.array([x1, x2, x3])
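A quick usage sketch (not in the original notes): draw one data set and look at its sample covariance; the x3 row should be correlated with both x1 and x2 since x3 = x1 + x2.
x = gen_vector()
print(x.shape)    # (3, 1000)
print(np.cov(x))  # nonzero off-diagonal terms in the x3 row/column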
#!/usr/bin/env python
from __future__ import division
import numpy as np
from numpy.random import randn
import numpy.linalg as la

N = 1000000
x = randn(2, N)                     # x1, x2 drawn from N(0, 1), independent
x = np.vstack((x, x[0:1] + x[1:]))  # append x3 = x1 + x2 as a third row
print("now")
print(x)
Cx = np.cov(x)     # sample covariance matrix of the rows of x
w, v = la.eig(Cx)  # its eigenvalues and eigenvectors
print('Cx\n', Cx)
print('Cx evals\n', w)
print('Cx evecs\n', v)
Problem: generate vectors x = (x1, x2, x3), where x1 and x2 are drawn independently from a Gaussian distribution with zero mean and unit variance, and x3 = x1 + x2. (a) Analytically calculate the covariance matrix of x. (b) What are the eigenvalues? (c) Numerically verify these results by drawing a data set from the distribution and computing the covariance matrix and eigenvalues.
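For part (a): since x1 and x2 are independent with unit variance and x3 = x1 + x2, Var(x3) = 2 and Cov(x1, x3) = Cov(x2, x3) = 1, while Cov(x1, x2) = 0. The eigenvalues of that matrix are 0, 1, and 3; the zero eigenvalue reflects that x3 is a deterministic linear combination of the other two. A small check:
import numpy as np

# Analytic covariance of (x1, x2, x1 + x2) with x1, x2 ~ N(0, 1) independent
C = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])
print(np.linalg.eigvalsh(C))  # approximately [0, 1, 3]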
import numpy as np

def gen_C(x):
    # Covariance of zero-mean rows, including the off-diagonal cov(x_i, x_j) terms
    N = x.shape[1]
    return np.dot(x, x.T) / N
N = 1000000
x = randn(2, N)                     # x1, x2 ~ N(0, 1)
x = np.vstack((x, x[0:1] + x[1:]))  # x3 = x1 + x2
print(x)
print(x.shape)
Cx = np.cov(x)  # sample covariance matrix
print(Cx.shape)
print(Cx)
w, v = np.linalg.eig(Cx)
print("(c) eigenvalues")
print(w)
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
N = 1000
dplot = 5
s1 = np.random.rand(2, N)
plt.figure()
plt.title("Initial Uniform Random variables")
plt.plot(s1[0,::dplot],s1[1,::dplot],linestyle='',marker='.')
plt.show()
A = np.array([[1,2],[3,1]])
x = np.dot(A,s1)
plt.figure()
plt.title("After mixing with A")
plt.plot(x[0,::dplot],x[1,::dplot],linestyle='',marker='.')
plt.show()
def make_mean_zero(x):
    # subtract the mean of each row so the mixed data is zero-mean
    mean = np.sum(x[0, ]) / N
    x[0, ] = x[0, ] - mean
    mean = np.sum(x[1, ]) / N
    x[1, ] = x[1, ] - mean
    return x
x = make_mean_zero(x)            # make zero mean
w, v = np.linalg.eig(np.cov(x))  # eigendecomposition of the covariance
M = (v / np.sqrt(w)).T           # whitening transform: rotate and rescale
unit_x = np.dot(M, x)            # diagonalize with unit variance
print(unit_x)
plt.figure()
plt.title("After mixing with A")
plt.plot(unit_x[0,::dplot],unit_x[1,::dplot],linestyle='',marker='.')
plt.gca().set_aspect(1.)
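A quick check that the whitening step above did what it should (just verifying the transform, using the unit_x defined above): the covariance of the whitened data should be close to the identity.
print(np.cov(unit_x))  # approximately the 2x2 identity matrix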
#ICA
def gen_expect(x):
    # expectation (sample mean) of each row, returned as a column vector
    mean1 = np.sum(x[0, ]) / N
    mean2 = np.sum(x[1, ]) / N
    return np.array([[mean1], [mean2]])

df = lambda x: np.tanh(x)           # contrast function g(u) = tanh(u)
ddf = lambda x: 1 / np.cosh(x)**2   # its derivative g'(u)
def ica(w):
    # FastICA-style fixed-point iteration with nonlinearity g(u) = tanh(u):
    #   w_new = E[x g(w.x)] - E[g'(w.x)] w, then renormalize to unit length.
    while True:
        u = np.dot(w, unit_x)                    # projections of the whitened data onto w
        g1 = gen_expect(df(u) * unit_x).ravel()  # E[x g(w.x)], shape (2,)
        g2 = np.mean(ddf(u))                     # E[g'(w.x)], a scalar
        w_new = g1 - g2 * w
        w_new /= np.linalg.norm(w_new)
        # converged when w stops moving (allowing for a sign flip of the unit vector)
        if min(np.linalg.norm(w_new - w), np.linalg.norm(w_new + w)) < 1e-6:
            break
        w = w_new
    return w_new
w1 = ica(np.random.rand(2))     # first independent direction
w2 = np.array([-w1[1], w1[0]])  # second direction, orthogonal to w1
print("Least-Gaussian directions")
print(w1, w2)
plt.title("After decomposing with ICA")
plt.plot([0,w1[0]],[0,w1[1]],marker='.',lw=4,c='r')
plt.plot([0,w2[0]],[0,w2[1]],marker='.',lw=4,c='r')
plt.gca().set_aspect(1.)
plt.show()
ICA finds the axes along which the data are independent. PCA asks "is it uncorrelated?" but ICA says how to make it independent. It uses the central limit theorem in reverse: mixtures of independent sources look more Gaussian, so the search is for the transform that makes the data look least Gaussian, i.e. most interesting and least random, and those directions are the independent components. Look into ICA applied to EEGs; it gives a separable representation of the signal sources. This is linear ICA; nonlinear ICA shades into deep learning. PCA is used for dimensionality reduction - http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/
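For comparison with the hand-rolled fixed-point iteration above, a hedged sketch using scikit-learn's FastICA (assuming scikit-learn is installed); applied to the zero-mean mixed data x from the demo, it should recover components equivalent to the uniform sources up to scale, permutation, and sign.
import numpy as np
from sklearn.decomposition import FastICA

# FastICA expects samples in rows, so transpose the (2, N) mixed data.
ica_model = FastICA(n_components=2, random_state=0)
sources = ica_model.fit_transform(x.T).T
print(ica_model.mixing_)  # estimated mixing matrix, comparable to A up to scale/permutation
print(np.cov(sources))    # recovered components should be roughly uncorrelated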