Commit
Rohin arora authored and Rohin arora committed Sep 26, 2019
1 parent 28e2005 commit a7dcd74
Showing 24 changed files with 195 additions and 3 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
meta.md
.DS_Store
Binary file added Handout.pdf
Binary file not shown.
6 changes: 6 additions & 0 deletions Hw1/.ipynb_checkpoints/HW1-checkpoint.ipynb
@@ -0,0 +1,6 @@
{
"cells": [],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 2
}
Binary file added Hw1/EECE5644_2019Fall_Homework1Questions.pdf
Binary file not shown.
107 changes: 107 additions & 0 deletions Hw1/HW1.ipynb

Large diffs are not rendered by default.

48 changes: 48 additions & 0 deletions Hw1/HW1.md
@@ -0,0 +1,48 @@
#### Answer 1.

1. Variance of $x$, denoted $Var(x)$

$Var(x)$
$= E[(x-\mu)^2]$
$= E[x^2 + \mu^2 - 2\mu x]$
$= E[x^2] + E[\mu^2] - 2E[\mu x]$ (by linearity of expectation)
$= E[x^2] + \mu^2 - 2\mu E[x]$ ($E[\mu^2] = \mu^2$ and $E[\mu x] = \mu E[x]$, since $\mu$ is a constant)
$= E[x^2] + \mu^2 - 2\mu^2$ ($E[x] = \mu$ by definition of the mean)
$= E[x^2] - \mu^2$


<div align="right">
<b>
QED
</b>
</div>
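
A quick numeric sanity check of this identity (a minimal sketch in NumPy; the normal distribution and sample size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)  # arbitrary example distribution

mu = x.mean()
lhs = ((x - mu) ** 2).mean()     # E[(x - mu)^2], i.e. the variance
rhs = (x ** 2).mean() - mu ** 2  # E[x^2] - mu^2

assert np.isclose(lhs, rhs)
print(lhs, rhs)  # the two estimates agree up to floating-point error
```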


2. Variance of $\vec x$, denoted $Var(\vec x)$

$Var(\vec x)$
$= E[(\vec x-\mu)(\vec x-\mu)^T]$
$= E[\vec x\vec x^T - \vec x\mu^T - \mu\vec x^T + \mu\mu^T]$
$= E[\vec x\vec x^T] - E[\vec x\mu^T] - E[\mu\vec x^T] + E[\mu\mu^T]$ (by linearity of expectation)
$= E[\vec x\vec x^T] - E[\vec x]\mu^T - \mu E[\vec x^T] + \mu\mu^T$ ($\mu$ is a constant vector)
$= E[\vec x\vec x^T] - \mu\mu^T - \mu\mu^T + \mu\mu^T$ ($E[\vec x] = \mu$)
$= E[\vec x\vec x^T] - \mu\mu^T$





<div align="right">
<b>
QED
</b>
</div>
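
The same kind of check for the vector case (again a minimal sketch; the 3-D mean and covariance below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Arbitrary 3-dimensional example: a mean vector and a valid covariance matrix
mean = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 0.5]])
X = rng.multivariate_normal(mean, cov, size=200_000)  # each row is one sample of x

mu = X.mean(axis=0)
n = len(X)
lhs = (X - mu).T @ (X - mu) / n       # E[(x - mu)(x - mu)^T]
rhs = X.T @ X / n - np.outer(mu, mu)  # E[x x^T] - mu mu^T

assert np.allclose(lhs, rhs)
```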

##### References

1. https://en.wikipedia.org/wiki/Variance
2. https://www.wolframalpha.com/
3. https://atom.io
4. https://www.python.org/
5. https://www.scipy.org/
6. https://jupyter.org/
Binary file not shown.
Binary file added L1/L01_Sup_TheMatrixCookbook_v20121115.pdf
Binary file not shown.
7 changes: 4 additions & 3 deletions L1.md → L1/L1.md
@@ -8,6 +8,7 @@
* Consider a square matrix A of size m×m. It has m eigenvalues.
* Sum of eigenvalues = trace(A)= sum of diagonal elements
* Product of eigenvalues = det(A)
* All vectors are assumed to be column vectors by default

![](pics/one.png)
* If A is square and $A = A^T$ (its transpose), then it is called a symmetric matrix
@@ -33,9 +34,9 @@
* Properties involving multiple matrices
* Trace of sum of matrices is sum of traces
* Det of (product of matrices)= Product of determinants
* $(A*B)^{-1}=B^{-1} * A^{-1}$
* $(A*B)^{T}=B^{T} * A^{T}$
* $(A^{-1})^{T}=(A^{T})^{-1}$
* (A*B)^{-1} = B^{-1} * A^{-1}
* (A*B)^{T} = B^{T} * A^{T}
* Transpose of A inverse = inverse of A transpose
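
A quick NumPy check of the eigenvalue and inverse/transpose facts in these notes (a minimal sketch; `A` and `B` are random example matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
A = rng.normal(size=(m, m))  # random example matrices (invertible with probability 1)
B = rng.normal(size=(m, m))

# Sum of eigenvalues = trace, product of eigenvalues = determinant
eig = np.linalg.eigvals(A)
assert np.isclose(eig.sum(), np.trace(A))
assert np.isclose(eig.prod(), np.linalg.det(A))

# (AB)^{-1} = B^{-1} A^{-1},  (AB)^T = B^T A^T,  (A^{-1})^T = (A^T)^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
assert np.allclose((A @ B).T, B.T @ A.T)
assert np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))
```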


![](pics/yourscanfromsnelllibrary1/image0000.jpg)
Binary file added L2/L01_ProbabilityAndLinearAlgebraReview.pdf
Binary file not shown.
Binary file added L2/L02_ProbabilityReview.pdf
Binary file not shown.
3 changes: 3 additions & 0 deletions L2/L2.md
@@ -0,0 +1,3 @@
https://stats.stackexchange.com/questions/86487/what-is-the-meaning-of-the-density-of-a-distribution-at-a-point

https://math.stackexchange.com/questions/1412015/intuitive-meaning-of-the-probability-density-function-at-a-point
Binary file not shown.
15 changes: 15 additions & 0 deletions L3/L3.md
@@ -0,0 +1,15 @@
* Think of the gradient as the first derivative of a simple function like a parabola, and the Hessian as its second derivative. The intuition transfers directly when $x$ is a vector and the function is more complex: the first derivative becomes the gradient (a column vector), and the second derivative becomes the Hessian. The roots of the gradient (points where it vanishes) are the candidate optima.
* Global and local minima
* Equality constraints are almost always active. They are inactive only in the rare case when the equality-constraint surface passes through the unconstrained optimum.
* Inequality constraints may or may not be active. In general, a constraint is active if it plays an "active" role in preventing the solution from reaching the unconstrained minimum
* All constraints become inactive in the EM algorithm (when fitting a GMM)
* When training an SVM, some constraints are active; most are inactive
* For active constraints, the Lagrange multiplier is positive; otherwise it is 0 (see the worked example after this list)
* One can look at the values of the Lagrange multipliers and decide whether constraints could be relaxed
* Second-order conditions for constrained optimization were skipped
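
A tiny worked example of the active/inactive distinction (a standard KKT illustration): minimize $f(x)=x^2$ subject to $x \ge 1$, i.e. $g(x)=1-x\le 0$. The Lagrangian is $L(x,\lambda)=x^2+\lambda(1-x)$, and stationarity gives $2x-\lambda=0$. The unconstrained minimum $x=0$ violates the constraint, so the constraint is active: $x^*=1$ and $\lambda=2>0$. If the constraint were $x\ge -1$ instead, $x=0$ would already satisfy it, the constraint would be inactive, and $\lambda=0$.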
##### PCA as an optimization problem with equality constraints

* "eigenfaces"
* The $n$ principal components form a new basis for the original vector $\vec{x}$ (see the PCA sketch below)

* LDA (next lecture): an optimization problem without constraints
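
A minimal sketch of the PCA idea mentioned above (principal components as eigenvectors of the sample covariance; the data below is randomly generated just for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))           # example data: 500 samples of a 5-dimensional x
Xc = X - X.mean(axis=0)                 # center the data

cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric matrix -> real eigendecomposition

# Keep the n eigenvectors with the largest eigenvalues: the principal components
n = 2
order = np.argsort(eigvals)[::-1][:n]
components = eigvecs[:, order]          # columns form (part of) the new basis for x

Z = Xc @ components                     # coordinates of each sample in that basis
```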
Binary file added L3/Numerical_Optimization.pdf
Binary file not shown.
Binary file added L3/boosting.pdf
Binary file not shown.
Binary file added L3/cs229-cvxopt.pdf
Binary file not shown.
Binary file added L3/cs229-cvxopt2.pdf
Binary file not shown.
Binary file added L3/cvx4ml.pdf
Binary file not shown.
Binary file added L3/section_convex_optimization2.ps
Binary file not shown.
9 changes: 9 additions & 0 deletions L4.md
@@ -0,0 +1,9 @@
* decision boundary given by SVM, Fisher LDA, logistic regression.
* generalized eigenvalue/eigenvector
* 2nd derivative to find maxima
* how to choose $\gamma$.
* do we not need any distance metric in the error function?
* want:
* min $\sigma_1^{2}$ and $\sigma_2^{2}$
* same as min $\sigma_1^{2}+\sigma_2^{2}$
* same as max $1/(\sigma_1^{2}+\sigma_2^{2})$ (see the sketch below)
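
A minimal sketch of the Fisher LDA direction this criterion leads to (two made-up Gaussian classes for illustration; $w \propto S_W^{-1}(\mu_1 - \mu_2)$ is the standard closed-form solution, stated here rather than derived):

```python
import numpy as np

rng = np.random.default_rng(4)
# Two made-up 2-D classes
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
X2 = rng.normal(loc=[3.0, 1.0], scale=1.0, size=(200, 2))

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter S_W (this drives sigma_1^2 + sigma_2^2 after projection)
S_W = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)

w = np.linalg.solve(S_W, mu1 - mu2)  # Fisher direction, up to scale
w /= np.linalg.norm(w)

p1, p2 = X1 @ w, X2 @ w              # 1-D projections
print(p1.mean() - p2.mean(), p1.var(), p2.var())  # well-separated means, small within-class spread
```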
1 change: 1 addition & 0 deletions L6.md
@@ -0,0 +1 @@
* Put the assignment code on GitHub. Write the assignment up on Overleaf
Binary file added ToDo/cs229-notes1.pdf
Binary file not shown.
Binary file removed pics/.DS_Store
Binary file not shown.
