Math for Machine Learning
-
Principal Component Analysis (PCA): Useful when the data is high-dimensional but its information can be captured in far fewer dimensions, i.e. the data has low intrinsic dimensionality. Reducing the dimensionality makes algorithms more scalable and lifts the curse of dimensionality, and PCA is the standard tool for eliminating the redundant dimensions.
Intuition: Principal components are the directions in which the data has the highest variance. The direction of the first principal component is given by the eigenvector of the covariance matrix with the largest eigenvalue. PCA does not select some of the original features and discard the others; instead, it constructs new features (linear combinations of the originals) that summarize the data well. A minimal sketch is given below.
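A minimal sketch of PCA via the eigendecomposition of the covariance matrix. The array X and the target dimensionality k are assumptions for illustration, not anything defined in this repository.

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples, n_features) onto its top-k principal components."""
    # Center the data so the covariance matrix measures variance around the mean.
    X_centered = X - X.mean(axis=0)
    # Covariance matrix of the features (symmetric, so eigh is appropriate).
    cov = np.cov(X_centered, rowvar=False)
    # Eigen-decomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the k eigenvectors with the largest eigenvalues: the principal directions.
    order = np.argsort(eigvals)[::-1][:k]
    components = eigvecs[:, order]
    # New features = projections of the centered data onto the principal directions.
    return X_centered @ components

# Example: 200 random points in 5 dimensions reduced to 2.
X = np.random.randn(200, 5)
X_reduced = pca(X, k=2)
print(X_reduced.shape)  # (200, 2)
```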
-
Singular Value Decomposition (SVD)
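This section is only a heading in the source; as a hedged illustration, the sketch below decomposes an arbitrary example matrix with NumPy's np.linalg.svd and checks the reconstruction A = U diag(s) Vᵀ.

```python
import numpy as np

A = np.random.randn(4, 3)  # arbitrary example matrix, assumed for illustration
U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Reconstruct A from its singular value decomposition: A = U * diag(s) * Vt.
A_reconstructed = U @ np.diag(s) @ Vt
print(np.allclose(A, A_reconstructed))  # True
```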
-
Eigenvalues and Eigenvectors: Eigenvectors can be used as a basis for representing any vector in N-dimensional space. In a general setting, we have a system that takes an N-dimensional input x and applies a linear transformation A, and we want the output y = Ax. Matrix multiplication is computationally intensive, so how do we deal with it? Eigenvectors to the rescue. An eigenvector is a vector whose direction does not change when the linear transformation A is applied to it: A v = lambda v, where lambda is a constant (the eigenvalue), so the eigenvector v is only scaled. If the eigenvectors of A form a basis (collect them as the columns of a matrix V), any input x can be written as a linear combination of them, x = V b. Then y = Ax = A V b = V (Lambda b), where Lambda is the diagonal matrix of eigenvalues, so the transformation reduces to scaling each coefficient by its eigenvalue; a numerical check is shown below. Useful links for intuition: 1 2
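A small numerical check of the argument above, assuming a diagonalizable example matrix A chosen here for illustration: express x in the eigenvector basis, scale each coefficient by its eigenvalue, and compare against the direct product Ax.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # example symmetric (hence diagonalizable) matrix
x = np.array([3.0, -1.0])    # example input vector

# Eigen-decomposition: columns of V are eigenvectors, lam holds the eigenvalues.
lam, V = np.linalg.eig(A)

# Represent x in the eigenvector basis: x = V @ b  =>  b = V^{-1} x.
b = np.linalg.solve(V, x)

# y = A x = A (V b) = V (lam * b): just scale each coefficient by its eigenvalue.
y_via_eigen = V @ (lam * b)
y_direct = A @ x
print(np.allclose(y_via_eigen, y_direct))  # True
```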
-
Euclidean Distance
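This heading has no body in the source; as a brief illustration, the Euclidean distance between two vectors p and q is the square root of the sum of squared coordinate differences. The example vectors below are assumptions.

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])   # example points, chosen for illustration
q = np.array([4.0, 6.0, 3.0])
# Euclidean distance: square root of the sum of squared coordinate differences.
dist = np.sqrt(np.sum((p - q) ** 2))
print(dist)                      # 5.0
print(np.linalg.norm(p - q))     # same result via the built-in norm
```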
-
Covariance Matrix
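This heading is also left as a stub in the source; as a hedged illustration, the sample covariance matrix of data X with samples in rows is Xcᵀ Xc / (n - 1), where Xc is the column-centered data. The example data below is an assumption, checked against np.cov.

```python
import numpy as np

X = np.random.randn(100, 3)                  # example data: 100 samples, 3 features
Xc = X - X.mean(axis=0)                      # center each feature (column)
cov_manual = Xc.T @ Xc / (X.shape[0] - 1)    # unbiased sample covariance
print(np.allclose(cov_manual, np.cov(X, rowvar=False)))  # True
```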