Principal Component Analysis - A Geometric Approach
Overview
Principal Component Analysis is a neat statistical trick, with a simple bit of linear algebra backing it up. Imagine you've got a dependent, or response, variable $Y$ and a large number of independent, a.k.a. explanatory, variables $X_1, ... X_p$. You also have $n$ measurements of each, giving you a matrix
$$
\begin{align}
X = &
\begin{pmatrix}
x_{11} & x_{12} & ... & x_{1n}\\
\vdots & & & \\
x_{p1} & x_{p2} & ... & x_{pn}\\
\end{pmatrix} \\
= &
\begin{pmatrix}
-\mathbb{x_1}- \\
\vdots \\
-\mathbb{x_p}- \\
\end{pmatrix} \\
\end{align}
$$
You want to build a model explaining how $Y$ depends on the $X_i$, perhaps a linear model like
$$
Y = \beta_0 + \sum_{i=1}^p \beta_i X_i
$$
but first you need to check that the $X_i$ are more or less independent of each other, otherwise there is no way of uniquely determining the $\beta_i$ values and the stats package is likely to produce unreliable output. In general the $X_i$ are not independent, which presents a problem. This is the problem that PCA seeks to solve, and it does so by rotating the $X_i$ variables in $p$-space to produce new variables $X_i'$ which are completely uncorrelated with each other. In the process it provides a mechanism for reducing the number of explanatory variables, enabling a more parsimonious model.
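As a quick illustration of why this matters (a minimal sketch with made-up data, not part of the original example), near-collinear predictors leave the individual coefficients poorly determined:

# Minimal sketch: two nearly collinear predictors.
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)          # x2 is almost a copy of x1
y  <- 3 * x1 + rnorm(n)
# The fit cannot tell x1 and x2 apart: note the huge standard errors.
summary(lm(y ~ x1 + x2))$coefficients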
Variance
Assume each of the $X_i$ has been centred and scaled so that it has mean 0 and variance 1. The total variance of the data is then the mean squared length of the observation vectors:
$$
\sigma^2 = \frac{1}{n}\sum_{j=1}^n{\left|
\begin{pmatrix}
x_{1j} \\
\vdots \\
x_{pj}\\
\end{pmatrix}
\right|^2
}
$$
This can be rewritten in terms of the per-variable variances (each of which is 1, thanks to the scaling) to get
$$
\begin{align}
\sigma^2 &= \frac{1}{n}\sum_{i,j}{x_{ij}^2} \\
&= \sigma_1^2 + ... + \sigma_p^2 \\
&= p
\end{align}
$$
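A quick numerical check of this in R (a sketch with made-up data; note that R's var() uses an $n-1$ divisor, but the conclusion is the same):

# After centring and scaling, every column has variance 1,
# so the total variance is exactly p.
set.seed(1)
X  <- matrix(rnorm(200 * 5), ncol = 5)   # n = 200 observations of p = 5 variables
Xs <- scale(X)                           # centre and scale each column
sum(apply(Xs, 2, var))                   # returns 5, i.e. p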
Covariance
The variances and covariances of the (centred) variables can be collected into the $p \times p$ covariance matrix
$$
C = \frac{1}{n}XX^T
$$
whose $(i,j)$ entry is $\frac{1}{n}\mathbb{x_i}\mathbb{x_j}^T$, the covariance between $X_i$ and $X_j$. Built this way, $C$ is real and symmetric.
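In R this matrix is just what cov() returns for the centred, scaled data (a sketch with made-up data; observations are stored as rows, and R divides by $n-1$ rather than $n$):

# For centred, scaled columns, the covariance matrix is the
# correlation matrix of the raw data.
set.seed(1)
X  <- matrix(rnorm(200 * 5), ncol = 5)
Xs <- scale(X)
C  <- crossprod(Xs) / (nrow(Xs) - 1)             # t(Xs) %*% Xs, divided by n - 1
all.equal(C, cov(Xs))                            # TRUE
all.equal(C, cor(X), check.attributes = FALSE)   # TRUE: it is the correlation matrix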
Rotation
Since $C$ is a real symmetric matrix it has a full set of orthonormal eigenvectors $\mathbb{e_1}, ..., \mathbb{e_p}$ with real eigenvalues $\lambda_1, ..., \lambda_p$, i.e.
$$
C \mathbb{e_i} = \lambda_i \mathbb{e_i}
$$
from which it follows (since $XX^T = nC$ and the $\mathbb{e_i}$ are orthogonal) that whenever $i \ne j$
$$
\mathbb{e_j}^TXX^T \mathbb{e_i} = 0
$$
But the LHS above is just the dot product of $\mathbb{x_j}'$ and $\mathbb{x_i}'$ defined by the rule
$$
\mathbb{x_i}' = \mathbb{e_i}^T X
$$
Hey presto! We've defined a rotation of the $X_i$ which results in what we wanted: uncorrelated explanatory variables. (I'm calling it a rotation because, just as with rotations in 3D space, the matrix whose rows are the $\mathbb{e_i}$ is the inverse of its own transpose.) These transformed variables are referred to as Principal Components.
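Here is a small sketch in R (made-up data, names of my own choosing) verifying both claims: the rotation matrix returned by prcomp() is orthogonal, and the resulting principal component scores are uncorrelated.

# Sketch data with one column depending on the others.
set.seed(1)
X <- matrix(rnorm(200 * 4), ncol = 4)
X[, 4] <- X[, 1] + X[, 2] + rnorm(200, sd = 0.1)
pca <- prcomp(X, center = TRUE, scale. = TRUE)
R <- pca$rotation                        # columns are the eigenvectors e_i
all.equal(t(R) %*% R, diag(4), check.attributes = FALSE)   # TRUE: R's transpose is its inverse
round(cor(pca$x), 6)                     # off-diagonal correlations are all 0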
Parsimony
A rotation preserves lengths, so the total variance of the rotated data is unchanged:
$$
\sigma'^2 = \sigma^2 = p
$$
It turns out that the $\lambda_i$ are exactly the variances of the individual principal components, since
$$
\begin{align}
\sigma_i'^2 &= \frac{1}{n}\mathbb{x_i}'\mathbb{x_i}'^T \\
&= \frac{1}{n} \mathbb{e_i}^T X X^T\mathbb{e_i} \\
&=\mathbb{e_i}^T C\mathbb{e_i} \\
&=\lambda_i \\
\end{align}
$$
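Again this is easy to check numerically (a sketch; prcomp() reports standard deviations, so we square them to get the variances):

# The per-PC variances are the eigenvalues of the correlation matrix
# of the data, and they sum to p.
set.seed(1)
X <- matrix(rnorm(200 * 4), ncol = 4)
pca <- prcomp(X, center = TRUE, scale. = TRUE)
round(pca$sdev^2, 6)                     # variances of PC1..PC4
round(eigen(cor(X))$values, 6)           # the same numbers
sum(pca$sdev^2)                          # 4, i.e. p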
This tells us that the eigenvalues (variances) must all be non-negative and sum to $p$. Remember, we're looking at exactly the same data, just rotated in $p$-space. Thus, if any eigenvalues happen to be near zero, we can ditch the corresponding principal components. In general we arrange the eigenvalues in decreasing order, name the corresponding principal components PC1, PC2, and so on, and then ditch the ones at the end that contribute, say, less than 5% of the variance between them. This is equivalent to projecting the observations onto a lower-dimensional subspace in such a way as to lose the least amount of information. Very often just two principal components suffice to cover most of the variance, enabling a simple scatterplot which can help identify modal groups.
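As a sketch of how this is typically done in R (the 95% threshold here is an arbitrary choice of mine, not from the post):

# Keep the leading PCs that together cover 95% of the variance,
# then plot the first two to look for groupings.
set.seed(1)
X <- matrix(rnorm(200 * 4), ncol = 4)
X[, 4] <- X[, 1] + X[, 2] + rnorm(200, sd = 0.1)
pca <- prcomp(X, center = TRUE, scale. = TRUE)
cum <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
keep <- which(cum >= 0.95)[1]            # number of PCs needed for 95% of the variance
keep
plot(pca$x[, 1], pca$x[, 2], xlab = "PC1", ylab = "PC2")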
Example
# Print real numbers, not scientific notation.
options(scipen=999)

# Produce 1000 x,y,z,t displacement observations for photons emitted by a torch
n <- 1000
x <- rnorm(n, mean=10, sd=1)
y <- rnorm(n, mean=8, sd=0.5)
z <- rnorm(n, mean=12, sd=1.5)
t <- (x^2 + y^2 + z^2)^(1/2)

# Combine into dataframe
df <- data.frame(x=x, y=y, z=z, t=t)

# Show how t is not independent of x,y,z
library(car)
scatterplotMatrix(df)

# Do a PCA
pca <- prcomp(df, scale=T, center=T)
summary(pca)
print(pca$rotation)
The dependency between the original variables can be seen in the scatterplot matrix the script generates.
The text output shows that the PCA does in fact detect this:
> summary(pca)
Importance of components:
                          PC1    PC2    PC3     PC4
Standard deviation     1.3934 1.0327 0.9950 0.04177
Proportion of Variance 0.4854 0.2666 0.2475 0.00044
Cumulative Proportion  0.4854 0.7520 0.9996 1.00000
The above output shows that roughly half the variation is covered by PC1 on its own (corresponding to an eigenvalue of about 2, where $p = 4$). This is because a suitably scaled combination of $x$, $y$ and $z$ gives a first-order approximation for $t$, so PC1 captures variation shared by all four variables. The final PC adds almost nothing to the variance, as expected, and can be dropped.
> print(pca$rotation)
PC1 PC2 PC3 PC4
x 0.2746490 0.89112242 -0.08053689 -0.3521116
y 0.1857603 -0.02987745 0.97020834 -0.1526310
z 0.6142378 -0.44980563 -0.22700113 -0.6073362
t 0.7160817 0.05179825 -0.02607824 0.6956033
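The claim that $t$ is approximately a scaled combination of $x$, $y$ and $z$ is easy to check directly (a sketch reusing the same data-generating code as the example above):

# Over this range of values, t = sqrt(x^2 + y^2 + z^2) is almost exactly
# a linear function of x, y and z, which is why one principal component
# can soak up most of the shared variation.
n <- 1000
x <- rnorm(n, mean=10, sd=1)
y <- rnorm(n, mean=8, sd=0.5)
z <- rnorm(n, mean=12, sd=1.5)
t <- (x^2 + y^2 + z^2)^(1/2)
summary(lm(t ~ x + y + z))$r.squared     # very close to 1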