Principal Component Analysis, or PCA (see [77,87]), is widely used in signal processing, statistics, and neural computing. In some application areas, this is also called the (discrete) Karhunen-Loève transform, or the Hotelling transform.
The basic idea in PCA is to find the components $s_1, s_2, \ldots, s_n$ so that they explain the maximum amount of variance possible by $n$ linearly transformed components.
PCA can be defined in an intuitive way using a recursive formulation. Define the direction of the first principal component, say $\mathbf{w}_1$, by

$$\mathbf{w}_1 = \arg\max_{\|\mathbf{w}\| = 1} E\{(\mathbf{w}^T \mathbf{x})^2\} \qquad (4)$$

where the maximum is taken over unit-norm vectors $\mathbf{w}$ of the same dimension $m$ as the data vector $\mathbf{x}$. Thus the first principal component is the projection onto the direction in which the variance of the projection is maximized.
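The maximization in Eq. (4) is solved by the dominant eigenvector of the covariance matrix $E\{\mathbf{x}\mathbf{x}^T\}$. As a minimal sketch (NumPy; the function name and the synthetic data are illustrative assumptions, not part of the source), the first principal direction can be estimated as follows:

import numpy as np

def first_principal_direction(X):
    """Estimate w_1 for data X with one sample per row.

    Maximizing E{(w^T x)^2} over unit-norm w is solved by the
    eigenvector of the sample covariance matrix that belongs to
    the largest eigenvalue.
    """
    C = np.cov(X, rowvar=False)            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # direction of maximal variance

# Illustrative example on synthetic two-dimensional data
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[3, 2], [2, 2]], size=500)
print(first_principal_direction(X))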
The basic goal in PCA is to reduce the dimension of the data. Thus one usually chooses $n \ll m$. Indeed, it can be proven that the representation given by PCA is an optimal linear dimension reduction technique in the mean-square sense [77]. Such a reduction in dimension has important benefits. First, the computational overhead of the subsequent processing stages is reduced. Second, noise may be reduced, since the data not contained in the first $n$ components may be mostly due to noise. Third, a projection into a subspace of very low dimension, for example two, is useful for visualizing the data. Note that often it is not necessary to use the $n$ principal components themselves, since any other orthonormal basis of the subspace spanned by the principal components (called the PCA subspace) has the same data compression or denoising capabilities.
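The corresponding dimension reduction can be sketched in the same way (NumPy; again the names are illustrative only), by projecting the centered data onto the first $n$ principal directions:

import numpy as np

def pca_reduce(X, n):
    """Project data X (one sample per row, m variables) onto the
    subspace spanned by the first n principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                          # center the data
    C = np.cov(Xc, rowvar=False)           # m x m covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues
    W = eigvecs[:, ::-1][:, :n]            # first n principal directions
    return Xc @ W, W, mean

# Illustrative example: reduce 10-dimensional data to n = 2 for visualization
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
X_reduced, W, mean = pca_reduce(X, n=2)
print(X_reduced.shape)                     # (200, 2)

The reconstruction X_reduced @ W.T + mean is then the mean-square-optimal linear approximation of the data with $n$ components, and, as noted above, any other orthonormal basis of the same PCA subspace yields exactly the same reconstruction error.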
A simple illustration of PCA is found in Fig. 1, in which the first principal component of a two-dimensional data set is shown.