To begin with, we shall recall some basic definitions. Denote by
y_{1}, y_{2}, ..., y_{m} some random variables with joint density
f(y_{1},...,y_{m}). For simplicity, assume that the variables are
zero-mean. The variables y_{i} are said to be (mutually)
independent if the density function can be factorized [122]:
f(y_{1},...,y_{m}) = f_{1}(y_{1}) f_{2}(y_{2}) ... f_{m}(y_{m})    (7)
where f_{i}(y_{i}) denotes the marginal density of y_{i}. To distinguish
this form of independence from other concepts of independence, for
example, linear independence, this property is sometimes called
statistical independence.
Independence must be distinguished from uncorrelatedness, which means
that

E{y_{i} y_{j}} - E{y_{i}} E{y_{j}} = 0,  for i ≠ j    (8)
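As a quick numerical sketch of (8) (an illustration not taken from the text; the sample sizes and variable names are my own), two independently generated zero-mean samples have a sample covariance close to zero:

```python
# Sketch: independent zero-mean samples are uncorrelated (eq. 8).
# Illustrative only; distributions and sample size chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
y1 = rng.uniform(-1.0, 1.0, n)   # zero-mean, generated independently of y2
y2 = rng.standard_normal(n)

# Sample estimate of E{y1 y2} - E{y1} E{y2}; should be near zero.
cov = np.mean(y1 * y2) - np.mean(y1) * np.mean(y2)
print(f"covariance ≈ {cov:.4f}")
```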
Independence is in general a much stronger requirement than
uncorrelatedness. Indeed, if the y_{i} are independent, one has

E{g_{1}(y_{i}) g_{2}(y_{j})} - E{g_{1}(y_{i})} E{g_{2}(y_{j})} = 0    (9)

for any functions g_{1} and g_{2} (such that the expectations
exist) [122]. This is clearly a stricter condition than the
condition of uncorrelatedness.
case where independence and uncorrelatedness are equivalent. This is
the case when
y_{1},...,y_{m} have a joint Gaussian distribution (see
[36]). Due to this property, independent component analysis is
not interesting (or possible) for Gaussian variables, as will be seen below.
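The distinction can be made concrete numerically. The following sketch (my own illustration, assuming NumPy; the construction y_{2} = y_{1}^{2} - 1 is a standard textbook example, not from the text) builds two zero-mean variables that are uncorrelated yet dependent, and shows that a nonlinear choice of g_{1} and g_{2} in (9) exposes the dependence:

```python
# Sketch: uncorrelated does not imply independent for non-Gaussian variables.
# y1 ~ N(0,1) and y2 = y1^2 - 1 are both zero-mean and uncorrelated
# (E{y1 y2} = E{y1^3} = 0), but y2 is a deterministic function of y1.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y1 = rng.standard_normal(n)
y2 = y1**2 - 1.0

# Uncorrelatedness (eq. 8): sample estimate close to 0.
cov = np.mean(y1 * y2) - np.mean(y1) * np.mean(y2)

# Dependence revealed via eq. (9) with g1(u) = u^2, g2(u) = u:
# E{y1^2 y2} = E{y1^4 - y1^2} = 3 - 1 = 2, while E{y1^2} E{y2} = 0.
g = np.mean(y1**2 * y2) - np.mean(y1**2) * np.mean(y2)

print(f"covariance           ≈ {cov:.3f}")   # near 0
print(f"nonlinear moment gap ≈ {g:.3f}")     # near 2, far from 0
```

Were y_{1} and y_{2} jointly Gaussian and uncorrelated, every such nonlinear moment gap would also vanish, which is why ICA has nothing to exploit in the Gaussian case.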
Aapo Hyvarinen
19990423