To define the concept of independence, consider two scalar-valued
random variables *y*_{1} and *y*_{2}. Basically, the variables *y*_{1} and *y*_{2} are said to be independent if information on the value of
*y*_{1} does not give any information on the value of *y*_{2}, and vice
versa. Above, we noted that this is the case with the variables *s*_{1},
*s*_{2} but not with the mixture variables *x*_{1}, *x*_{2}.

Technically, independence can be defined by the probability densities.
Let us denote by
*p*(*y*_{1},*y*_{2}) the joint probability density function
(pdf) of *y*_{1} and *y*_{2}. Let us further denote by *p*_{1}(*y*_{1}) the
marginal pdf of
*y*_{1}, i.e. the pdf of *y*_{1} when it is considered alone:

*p*_{1}(*y*_{1}) = ∫ *p*(*y*_{1},*y*_{2}) d*y*_{2}    (7)

and similarly for *p*_{2}(*y*_{2}), the marginal pdf of *y*_{2}. Then we define that *y*_{1} and *y*_{2} are independent if and only if the joint pdf factorizes as

*p*(*y*_{1},*y*_{2}) = *p*_{1}(*y*_{1}) *p*_{2}(*y*_{2}).    (8)

This definition extends naturally to any number *n* of random variables, in which case the joint density must be a product of *n* terms.

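As a small numerical sketch (hypothetical values, using NumPy, which the text itself does not prescribe), one can build a discrete joint pmf as the product of two marginals and check both the marginalization of Eq. (7) and the factorization that characterizes independence:

```python
import numpy as np

# Hypothetical discrete example: a joint pmf on a 3x3 grid, built as the
# outer product of two marginal pmfs, so y1 and y2 are independent by
# construction.
p1 = np.array([0.2, 0.5, 0.3])   # marginal pmf of y1
p2 = np.array([0.1, 0.6, 0.3])   # marginal pmf of y2
p_joint = np.outer(p1, p2)       # p(y1, y2) = p1(y1) * p2(y2)

# Recover each marginal by summing out the other variable
# (the discrete analogue of the integral in Eq. (7)).
m1 = p_joint.sum(axis=1)
m2 = p_joint.sum(axis=0)

print(np.allclose(m1, p1))                       # True
print(np.allclose(m2, p2))                       # True
# Independence: the joint equals the product of its marginals.
print(np.allclose(p_joint, np.outer(m1, m2)))    # True
```

For a joint pmf that is *not* an outer product of its marginals, the last check would fail, which is exactly how the definition distinguishes dependent from independent variables.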
The definition can be used to derive a most important property of
independent random variables. Given two functions, *h*_{1} and *h*_{2},
we always have

E{*h*_{1}(*y*_{1})*h*_{2}(*y*_{2})} = E{*h*_{1}(*y*_{1})} E{*h*_{2}(*y*_{2})}.    (9)
This can be proven as follows: