
Definition and fundamental properties

To define the concept of independence, consider two scalar-valued random variables y1 and y2. Basically, the variables y1 and y2 are said to be independent if information on the value of y1 does not give any information on the value of y2, and vice versa. Above, we noted that this is the case with the variables s1, s2 but not with the mixture variables x1, x2.

Technically, independence can be defined by the probability densities. Let us denote by p(y1,y2) the joint probability density function (pdf) of y1 and y2. Let us further denote by p1(y1) the marginal pdf of y1, i.e. the pdf of y1 when it is considered alone:

\begin{displaymath}
p_1(y_1)=\int p(y_1,y_2)\,dy_2,
\end{displaymath} (7)

and similarly for y2. We then define y1 and y2 to be independent if and only if the joint pdf factorizes in the following way:

 
\begin{displaymath}
p(y_1,y_2)=p_1(y_1)p_2(y_2).
\end{displaymath} (8)

This definition extends naturally to any number n of random variables, in which case the joint density must be the product of the n marginal densities.
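
As a concrete illustration, the factorization (8) can be checked numerically. The following is a minimal sketch in Python with NumPy (the distributions, sample size, and grid are arbitrary choices, not part of the tutorial); it estimates the joint pdf of two independently drawn samples by a two-dimensional histogram and compares it with the product of the estimated marginals:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent random variables (arbitrary illustrative distributions).
y1 = rng.uniform(-1.0, 1.0, n)
y2 = rng.standard_normal(n)

# Histogram estimate of the joint pdf p(y1, y2)...
joint, e1, e2 = np.histogram2d(y1, y2, bins=20, density=True)

# ...and of the marginal pdfs p1(y1) and p2(y2), cf. Eq. (7).
p1, _ = np.histogram(y1, bins=e1, density=True)
p2, _ = np.histogram(y2, bins=e2, density=True)

# For independent variables, the joint estimate matches the product of
# the marginals (Eq. (8)) up to sampling noise.
print(np.max(np.abs(joint - p1[:, None] * p2[None, :])))
\end{verbatim}

For dependent variables, such as the mixtures x1, x2 above, the same comparison would reveal a systematic discrepancy rather than mere sampling noise.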

The definition can be used to derive a most important property of independent random variables. Given any two functions h1 and h2 for which the expectations below exist, we always have

 \begin{displaymath}
E\{h_1(y_1) h_2(y_2)\}=E\{h_1(y_1)\}E\{h_2(y_2)\}.
\end{displaymath} (9)

This can be proven as follows, writing the expectation as an integral and applying the factorization (8):
\begin{multline}
E\{h_1(y_1) h_2(y_2)\}=\int \int h_1(y_1) h_2(y_2) p(y_1,y_2)\,dy_1 dy_2 \\
=\int \int h_1(y_1) h_2(y_2) p_1(y_1) p_2(y_2)\,dy_1 dy_2 \\
=\int h_1(y_1) p_1(y_1)\,dy_1 \int h_2(y_2) p_2(y_2)\,dy_2
=E\{h_1(y_1)\}E\{h_2(y_2)\}.
\end{multline}
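
Property (9) can likewise be verified by simulation. The sketch below (again Python with NumPy; the choices h1(y) = tanh(y) and h2(y) = y^3 are arbitrary, illustrative assumptions) compares both sides of (9) for independent samples, and shows how the identity fails once y2 is replaced by a variable that depends on y1:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent samples; h1(y) = tanh(y) and h2(y) = y**3 are arbitrary.
y1 = rng.standard_normal(n)
y2 = rng.standard_normal(n)

lhs = np.mean(np.tanh(y1) * y2**3)           # E{h1(y1) h2(y2)}
rhs = np.mean(np.tanh(y1)) * np.mean(y2**3)  # E{h1(y1)} E{h2(y2)}
print(lhs, rhs)  # agree up to Monte Carlo error (here both are near 0)

# With dependent variables the identity generally fails: take y2 = y1.
print(np.mean(np.tanh(y1) * y1**3),           # clearly positive
      np.mean(np.tanh(y1)) * np.mean(y1**3))  # near 0
\end{verbatim}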

