
Proof of convergence of (26)

Denote by ${\bf W}_+$ the result of applying the iteration step 2 in (26) once to ${\bf W}$. Let ${\bf W}{\bf C}{\bf W}^T={\bf E}{\bf D}{\bf E}^T$ be the eigenvalue decomposition of ${\bf W}{\bf C}{\bf W}^T$. Then we have

 
\begin{displaymath}
{\bf W}_+{\bf C}{\bf W}_+^T=\frac{9}{4}{\bf E}{\bf D}{\bf E}^T-\frac{3}{2}{\bf E}{\bf D}^2{\bf E}^T+\frac{1}{4}{\bf E}{\bf D}^3{\bf E}^T
\end{displaymath} (35)

\begin{displaymath}
={\bf E}\left(\frac{9}{4}{\bf D}-\frac{3}{2}{\bf D}^2+\frac{1}{4}{\bf D}^3\right){\bf E}^T
\end{displaymath} (36)

Note that due to the normalization, i.e. the division of ${\bf W}$ by $\sqrt{\Vert{\bf W}{\bf C}{\bf W}^T\Vert}$, all the eigenvalues of ${\bf W}{\bf C}{\bf W}^T$ lie in the interval $[0,1]$. Now, according to (35), for every eigenvalue $\lambda_i$ of ${\bf W}{\bf C}{\bf W}^T$, the matrix ${\bf W}_+{\bf C}{\bf W}_+^T$ has a corresponding eigenvalue $h(\lambda_i)$, where $h(\cdot)$ is defined as:

 \begin{displaymath}
h(\lambda)=\frac{9}{4}\lambda-\frac{3}{2}\lambda^2+\frac{1}{4}\lambda^3
\end{displaymath} (37)
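As a quick sanity check (a minimal sketch in Python, assuming nothing beyond (37)), iterating $h$ on a single scalar eigenvalue illustrates the behaviour established below: the iterates increase monotonically towards 1, and the distance from 1 is roughly squared at each step.

\begin{verbatim}
# Iterate h from (37) on a scalar eigenvalue in (0,1); the printed
# error 1 - lambda shrinks roughly quadratically once it is small.
def h(lam):
    return 9/4 * lam - 3/2 * lam**2 + 1/4 * lam**3

lam = 0.3  # arbitrary starting eigenvalue in (0,1)
for k in range(6):
    print(f"k={k}  lambda={lam:.15f}  error={1 - lam:.3e}")
    lam = h(lam)
\end{verbatim}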

Thus, after $k$ iterations, the eigenvalues of ${\bf W}{\bf C}{\bf W}^T$ are obtained as $h(h(\ldots h(\lambda_i)\ldots))$, where $h$ is applied $k$ times to the $\lambda_i$, the eigenvalues of ${\bf W}{\bf C}{\bf W}^T$ for the original matrix before the iterations. Now, we always have $h(\lambda)>\lambda$ for $0<\lambda<1$, since $h(\lambda)-\lambda=\frac{1}{4}\lambda(\lambda-1)(\lambda-5)>0$ on that interval. It is therefore clear that (assuming ${\bf W}$ and ${\bf C}$ are nonsingular, so that all $\lambda_i>0$) all the eigenvalues of ${\bf W}{\bf C}{\bf W}^T$ converge to 1, which means that ${\bf W}{\bf C}{\bf W}^T\rightarrow{\bf I}$, Q.E.D. Moreover, the convergence is quadratic: writing $\lambda=1-\epsilon$ gives $1-h(1-\epsilon)=\frac{3}{4}\epsilon^2+\frac{1}{4}\epsilon^3$, so the distance of each eigenvalue from 1 is essentially squared at every step.
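The whole iteration is also easy to verify numerically. The sketch below uses random stand-ins for ${\bf C}$ and ${\bf W}$, and takes for step 2 the update ${\bf W}\leftarrow\frac{3}{2}{\bf W}-\frac{1}{2}{\bf W}{\bf C}{\bf W}^T{\bf W}$, which is the form consistent with (35)-(36) rather than a verbatim copy of (26); $\Vert{\bf W}{\bf C}{\bf W}^T - {\bf I}\Vert$ should shrink quadratically to zero.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random stand-ins: a symmetric positive-definite C and an initial W.
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)
W = rng.standard_normal((n, n))

# Normalization: divide W by sqrt(||W C W^T||) (spectral norm here),
# so the eigenvalues of W C W^T lie in [0, 1] as the proof requires.
W /= np.sqrt(np.linalg.norm(W @ C @ W.T, 2))

# Step 2 in the form consistent with (35)-(36): the update maps each
# eigenvalue of W C W^T through h of (37), driving W C W^T to I.
for k in range(8):
    M = W @ C @ W.T
    err = np.linalg.norm(M - np.eye(n), 2)
    print(f"k={k}  ||W C W^T - I|| = {err:.3e}")
    W = 1.5 * W - 0.5 * M @ W
\end{verbatim}

Any matrix norm that is at least as large as the largest eigenvalue of ${\bf W}{\bf C}{\bf W}^T$ would serve for the normalization; the spectral norm is used above for concreteness.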

