Using the principle of stochastic gradient descent, one can derive
simple algorithms from the one-unit contrast functions explained above.
Let us consider first whitened data.
For example, taking the instantaneous gradient
of the generalized contrast function
in (28) with respect to $w$,
and taking the normalization $\|w\| = 1$ into account,
one obtains the following Hebbian-like learning rule:
$$
\Delta w \propto x\, g(w^T x), \qquad w \leftarrow w / \|w\|,
$$
where the nonlinearity $g$ is the derivative of the function $G$ used in the contrast function.
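As a concrete illustration, the rule above can be sketched for whitened two-dimensional data. The toy mixing setup, the tanh nonlinearity, and the learning rate below are illustrative assumptions, not an experiment from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): two independent sub-Gaussian (uniform)
# unit-variance sources, linearly mixed, then whitened.
n = 20000
s = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n))   # unit-variance sources
A = rng.normal(size=(2, 2))                             # unknown mixing matrix
x = A @ s
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
x = E @ np.diag(d ** -0.5) @ E.T @ x                    # whitened: cov(x) ~ I

g = np.tanh            # g = G', a common choice of nonlinearity
mu = 0.01              # learning rate (assumed fixed for simplicity)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for t in range(n):
    xt = x[:, t]
    w += mu * xt * g(w @ xt)      # Hebbian-like update: delta-w ~ x g(w'x)
    w /= np.linalg.norm(w)        # enforce the normalization ||w|| = 1
```

After the sweep, $y = w^T x$ recovers one source up to sign. Note that for super-Gaussian sources with this $G$ the update may need its sign reversed; the generalized contrast handles both cases through a sign factor, which this sketch omits.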
To estimate several independent components, one needs a system of several units, each of which learns according to a one-unit learning rule. The system must also contain some feedback mechanism between the units; see, e.g., [71,73]. In , a special kind of feedback was developed to solve some problems of non-locality encountered with the other learning rules.
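One simple feedback mechanism of the kind mentioned above is Gram-Schmidt-like decorrelation, in which each unit's weight vector is orthogonalized against those of the earlier units after every update. The sketch below (one possible choice for illustration, not the specific scheme of any cited paper) couples several one-unit learners this way:

```python
import numpy as np

def estimate_components(x, n_comp, mu=0.01, g=np.tanh, seed=1):
    """Several one-unit Hebbian-like learners on whitened data x of shape
    (dim, n), coupled by a Gram-Schmidt-style feedback that keeps the
    weight vectors mutually orthogonal (illustrative decorrelation)."""
    rng = np.random.default_rng(seed)
    dim, n = x.shape
    W = rng.normal(size=(n_comp, dim))
    for t in range(n):
        xt = x[:, t]
        for k in range(n_comp):
            w = W[k]
            w += mu * xt * g(w @ xt)       # one-unit Hebbian-like update
            w -= W[:k].T @ (W[:k] @ w)     # feedback: remove projections on earlier units
            W[k] = w / np.linalg.norm(w)   # normalization ||w|| = 1
    return W
```

For whitened data the independent components correspond to mutually orthogonal weight vectors, which is why an orthogonalizing feedback suffices to keep the units from converging to the same component.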