Many systems have Principal Component Analysis as the result of their evolution, their computation, their dynamics; neural networks, for example. So, this time, I decided to play with autoencoders.
An autoencoder is a feed-forward neural network that satisfies three properties:
- It has only one hidden layer
- If $n$ is the dimension of the input layer, $m$ the dimension of the output layer, and $p$ the dimension of the hidden one, then $m = n$ and $p < n$
- The output should be as close as possible to the input (in some sense; usually, in terms of the quadratic error), as in the sketch right after this list
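To make the definition concrete, here is a minimal sketch, assuming PyTorch; the class name `Autoencoder`, the sizes `n = 784` and `p = 32`, and the training snippet are illustrative choices, not taken from anywhere in particular. Note there is no nonlinearity: with purely linear layers, the network is exactly the pair of affine maps discussed below.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n: int, p: int):
        super().__init__()
        assert p < n  # the bottleneck must be narrower than the input
        self.encoder = nn.Linear(n, p)  # first affine map: R^n -> R^p
        self.decoder = nn.Linear(p, n)  # second affine map: R^p -> R^n

    def forward(self, x):
        # One hidden layer; the output dimension equals the input dimension
        return self.decoder(self.encoder(x))

model = Autoencoder(n=784, p=32)           # illustrative sizes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # quadratic reconstruction error

x = torch.randn(64, 784)                   # stand-in batch; real data goes here
loss = loss_fn(model(x), x)                # push the output toward the input
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

With this linear setup, minimizing the quadratic error is known to recover the subspace spanned by the top principal components of the data, which is the link to PCA mentioned at the start.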
This is the dimensionality reduction setup of the autoencoder, portraying the characteristic funnel architecture shown in figure 1b; it can be seen as a sequence of two affine maps between three vector spaces $\mathbb{R}^n$, $\mathbb{R}^p$ and $\mathbb{R}^m$, as in figure 1a.
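Spelled out with one consistent (and assumed) choice of symbols, where $W_1 \in \mathbb{R}^{p \times n}$, $b_1 \in \mathbb{R}^p$, $W_2 \in \mathbb{R}^{m \times p}$, $b_2 \in \mathbb{R}^m$ are the learned parameters, the two affine maps and the quadratic objective read:

$$
h = W_1 x + b_1, \qquad \hat{x} = W_2 h + b_2, \qquad \min_{W_1,\, b_1,\, W_2,\, b_2} \ \lVert \hat{x} - x \rVert^2 .
$$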
