Autoencoders

Many systems, neural networks among them, end up computing Principal Components Analysis as a result of their dynamics. So, this time, I decided to play with autoencoders.

An autoencoder is a feedforward neural network that satisfies three properties:

  1. It has only one hidden layer
  2. If n_i is the dimension of the input layer, n_o the dimension of the output layer, and p the dimension of the hidden one, then n_i = n_o = n and p < n
  3. The output should be as close as possible to the input (in some sense; usually the quadratic error)

This is the dimensionality-reduction setup of the autoencoder, with the characteristic funnel architecture shown in figure 1b; it can be seen as a sequence of two affine maps between three vector spaces X, H, and Y, as in figure 1a.
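
Concretely (a sketch from the definitions above, writing $W_1 \in \mathbb{R}^{p \times n}$, $W_2 \in \mathbb{R}^{n \times p}$ for the weights of the two affine maps and $b_1$, $b_2$ for their biases), the network computes

$$\hat{x} = W_2 (W_1 x + b_1) + b_2,$$

and training minimizes the quadratic error $\sum_i \lVert x_i - \hat{x}_i \rVert^2$ over the dataset.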

Figure 1: (a) the autoencoder as a sequence of two affine maps between X, H, and Y; (b) the funnel architecture.
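
As a minimal runnable sketch (assuming PyTorch; the dimensions n = 8, p = 3 and the random training data are placeholders, not from the original):

```python
import torch
from torch import nn

# Placeholder dimensions: n is the input/output dimension, p < n the hidden one.
n, p = 8, 3

# Two affine maps X -> H -> Y, as in figure 1a; no nonlinearity,
# matching the purely affine setup described above.
autoencoder = nn.Sequential(
    nn.Linear(n, p),  # encoder: R^n -> R^p (the funnel's narrow waist)
    nn.Linear(p, n),  # decoder: R^p -> R^n
)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()  # quadratic error between output and input

x = torch.randn(256, n)  # toy data standing in for a real dataset
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(x), x)  # the output should reconstruct the input
    loss.backward()
    optimizer.step()
```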
