Many systems end up computing Principal Component Analysis as a result of their evolution, their computation, their dynamics. Neural networks, for example. So, this time, I decided to play with autoencoders.

An autoencoder is a feed-forward neural network that satisfies three properties:

  1. It has only one hidden layer
  2. If n_i is the dimension of the input layer, n_o the dimension of the output layer, and p the dimension of the hidden one, then n_i = n_o = n and p < n
  3. The output should be as close as possible to the input (in some sense, usually that of the quadratic error)

This is the dimensionality reduction setup of the autoencoder, portraying the characteristic funnel architecture shown in figure 1b; it can be seen as a composition of two affine maps between the three vector spaces X, H and Y, as in figure 1a.

Figure 1
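The three properties above can be sketched in a few lines of NumPy. The following is a minimal illustration, not the post's actual implementation: a linear autoencoder with tied weights (decoder is the transpose of the encoder, biases omitted), trained by gradient descent on the quadratic reconstruction error over synthetic low-rank data; all names and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 8, 2, 500            # input/output dim n, hidden dim p < n, m samples

# Synthetic data with rank-p structure, so a p-dimensional bottleneck suffices.
Z = rng.normal(size=(m, p))
A = rng.normal(size=(p, n))
X = Z @ A

W = rng.normal(scale=0.1, size=(n, p))   # tied weight matrix, small init
lr = 0.01
for _ in range(2000):
    H = X @ W                  # encoder: map from X into the hidden space H
    X_hat = H @ W.T            # decoder: map from H back to the output space Y
    E = X_hat - X              # reconstruction error
    # gradient of the mean quadratic error with respect to the tied weights W
    grad = (X.T @ E @ W + E.T @ X @ W) / m
    W -= lr * grad

mse = np.mean((X @ W @ W.T - X) ** 2)
print(mse)  # small reconstruction error: the bottleneck recovers the rank-2 data
```

In this linear, tied-weights setting the trained W spans the same subspace as the top p principal components of the data, which is exactly the connection to PCA the post opens with.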

Continue reading Autoencoders

The curvature of curves and its computation

How much does a curve bend? That looks like an important question to ask. Indeed, it is THE question to ask, because curvature is everything we need to know about a curve (modulo some annoying groups we will talk about in the future). If you are too shy to ask, you can compute it, and that is what this post is about. In order to compute the curvature you need a bunch of things, and for each one there is a bunch of ways of doing it, so let's talk about some of them.
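As a taste of what such a computation looks like, here is one hedged sketch (not necessarily the method the post develops): for a plane curve γ(t) = (x(t), y(t)), the curvature is κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2), and we can approximate the derivatives with central finite differences via `np.gradient`. The function name and sample curve are made up for the example.

```python
import numpy as np

def curvature(x, y, t):
    """Numerical curvature of the plane curve (x(t), y(t)) sampled at times t."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)        # first derivatives
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)    # second derivatives
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: a circle of radius 2 has constant curvature 1/2.
t = np.linspace(0, 2 * np.pi, 2001)
kappa = curvature(2 * np.cos(t), 2 * np.sin(t), t)
print(kappa[1000])  # close to 0.5
```

The finite-difference approximation is crude near the endpoints of the sample, but in the interior it agrees with the analytic curvature to high accuracy.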

Continue reading The curvature of curves and its computation