Goal: discuss the first prototype implementation of our auto-encoder and the atoms learned on image patches.

Related notebooks: formulation, image.

### Achieved

- Implementation of a prototype in Python.
- Optimizing for Z last leads to a lower global objective, since the objective contains two terms in Z but only one in D.
- The solution does not depend too much on the initialization, even if the global problem is not convex.

- See the *formulation* notebook for a construction of the algorithm from the base (linear least-squares fitting) to the version with an encoder (no graph yet). Tests on synthetic examples are included.
- See the *image* notebook for an application to image data, along with observations. We try here to recover Gabor filters.
- Results from Xavier: I tested the two constraints on the dictionary: (1) ||d_j|| = 1 and (2) ||d_j|| <= 1. The algorithm is 2x faster with (2)! I think this is simply because constraint (2) is convex, unlike (1).
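Both dictionary constraints above amount to a per-column projection applied after each dictionary update; only the unit-ball projection is onto a convex set. A minimal NumPy sketch (function names are ours, not from the prototype):

```python
import numpy as np

def project_sphere(D):
    """Project each column of D onto the unit sphere ||d_j|| = 1
    (a non-convex set): rescale every nonzero column to norm 1."""
    norms = np.linalg.norm(D, axis=0)
    norms[norms == 0] = 1.0  # leave all-zero columns untouched
    return D / norms

def project_ball(D):
    """Project each column of D onto the unit ball ||d_j|| <= 1
    (a convex set): shrink only the columns whose norm exceeds 1."""
    norms = np.linalg.norm(D, axis=0)
    return D / np.maximum(norms, 1.0)
```

Note that the two projections only differ on columns with norm below 1, which the ball projection leaves unchanged.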

### Discussion

- \(E\) seems to approach \(D^T\) when it is less constrained, i.e. \(m \gg n\). This was due to an implementation error which constrained the L1 norm of the whole matrix instead of independently constraining each column.
- The norms of the used atoms are oddly not 1 (a norm of 1 should minimize the objective function). This was caused by the aforementioned bug.
- Centered atoms seem to appear when the dictionary is more constrained.
- To do: measure how similar \(E\) is to \(D^T\).
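One way to quantify how close \(E\) is to \(D^T\) is a relative Frobenius distance together with a row-wise cosine similarity. A sketch under the assumption that \(E\) is \(m \times n\) and \(D\) is \(n \times m\) (the helper name is hypothetical):

```python
import numpy as np

def encoder_decoder_similarity(E, D):
    """Compare the learned encoder E (m x n) with the transposed
    dictionary D^T. Returns the relative Frobenius distance and the
    mean cosine similarity between matching rows of E and D^T."""
    Dt = D.T
    rel_frob = np.linalg.norm(E - Dt) / np.linalg.norm(Dt)
    cos = np.sum(E * Dt, axis=1) / (
        np.linalg.norm(E, axis=1) * np.linalg.norm(Dt, axis=1))
    return rel_frob, cos.mean()
```

The cosine term is scale-invariant, so it separates "same directions, different scaling" from genuine disagreement between the encoder and the dictionary.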

### Notes

- FISTA is not guaranteed to be monotone: the objective value can increase from one iterate to the next.
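A simple remedy is a monotone safeguard in the style of MFISTA: keep the previous iterate whenever the new one would increase the objective. A sketch for the Lasso subproblem in Z (the function names and the simplified safeguard are ours; full MFISTA also re-centers the momentum sequence):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200, monotone=False):
    """FISTA for min_z 0.5*||A z - b||^2 + lam*||z||_1.
    With monotone=True, reject any step that increases the objective."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.abs(z).sum()
    z = np.zeros(A.shape[1])
    y, t = z.copy(), 1.0
    history = [obj(z)]
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z_new = soft_threshold(y - grad / L, lam / L)
        if monotone and obj(z_new) > obj(z):
            z_new = z  # safeguard: keep the best iterate so far
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = z_new + ((t - 1) / t_new) * (z_new - z)
        z, t = z_new, t_new
        history.append(obj(z))
    return z, history
```

With `monotone=True` the recorded objective sequence is non-increasing by construction, at the cost of one extra objective evaluation per iteration.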