Meeting 9 (29.05)

Goal: discuss the classification accuracy obtained with the features extracted by our auto-encoder.

Related notebooks: audio_features, audio_classification

Achieved

  • Resolved memory exhaustion.
    1. Prevent data copies in the PyUNLocBoX toolbox. The toolbox was internally storing multiple copies of the data when instantiating functions and solvers; it now stores a reference instead. See the git commit for further details. This divided the memory consumption of our application by 5, with no significant impact on speed.
    2. Use 32-bit floating point instead of 64-bit. It halves the memory footprint and speeds computations up by a factor of 1.6. We are no longer memory-bandwidth limited: computations almost always fully load two CPU cores. (A sketch of both memory fixes follows this list.)
  • Project into the L2 ball rather than onto its surface: the ball is a convex set, so the constraint stays convex, whereas the sphere is not. No significant improvement in speed. (A sketch of the projection also follows the list.)
  • Inserted the energy auto-encoder into the pipeline.
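
A minimal sketch of the two memory fixes (illustrative only; the Objective class and the data shapes are hypothetical stand-ins, not the actual PyUNLocBoX code changed in the commit):

    import numpy as np

    class Objective(object):
        """Hypothetical stand-in for a toolbox function object."""
        def __init__(self, data):
            # Before the fix: self.data = np.array(data), i.e. a private copy
            # per instantiated function and solver.
            self.data = data  # After the fix: keep a reference, no copy.

    # Cast the dataset to 32-bit floats once, up front: half the memory, and
    # roughly a 1.6x speed-up in our case since we were bandwidth limited.
    X = np.random.uniform(size=(10000, 512)).astype(np.float32)

    f = Objective(X)
    assert f.data is X  # Same buffer: instantiating more objects costs nothing.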
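
The projection into the L2 ball is a simple radial scaling; a sketch, assuming a ball centered at the origin:

    import numpy as np

    def project_l2_ball(x, radius=1.0):
        """Project x into the L2 ball of the given radius (a convex set).

        Points already inside the ball are left untouched; projecting onto
        the sphere (the ball's surface) would move them, and that set is
        not convex.
        """
        norm = np.linalg.norm(x)
        if norm <= radius:
            return x
        return x * (radius / norm)

    x = np.array([3.0, 4.0])               # norm 5
    print(project_l2_ball(x, radius=1.0))  # [0.6 0.8], now on the boundary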

Discussion

  • Data normalization: per feature or per sample? (See the sketch after this list.)
  • Decrease dimensionality for faster experiments.
  • Approaches for faster convergence.
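
The two normalization candidates from the discussion, side by side (a sketch on stand-in data; which variant helps remains to be tested):

    import numpy as np

    X = np.random.rand(1000, 128).astype(np.float32)  # samples x features (stand-in)

    # Per-feature: each column (feature) gets zero mean and unit variance.
    X_features = (X - X.mean(axis=0)) / X.std(axis=0)

    # Per-sample: each row (sample) is scaled to unit L2 norm.
    X_samples = X / np.linalg.norm(X, axis=1, keepdims=True)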

Next

  1. Experiment with hyper-parameters.
  2. Create the graph.
  3. Insert the Dirichlet energy into the objective. (A sketch of steps 2 and 3 follows this list.)
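
A sketch of steps 2 and 3, assuming a k-nearest-neighbor graph built on the extracted features, with scipy/scikit-learn as stand-ins for the graph construction (the choice of k and of connectivity weights is hypothetical):

    import numpy as np
    from scipy.sparse import csgraph
    from sklearn.neighbors import kneighbors_graph

    Z = np.random.rand(500, 64).astype(np.float32)  # stand-in for the features

    # Symmetric k-nearest-neighbor graph over the samples.
    W = kneighbors_graph(Z, n_neighbors=10, mode='connectivity')
    W = 0.5 * (W + W.T)  # symmetrize the adjacency matrix

    # Combinatorial graph Laplacian L = D - W.
    L = csgraph.laplacian(W)

    # Dirichlet energy tr(Z^T L Z): small when the features vary smoothly
    # along the graph edges. This is the term to add to the objective.
    energy = np.trace(Z.T.dot(L.dot(Z)))
    print(energy)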


Michaël Defferrard

I am currently pursuing a master's degree in Information Technologies at EPFL. My master's project, conducted at the LTS2 Signal Processing laboratory led by Prof. Pierre Vandergheynst, is about audio classification with structured deep learning. I previously devised an image inpainting algorithm that used a non-local patch graph representation of the image and a structure detector which leverages that representation to influence the fill order of the exemplar-based algorithm. I have also been a Research Assistant in the lab, where I investigated super-resolution methods for mass spectrometry. I develop PyUNLocBoX, a convex optimization toolbox in Python.
