Revisiting Semi-Supervised Learning with Graph Embeddings

Discussed April 1st, presented by Vassilis, notes by Vassilis


The paper we reviewed today is not as close to our work as we initially thought… It is actually quite involved and builds heavily on prior work that is itself complex. Some interesting points:
  1. The graph is given externally, not constructed from the usual data matrix. It encodes what the authors call “context”, which depends on the application; e.g., the context of a node is its neighboring nodes.
  2. The model is a simple neural network that passes both the input data and the embeddings (to be learned) through a non-linearity (ReLU, max(0, x)), and finally through a softmax to predict the labels. The authors used only one hidden layer.
  3. The graph is only used to learn the embeddings, but the embeddings are used both for graph context prediction AND for class label prediction (so both loss functions contribute to learning the embeddings).
  4. For input queries that do not correspond to nodes of the graph, the embedding can be computed as the output of a hidden layer fed with the query features.
  5. To train using the graph, they uniformly sample a short random walk, then from it pick two nodes at most d hops apart. This is done because enumerating all possible pairs is too expensive, and this sampling is more efficient for (block) stochastic gradient descent.
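A minimal sketch of the architecture in point 2 (the names and dimensions below are our own illustration, not the paper's code): input features and a learned embedding each go through one ReLU hidden layer, and a softmax over the concatenation predicts the label.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # the non-linearity max(0, x) mentioned above
    return np.maximum(0.0, z)

def softmax(z):
    z = z - z.max()          # for numerical stability
    e = np.exp(z)
    return e / e.sum()

# hypothetical sizes: feature dim, embedding dim, hidden dim, #classes
d_x, d_e, d_h, n_classes = 10, 16, 8, 3
W_x = rng.normal(scale=0.1, size=(d_h, d_x))       # hidden layer on features
W_e = rng.normal(scale=0.1, size=(d_h, d_e))       # hidden layer on embedding
W_out = rng.normal(scale=0.1, size=(n_classes, 2 * d_h))

def predict(x, e):
    """One forward pass: ReLU hidden layers, then softmax over labels."""
    h = np.concatenate([relu(W_x @ x), relu(W_e @ e)])
    return softmax(W_out @ h)

x = rng.normal(size=d_x)   # a node's input features
e = rng.normal(size=d_e)   # its (to-be-learned) embedding
p = predict(x, e)          # class probabilities, sums to 1
```

For a query off the graph (point 4), the same kind of hidden layer applied to the query features alone would stand in for the embedding.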
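The sampling in point 5 can be sketched as follows (our reading of the procedure, not the authors' implementation): sample a short uniform random walk, then return two walk positions at most d hops apart as a positive (node, context) pair.

```python
import random

def sample_context_pair(graph, walk_len=10, d=3, rng=random):
    """graph: dict mapping node -> list of neighbours.

    Sample a random walk of length walk_len, then pick two nodes on the
    walk whose positions differ by at most d hops.
    """
    walk = [rng.choice(list(graph))]          # uniform start node
    for _ in range(walk_len - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    i = rng.randrange(len(walk))
    j = rng.randrange(max(0, i - d), min(len(walk), i + d + 1))
    return walk[i], walk[j]

# toy 4-cycle graph for illustration
g = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
u, v = sample_context_pair(g)
```

Pairs drawn this way feed the context-prediction loss one at a time, which is what makes the scheme convenient for stochastic gradient descent.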


Explanations of word2vec:
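The core of word2vec that such graph embeddings borrow is context prediction with negative sampling. A toy version of the per-pair loss (our sketch, not the original implementation): push the embedding of a center node towards its observed context and away from a few randomly sampled negatives.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_loss(e_center, e_context, e_negatives):
    """Skip-gram-with-negative-sampling loss for one (center, context) pair:
    maximize sigmoid(e_c . e_o) for the observed pair and
    sigmoid(-e_c . e_n) for each sampled negative n."""
    pos = np.log(sigmoid(e_center @ e_context))
    neg = sum(np.log(sigmoid(-e_center @ e_n)) for e_n in e_negatives)
    return -(pos + neg)   # loss to minimize (always positive)

rng = np.random.default_rng(0)
loss = sgns_loss(rng.normal(size=8), rng.normal(size=8),
                 [rng.normal(size=8) for _ in range(5)])
```

In the graph setting, the (center, context) pairs come from the random-walk sampling above instead of from a text window.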


[1] Yang, Z., Cohen, W., and Salakhutdinov, R. (2016). Revisiting Semi-Supervised Learning with Graph Embeddings. arXiv:1603.08861 [cs].

