Some inpainting results

Yesterday I cleaned up the GIIN code and ran inpainting of larger pictures on our cluster. The results are below. While some images were realistically reconstructed, others look weird. I will run simulations with different parameters. Our hope is that some set of parameters will work on most of our test images.


GIIN: end of the project

My work on GIIN at the EPFL LTS2 laboratory came to an end last December. The GIIN project aimed to explore applications of spectral graph theory to the inpainting of large missing chunks. We used a non-local patch graph representation of the image and proposed a structure detector which leverages the graph representation and influences the fill-order of our exemplar-based algorithm. Our method achieved state-of-the-art performance.

A report of this work is available here. We are now working on a paper to present our results in IEEE Transactions on Image Processing.

Graph analysis


Compare the regularity \(x^T L x\) of four graphs :

  1. the perfect graph : constructed with the complete original image
  2. the disconnected graph : constructed by leaving the unknown vertices alone
  3. the reconstructed graph : constructed during the inpainting process
  4. a dumb graph : unknown vertices are connected on a grid
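The regularity measure compared here is straightforward to compute; a minimal numpy sketch, assuming the combinatorial Laplacian \(L = D - W\) (the toy graph below is illustrative, not one of the four graphs above):

```python
import numpy as np

def regularity(W, x):
    """Graph regularity x^T L x with the combinatorial Laplacian L = D - W.
    Equals 0.5 * sum_ij w_ij (x_i - x_j)^2 : the lower the value, the
    better the graph fits the signal."""
    L = np.diag(W.sum(axis=1)) - W
    return float(x @ L @ x)

# Toy path graph 0-1-2 with unit weights.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x_smooth = np.array([1., 1., 1.])  # constant signal
x_rough = np.array([1., -1., 1.])  # oscillating signal
```

On this path graph the constant signal gives regularity 0 and the oscillating one gives 8, i.e. the sum of \(w_{ij}(x_i - x_j)^2\) over the two edges.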



  1. As expected, the perfect graph is the most regular of the four.
  2. Surprisingly, the disconnected and dumb graphs are more regular than the reconstructed graph. This would indicate that our reconstruction scheme does not produce an ideal graph.
  3. The greater the weights of the inserted connections in the dumb graph, the higher its regularity measure. The image shows weights of 1.

Meeting 10 (w12)


  1. Sparsity measure \(\frac{\|T_ig\|_1}{\|T_ig\|_2}\) as structure priority.
  2. Regularity measure \(x^TLx\) of the perfect graph, the disconnected graph, the reconstructed graph and a dumb graph.


  • The sparsity measure seems to detect edges well. \(\|T_ig\|_1\) is indeed constant and equal to one, the amplitude of the Kronecker delta, so the priority is defined as \(\|T_ig\|_2\).
  • The heat scale plays the role of a tradeoff between the two priorities (information and structure). Higher scale gives higher priority to structure.
  • Combinatorial Laplacian : The disconnected and dumb graphs have a better regularity \(x^TLx\) than the reconstructed graph. The greater the weights of the inserted connections in the dumb graph, the higher its regularity measure.
  • Normalized Laplacian : The dumb graph has the best regularity \(x^TLx\). The weights of the inserted connections in the dumb graph have a very limited impact.
  • Relative importance of locality in graph construction has a great impact on the regularity.
  • The regularity measure prefers graphs that are more local. The lowest regularity (on the perfect graph) is achieved with gparam.graph.loc=0.1. If too local, regularity increases because there are connections across edges. If too non-local, regularity increases for no apparent reason.
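The observation that \(\|T_ig\|_1\) stays equal to one has a simple reason: the combinatorial Laplacian has zero row sums, so the heat kernel preserves the sum of a (nonnegative) delta. A small Python sanity check of the priority, using the exact matrix exponential where the toolbox uses a Chebyshev approximation:

```python
import numpy as np
from scipy.linalg import expm

def structure_priority(W, i, tau=1.0):
    """Heat-diffuse a Kronecker delta placed on vertex i and return its
    2-norm. Since exp(-tau*L) preserves the sum of the signal (L has
    zero row sums), the 1-norm stays 1 and the 2-norm alone measures
    how concentrated the diffusion is : high on structure, low when
    the energy spreads over a texture."""
    L = np.diag(W.sum(axis=1)) - W
    delta = np.zeros(W.shape[0])
    delta[i] = 1.0
    g = expm(-tau * L) @ delta
    return float(np.linalg.norm(g, 2))
```

An isolated vertex keeps the full delta (priority 1), while a well-connected vertex spreads its energy and gets a lower 2-norm.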


  • The algorithm cannot reconstruct a curved edge because it does not know where to go. It goes straight by default. Trying with the images from the paper, we observe the same limitation as the authors in this regard.
  • Maybe the fact that we don’t allow internal connections, i.e. connections from unknown vertices to unknown vertices, has some impact.
  • We should think about some way to globally optimize the regularity of the reconstructed graph.
  • Why does regularity increase for overly non-local graphs ?


  • A bigger locality weight drastically improves the computational efficiency of the patch graph construction, probably due to a smaller difference in scales (pixel values vs. positional values).
  • We did not follow the project proposal, but we have an inpainting scheme and a beautiful structure detector.

Meeting 9 (w11)


  • Inpainting scheme :
    • KD-tree : We cannot use it to search for the K nearest neighbors when we connect a new patch because the comparison function changes for each patch. A comparison mask is indeed derived for each patch in order to compare solely the known pixels. It would be feasible to use our own implementation of a kd-tree that leaves out certain dimensions while searching (that is what the mask does).
    • TV prior : It is possible to use a TV prior instead of Tikhonov via an option. As already shown by the reconstruction using a perfect graph, there is no visible difference between the two.
    • Priority : We introduced a second priority based on the amount of information we have about a patch. The priority of a patch is the average of its pixels priorities. A known pixel has priority 1, an unknown pixel has priority 0. An inpainted pixel is assigned the priority of the patch which inpainted it.
    • Framework : Great improvement in code modularity (functions perform actions, scripts execute experiments). Some cleanup.
    • Performance : Faster neighbors search (by a factor of 10) using knnsearch when connecting new vertices. Computing lmax only once saves one third of the computing time. The other big performance bottleneck is the computation of heat diffusion by Chebyshev approximation which is linear in time with respect to the polynomial order. However decreasing this value decreases the quality of the estimated priority.
    • Neighborhood comparison : Inpainted patches can be smaller than the patches used to construct the graph (and connect new vertices). This allows us to compare the neighborhood of a patch so that it fits its environment while only inpainting the center part, i.e. leaving the immediate surroundings intact.
  • Testings :
    • Nathanaël wanted to know if there was a difference between Tikhonov and TV according to the graph connectivity. It seems that this is not the case. The two always give similar results.
    • The addition of a priority on the quantity of information improves the results by starting to fill an edge from the two sides.
    • Patch size has a great influence on the structure priority, thus on the results.
    • Sigma has a great influence on the structure priority.
    • Allowing more unknown pixels when connecting can be beneficial. We have less information to connect with but inpaint greater surfaces at a time.
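The information priority described above (known pixel = 1, unknown pixel = 0, a patch's priority is the mean of its pixels' priorities) can be sketched in a few lines of Python; the function name and indexing convention are illustrative assumptions:

```python
import numpy as np

def patch_information_priority(pixel_priority, top, left, h, w):
    """Information priority of a patch = mean of its pixels' priorities.
    pixel_priority holds 1.0 for known pixels, 0.0 for unknown ones,
    and the inherited patch priority for already-inpainted pixels."""
    return float(pixel_priority[top:top + h, left:left + w].mean())

# Toy 4x4 image whose right half is unknown.
prio = np.ones((4, 4))
prio[:, 2:] = 0.0
# A patch straddling the boundary is half known :
p = patch_information_priority(prio, 0, 1, 2, 2)  # -> 0.5
```

When a patch is inpainted, its pixels would then be assigned this value as their own priority, so later patches know how reliable the filled pixels are.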


  • Why should an approximation of lmax always be greater than the real lmax ?
  • Patch size has a great influence.
  • Most bad results seem to be associated with a poor estimation of the structure priority, i.e. when the algorithm fails to fill the edge first.
  • Is there a way to estimate sigma without looking at the histogram of weights ? There is a surprising difference between the distribution of a perfect graph and an incomplete graph.
  • Neighborhood comparison does lead to interesting behaviors.
  • With a greater patch size, textures start to be detected as structure.

Next week

Implement the two main ideas of Monday’s brainstorming :

  1. Sparsity measure for priority
  2. Compare regularity of graphs


  • Smoothing according to \(\|T_ig\|_2\)
  • Multiple local binary pattern (MLBP) : used to implement state-of-the-art object tracking.
  • Haar wavelet

First inpainting


  • Have a first working inpainting scheme.


  1. Create a patch graph : only the entirely known patches are connected. Others are free vertices.
  2. Sequential inpainting and graph reconstruction :
    1. Identify suitable patches, i.e. patches where most pixels are known. There is no point in inserting into the graph a patch which does not contain enough information.
    2. Insert (connect) the new patches in the graph.
    3. Compute the priority of a patch according to its belonging to a structure. See our post about the structure detector for how to do this using the patch graph. Structure is more constrained than texture : we want to inpaint it first.
    4. Inpaint the patch of highest priority.
      • Two retrieving schemes : copy the patch to which it has the strongest connection, or take a weighted average of all the connections. The first seems better as averaging introduces blur.
      • Two composing schemes : replace the entire patch, or inpaint the unknown pixels only. The two are similar : one or the other is better in certain circumstances.
    5. Inpainting unknown pixels unlocks new suitable patches to connect.
    6. Loop until there are no more patches to inpaint.
  3. Perform a global optimization using Tikhonov regularization. See our post about inpainting using a perfect graph. We do the same here : the iterative scheme reconstructs the graph, which we then use instead of the pixel values we just retrieved. Put another way : we inpainted the pixels only to connect the unknown patches in the graph.
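A deliberately simplified Python sketch of the sequential part of this scheme. The graph construction, structure priority and final global optimization are omitted; what it illustrates is the masked patch comparison and a fill-order driven by the amount of known information. All names are hypothetical:

```python
import numpy as np

def greedy_inpaint(img, known, psize=3):
    """Repeatedly pick the partially known patch with the most known
    pixels, find the best fully known patch under a masked squared
    distance, and copy its values into the unknown pixels.
    img : 2D float array ; known : 2D bool array, False = missing."""
    img, known = img.copy(), known.copy()
    h, w = img.shape
    r = psize // 2

    def patches(arr):
        return [(i, j, arr[i - r:i + r + 1, j - r:j + r + 1])
                for i in range(r, h - r) for j in range(r, w - r)]

    while not known.all():
        # Target : partially known patch with the most known pixels.
        cands = [(i, j, k) for i, j, k in patches(known)
                 if k.any() and not k.all()]
        ti, tj, tk = max(cands, key=lambda c: c[2].sum())
        tp = img[ti - r:ti + r + 1, tj - r:tj + r + 1]
        # Source : best fully known patch under the comparison mask.
        best, best_d = None, np.inf
        for i, j, k in patches(known):
            if not k.all():
                continue
            sp = img[i - r:i + r + 1, j - r:j + r + 1]
            d = float(((sp - tp)[tk] ** 2).sum())
            if d < best_d:
                best, best_d = sp, d
        tp[~tk] = best[~tk]
        known[ti - r:ti + r + 1, tj - r:tj + r + 1] = True
    return img
```

On a toy image with a straight vertical edge and a small hole, this greedy copy scheme reconstructs the edge exactly, which matches the observation above that straight structures are the easy case.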




  • Tikhonov regularization introduces blur : try TV.
  • Symmetrization of weights creates “polarized” weights : 0.5 and 1.0. This is because the chosen heat kernel width does not spread the distances enough.

Meeting 8 (w10)


  • First implementation of a complete inpainting scheme.


  • Use a KD-tree in giin_connect to speed up the search for Euclidean neighbors. As we only connect to fully known patches, we only have to construct the tree once.
  • Look at the neighborhood when searching for connections so that an inpainted patch does not look like a stranger in its environment. Johan proposed to use a regularity measure. I proposed to compare the pixels around the patch, i.e. to compare a greater patch than the inpainted one.
  • Use TV instead of Tikhonov regularization for the global optimization. It may prevent some blur.
  • Play with graph locality, i.e. the rho parameter of gsp_patch_graph.
  • Use locality (not only pixels values) when comparing patches ? Can make it an option.

Next week

  • Introduce some information priority. Combine it with edge priority.
  • Search with 6×6 patches, inpaint with 4×4.
  • Test on larger holes (eventually use the server to speed up computations).
  • Push the code to GitLab.
  • Brainstorming Monday afternoon on ways to make more use of the graph in the scheme.

Meeting 7 (w9)


  1. Structure detector :
    • Kronecker delta on vertex
    • Low pass filter with heat or wavelet (second derivative of Gaussian) kernel
    • Hough transform on the diffused delta
    • Binary priority : structure or texture
  2. Experiments :
    • Influence of heat kernel scaling
    • Hard threshold on binary diffused image


  • Filtering the delta for each vertex with gsp_filter_analysis takes time (6 minutes for 100×100 pixels). Could we optimize by applying the filter to the sole non-zero vertex (i.e. no need to translate the filter) ? Either give all the signals in parallel to gsp_filter_analysis or, if we want to put diracs on the entire graph, compute \(expm(-\tau \cdot L)\). Another solution is to diminish the order (the number of coefficients) of the Chebyshev approximation from 30 (default) to e.g. 5. This number can be interpreted as the number of hops the information will diffuse.
  • The way our edge detector is able to identify curved lines is a “Future work” of the Object Removal by Exemplar-Based Inpainting paper.
  • On Barbara, structures (composed of lines) may be identified as lines. The introduction of some local information in the weights should prevent this.
  • Fitting to a parametric object (line, curve, parabola, etc.) instead of the Hough transform ?
  • When going binary, fix a surface ratio instead of a threshold ?
  • We don’t even need the Hough transform : the heat diffusion spreads the energy of the activated vertex across the graph through the connections of the patch. Diffusion on a texture element results in low energy spread over the texture. Diffusion on a structure results in high energy concentrated along the structure. We only need a hard threshold on the diffused signal. It is a measure of the concentration.
  • The texture should be large enough for the energy to diffuse (example of the vertical line). That would work better with the Hough transform : surfaces are constituted of many lines.
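The "order = number of hops" interpretation can be checked directly: a degree-\(K\) polynomial of \(L\) applied to a delta is supported within \(K\) hops of the source, because \(L^k\) only mixes \(k\)-hop neighborhoods. A small Python sketch on a path graph, using a truncated Taylor series where the toolbox uses Chebyshev (the support argument is identical for any polynomial):

```python
import numpy as np
from scipy.linalg import expm

# Path graph on 12 vertices : adjacency and combinatorial Laplacian.
n = 12
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0
L = np.diag(W.sum(axis=1)) - W

tau, order = 0.5, 5
delta = np.zeros(n)
delta[0] = 1.0

# Degree-`order` polynomial approximation of exp(-tau*L) applied to
# the delta (truncated Taylor series : term_k = (-tau*L)^k/k! @ delta).
g_poly = np.zeros(n)
term = delta.copy()
for k in range(order + 1):
    g_poly += term
    term = (-tau) * (L @ term) / (k + 1)

g_exact = expm(-tau * L) @ delta
# g_poly is exactly zero beyond `order` hops from the source, while
# staying close to the exact diffusion g_exact near it.
```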

Next week

  • Implement a first working inpainting scheme.

Meeting 6 (w8)


  • Construction of a patch graph with unknown pixels
    • Signals : pixels and patches have negative values
    • Graph : unknown vertices are not connected to any other vertex
  • Priority based on patch graph analysis
    • Construction of a dictionary of atoms
    • Projection of the patch connections to the dictionary
    • The priority of the vertex is the norm (1-norm, or the infinity norm, i.e. the max) of the projection
  • Improvements to the GSP toolbox
    • gsp_patch_graph returns 2D coordinates in G.coords so that it is easy to plot a patch graph
    • gsp_patch_graph takes into account unknown pixels (represented by negative values)


  • Why does gsp_patch_graph / gsp_nn_graph produce a graph where some vertices are connected to more than K vertices ? The code artificially symmetrizes the weight matrix as \(W=\frac{W+W^T}{2}\) so that the graph is not directed. (Connections that are already symmetric end up with an averaged weight; the weight of an asymmetric connection is divided by two.) Think of the comet graph, where each vertex connects to its 2 closest vertices : the head is not connected to the tail but the tail is connected to the head.
  • Priority with Lena : the line has a low priority because it is not a perfect line. That is the problem of the dictionary of atoms, which cannot be exhaustive. The new scheme with the Hough transform solves this.
  • Priority with unknown pixels : priority decreases as we approach the edge of missing information.
  • For this scheme to work we have to integrate some local information in our graph (otherwise a vertex could be connected far away). Isn’t it contrary to the idea of a non-local graph ? What if we used similarity measures without KNN ? At least for priority computations : texture pixels would be even better connected, while some structure pixels (e.g. a corner) may stay alone.
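The symmetrization effect in the first point is easy to reproduce on a 1-D toy; a Python sketch with binary weights and K = 1 (the actual gsp_nn_graph pipeline uses Gaussian weights, but the mechanism is the same):

```python
import numpy as np

# 1-D toy points : K=1 directed nearest-neighbour graph, then the
# symmetrization W = (W + W^T) / 2.
pts = np.array([0.0, 1.0, 3.0, 4.0, 10.0])
n, K = len(pts), 1
W = np.zeros((n, n))
for i in range(n):
    d = np.abs(pts - pts[i])
    d[i] = np.inf
    for j in np.argsort(d)[:K]:
        W[i, j] = 1.0  # binary weight for clarity

W_sym = (W + W.T) / 2
# The vertex at 4.0 ends up with two neighbours (3.0 and 10.0) even
# though K = 1, and the weights are polarized : 1.0 for mutual
# connections, 0.5 for one-way ones.
```

The outlier at 10.0 plays the role of the comet tail: it connects to 4.0 but nothing connects back, so after symmetrization 4.0 carries an extra half-weight edge.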

Next week

  1. Put a dirac on a vertex and low-pass filter it with a heat kernel to assess its connections (the dirac energy will spread around according to the connection weights)
  2. Use the Hough transform to determine if the connections form a line (high time complexity, do it only on the relevant vertices)
  3. Continue to implement the complete inpainting scheme


Johan :

  1. Use the Hough transform instead of a dictionary to assess the existence of a line
  2. Put a dirac on the vertex then low-pass filter it, instead of looking one hop away in the adjacency matrix (use the weights ; better theoretical foundations) (he has some code to show)

Pierre :

  1. SVD of the adjacency matrix
  2. Graph inpainting under low rank constraint (Nath does not think that it would work)
  3. Investigate the spectrum of graphs from natural images

Nathanaël :

  1. Normalize the atoms of the dictionary (i.e. a vertical line has the same energy as a low inclination one)
  2. Measure the sparsity of the projection \(\frac{\|x\|_1}{\|x\|_2}>1\) (1 means co-linear to an atom)
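Nathanaël's measure behaves as stated; a quick numpy check with illustrative coefficient vectors (a 1-sparse projection, i.e. co-linear to a single atom, gives exactly 1, while evenly spread energy gives \(\sqrt{n}\)):

```python
import numpy as np

def sparsity(x):
    """||x||_1 / ||x||_2 : equal to 1 when x is 1-sparse, i.e.
    co-linear to a single atom, and up to sqrt(n) when the energy
    is evenly spread over n coefficients."""
    return float(np.abs(x).sum() / np.linalg.norm(x))

colinear = np.array([0.0, 0.7, 0.0, 0.0])  # matches one atom only
spread = np.array([0.5, 0.5, 0.5, 0.5])    # energy spread over 4 atoms
```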


  • We have derived a structure detector on the patch graph
  • Analysis of patch graphs

Inpainting priority based on patch graph analysis

Data structures

Each pixel is represented by a vertex of a weighted graph. The weights are a measure of similarity between patches.

Signals on the graph :

  1. pixels : the actual pixel value (a negative value indicates an unknown value)
  2. coords : the coordinates of the vertex in the image space
  3. patches : the constructed patch around the pixel (used to compute the weights)
  4. priorites : the vertex priority, used to choose the inpainting order
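A minimal Python sketch of these data structures, using dense Gaussian weights on squared patch distances (the actual gsp_patch_graph additionally restricts to K nearest neighbours and handles borders and unknown pixels; this version skips border pixels for brevity):

```python
import numpy as np

def patch_graph(img, psize=3, sigma=1.0):
    """One vertex per pixel : extract the patch around each pixel,
    record its coordinates, and weight every pair of vertices by a
    Gaussian kernel on the squared distance between their patches."""
    h, w = img.shape
    r = psize // 2
    coords, patches = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            coords.append((i, j))
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
    P = np.array(patches)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / sigma ** 2)
    np.fill_diagonal(W, 0.0)  # no self-loops
    return np.array(coords), P, W
```

On a constant image every patch is identical, so all off-diagonal weights are 1: a fully connected, maximally similar graph.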


  1. Generate ahead of time a dictionary of atoms representing various orientations of lines
  2. Construct the patch graph of the input image
  3. Project each patch adjacency vector to the dictionary
  4. The bigger the similarity to an atom, the higher the priority
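A hypothetical sketch of such a dictionary and projection in Python. The line rasterization and the idea of reshaping a vertex's connection pattern as a neighbourhood image are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def line_atoms(size=11, n_orient=8):
    """One normalized atom per line orientation, rasterized in a
    size x size neighbourhood around the center pixel."""
    c = size // 2
    atoms = []
    for t in np.pi * np.arange(n_orient) / n_orient:
        a = np.zeros((size, size))
        for s in np.linspace(-c, c, 4 * size):
            i = int(round(c + s * np.sin(t)))
            j = int(round(c + s * np.cos(t)))
            a[i, j] = 1.0
        atoms.append(a.ravel() / np.linalg.norm(a))
    return np.array(atoms)  # shape (n_orient, size * size)

def dictionary_priority(conn, atoms):
    """Project a vertex's connection pattern (its neighbourhood
    weights, reshaped as an image) onto the dictionary ; the priority
    is the best absolute match."""
    conn = conn.ravel() / np.linalg.norm(conn)
    return float(np.abs(atoms @ conn).max())
```

A connection pattern that is itself a vertical line matches the vertical atom almost perfectly (priority close to 1), while a uniform, texture-like pattern spreads over all atoms and scores much lower.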


The generated dictionary for an analysis of 5 hops around the pixel of interest :


  1. Vertical line
  2. Diagonal line
  3. Cross
  4. Lena


  1. If the line is not perfect (i.e. it does not fit an atom of the dictionary well), the priority of the corresponding patch will end up being low.