DNNs are increasingly used in sensitive environments such as healthcare, where a number of important questions arise. For example, how certain is a DNN that it has correctly classified whether a patient has a disease? "Not very" would be a sensible answer, given that DNNs are massively overparametrised and, by the bias-variance tradeoff, should overfit.

Recently, in a somewhat distant field, many works have aimed to reduce the number of weights in DNNs, with the goal of memory efficiency for mobile applications [1]. Pruning usually results in a loss of accuracy that has to be recovered by fine-tuning the pruned architecture. This two-stage approach, pruning then fine-tuning, typically yields networks with up to a 90% smaller memory footprint at the same level of accuracy.

This carries an important insight for uncertainty estimation: the many parameters of the original network suggested a complex classification function at risk of overfitting, whereas arriving at the same or a similar function with far fewer parameters implies a much simpler function that is much less prone to overfitting. This insight has produced state-of-the-art theoretical results on estimating a network's ability to generalise to new data [2]. Those results are, however, difficult to interpret for individual data points.

Returning to the original question, in this project the student will take a different perspective that allows estimating the network's uncertainty for specific data samples. The student will implement various network pruning techniques and investigate whether they lead to better uncertainty estimates.
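To make the two ingredients above concrete, the following is a minimal NumPy sketch of unstructured magnitude pruning (zeroing the smallest-magnitude weights, in the spirit of [1]) together with a simple per-sample uncertainty score, the predictive entropy of the softmax output. Function names such as `magnitude_prune` and `predictive_entropy` are illustrative, not part of any library, and a real project would prune a trained model layer by layer and fine-tune afterwards.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude.

    Returns the pruned weights and the boolean mask of surviving weights.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def predictive_entropy(probs):
    """Per-sample entropy of a softmax output; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)         # guard against log(0)
    return -(p * np.log(p)).sum(axis=-1)

# Demo: prune 90% of a random weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned, mask = magnitude_prune(W, 0.9)   # 14 of 16 weights are now zero

# A uniform prediction over 4 classes has maximal entropy log(4).
uniform = np.full((1, 4), 0.25)
h = predictive_entropy(uniform)
```

The project's question can then be phrased operationally: does the entropy (or a related score) computed from a pruned and fine-tuned network rank uncertain samples better than the same score from the original overparametrised network?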

Student profile:

  • Good knowledge of TensorFlow/Keras or PyTorch

  • Good knowledge of Linear Algebra, Probability and Statistics

[1] Han, Song, et al. "Learning both weights and connections for efficient neural network." Advances in Neural Information Processing Systems, 2015.

[2] Arora, Sanjeev, et al. "Stronger generalization bounds for deep nets via a compression approach." arXiv preprint arXiv:1802.05296 (2018).