Parallelization of low rank matrix completion algorithms

18th October 2018

Proposed by Mikael Le Pendu
Email: lependum at

The problem of recovering a matrix from a subset of its entries, assuming its rank is sufficiently low, has received a great deal of attention in the past decade. A typical application that motivated the rapid development of this research area is the Netflix problem [1], in which users rate a few movies and are recommended other movies based on their own ratings and other users' preferences. In this example, the data are arranged in a matrix whose columns contain all the users' ratings for a given movie. Thanks to the correlations among users' ratings, such a matrix is likely to satisfy the low-rank property required for recovering the missing entries. Low-rank matrix completion is also key to many other applications, including computer vision (e.g. background subtraction [2], face recognition [3], image inpainting [4]).
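To illustrate the kind of algorithm involved, the sketch below completes a partially observed matrix via alternating least squares (ALS) on a rank-r factorization M ≈ U Vᵀ, fitting only the observed entries. This is one standard approach among several (nuclear-norm minimization, singular value thresholding, etc.), not necessarily the one the project will settle on; all function and parameter names are illustrative.

```python
import numpy as np

def als_complete(M, mask, rank, n_iters=50, reg=1e-2, seed=0):
    """Fill missing entries of M (mask == 1 where observed) by
    alternating least squares on a rank-`rank` model M ~ U @ V.T."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)  # small ridge term keeps the solves well-posed
    for _ in range(n_iters):
        # Fix V: each row of U solves an independent small least-squares problem
        # restricted to the observed entries of the corresponding row of M.
        for i in range(m):
            obs = mask[i] == 1
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ M[i, obs])
        # Fix U: symmetric update for each row of V (each column of M).
        for j in range(n):
            obs = mask[:, j] == 1
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ M[obs, j])
    return U @ V.T
```

On a synthetic rank-2 matrix with 70% of entries observed, this recovers the missing entries up to the small bias introduced by the regularization term.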

However, most algorithms for solving this problem are inherently sequential and too computationally expensive for large-scale matrices. This project aims to study such algorithms and adapt them for parallel processing, taking advantage of GPUs or multi-threading.
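As a hint of where the parallelism lies, consider ALS-style updates for a factorization M ≈ U Vᵀ: once V is fixed, each row of U is the solution of an independent small least-squares problem, so a half-sweep over the rows is embarrassingly parallel. A minimal multi-threaded sketch using the standard library (all names are illustrative; a real implementation would target GPU kernels or finer-grained threading):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_half_sweep(M, mask, V, reg=1e-2, n_workers=4):
    """Recompute every row of the factor U in parallel, with V fixed.
    Each row problem touches only that row's observed entries of M,
    so the tasks share no mutable state."""
    rank = V.shape[1]
    I = reg * np.eye(rank)

    def solve_row(i):
        obs = mask[i] == 1
        Vo = V[obs]
        return np.linalg.solve(Vo.T @ Vo + I, Vo.T @ M[i, obs])

    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        rows = list(ex.map(solve_row, range(M.shape[0])))
    return np.stack(rows)
```

The result is bitwise identical to the sequential loop; the interesting engineering questions for the project are scheduling, memory layout, and how such independent solves map onto GPU hardware.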

[1] ACM SIGKDD and Netflix. Proceedings of KDD Cup and Workshop, 2007
[2] S. E. Ebadi, V. G. Ones and E. Izquierdo, “Efficient background subtraction with low-rank and sparse matrix decomposition,” IEEE International Conference on Image Processing (ICIP), 2015.
[3] L. Ma, C. Wang, B. Xiao and W. Zhou, “Sparse representation for face recognition based on discriminative low-rank dictionary learning,” IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[4] J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 208–220, Jan. 2013.