Contrastive Learning using Random Walk Laplacian Matrix
Abstract
In recent years, Self-Supervised Learning (SSL) has gained popularity due to the availability of unlabeled data. SSL consists of training a neural network encoder to represent data efficiently in a low-dimensional space, i.e., to extract useful features both from the data it is trained on and from new data. This problem of learning good representations is generally called representation learning. To this end, SimCLR [1] is a popular contrastive method that optimizes an encoder to output similar representations for different views of the same datum (positive pairs) and different representations for views of different data (negative pairs). In this work, we treat the representation learning problem as a random walk on a graph whose vertices are data representations in the latent space and whose edges encode their similarities. This can be formulated as an optimization problem that maximizes the transition probability between views of the same datum and minimizes the transition probability between views of different data. This problem has been approximated in [2], which proposes a solution that minimizes the sum of Euclidean distances between positive pairs and adds a decorrelation term to avoid representation collapse (i.e., convergence to a trivial solution in which representations are constant). In this work, we propose a simpler loss function that leverages the random walk Laplacian matrix directly. We benchmark our approach on the CIFAR-10 dataset, using standard data augmentations from the literature to create different views of the data [3], and compare our results to SimCLR.
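The abstract does not spell out the exact loss, but the graph construction it describes is standard: given a batch of representations, build a weighted similarity graph with weight matrix W and degree matrix D, from which the transition matrix is P = D⁻¹W and the random walk Laplacian is L_rw = I − D⁻¹W. The PyTorch sketch below is a minimal, hypothetical instantiation of that idea: it assumes a Gaussian kernel for edge weights and takes −log of the positive-pair transition probability as the objective, neither of which is specified in the abstract, so it should not be read as the paper's actual loss.

```python
import torch


def random_walk_laplacian(Z, sigma=1.0):
    """Build P = D^{-1} W and L_rw = I - P from a batch of representations
    Z of shape (n, d). Edge weights use a Gaussian kernel (an assumption;
    the paper may weight edges differently)."""
    n = Z.size(0)
    eye = torch.eye(n, device=Z.device)
    # Pairwise Euclidean distances -> Gaussian edge weights, no self-loops.
    W = torch.exp(-torch.cdist(Z, Z).pow(2) / (2 * sigma**2)) * (1 - eye)
    P = W / W.sum(dim=1, keepdim=True)  # row-stochastic transition matrix D^{-1} W
    return P, eye - P


def laplacian_contrastive_loss(z1, z2, sigma=1.0, eps=1e-8):
    """z1, z2: (n, d) encoder outputs for two augmented views of the same
    batch, so (z1[i], z2[i]) is a positive pair. This sketch maximizes the
    random walk transition probability between positive pairs; since each
    row of P sums to 1, raising mass on positives implicitly lowers it on
    negatives."""
    n = z1.size(0)
    Z = torch.cat([z1, z2], dim=0)  # 2n vertices in the similarity graph
    P, _ = random_walk_laplacian(Z, sigma)
    idx = torch.arange(n, device=Z.device)
    # Transition probabilities between the two views of the same datum.
    pos = torch.cat([P[idx, idx + n], P[idx + n, idx]])
    return -torch.log(pos + eps).mean()


# Usage on random, L2-normalized placeholder embeddings:
z1 = torch.nn.functional.normalize(torch.randn(256, 128), dim=1)
z2 = torch.nn.functional.normalize(torch.randn(256, 128), dim=1)
loss = laplacian_contrastive_loss(z1, z2)
```

In this reading, minimizing the loss concentrates each vertex's random walk transition probability on its positive counterpart, which matches the abstract's stated objective of maximizing transition probability between views of the same datum while minimizing it between views of different data.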
Domains
Computer Science [cs]
Main file
Moummad et al. - Contrastive Learning using Random Walk Laplacian M.pdf (638.15 KB)
Origin: Files produced by the author(s)