Supervised contrastive learning for pre-training bioacoustic few-shot systems
Abstract
We show in this work that learning a rich feature extractor from scratch using only the official training data is feasible. We achieve this by learning representations with a supervised contrastive learning framework. We then transfer the learned feature extractor to the validation and test sets for few-shot evaluation. For few-shot validation, we simply train a linear classifier on the negative and positive shots and obtain an F-score of 63.46%, outperforming the baseline by a large margin. We do not use any external data or pretrained model. Our approach does not require choosing a prediction threshold or any post-processing technique. Our code is publicly available on GitHub: https://github.com/ilyassmoummad/dcase23_task5_scl
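To make the two stages concrete, below is a minimal PyTorch sketch of a supervised contrastive (SupCon-style) loss for pre-training and of the linear probe fitted on the positive and negative shots. This is an illustrative sketch under our own assumptions, not the authors' exact code: the `temperature` value, the `encoder` interface, and the `fit_linear_probe` helper are hypothetical choices made for clarity.

```python
# Sketch: supervised contrastive loss + linear probe on the few shots.
# Assumptions (not from the paper's code): temperature=0.1, a generic
# `encoder` mapping spectrogram batches to embeddings, Adam for the probe.
import torch
import torch.nn.functional as F


def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over a batch of embeddings.

    features: (batch, dim) embeddings; labels: (batch,) integer class labels.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                         # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)              # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other samples in the batch sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_count
    # Average only over anchors that actually have at least one positive.
    return loss[pos_mask.any(dim=1)].mean()


def fit_linear_probe(encoder, shots, shot_labels,
                     num_classes: int = 2, epochs: int = 100, lr: float = 1e-2):
    """Freeze the pre-trained encoder and fit a linear classifier
    on the positive and negative shots of a validation event."""
    with torch.no_grad():
        feats = F.normalize(encoder(shots), dim=1)
    clf = torch.nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(clf(feats), shot_labels).backward()
        opt.step()
    return clf
```

In this sketch, the contrastive loss pulls together embeddings of samples sharing a class label and pushes apart the rest, and the few-shot step only optimizes the small linear head, keeping the pre-trained encoder fixed.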