Multi-tasking Siamese networks for breast mass detection using dual-view mammogram matching
Abstract
In clinical practice, radiologists use multiple views of routine mammograms for breast cancer screening. Similarly, computer-aided diagnosis (CAD) systems could be enhanced by integrating information arising from pairs of views. In this work, we present a new multi-tasking framework that combines craniocaudal (CC) and mediolateral-oblique (MLO) mammograms. We exploit the multi-tasking properties of deep networks to jointly learn mass matching and classification, towards better detection performance. A combined Siamese model that includes patch-level mass classification and dual-view mass matching is used to take full advantage of multi-view information. This network is exploited in a full-image detection pipeline based on You-Only-Look-Once (YOLO) region proposals. Experiments highlight the benefits of dual-view analysis for both patch-level classification and examination-level detection scenarios. Our pipeline outperforms conventional single-task deep models, reaching an Area Under the Curve (AUC) score of 94.78% and a classification accuracy of 0.8791. Beyond these gains, our method further guides clinicians by providing accurate multi-view mass correspondences. This suggests that it could act as a relevant automatic second opinion for mammogram interpretation and breast cancer diagnosis.
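To make the dual-view multi-task design summarized above more concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' implementation: the backbone, embedding size, head architectures, and loss weighting are all assumptions, and the YOLO region-proposal stage of the full pipeline is not shown.

```python
import torch
import torch.nn as nn

class MultiTaskSiamese(nn.Module):
    """Illustrative sketch: a shared (Siamese) encoder applied to CC and MLO
    patches, with a patch-level mass/non-mass classification head and a
    dual-view matching head. All architectural details are assumptions."""

    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared convolutional encoder used for both views (weight sharing).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        # Task 1: patch-level mass classification (applied to each view).
        self.cls_head = nn.Linear(embed_dim, 2)
        # Task 2: dual-view matching (same mass or not) from both embeddings.
        self.match_head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, cc_patch, mlo_patch):
        z_cc, z_mlo = self.encoder(cc_patch), self.encoder(mlo_patch)
        return (
            self.cls_head(z_cc),                                # CC class logits
            self.cls_head(z_mlo),                               # MLO class logits
            self.match_head(torch.cat([z_cc, z_mlo], dim=1)),   # match logits
        )

# Joint training combines both objectives; the equal weighting here is arbitrary.
model = MultiTaskSiamese()
cc, mlo = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
cls_cc, cls_mlo, match = model(cc, mlo)
ce = nn.CrossEntropyLoss()
y_cls = torch.randint(0, 2, (4,))     # mass vs. non-mass labels (toy data)
y_match = torch.randint(0, 2, (4,))   # same-mass correspondence labels (toy data)
loss = ce(cls_cc, y_cls) + ce(cls_mlo, y_cls) + ce(match, y_match)
loss.backward()
```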