Preprint, Working Paper. Year: 2018

Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks

Ghouthi Boukli Hacene
Vincent Gripon
Matthieu Arzel
Nicolas Farrugia
Yoshua Bengio

Abstract

Convolutional Neural Networks (CNNs) are state-of-the-art in numerous computer vision tasks such as object classification and detection. However, the large number of parameters they contain leads to high computational complexity and severely limits their usability on budget-constrained hardware such as embedded devices. In this paper, we propose a combination of a new pruning technique and a quantization scheme that effectively reduces the complexity and memory usage of the convolutional layers of CNNs, and replaces the costly convolution operation with a low-cost multiplexer. We perform experiments on the CIFAR10, CIFAR100 and SVHN datasets and show that the proposed method achieves near state-of-the-art accuracy while drastically reducing the computational and memory footprints. We also propose an efficient hardware architecture to accelerate CNN operations. The proposed hardware architecture is pipelined and allows multiple layers to operate concurrently, which speeds up inference.
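To illustrate the general idea of combining pruning with aggressive weight quantization (this is a minimal sketch, not the authors' exact algorithm or hardware mapping), the following PyTorch snippet prunes the smallest-magnitude weights of a convolutional layer and rounds the surviving weights to signed powers of two, so that each multiplication can be realized as a shift, or in hardware as a small multiplexer selecting among a few precomputed values. The prune_and_quantize_conv helper, the pruning ratio, and the bit-width are illustrative assumptions.

# Minimal sketch (assumed PyTorch implementation, not the paper's method):
# magnitude pruning followed by power-of-two weight quantization.
import torch
import torch.nn as nn

def prune_and_quantize_conv(conv: nn.Conv2d, prune_ratio: float = 0.5, n_bits: int = 2):
    """Zero out the smallest-magnitude weights, then quantize the survivors
    to a small set of signed power-of-two values."""
    w = conv.weight.data

    # Pruning: keep only the largest-magnitude weights.
    k = int(prune_ratio * w.numel())
    if k > 0:
        threshold = w.abs().flatten().kthvalue(k).values
        mask = (w.abs() > threshold).float()
    else:
        mask = torch.ones_like(w)
    w_pruned = w * mask

    # Quantization: round surviving weights to signed powers of two,
    # with exponents restricted to a small range set by n_bits.
    sign = torch.sign(w_pruned)
    magnitude = w_pruned.abs().clamp(min=1e-8)
    exponent = torch.round(torch.log2(magnitude)).clamp(-(2 ** n_bits), 0)
    w_quant = sign * torch.pow(2.0, exponent) * mask

    conv.weight.data.copy_(w_quant)
    return mask

# Usage on a single layer (dimensions are arbitrary).
layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)
mask = prune_and_quantize_conv(layer, prune_ratio=0.75, n_bits=2)
x = torch.randn(1, 16, 8, 8)
y = layer(x)
print(f"sparsity: {1 - mask.mean().item():.2f}, output shape: {tuple(y.shape)}")

Because every surviving weight is one of a handful of power-of-two values, the per-weight multiplication reduces to selecting a shifted copy of the input, which is the kind of operation a low-cost multiplexer can implement in hardware.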
Main file: hardware_quake.pdf (193.28 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01965304, version 1 (25-12-2018)

Identifiers

Cite

Ghouthi Boukli Hacene, Vincent Gripon, Matthieu Arzel, Nicolas Farrugia, Yoshua Bengio. Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks. 2018. ⟨hal-01965304⟩