Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/autre/embedded-deep-learning/descriptif_3830277
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=3830277

Embedded Deep Learning: Algorithms, Architectures and Circuits for Always-on Neural Network Processing (softcover reprint of the original 1st ed. 2019)

Language: English

This book covers algorithmic and hardware implementation techniques that enable embedded deep learning. The authors describe synergistic design approaches at the application, algorithm, computer-architecture, and circuit levels that help reduce the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning.

  • Gives a broad overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices;
  • Discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy – applications, algorithms, hardware architectures, and circuits – supported by real silicon prototypes;
  • Elaborates on how to design efficient Convolutional Neural Network processors, exploiting parallelism and data reuse, sparse operations, and low-precision computations;
  • Supports the introduced theory and design concepts with four real silicon prototypes. The physical realizations' implementations and achieved performance are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
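As an illustration of two of the techniques named above – low-precision computation and sparse operations – here is a minimal sketch (not taken from the book) of an int8-quantized dot product that skips zero activations, the software analogue of the zero-skipping and fixed-point arithmetic used in such CNN processors:

```python
import numpy as np

def quantize_int8(x):
    """Uniform symmetric quantization to int8 with a single scale factor.
    A minimal illustration of low-precision computation; real accelerators
    typically choose scales per layer or per channel."""
    max_abs = np.max(np.abs(x))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def sparse_dot(w_q, w_scale, a_q, a_scale):
    """Dot product that skips zero activations (sparse operation) and
    accumulates in a wide integer, as fixed-point hardware would."""
    acc = 0
    for i in range(len(a_q)):
        if a_q[i] == 0:              # zero-skipping: no multiply issued
            continue
        acc += int(w_q[i]) * int(a_q[i])
    return acc * w_scale * a_scale   # dequantize the accumulator

w = np.array([0.5, -0.25, 0.0, 1.0])
a = np.array([0.0, 2.0, 3.0, 0.0])
w_q, w_s = quantize_int8(w)
a_q, a_s = quantize_int8(a)
print(sparse_dot(w_q, w_s, a_q, a_s))  # close to the exact dot(w, a) = -0.5
```

Quantization introduces a small rounding error relative to the floating-point result, while the zero-skipping loop performs only two multiplies instead of four on this input.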

Chapter 1 Embedded Deep Neural Networks.- Chapter 2 Optimized Hierarchical Cascaded Processing.- Chapter 3 Hardware-Algorithm Co-optimizations.- Chapter 4 Circuit Techniques for Approximate Computing.- Chapter 5 ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing.- Chapter 6 BINAREYE: Digital and Mixed-signal Always-on Binary Neural Network Processing.- Chapter 7 Conclusions, contributions and future work.
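Chapter 6 covers always-on binary neural network processing. A standard trick in that literature (illustrative only, not the book's specific design) is to replace the multiply-accumulate of a ±1-valued dot product with an XNOR followed by a population count:

```python
def binary_dot(w_bits, a_bits, n):
    """Dot product of two ±1 vectors packed as bit masks (bit 1 <-> +1,
    bit 0 <-> -1). Matching bits contribute +1, differing bits -1, so with
    m matching bits the dot product is m - (n - m) = 2*m - n.
    XNOR marks the matches; popcount sums them."""
    xnor = ~(w_bits ^ a_bits) & ((1 << n) - 1)  # 1 wherever signs match
    matches = bin(xnor).count("1")
    return 2 * matches - n

# w = [+1, -1, +1, +1] packed LSB-first -> 0b1101
# a = [+1, +1, -1, +1] packed LSB-first -> 0b1011
print(binary_dot(0b1101, 0b1011, 4))  # true dot: 1 - 1 - 1 + 1 = 0
```

In hardware, this turns n multiply-accumulates into one n-bit XNOR plus a popcount, which is the main source of the energy savings binary networks offer.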

Dr. ir. Bert Moons received the B.S., M.S., and Ph.D. degrees in Electrical Engineering from KU Leuven, Leuven, Belgium, in 2011, 2013, and 2018, respectively. He performed his Ph.D. research at ESAT-MICAS as an IWT-funded Research Assistant, focusing on energy-scalable and run-time adaptable digital circuits for embedded deep learning applications. Bert has authored 15+ conference and journal publications, was a Visiting Research Student at Stanford University in the Murmann Mixed-Signal Group, and received the SSCS Predoctoral Achievement Award in 2018. He is currently with Synopsys as a hardware design architect for the DesignWare EV6x Embedded Vision and Deep Learning processors.

Daniel Bankman received the S.B. degree in electrical engineering from the Massachusetts Institute of Technology, Cambridge, MA in 2012 and the M.S. degree from Stanford University, Stanford, CA in 2015. Since 2012, he has been working toward the Ph.D. degree at Stanford University, focusing on mixed-signal processing for machine learning. He has held internship positions with Analog Devices and Intel. His research interests include algorithms, architectures, and circuits for energy-efficient learning and inference in smart devices. He was a recipient of the Texas Instruments Stanford Graduate Fellowship in 2012, the Numerical Technologies Founders Prize in 2013, and the John von Neumann Student Research Award in 2015 and 2017.

Prof. Dr. ir. Marian Verhelst is a professor at the MICAS laboratories (MICro-electronics And Sensors) of the Electrical Engineering Department of KU Leuven. Her research focuses on embedded machine learning, energy-efficient hardware accelerators, self-adaptive circuits and systems, and low-power sensing and processing. Before that, she received a Ph.D. from KU Leuven cum ultima laude, was a visiting scholar at the Berkeley Wireless Research Center (BWRC) of UC Berkeley, and worked as a research scientist at Intel Labs, Hi


Publication date:

206-page book.

15.5 x 23.5 cm

Available from the publisher (restocking lead time: 15 days).

89.66 €


Publication date:

206-page book.

15.5 x 23.5 cm

Available from the publisher (restocking lead time: 15 days).

137.14 €
