Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/electricite-electronique/feed-forward-neural-networks-vector-decomposition-analysis-modelling-and-analog-implementation-kluwer-int-l-ser-in-engineering-et-computer-science-314/annema/descriptif_1602396
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=1602396

Feed-Forward Neural Networks: Vector Decomposition Analysis, Modelling and Analog Implementation (1995). The Springer International Series in Engineering and Computer Science, Vol. 314.

Language: English

Author: Annema

Cover of the book Feed-Forward Neural Networks
Feed-Forward Neural Networks: Vector Decomposition Analysis, Modelling and Analog Implementation presents a novel method for the mathematical analysis of neural networks that learn according to the back-propagation algorithm. The book also discusses some other recent alternative algorithms for hardware-implemented perceptron-like neural networks. The method permits a simple analysis of the learning behaviour of neural networks, allowing specifications for their building blocks to be readily obtained.
Starting with the derivation of a specification and ending with its hardware implementation, analog hard-wired, feed-forward neural networks with on-chip back-propagation learning are designed in their entirety. On-chip learning is necessary in circumstances where fixed weight configurations cannot be used. It is also useful for eliminating most of the mismatches and parameter tolerances that occur in hard-wired neural network chips.
Fully analog neural networks have several advantages over other implementations: low chip area, low power consumption, and high-speed operation.
Feed-Forward Neural Networks is an excellent reference and may also be used as a text for advanced courses.
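For orientation only, here is a minimal software sketch of the kind of learning the description refers to: one back-propagation update for a single-layer feed-forward network. It is not taken from the book, which analyses this behaviour mathematically and implements it in analog hardware; the tanh nonlinearity, squared-error cost, and all names below are illustrative assumptions.

```python
# Minimal illustrative sketch of back-propagation learning for a
# single-layer feed-forward network (software only; not from the book).
import numpy as np

rng = np.random.default_rng(0)

def forward(W, b, x):
    """Single layer: y = tanh(W x + b)."""
    return np.tanh(W @ x + b)

def backprop_step(W, b, x, target, lr=0.1):
    """One back-propagation weight update for the squared-error cost."""
    y = forward(W, b, x)
    err = y - target                # dE/dy for E = 0.5 * ||y - target||^2
    delta = err * (1.0 - y**2)      # chain rule through tanh: f'(a) = 1 - tanh(a)^2
    W -= lr * np.outer(delta, x)    # gradient descent on the weights
    b -= lr * delta                 # ... and on the biases
    return W, b

# Toy usage: learn a 2-input, 1-output mapping from two examples.
W = rng.normal(scale=0.1, size=(1, 2))
b = np.zeros(1)
samples = [(np.array([0.0, 1.0]), np.array([1.0])),
           (np.array([1.0, 0.0]), np.array([-1.0]))]
for _ in range(200):
    for x, t in samples:
        W, b = backprop_step(W, b, x, t)
```

The chapters listed below treat the analog-circuit counterparts of such update rules, including weight-adaptation precision, discretization of weight adaptations, and the realisation of f'(x).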
Contents:
Foreword.
1. Introduction.
2. The Vector Decomposition Method.
3. Dynamics of Single Layer Nets.
4. Unipolar Input Signals in Single-Layer Feed-Forward Neural Networks.
5. Cross-Talk in Single-Layer Feed-Forward Neural Networks.
6. Precision Requirement for Analog Weight Adaptation Circuitry for Single-Layer Nets.
7. Discretization of Weight Adaptations in Single-Layer Nets.
8. Learning Behavior and Temporary Minima of Two-Layer Neural Networks.
9. Biases and Unipolar Input Signals for Two-Layer Neural Networks.
10. Cost Functions for Two-Layer Neural Networks.
11. Some Issues for f'(x).
12. Feed-Forward Hardware.
13. Analog Weight Adaptation Hardware.
14. Conclusions.
Index. Nomenclature.

Publication date: 1995

238 pages.

15.5x23.5 cm

Available from the publisher (supply lead time: 15 days).

Indicative price: €105.49
