

Canonical URL: www.lavoisier.fr/livre/informatique/neural-networks-and-learning-machines-3rd-ed/haykin/descriptif_1097950
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=1097950

Neural Networks and Learning Machines (3rd Ed.)

Language: English

Author: Simon Haykin


For graduate-level neural network courses offered in the departments of Computer Engineering, Electrical Engineering, and Computer Science.

Neural Networks and Learning Machines, Third Edition is renowned for its thoroughness and readability. This well-organized and completely up-to-date text remains the most comprehensive treatment of neural networks from an engineering perspective. It is also well suited to professional engineers and research scientists.

Matlab codes used for the computer experiments in the text are available for download. Revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together: ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either field on its own.

Preface x
Introduction 1
1. What is a Neural Network? 1
2. The Human Brain 6
3. Models of a Neuron 10
4. Neural Networks Viewed as Directed Graphs 15
5. Feedback 18
6. Network Architectures 21
7. Knowledge Representation 24
8. Learning Processes 34
9. Learning Tasks 38
10. Concluding Remarks 45
Notes and References 46

Chapter 1 Rosenblatt's Perceptron 47
1.1 Introduction 47
1.2 Perceptron 48
1.3 The Perceptron Convergence Theorem 50
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment 55
1.5 Computer Experiment: Pattern Classification 60
1.6 The Batch Perceptron Algorithm 62
1.7. Summary and Discussion 65
Notes and References 66
Problems 66

Chapter 2 Model Building through Regression 68
2.1 Introduction 68
2.2 Linear Regression Model: Preliminary Considerations 69
2.3 Maximum a Posteriori Estimation of the Parameter Vector 71
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation 76
2.5 Computer Experiment: Pattern Classification 77
2.6 The Minimum-Description-Length Principle 79
2.7 Finite Sample-Size Considerations 82
2.8 The Instrumental-Variables Method 86
2.9 Summary and Discussion 88
Notes and References 89
Problems 89

Chapter 3 The Least-Mean-Square Algorithm 91
3.1 Introduction 91
3.2 Filtering Structure of the LMS Algorithm 92
3.3 Unconstrained Optimization: A Review 94
3.4 The Wiener Filter 100
3.5 The Least-Mean-Square Algorithm 102
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter 104
3.7 The Langevin Equation: Characterization of Brownian Motion 106
3.8 Kushner's Direct-Averaging Method 107
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter 108
3.10 Computer Experiment I: Linear Prediction 110
3.11 Computer Experiment II: Pattern Classification 112
3.12 Virtues and Limitations of the LMS Algorithm 113
3.13 Learning-Rate Annealing Schedules 115
3.14 Summary and Discussion 117
Notes and References 118
Problems 119

Chapter 4 Multilayer Perceptrons 122
4.1 Introduction 123
4.2 Some Preliminaries 124
4.3 Batch Learning and On-Line Learning 126
4.4 The Back-Propagation Algorithm 129
4.5 XOR Problem 141
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better 144
4.7 Computer Experiment: Pattern Classification 150
4.8 Back Propagation and Differentiation 153
4.9 The Hessian and Its Role in On-Line Learning 155
4.10 Optimal Annealing and Adaptive Control of the Learning Rate 157
4.11 Generalization 164
4.12 Approximations of Functions 166
4.13 Cross-Validation 171
4.14 Complexity Regularization and Network Pruning 175
4.15 Virtues and Limitations of Back-Propagation Learning 180
4.16 Supervised Learning Viewed as an Optimization Problem 186
4.17 Convolutional Networks 201
4.18 Nonlinear Filtering 203
4.19 Small-Scale Versus Large-Scale Learning Problems 209
4.20 Summary and Discussion 217
Notes and References 219
Problems 221

Chapter 5 Kernel Methods and Radial-Basis Function Networks 230
5.1 Introduction 230
5.2 Cover's Theorem on the Separability of Patterns 231
5.3 The Interpolation Problem 236
5.4 Radial-Basis-Function Networks 239
5.5 K-Means Clustering 242
5.6 Recursive Least-Squares Estimation of the Weight Vector 245
5.7 Hybrid Learning Procedure for RBF Networks 249
5.8 Computer Experiment: Pattern Classification 250
5.9 Interpretations of the Gaussian Hidden Units

Publication date:

936 pages.

17.8 × 23.5 cm

Subject to availability from the publisher.

Indicative price: €88.31

