Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/autre/applied-deep-learning-with-tensorflow-2/descriptif_4662528
Short URL / permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=4662528

Applied Deep Learning with TensorFlow 2 (2nd ed.): Learn to Implement Advanced Deep Learning Techniques with Python

Language: English

Author: Umberto Michelucci


Understand how neural networks work and learn how to implement them using TensorFlow 2.0 and Keras. This new edition focuses both on the fundamental concepts and on the practical aspects of implementing neural networks and deep learning for your research projects.

This book is designed so that you can focus on the parts you are interested in. You will explore topics such as regularization, optimizers, optimization, metric analysis, and hyper-parameter tuning. In addition, you will learn the fundamental ideas behind autoencoders and generative adversarial networks.

All the code presented in the book is available in the form of Jupyter notebooks, which allow you to try out all the examples and extend them in interesting ways. A companion online book is available with the complete code for all the examples discussed in the book, plus additional material on TensorFlow and Keras. All the notebooks can be opened directly in Google Colab (no local installation needed) or downloaded and run on your own machine.

You will:

- Understand the fundamental concepts of how neural networks work
- Learn the fundamental ideas behind autoencoders and generative adversarial networks
- Be able to try all the examples with complete code that you can expand for your own projects
- Have available a complete online companion book with examples and tutorials
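For instance, the "fundamental concepts of how neural networks work" start with a single neuron. As a taste of that material, here is a minimal sketch in plain Python (no TensorFlow needed; the weights, bias, and inputs are made-up illustrative values):

```python
import math

def sigmoid(z):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    # One neuron: a weighted sum of the inputs plus a bias,
    # passed through a non-linear activation function.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

# Made-up example values:
output = neuron(x=[0.5, -1.2], w=[0.8, 0.3], b=0.1)
print(output)  # a value strictly between 0 and 1
```

The book implements the same idea with Keras and trains the weights with gradient descent instead of fixing them by hand.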


This book is for:

Readers with an intermediate understanding of machine learning, linear algebra, calculus, and basic Python programming. 

Chapter 1: Optimization and Neural Networks
Subtopics:
How to read the book
Introduction to the book
Overview of optimization
A definition of learning
Constrained vs. unconstrained optimization
Absolute and local minima
Optimization algorithms with focus on Gradient Descent
Variations of Gradient Descent (mini-batch and stochastic)
How to choose the right mini-batch size

Chapter 2: Hands-on with a Single Neuron
Subtopics:
A short introduction to matrix algebra
Activation functions (identity, sigmoid, tanh, swish, etc.)
Implementation of one neuron in Keras
Linear regression with one neuron
Logistic regression with one neuron

Chapter 3: Feed-Forward Neural Networks
Subtopics:
Matrix formalism
Softmax activation function
Overfitting and bias-variance discussion
How to implement a fully connected network with Keras
Multi-class classification with the Zalando dataset in Keras
Gradient descent variations in practice with a real dataset
Weight initialization
How to compare the complexity of neural networks
How to estimate the memory used by neural networks in Keras

Chapter 4: Regularization
Subtopics:
An introduction to regularization
The l_p norm
l_2 regularization
Weight decay when using regularization
Dropout
Early stopping

Chapter 5: Advanced Optimizers
Subtopics:
Exponentially weighted averages
Momentum
RMSProp
Adam
Comparison of optimizers

Chapter 6: Hyper-Parameter Tuning
Subtopics:
Introduction to hyper-parameter tuning
Black-box optimization
Grid search
Random search
Coarse-to-fine optimization
Sampling on a logarithmic scale
Bayesian optimization

Chapter 7: Convolutional Neural Networks
Subtopics:
Theory of convolution
Pooling and padding
Building blocks of a CNN
Implementation of a CNN with Keras

Chapter 8: A Brief Introduction to Recurrent Neural Networks
Subtopics:
Introduction to recurrent neural networks
Implementation of an RNN with Keras


Chapter 9: Autoencoders
Subtopics:
Feed Forward Autoencoders
Loss function in autoencoders
Reconstruction error
Application of autoencoders: dimensionality reduction
Application of autoencoders: classification with latent features
Curse of dimensionality
Denoising autoencoders
Autoencoders with CNN 
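The "reconstruction error" listed above is simply a distance between an input and the autoencoder's output, typically the mean squared error. A minimal sketch in plain Python (the two vectors are made-up values standing in for an input and its reconstruction):

```python
def reconstruction_error(x, x_rec):
    # Mean squared error between an input vector and its reconstruction:
    # the quantity a feed-forward autoencoder is trained to minimize.
    return sum((a - b) ** 2 for a, b in zip(x, x_rec)) / len(x)

x = [1.0, 0.0, 0.5]      # original input (made-up)
x_rec = [0.9, 0.1, 0.4]  # decoder output (made-up)
print(reconstruction_error(x, x_rec))  # ~0.01
```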

Chapter 10: Metric Analysis
Subtopics: 
Human level performance and Bayes error
Bias
Metric analysis diagram
Training set overfitting
How to split your dataset
Unbalanced dataset: what can happen
K-fold cross validation
Manual metric analysis: an example
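As an illustration of the k-fold cross-validation idea listed above, a minimal sketch in plain Python (the ten-sample dataset is made up):

```python
def k_fold_splits(data, k):
    # Split the dataset into k folds; each fold serves once as the
    # validation set while the remaining folds form the training set.
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield training, validation

data = list(range(10))  # made-up dataset of ten samples
for train, val in k_fold_splits(data, k=5):
    print(len(train), len(val))  # 8 2 on every iteration
```

In practice one would use a library helper (e.g. scikit-learn's `KFold`), but the principle is the same: every sample is used for validation exactly once.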

Chapter 11: Generative Adversarial Networks (GANs)
Subtopics:
Introduction to GANs
The building blocks of GANs
An example implementation of GANs in Keras

APPENDIX 1: Introduction to Keras
Subtopics:
Sequential model
Keras Layers
Functional APIs
Specifying loss functions
Putting it all together and training a model
Callback functions
Save and load models

APPENDIX 2: Customizing Keras
Subtopics:
Custom callback functions
Custom training loops
Custom loss functions

APPENDIX 3: Symbols and Abbreviations


Umberto Michelucci is the founder and chief AI scientist of TOELT – Advanced AI LAB LLC. He is an expert in numerical simulation, statistics, data science, and machine learning, with 15 years of practical experience in data warehousing, data science, and machine learning. His first book, Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks, was published in 2018. His second book, Convolutional and Recurrent Neural Networks Theory and Applications, was published in 2019. He publishes his research regularly and lectures on machine learning and statistics at various universities. He holds a PhD in machine learning and is a Google Developer Expert in Machine Learning based in Switzerland.

Covers debugging and optimization of deep learning techniques with TensorFlow 2.0 and Python

Covers recent advances in autoencoders and multitask learning

Explains how to build models and deploy them on edge devices such as the Raspberry Pi using TensorFlow Lite

Publication date:

380 pages

17.8x25.4 cm

Available from the publisher (lead time: 15 days).

63,29 €
