A Concise Introduction to Machine Learning Chapman & Hall/CRC Machine Learning & Pattern Recognition Series
Author: Faul A.C.
The emphasis of the book is on the question "Why?": only when it is understood why an algorithm is successful can it be properly applied and its results trusted. Algorithms are often taught side by side without showing the similarities and differences between them. This book addresses those commonalities and aims to give a thorough, in-depth treatment that develops intuition while remaining concise.
This useful reference should be an essential on the bookshelves of anyone employing machine learning techniques.
Introduction. Probability Theory. Sampling. Linear Classification. Non-Linear Classification. Dimensionality Reduction. Regression. Feature Learning.
A.C. Faul was a Teaching Associate, Fellow and Director of Studies in Mathematics at Selwyn College, University of Cambridge. She came to Cambridge after studying for two years in Germany. She did Part II and Part III Mathematics at Churchill College, Cambridge. Since these are only two years, and three years are necessary for a first degree, she does not hold one. However, this was followed by a PhD on the Faul-Powell Algorithm for Radial Basis Function Interpolation under the supervision of Professor Mike Powell. She then worked on the Relevance Vector Machine with Mike Tipping at Microsoft Research Cambridge. Ten years in industry followed, where she worked on various algorithms for mobile phone networks, image processing and data visualization. Her current projects are on machine learning techniques. In teaching, she enjoys bringing out the underlying, connecting principles of algorithms, which is also the emphasis of a book on Numerical Analysis she has written.
Publication date: 08-2019
15.6x23.4 cm
Themes of A Concise Introduction to Machine Learning:
Keywords:
Kernel Principal Component Analysis; algorithms; Logistic Sigmoid Activation Function; deep learning; Cumulative Distribution Function; classification; Conditional Probability Distribution; clustering; Dirichlet Process; feature learning; Traditional Teaching; dimensionality reduction; Probabilistic Principal Component Analysis; sampling algorithms; Support Vector Machine; machine learning techniques; Hidden Neurons; intuition development; Inverse Wishart Distribution; Artificial Intelligence; Probability Density Function; Layer Neural Network; Heaviside Step Function; Logistic Sigmoid; Dirichlet Distribution; Noise Variance; Gibbs Sampling; Markov Chain; Proposal Distribution; Valid Kernel; QDA; Rejection Sampling; Mixing Coefficients; SAE; MCMC