Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 8:30 am–12:30 pm / 1:30 pm–5:30 pm
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/informatique/natural-language-processing/descriptif_4415544
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=4415544

Natural Language Processing: A Machine Learning Perspective

Language: English

Authors:

This undergraduate textbook introduces essential machine learning concepts in NLP in a unified and gentle mathematical framework.
Taking a machine learning approach, with less focus on linguistic detail, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for upper-undergraduate and graduate students.
Part I. Basics: 1. Introduction; 2. Counting relative frequencies; 3. Feature vectors; 4. Discriminative linear classifiers; 5. A perspective from information theory; 6. Hidden variables;
Part II. Structures: 7. Generative sequence labelling; 8. Discriminative sequence labelling; 9. Sequence segmentation; 10. Predicting tree structures; 11. Transition-based methods for structured prediction; 12. Bayesian models;
Part III. Deep Learning: 13. Neural network; 14. Representation learning; 15. Neural structured prediction; 16. Working with two texts; 17. Pre-training and transfer learning; 18. Deep latent variable models; Index.
Yue Zhang is an associate professor at Westlake University. Before joining Westlake, he worked as a research associate at the University of Cambridge and then as a faculty member at the Singapore University of Technology and Design. His research interests lie in fundamental algorithms for NLP, syntax, semantics, information extraction, text generation, and machine translation. He serves as an action editor for TACL and as an area chair for ACL, EMNLP, COLING, and NAACL. He has given several tutorials at ACL, EMNLP, and NAACL, and won a best paper award at COLING 2018.
Zhiyang Teng is currently a postdoctoral research fellow in the natural language processing group at Westlake University, China. He obtained his Ph.D. from the Singapore University of Technology and Design (SUTD) in 2018, and his Master's from the University of Chinese Academy of Sciences in 2014. He won the best paper award at CCL/NLP-NABD 2014, and has published papers at ACL, EMNLP, COLING, and NAACL, and in TACL and TKDE. His research interests include syntactic parsing, sentiment analysis, deep learning, and variational inference.

Publication date:

484 pages

19.3 × 25.2 cm

Available from the publisher (supply time: 14 days).

Indicative price: €71.34

