Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/autre/machine-learning-with-pyspark/descriptif_4570027
Short URL (permalink): www.lavoisier.fr/livre/notice.asp?ouvrage=4570027

Machine Learning with PySpark, 2nd Edition: With Natural Language Processing and Recommender Systems

Language: English

Author: Pramod Singh


Master the new features in PySpark 3.1 to develop data-driven, intelligent applications. This updated edition covers topics ranging from building scalable machine learning models, to natural language processing, to recommender systems.

Machine Learning with PySpark, Second Edition begins with the fundamentals of Apache Spark, including the latest updates to the framework. Next, you will learn the full spectrum of traditional machine learning algorithm implementations, along with natural language processing and recommender systems. You'll gain familiarity with the critical process of selecting machine learning algorithms, ingesting data, and processing it to solve business problems. You'll see a demonstration of how to build supervised machine learning models such as linear regression, logistic regression, decision trees, and random forests. You'll also learn how to automate these steps using Spark pipelines, followed by unsupervised models such as K-means and hierarchical clustering. A section on Natural Language Processing (NLP) covers text processing, text mining, and embeddings for classification. This new edition also introduces Koalas in Spark and shows how to automate data workflows using Airflow and PySpark's latest ML library.

After completing this book, you will understand how to use PySpark's machine learning library to build and train various machine learning models, along with related components such as data ingestion, processing, and visualization, to develop data-driven intelligent applications.

What you will learn:

  • Build a spectrum of supervised and unsupervised machine learning algorithms
  • Use PySpark's machine learning library to implement machine learning and recommender systems
  • Leverage the new features in PySpark's machine learning library
  • Understand data processing using Koalas in Spark
  • Handle issues around feature engineering, class balance, bias and variance, and cross-validation to build optimally fit models

Who This Book Is For 

Data science and machine learning professionals.

Chapter 1:  Introduction to Spark 3.1

Chapter Goal: The book's opening chapter introduces readers to the latest changes in PySpark and updates to the framework. It covers the different components of the Spark ecosystem. The chapter also serves as an introduction to the book's format, including an explanation of formatting practices, pointers to the book's accompanying codebase online, and support contact information. It sets readers' expectations for the content and structure of the rest of the book, and provides the required libraries and code/data download information so that readers can set up their environments appropriately.

No. of pages: 30

Sub-topics:

1. Data status

2. Apache Spark evolution

3. Apache Spark fundamentals

4. Spark components

5. Setting up Spark 3.1


Chapter 2:  Manage Data with PySpark

Chapter Goal: 

This chapter covers the steps from reading data through pre-processing and cleaning it for machine learning purposes. It showcases the steps to build end-to-end data handling pipelines that transform data and create features for machine learning. It covers a simple way to use Koalas in order to leverage pandas in a distributed way in Spark. It also covers how to automate data scripts in order to run scheduled data jobs using Airflow.

No. of pages: 50

Sub-topics:

1. Data ingestion

2. Data cleaning

3. Data transformation

4. End-to-end data pipelines

5. Data processing using Koalas in Spark on pandas DataFrames

6. Automate data workflows using Airflow
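The chapter builds these pipelines on PySpark DataFrames (with Koalas and Airflow); as a framework-agnostic illustration of the shape of an end-to-end pipeline — chaining ingestion, cleaning, and transformation stages — here is a minimal pure-Python sketch (all data and function names hypothetical):

```python
# Minimal sketch of an end-to-end data-handling pipeline:
# each stage is a plain function, and the pipeline chains them in order.
def ingest():
    # Toy records standing in for rows read from a file or table
    return [{"age": "34", "city": "Paris"},
            {"age": None, "city": "london"},
            {"age": "41", "city": "Berlin"}]

def clean(rows):
    # Drop rows with missing values
    return [r for r in rows if all(v is not None for v in r.values())]

def transform(rows):
    # Cast types and normalize casing to create model-ready features
    return [{"age": int(r["age"]), "city": r["city"].title()} for r in rows]

def run_pipeline(stages):
    data = stages[0]()
    for stage in stages[1:]:
        data = stage(data)
    return data

result = run_pipeline([ingest, clean, transform])
print(result)  # two cleaned, typed rows
```

Spark pipelines follow the same shape, with each stage operating on a distributed DataFrame instead of a Python list.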


Chapter 3: Introduction to Machine Learning

Chapter Goal:

This chapter introduces readers to the fundamentals of machine learning. It covers the different categories of machine learning and the different stages in the machine learning lifecycle. It highlights how to extract model interpretation information in order to understand the reasoning behind model predictions in PySpark.

No. of pages: 25

Sub-topics:

1. Supervised machine learning

2. Unsupervised machine learning

3. Model interpretation

4. Machine learning lifecycle


Chapter 4: Linear Regression with PySpark

Chapter Goal: 

This chapter covers the fundamentals of linear regression. It then showcases the steps to build a feature engineering pipeline and fit a regression model using PySpark's latest machine learning library.

No. of pages: 20

Sub-topics:

1. Introduction to linear regression 

2. Feature engineering in PySpark

3. Model training 

4. End-to-end pipeline for model prediction
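As a refresher on what the chapter's model computes, here is a minimal pure-Python sketch of one-feature least squares (the chapter itself fits the model with PySpark's MLlib, not this hand-rolled version; the toy data is illustrative):

```python
# Simple (one-feature) linear regression via the closed-form
# least-squares solution: slope = cov(x, y) / var(x).
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

# Points on the line y = 2x + 1, so the fit recovers those coefficients
slope, intercept = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```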


Chapter 5: Logistic Regression with PySpark

Chapter Goal: 

This chapter covers the fundamentals of logistic regression. It then showcases the steps to build a feature engineering pipeline and fit a logistic regression model on a customer dataset using PySpark's machine learning library.

No. of pages: 25

1. Introduction to logistic regression 

2. Feature engineering in PySpark

3. Model training 

4. End-to-end pipeline for model prediction
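The model at the heart of this chapter can be sketched in pure Python: a sigmoid over a linear score, trained by gradient descent (the chapter uses PySpark's MLlib; the toy churn data and hyperparameters below are illustrative):

```python
import math

# Logistic regression on one feature, trained with plain gradient descent.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of the average negative log-likelihood
        gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy labels: churn (1) for high feature values, no churn (0) otherwise
xs, ys = [0.0, 1.0, 2.0, 5.0, 6.0, 7.0], [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
predict = lambda x: sigmoid(w * x + b)
print(predict(0.5) < 0.5, predict(6.5) > 0.5)  # True True
```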


Chapter 6: Ensembling with PySpark

Chapter Goal: 

This chapter covers the fundamentals of ensembling methods, including bagging, boosting, and stacking. It then showcases the strengths of ensembling methods over other machine learning techniques. The final part covers the steps to build a feature engineering pipeline and fit a random forest model using PySpark's machine learning library.

No. of pages: 30

1. Introduction to ensembling methods 

2. Feature engineering in PySpark

3. Model training 

4. End-to-end pipeline for model prediction
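The bagging idea behind random forests — train many weak models on bootstrap samples, then combine them by majority vote — can be sketched in pure Python (the chapter uses PySpark's MLlib random forest; the "stump" model and toy data here are hypothetical stand-ins):

```python
import random

# Bagging sketch: train several weak models on bootstrap samples,
# then combine their predictions by majority vote.
def bootstrap(data, rng):
    # Sample with replacement, same size as the original data
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # "Model" = a threshold halfway between the class means
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    if not pos or not neg:
        # Degenerate bootstrap sample: always predict its only class
        label = sample[0][1]
        return lambda x: label
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= t else 0

def bagged_predict(models, x):
    votes = sum(m(x) for m in models)
    return 1 if votes > len(models) / 2 else 0

rng = random.Random(42)
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
models = [train_stump(bootstrap(data, rng)) for _ in range(25)]
print(bagged_predict(models, 0.2), bagged_predict(models, 0.95))  # 0 1
```

The majority vote averages away the variance of the individual stumps, which is exactly the strength bagging offers over a single model.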


Chapter 7: Clustering with PySpark

Chapter Goal: 

This chapter introduces the unsupervised side of machine learning: clustering. It covers the steps to build a feature engineering pipeline and run a customer segmentation exercise using PySpark's machine learning library.

No. of pages: 20

1. Introduction to clustering

2. Feature engineering in PySpark

3. Segmentation using PySpark
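The K-means loop at the core of the segmentation exercise alternates two steps, assignment and update; a minimal one-dimensional pure-Python sketch (the chapter uses PySpark's MLlib K-means; the toy "customer spend" values are illustrative):

```python
# Minimal K-means sketch on one-dimensional data (k = 2).
def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious "customer spend" segments: low spenders and high spenders
spend = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers = kmeans_1d(spend, centers=[0.0, 5.0])
print(sorted(centers))  # [1.0, 10.0]
```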

Chapter 8: Recommendation Engine with PySpark

Chapter Goal: 

This chapter focuses on the fundamentals of building scalable recommendation models. It introduces the different types of recommendation models in wide use, then showcases the steps to build a data pipeline and train a hybrid recommendation model using PySpark's machine learning library to make recommendations to customers.

No. of pages: 25

1. Introduction to types of recommender systems

2. Deep dive into collaborative filtering

3. Building a recommendation engine using PySpark
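The collaborative-filtering intuition in sub-topic 2 — score an unseen item for a user from the ratings of similar users — can be sketched with cosine similarity in pure Python (the chapter builds its engine with PySpark; the users, items, and ratings below are toy data):

```python
import math

# User-based collaborative filtering sketch: predict a rating for an
# unseen item from similar users' ratings, weighted by cosine similarity.
ratings = {                      # user -> {item: rating}, toy data
    "alice": {"a": 5, "b": 4, "c": 1},
    "bob":   {"a": 4, "b": 5, "d": 5},
    "carol": {"a": 1, "c": 5, "d": 2},
}

def cosine(u, v):
    # Similarity computed over the items both users have rated
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            sim = cosine(ratings[user], r)
            num += sim * r[item]
            den += sim
    return num / den if den else 0.0

# Alice's tastes match Bob's more than Carol's, so item "d" scores high
print(round(predict("alice", "d"), 2))
```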

Chapter 9: Advanced Feature Engineering with PySpark

Chapter Goal: 

This chapter covers the process of handling sequential data, such as customer journeys, which can also be used for prediction. It also covers the use of the PCA technique to reduce the dimensional space to a handful of features. At the end, it showcases the use of MLflow to deploy Spark models in production.

No. of pages: 45

1. Sequence embeddings for prediction

2. Dimensionality reduction 

3. Model deployment in PySpark 
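The dimensionality-reduction sub-topic rests on PCA; as a self-contained illustration of the idea, here is a pure-Python sketch that finds the first principal component of 2-D points via power iteration on the covariance matrix (the chapter uses PySpark's PCA; the data is illustrative):

```python
import math

# PCA sketch: first principal component of 2-D points
# via power iteration on the 2x2 covariance matrix.
def first_component(points, iters=100):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Covariance matrix entries
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    # Power iteration: repeatedly apply the matrix and renormalize
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v

# Points near the line y = x, so the component is roughly (0.707, 0.707)
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)]
vx, vy = first_component(pts)
print(round(abs(vx), 2), round(abs(vy), 2))
```

Projecting each centered point onto this direction collapses the two features into one, which is the "handful of features" idea at scale.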




Pramod Singh works at Bain & Company in the Advanced Analytics Group. He has extensive hands-on experience in large-scale machine learning, deep learning, data engineering, algorithm design, and application development. He has spent more than 13 years working in the field of data and AI at different organizations. He has published four books – Deploy Machine Learning Models to Production, Machine Learning with PySpark, Learn PySpark, and Learn TensorFlow 2.0, all for Apress. He is also a regular speaker at major conferences such as O'Reilly's Strata and AI conferences. Pramod holds a BTech in electrical engineering from B.A.T.U and an MBA from Symbiosis University. He has also earned a Data Science certification from IIM Calcutta. He lives in Gurgaon with his wife and 5-year-old son. In his spare time, he enjoys playing guitar, coding, reading, and watching football.

Covers how to transition from Python-based ML models to PySpark-based large-scale models

Covers how to automate your data workflow using Airflow

Explains the end-to-end machine learning pipeline for model prediction

Publication date:

220 pages

17.8x25.4 cm

Available from the publisher (restocking time: 15 days).

58,01 €
