Feature engineering with Python
Find out what you will learn throughout the course.
What you'll learn
👉 Multiple methods for missing data imputation.
👉 Strategies to transform categorical variables into numbers.
👉 How to handle infrequent categories.
👉 Variance stabilizing transformations.
👉 Multiple discretization techniques.
👉 How and when to handle outliers.
👉 How to create features from dates and times.
👉 How to create features from small text data.
👉 Apply transformations with Python open source libraries.
What you'll get
Lifetime access
Instructor support
Certificate of completion
💬 English subtitles
Instructor
Soledad Galli, PhD
Sole is a lead data scientist, instructor, and developer of open source software. She created and maintains the Python library Feature-engine, which allows us to impute data, encode categorical variables, and transform, create, and select features. Sole is also the author of the "Python Feature Engineering Cookbook," published by Packt.
More about Sole on LinkedIn.
Can't afford it? Get in touch.
30 days money back guarantee
If you're disappointed for whatever reason, you'll get a full refund.
So you can buy with confidence.
Feature Engineering Course
Welcome to the most comprehensive course on feature engineering for machine learning available online.
In this course, you will learn everything you need to preprocess your datasets to train machine learning models like linear regression, logistic regression, decision trees, random forests and gradient boosting machines.
What is feature engineering?
Feature engineering consists of using domain knowledge and statistical methods to create features that make machine learning algorithms work effectively.
Raw data is almost never suitable for training machine learning models. In fact, data scientists devote much of their effort to data analysis, data engineering, preprocessing, and feature extraction in order to create the best features for training predictive models.
Feature engineering includes imputation of missing data, encoding of categorical variables, transformation or discretization of continuous variables, combination of variables, extraction of dates and times, and much more.
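As a taste of what these steps look like in practice, here is a minimal sketch using pandas; the dataframe and the column names ("age", "city") are made up for illustration.

```python
import pandas as pd

# Hypothetical data with missing values in both a numerical
# and a categorical variable.
df = pd.DataFrame({
    "age": [25.0, None, 40.0, 31.0],
    "city": ["London", "Paris", None, "London"],
})

# Missing data imputation: replace missing ages with the mean age.
df["age"] = df["age"].fillna(df["age"].mean())

# Missing category imputation: treat missing cities as their own category.
df["city"] = df["city"].fillna("Missing")

# Categorical encoding: one-hot encode the city variable.
df = pd.get_dummies(df, columns=["city"])
print(df)
```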
What will you learn in this online course?
In this course, you will learn about missing data imputation, encoding of categorical features, numerical variable transformation, discretization, and how to create new features from your dataset.
Specifically, you will learn:
- How to impute missing values
- How to encode categorical features
- How to transform and scale numerical variables
- How to perform discretization
- How to remove outliers
- How to perform feature extraction from dates and times
- How to create new features from existing ones
And there is more...
You have probably seen many courses on other learning platforms like Coursera or Udemy. In fact, this is the full version of the Udemy course. So why is this course special?
While most online courses teach you only the very basics of feature engineering, like imputing variables with the mean or transforming categorical features using one-hot encoding, this course will teach you all of that, and much more.
Here, you will first learn the most popular techniques for variable engineering, like mean and median imputation, one-hot encoding, logarithmic transformation, and discretization. Then, you will discover more advanced methods that capture information while encoding or transforming your variables, to obtain better features and improve the performance of regression and classification models.
Advanced methods for feature engineering
These methods include encoding variables with the target mean, using decision trees for discretization or variable combination, combining features mathematically, and automating the procedure of feature creation with a bespoke library: Feature-engine.
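To give a flavor of these techniques, here is a minimal sketch using Feature-engine's MeanEncoder and DecisionTreeDiscretiser; the data, column names, and parameter choices are assumptions for illustration, not the course's exact solution.

```python
import pandas as pd
from feature_engine.encoding import MeanEncoder
from feature_engine.discretisation import DecisionTreeDiscretiser

# Hypothetical data: one categorical and one numerical variable,
# plus a binary target.
X = pd.DataFrame({
    "city": ["London", "Paris", "London", "Berlin", "Paris", "Berlin"],
    "income": [30.0, 45.0, 28.0, 52.0, 41.0, 60.0],
})
y = pd.Series([0, 1, 0, 1, 0, 1])

# Target mean encoding: replace each category with the mean of the
# target observed for that category.
encoder = MeanEncoder(variables=["city"])
X_enc = encoder.fit_transform(X, y)

# Decision tree discretization: replace a numerical variable with the
# predictions of a shallow tree fitted against the target.
disc = DecisionTreeDiscretiser(
    variables=["income"], regression=False, cv=2, scoring="accuracy"
)
X_disc = disc.fit_transform(X_enc, y)
print(X_disc)
```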
You will learn methods described in scientific articles, used in data science competitions like those hosted by Kaggle and the KDD Cup, and commonly used in organizations. What's more, you will be able to implement all of these methods easily with Python's open-source libraries, like pandas, Scikit-learn, Category Encoders, and Feature-engine.
By the end of the course, you will be able to create end-to-end machine learning workflows that fully transform your datasets and obtain predictions from them.
Feature engineering with Python
Throughout the course, we will use Python as the main language. We will compare the feature engineering implementations of the open-source libraries pandas, Scikit-learn, Category Encoders, and Feature-engine.
Throughout the tutorials, you’ll find detailed explanations of each technique and a discussion about their advantages, limitations, and underlying assumptions, followed by the best programming practices to implement them in Python.
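As a small taste of that comparison, the sketch below applies the same log transformation with NumPy, Scikit-learn, and Feature-engine; the data is hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import FunctionTransformer
from feature_engine.transformation import LogTransformer

# Hypothetical positive-valued variable.
df = pd.DataFrame({"income": [30.0, 45.0, 28.0, 52.0]})

# NumPy: apply the logarithm directly to the column.
log_numpy = np.log(df["income"])

# Scikit-learn: wrap the same function in a transformer that can be
# placed inside a pipeline.
log_sklearn = FunctionTransformer(np.log).fit_transform(df)

# Feature-engine: a dedicated transformer that takes and returns dataframes.
log_fe = LogTransformer(variables=["income"]).fit_transform(df)
print(log_fe)
```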
By the end of the course, you will be able to decide which feature engineering technique you need based on the variable characteristics and the models you wish to train. And you will also be well placed to test various transformation methods and let your models decide which ones work best.
Finally, you will be able to create end-to-end machine learning pipelines, packed with feature engineering preprocessing steps that you can easily deploy to production.
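For illustration, here is a minimal sketch of such a pipeline built with Scikit-learn; the column names and the model choice are assumptions, not the course's exact solution.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

numeric = ["age", "income"]  # hypothetical numerical columns
categorical = ["city"]       # hypothetical categorical column

preprocess = ColumnTransformer([
    # Impute numerical variables with the median.
    ("num", SimpleImputer(strategy="median"), numeric),
    # Impute categorical variables with the most frequent category,
    # then one-hot encode them, ignoring unseen categories at predict time.
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("classifier", LogisticRegression()),
])

# Usage (with your own data):
# model.fit(X_train, y_train)
# predictions = model.predict(X_test)
```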
Who is this course for?
This course is for data scientists, machine learning engineers and software engineers who want to improve their skills and advance their careers.
Course prerequisites
To make the most out of this course, learners need to have basic knowledge of machine learning, data analytics, and familiarity with the most common predictive models, like linear and logistic regression, decision trees, and random forests.
The instructor
Sole will be guiding you through this course. She was selected as a LinkedIn Top Voice in data science and analytics in 2018 and again in 2024. She is the author of Packt's "Python Feature Engineering Cookbook" and Leanpub's "Feature Selection in Machine Learning" book. She is also the maintainer of Feature-engine, an open source Python library for feature engineering and feature selection. With all this experience, who would be better placed to teach a course about feature engineering for machine learning? Not that many ;)
To wrap up
This comprehensive feature engineering course contains over 100 lectures spread across 15 hours of on-demand video, more than 10 quizzes and assessments, and demonstrations based on real-world use cases. Every topic includes hands-on Python code examples in Jupyter notebooks that you can use for reference, practice, and reuse in your own projects.
The course comes with a 30-day money-back guarantee, so you can sign up today with no risk.
So what are you waiting for? Enroll today and join the world's most comprehensive course on feature engineering for machine learning, and start creating better machine learning models.
Course Curriculum
- Basic imputation methods (3:52)
- Mean or median imputation (4:53)
- Arbitrary value imputation (3:16)
- Frequent category imputation (3:30)
- Missing category imputation (1:22)
- Adding a missing indicator (3:42)
- Basic methods - considerations (11:15)
- Basic imputation with pandas (6:45)
- Basic imputation with pandas - demo (12:35)
- Basic methods with Scikit-learn (9:44)
- Mean or median imputation with Scikit-learn (10:53)
- Arbitrary value imputation with Scikit-learn (3:57)
- Frequent category imputation with Scikit-learn (4:38)
- Missing category imputation with Scikit-learn (2:24)
- Adding a missing indicator with Scikit-learn (4:59)
- Imputation with GridSearch - Scikit-learn (8:24)
- Basic methods with Feature-engine (7:19)
- Mean or median imputation with Feature-engine (6:50)
- Arbitrary value imputation with Feature-engine (3:16)
- Frequent category imputation with Feature-engine (2:34)
- Arbitrary string imputation with Feature-engine (3:24)
- Adding a missing indicator with Feature-engine (4:52)
- Wrapping up (2:19)
- How are we doing? (0:24)
- Exercise
- Added Treat: A Movie We Recommend 🍿
- Alternative imputation methods (2:59)
- Complete Case Analysis (6:30)
- CCA - considerations with code demo (3:45)
- End of distribution imputation (4:14)
- Random sample imputation (14:14)
- Random imputation - considerations with code (7:56)
- Mean or median imputation per group (4:32)
- CCA with pandas (5:19)
- End of distribution imputation with pandas (5:24)
- Random sample imputation with pandas (4:46)
- Mean imputation per group with pandas (5:34)
- CCA with Feature-engine (6:47)
- End of distribution imputation with Feature-engine (5:13)
- Random sample imputation with Feature-engine (2:25)
- Wrapping up (5:52)
- Imputation - Summary table
- Exercise
- Categorical encoding | Introduction (4:59)
- One hot encoding (6:03)
- One hot encoding with pandas (7:29)
- One hot encoding with sklearn (11:06)
- One hot encoding with Feature-engine (2:19)
- One hot encoding with Category encoders (5:04)
- Ordinal encoding (1:50)
- Ordinal encoding with pandas (3:16)
- Ordinal encoding with sklearn (4:05)
- Ordinal encoding with Feature-engine (1:49)
- Ordinal encoding with Category encoders (1:43)
- Count or frequency encoding (3:11)
- Count encoding with pandas (2:58)
- Count encoding with Feature-engine (1:21)
- Count encoding with Category encoders (1:42)
- Unseen categories (11:35)
- Wrapping up (3:03)
- Categorical encoding | Monotonic (5:09)
- Ordered ordinal encoding (2:25)
- Ordered ordinal encoding with pandas (8:11)
- Ordered ordinal encoding with Feature-engine (2:36)
- Mean encoding (1:34)
- Mean encoding with pandas (4:39)
- Mean encoding with Feature-engine (2:36)
- Mean encoding with Category encoders (2:15)
- Mean encoding plus smoothing (4:55)
- Mean encoding plus smoothing - Category encoders (6:35)
- Mean encoding plus smoothing - Feature-engine (6:15)
- Weight of evidence (WoE) (4:36)
- Weight of Evidence with pandas (9:47)
- Weight of Evidence with Feature-engine (1:40)
- Weight of Evidence with Category encoders (1:12)
- Weight of evidence - gotchas (3:05)
- Unseen categories (2:15)
- Wrapping up (3:24)
- Comparison of categorical variable encoding (9:09)
- Additional reading resources
- Grouping rare labels (4:17)
- One hot encoding of top categories (3:06)
- OHE of top categories with pandas (5:33)
- OHE of top categories with Feature-engine (2:14)
- OHE of top categories with sklearn (5:35)
- Rare label encoding (4:31)
- Rare label encoding with pandas (8:12)
- Rare label encoding with Feature-engine (1:39)
- Wrapping up (2:20)
- Categorical encoding - More... (3:31)
- More Wisdom: Our Chosen Podcast Episode 🎧
- Variable transformation - Introduction (3:36)
- Variable transformation (6:46)
- Box-Cox transformation (2:47)
- Yeo-Johnson transformation (3:00)
- Logarithm transformation with Numpy (5:14)
- Reciprocal transformation with Numpy (1:55)
- Square-root transformation with Numpy (1:12)
- Power transformation with Numpy (1:17)
- Box-Cox transformation with Scipy (1:37)
- Yeo-Johnson transformation with Scipy (1:05)
- Arcsin transformation with Numpy (1:16)
- Logarithm transformation with sklearn (2:49)
- Reciprocal transformation with sklearn (1:01)
- Square-root transformation with sklearn (0:52)
- Power transformation with sklearn (0:35)
- Box-Cox transformation with sklearn (2:05)
- Yeo-Johnson transformation with sklearn (0:56)
- Arcsin transformation with sklearn (1:12)
- Logarithm transformation with Feature-engine (3:41)
- Reciprocal transformation with Feature-engine (0:44)
- Square-root transformation with Feature-engine (0:57)
- Power transformation with Feature-engine (0:53)
- Box-Cox transformation with Feature-engine (1:07)
- Yeo-Johnson transformation with Feature-engine (0:38)
- Arcsin transformation with Feature-engine (1:28)
- Wrapping up (4:59)
- Additional reading resources
- Quiz
- Discretization (4:43)
- Discretization methods (3:53)
- Equal-width discretization (4:06)
- Equal-width discretization with pandas (5:52)
- Equal-width discretization with sklearn (1:59)
- Equal-width discretization with Feature-engine (2:36)
- Equal-frequency discretization (4:13)
- Equal-frequency discretization with pandas (3:59)
- Equal-frequency discretization with sklearn (1:03)
- Equal-frequency discretization with Feature-engine (1:32)
- Arbitrary discretization (1:44)
- Arbitrary discretization with pandas (3:06)
- Arbitrary discretization with Feature-engine (2:20)
- Discretization plus categorical encoding (2:54)
- Discretization plus encoding | Demo (5:45)
- Wrapping up (12:29)
- Additional reading resources
- Discretization - section intro (2:56)
- K-means discretization (4:13)
- K-means discretization with sklearn (2:43)
- Discretization with classification trees (5:05)
- Discretization with decision trees using Scikit-learn (11:55)
- Discretization with decision trees using Feature-engine (3:48)
- Binarization (2:13)
- Binarization with sklearn (4:11)
- Additional reading resources
- Feature creation (2:11)
- Math functions (5:06)
- Math functions with pandas (2:52)
- Math functions with Feature-engine (2:58)
- Relative functions with pandas (1:53)
- Relative functions with Feature-engine (2:57)
- Polynomial features (3:54)
- Polynomial features demo (3:45)
- Features from decision trees (3:36)
- Feature scaling (3:10)
- Scaling and distributions (3:49)
- Standardisation (3:53)
- Standardisation | Demo (2:26)
- Scaling to minimum and maximum values (1:43)
- MinMaxScaling | Demo (1:53)
- Mean normalisation (2:12)
- Mean normalisation | Demo (4:09)
- Maximum absolute scaling (1:35)
- MaxAbsScaling | Demo (2:08)
- Scaling to median and quantiles (1:49)
- Robust Scaling | Demo (1:40)
- Scaling to vector unit length (5:45)
- Scaling to vector unit length | Demo (4:24)
- Scaling categorical variables
- Additional reading resources
Frequently Asked Questions
When does the course begin and end?
You can start taking the course from the moment you enroll. The course is self-paced, so you can watch the tutorials and apply what you learn whenever you find it most convenient.
For how long can I access the course?
The course has lifetime access. This means that once you enroll, you will have unlimited access to the course for as long as you like.
What if I don't like the course?
There is a 30-day money back guarantee. If you don't find the course useful, contact us within the first 30 days of purchase and you will get a full refund.
Will I get a certificate?
Yes, you'll get a certificate of completion after completing all lectures, quizzes and assignments.