Welcome to Part 3 of our Data Science Primer. In this guide, we’ll teach you how to get your dataset into tip-top shape through data cleaning. Data cleaning is crucial, because garbage in gets you garbage out, no matter how fancy your ML algorithm is.
The steps and techniques for data cleaning will vary from dataset to dataset. As a result, it’s impossible for a single guide to cover everything you might run into. However, this guide provides a reliable starting framework that can be used every time. Let’s get started!
Better Data > Fancier Algorithms
Data cleaning is one of those things that everyone does but no one really talks about. Sure, it’s not the “sexiest” part of machine learning. And no, there aren’t any hidden tricks and secrets to uncover.
However, proper data cleaning can make or break your project. Professional data scientists usually spend a very large portion of their time on this step.
Why? Because of a simple truth in machine learning:
Better data beats fancier algorithms.
In other words… garbage in gets you garbage out. Even if you forget everything else from this guide, please remember this point. If you have a properly cleaned dataset, even simple algorithms can learn impressive insights from the data.
Obviously, different types of data will require different types of cleaning. However, the systematic approach laid out in this lesson can always serve as a good starting point.
Remove Unwanted Observations
The first step to data cleaning is removing unwanted observations from your dataset. Specifically, you’ll want to remove duplicate or irrelevant observations.
Duplicate observations
Duplicate observations are important to remove because you don’t want them to bias your results or models. Duplicates most frequently arise during data collection, such as when you:
- Combine datasets from multiple places
- Scrape data
- Receive data from clients/other departments
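For example, in pandas you can drop exact duplicates with drop_duplicates(). Here’s a minimal sketch, assuming your data is already loaded into a DataFrame (the column names below are purely illustrative):

```python
import pandas as pd

# Illustrative DataFrame; in practice this would be your own dataset
df = pd.DataFrame({
    'listing_id': [101, 102, 102, 103],
    'roof': ['composition', 'asphalt', 'asphalt', 'shake-shingle'],
})

# Drop rows that are exact duplicates across all columns
df = df.drop_duplicates()

# Or define duplicates by a subset of columns, such as a unique ID
df = df.drop_duplicates(subset=['listing_id'], keep='first')
```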
Irrelevant observations
Irrelevant observations are those that don’t actually fit the specific problem that you’re trying to solve. For example, if you were building a model for Single-Family homes only, you wouldn’t want observations for Apartments in there.
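In pandas, a simple boolean filter takes care of this. A quick sketch, where the 'property_type' column name and its values are hypothetical:

```python
# Keep only the observations relevant to the problem (Single-Family homes)
df = df[df['property_type'] == 'Single-Family']
```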
This is also a great time to review your charts from Exploratory Analysis. You can look at the distribution charts for categorical features to see if there are any classes that shouldn’t be there. Checking for irrelevant observations before engineering features can save you many headaches down the road.
Fix Structural Errors
The next bucket under data cleaning involves fixing structural errors. Structural errors are those that arise during measurement, data transfer, or other types of “poor housekeeping.”
For instance, you can check for typos or inconsistent capitalization. This is mostly a concern for categorical features, and you can look at your bar plots to check.
Here’s an example:
As you can see:
- Both 'composition' and 'Composition' refer to the same roofing, but are recorded as two different classes.
- Same with 'asphalt' and 'Asphalt'.
- Same with 'shake-shingle' and 'Shake Shingle'.
- Finally, 'asphalt,shake-shingle' could probably just be lumped into 'Shake Shingle' as well.
After we replace the typos and inconsistent capitalization, the class distribution becomes much cleaner:
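In pandas, one way to do this is to normalize the text and then map the remaining variants explicitly. A sketch, assuming the classes live in a column called 'roof' (a hypothetical name):

```python
# Normalize capitalization and stray whitespace,
# e.g. 'Composition' -> 'composition', 'Asphalt' -> 'asphalt'
df['roof'] = df['roof'].str.strip().str.lower()

# Explicitly lump the remaining variants into a single class
df['roof'] = df['roof'].replace({
    'shake shingle': 'shake-shingle',
    'asphalt,shake-shingle': 'shake-shingle',
})

# Check the cleaned class distribution
print(df['roof'].value_counts())
```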
Finally, check for mislabeled classes, or separate classes that should really be the same. They don’t appear in the dataset above, but watch out for abbreviations, such as:
- e.g. If 'N/A' and 'Not Applicable' appear as two separate classes, you should combine them.
- e.g. 'IT' and 'information_technology' should be a single class.
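The same replace() idea works here; the column name and mapping below are just illustrative:

```python
# Merge separate labels that actually mean the same thing
df['department'] = df['department'].replace({
    'information_technology': 'IT',
    'Not Applicable': 'N/A',
})
```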
Filter Unwanted Outliers
Outliers can cause problems with certain types of models. For example, linear regression models are less robust to outliers than decision tree models. In general, if you have a legitimate reason to remove an outlier, doing so will help your model’s performance.
However, outliers are innocent until proven guilty. You should never remove an outlier just because it’s a “big number.” That big number could be very informative for your model.
We can’t stress this enough: you must have a good reason for removing an outlier, such as suspicious measurements that are unlikely to be real data.
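When you do have such a reason, e.g. a measurement that falls outside the physically plausible range, a simple filter is enough. The column name and threshold below are illustrative assumptions, not a rule:

```python
# Remove observations with a physically implausible lot size
# (the 'lot_size' column and the 1,500,000 sq ft cutoff are made-up examples)
df = df[df['lot_size'] < 1_500_000]
```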
Handle Missing Data
Missing data is a deceptively tricky issue in applied machine learning.
First, just to be clear, you cannot simply ignore missing values in your dataset. You must handle them in some way for the very practical reason that most algorithms do not accept missing values.
“Common sense” is not sensible here
Unfortunately, the two most commonly recommended methods of dealing with missing data are actually very bad. At best, these methods are often ineffective. At worst, they can completely sabotage your results.
They are:
- Dropping observations that have missing values
- Imputing the missing values based on other observations
Dropping missing values is sub-optimal because when you drop observations, you drop information. The fact that the value was missing may be informative in itself. Plus, in the real world, you often need to make predictions on new data even if some of the features are missing!
Imputing missing values is sub-optimal because the value was originally missing but you filled it in. This also leads to a loss in information, no matter how sophisticated your imputation method is. Even if you build an imputation model, you’re just reinforcing the patterns already provided by other features.
In short, you should always tell your algorithm that a value was originally missing because “missingness” is almost always informative in itself.
So how can you do so?
Missing categorical data
The best way to handle missing data for categorical features is to simply label them as 'Missing'!
- You’re essentially adding a new class for the feature.
- This tells the algorithm that the value was missing.
- This also gets around the technical requirement for no missing values.
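In pandas, this is a single fillna() call (the column name is again hypothetical):

```python
# Treat missingness as its own class for a categorical feature
df['roof'] = df['roof'].fillna('Missing')
```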
Missing numeric data
For missing numeric data, you should flag and fill the values.
- Flag the observation with an indicator variable of missingness.
- Then, fill the original missing value with 0 just to meet the technical requirement of no missing values.
By using this technique of flagging and filling, you are essentially allowing the algorithm to estimate the optimal constant for missingness, instead of just filling it in with the mean.
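Here’s a minimal flag-and-fill sketch in pandas, assuming a numeric feature called 'lot_size' (a hypothetical name):

```python
# 1) Flag: indicator variable marking where the value was missing
df['lot_size_missing'] = df['lot_size'].isnull().astype(int)

# 2) Fill: replace the original missing values with 0
df['lot_size'] = df['lot_size'].fillna(0)
```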
After properly cleaning your data, you’ll have a robust dataset that avoids many of the most common pitfalls. This can save you a ton of headaches down the road, so please don’t rush this step.
That wraps it up for the Data Cleaning step of the Machine Learning Workflow. Next, it’s time to learn more about the next core step: Feature Engineering!
More About Data Cleaning
- How to Handle Imbalanced Classes in Machine Learning
- Datasets for Data Science and Machine Learning
- Python Cheat Sheet for Data Science
Read the rest of our Intro to Data Science here.