Unsupervised vs Supervised Machine Learning: Full Explanation
If you want to understand machine learning, you should start with the difference between its two main types. Most machine learning tasks fall into one of two categories: supervised learning and unsupervised learning. But what’s the difference between the two? In this article, we’ll compare supervised vs. unsupervised learning and look at each learning type’s use cases, techniques, and algorithms.
Supervised vs. Unsupervised Learning
Before we can make meaningful comparisons between supervised and unsupervised learning, we need to define each type of learning.
Defining Terms
Supervised Learning
The vast majority of machine learning tasks fall into the category of supervised learning. Supervised learning tasks are tasks where individual data points/instances are assigned a label or class. This means we know the data instance’s type in advance. As a result, the machine learning model can learn to recognize which features are correlated with a given class or label. In a supervised learning task, we can check the performance of a machine learning model by comparing the predicted labels to the actual labels.
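As a quick illustration, here is a minimal sketch of that workflow using scikit-learn and a tiny, made-up dataset: the model is trained on labeled points, and its performance is checked by comparing its predicted labels against the actual ones.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Made-up labeled data: each row of X is a data point, each entry of y its known class.
X = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2], [1.1, 2.1],
     [7.8, 8.1], [8.2, 7.9], [7.9, 8.3], [8.0, 8.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Hold out some labeled points so we can check the model's predictions later.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression()
model.fit(X_train, y_train)               # learn from the labeled examples
predicted = model.predict(X_test)         # predict labels for the held-out points
print(accuracy_score(y_test, predicted))  # compare predicted labels to the actual ones
```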
Unsupervised Learning
In unsupervised learning tasks, the data points are unlabeled, so it isn’t known what type/class each data point is. An unsupervised learning model must be able to analyze input features, determine what the most important features are, and group data points based on attributes the model finds important. An unsupervised learning algorithm basically creates its own labels/classes for the data points in the dataset.
Use Cases
Now that we’ve defined both types of learning, we can explore their differences in greater detail.
Let’s start by comparing use cases for supervised learning vs. unsupervised learning, with some real-world examples of each.
Supervised Learning Use Cases
Supervised learning algorithms are appropriate for cases where the potential classes in a dataset are known ahead of time. Supervised learning examples include:
- Spam detection, where the words in an email are used to classify the email as spam or not spam.
- Object classification, where features in an image are used to classify objects in that image as belonging to different classes.
- Biometric identification, where features like fingerprints or iris textures are used to identify a specific person among many.
Unsupervised Learning Use Cases
Unsupervised learning algorithms are appropriate for any situation where data is not grouped in advance, often because the features that define the groups aren’t known. Examples of unsupervised learning tasks include:
- Anomaly detection or fraud detection, where what constitutes an anomaly isn’t known in advance and must be discerned through the model’s training process.
- Customer segmentation, where different customer groups are created based on features like their responses to marketing strategies.
- Recommendation systems, where the features of viewed media are analyzed to group users together based on similar tastes in media.
Techniques
Now we’ll cover the techniques most commonly used for each type of machine learning, before taking a look at some specific algorithms.
Supervised Learning Techniques
Classification – Classification is the process of grouping data points based on the values of their attributes. In machine learning, a classification algorithm takes the input values of a data point, compares them against the patterns it learned during training, and assigns the data point to a class or category.
Regression – Regression is a statistical method that attempts to identify a relationship between independent and dependent continuous variables, like income, test scores, or occurrence counts. A regression technique draws a line of best fit through the data approximating the relationship between the dependent and independent variables. This estimated relationship can be used to predict the value of dependent variables based on known independent variable values.
Unsupervised Learning Techniques
Clustering – Clustering is an unsupervised learning technique where unlabeled data is analyzed to find potential patterns, forming natural “clusters” in the data. The number of clusters to divide the data into can be chosen by the user of the algorithm, and altering that number adjusts how granular the clustering model is.
Dimensionality Reduction – Dimensionality reduction is a technique used to reduce the overall number of variables in a dataset. A large number of input variables can make a predictive modeling task complicated and computationally expensive. Dimensionality reduction techniques compress a dataset with many features down into a smaller number of features while keeping as much of the original information as possible, which makes downstream tasks like clustering easier.
Supervised and Unsupervised Learning Algorithms
We’ll now turn to the machine learning algorithms themselves. We’ll examine some of the most commonly used algorithms for each type of learning and see how supervised and unsupervised classification differ in practice.
Supervised Learning Algorithms
Linear Regression – This regression algorithm takes numerical variables and fits a linear relationship between them. The relationship between a dependent variable and an independent variable is expressed as Y = a + bX, where ‘b’ is the line’s slope and ‘a’ is the intercept, the value of Y where the line crosses the Y-axis.
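As a rough sketch (using scikit-learn and a small made-up dataset that roughly follows Y = 1 + 2X), fitting a linear regression recovers estimates of the slope ‘b’ and intercept ‘a’:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data that approximately follows Y = a + bX with a = 1 and b = 2.
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

model = LinearRegression()
model.fit(X, y)

print(model.coef_[0])        # estimated slope b, close to 2
print(model.intercept_)      # estimated intercept a, close to 1
print(model.predict([[6]]))  # predict Y for a new X value using the fitted line
```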
Logistic Regression – This is a binary classification algorithm that estimates the probability that a data point belongs to one of two classes. A sigmoid function is used to “squeeze” the model’s raw output into a probability between 0 and 1; strong probabilities are converted to a 1 while weak probabilities are converted to a 0.
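The “squeezing” step can be illustrated with a few lines of NumPy; the scores below are hypothetical model outputs rather than values from any particular dataset:

```python
import numpy as np

def sigmoid(z):
    """Squeeze any real-valued score into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([-4.0, -0.5, 0.0, 0.5, 4.0])  # hypothetical raw model outputs
probabilities = sigmoid(scores)                  # values between 0 and 1
labels = (probabilities >= 0.5).astype(int)      # strong -> 1, weak -> 0

print(probabilities)  # approx. [0.018 0.378 0.5 0.622 0.982]
print(labels)         # [0 0 1 1 1]
```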
K-Nearest Neighbors – This algorithm assigns data points to classes based on the classes of nearby data points, considering a number of “neighbor” data points in order to determine the best possible class for a given data point.
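A minimal sketch with scikit-learn and made-up points shows the idea: a new point is assigned the majority class among its three nearest neighbors.

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy 2-D points with known classes (made-up data for illustration).
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = [0, 0, 0, 1, 1, 1]

# Classify new points by majority vote among their 3 nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

print(knn.predict([[2, 2]]))  # -> [0], its nearest neighbors are all class 0
print(knn.predict([[7, 8]]))  # -> [1], its nearest neighbors are all class 1
```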
Decision Trees – Decision trees can be used for both classification and regression. Decision tree models work by dividing a dataset into smaller and smaller portions until the resulting groups can’t usefully be split any further. These terminal groups are assigned labels according to the filtering criteria that produced them and are referred to as “leaves”, while “nodes” are the points where filtering criteria are applied to decide how groups of data points should be split.
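The sketch below (scikit-learn, made-up data) fits a small tree and prints its learned structure, so you can see the nodes (filtering criteria) and leaves (assigned labels):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up data: points belong to class 1 when the first feature is large.
X = [[1, 5], [2, 4], [3, 6], [8, 5], [9, 4], [10, 6]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(random_state=0)
tree.fit(X, y)

# export_text prints the learned splits (nodes) and the labels at the leaves.
print(export_text(tree, feature_names=["feature_0", "feature_1"]))
```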
Support Vector Machines (SVM) – SVMs are classification algorithms that classify data points by dividing the feature space containing the points into discrete sections. Boundaries called hyperplanes are drawn between the data points, and the points are classified based on which side of the hyperplane they lie on. An SVM classifier tries to maximize the margin when drawing these boundaries, which is the distance between the dividing hyperplane and the nearest points on either side.
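Here is a minimal linear SVM sketch on made-up, linearly separable points; the support vectors it reports are the points that sit closest to the dividing hyperplane and therefore define the margin.

```python
from sklearn.svm import SVC

# Two made-up, linearly separable groups of points.
X = [[1, 1], [2, 1], [1, 2], [6, 6], [7, 6], [6, 7]]
y = [0, 0, 0, 1, 1, 1]

# A linear SVM finds the hyperplane that maximizes the margin between the classes.
svm = SVC(kernel="linear")
svm.fit(X, y)

print(svm.coef_, svm.intercept_)      # coefficients of the separating hyperplane
print(svm.support_vectors_)           # the points closest to the hyperplane (they set the margin)
print(svm.predict([[2, 2], [6, 5]]))  # classify new points by which side they fall on
```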
Unsupervised Learning Algorithms
K-Means Clustering – This algorithm clusters data points based on the similarity of their features. Data points are split into groups based on patterns found within the dataset. K-means clustering essentially generates its own classes for the data points, and “K” refers to the chosen number of classes. This is the key distinction between supervised and unsupervised classification: unsupervised classification algorithms create their own classes rather than relying on predefined labels.
K-means works by placing “centroids” in the dataset, each representing the center of a class. The distance between a given data point and each centroid is measured, and the data point is assigned to the class represented by the nearest centroid. The centroids are then recomputed from their assigned points, and the process is repeated until the distance between the centroids and their surrounding points is minimized.
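The sketch below runs K-means with K = 2 on a handful of made-up, unlabeled points and prints the cluster assignments and final centroid positions:

```python
from sklearn.cluster import KMeans

# Unlabeled data: two made-up blobs of points with no class labels attached.
X = [[1, 2], [1, 4], [2, 3], [8, 8], [9, 9], [8, 9]]

# K = 2: ask the algorithm to find two clusters and their centroids.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)

print(kmeans.labels_)            # the cluster (class) the algorithm assigned to each point
print(kmeans.cluster_centers_)   # the final centroid positions
print(kmeans.predict([[0, 0]]))  # assign a new point to the nearest centroid
```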
Principal Component Analysis (PCA) – PCA is used to reduce the dimensionality of a dataset, representing the information contained by the dataset in a simpler way. PCA reduces a dataset’s complexity by identifying new, orthogonal dimensions for the data. While a PCA model aims to reduce the data’s dimensionality, the variance of the data should be preserved, enabling separation of data points despite the simpler representation. In short, PCA takes the specified input features and compresses them into fewer features while still representing most of the information contained by the individual data points.
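As a rough sketch, the example below builds a made-up four-feature dataset with redundant (correlated) columns and uses PCA to compress it down to two components, reporting how much of the original variance each component preserves:

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up 4-feature data in which two features are near-copies of the others.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.hstack([base, base * 2 + rng.normal(scale=0.05, size=(100, 2))])  # 4 columns

# Compress the 4 features down to 2 principal components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (100, 2) -- fewer features per data point
print(pca.explained_variance_ratio_)  # share of the original variance each component keeps
```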
Comparison of Supervised and Unsupervised Learning
We’ll now compare and contrast the distinguishing features of both learning types to summarize what we’ve covered and get a better sense of how the two concepts differ in their details.
Supervised Learning
Definition: Supervised learning tasks are tasks where data points in the training data have class labels assigned to them.
Techniques: Classification, Regression
Use cases: Spam detection, object classification, biometric identification
Algorithms: Linear Regression, Logistic Regression, K-Nearest Neighbors, Decision Trees, Support Vector Machines
Unsupervised Learning
Definition: In unsupervised learning tasks, it isn’t known what type/class each data point is.
Techniques: Clustering, Dimensionality Reduction
Use cases: Anomaly Detection, Customer segmentation, Recommendation Systems
Algorithms: K-Means Clustering, PCA
Summing Up
In machine learning, there are two main types of tasks: supervised learning tasks and unsupervised learning tasks. Comparing supervised vs. unsupervised learning lets us understand the differences between the two kinds of problems. Supervised learning is used when you have data that is already labeled with classes that you want to predict, while unsupervised learning is for instances where you don’t know what kinds of classes you have in advance.
If this article helped you learn about supervised learning and unsupervised learning, please share it with others so they can learn about these topics too. You can also subscribe to our email newsletter for updates about new content and data science tips.