Differences Between Supervised and Unsupervised Learning

Rashmi Karan
Manager - Content
Updated on Jul 10, 2023 10:04 IST

This article explains supervised and unsupervised machine learning and goes through the differences between them.


While delving into AI (Artificial Intelligence) and ML (Machine Learning), you will come across two main ways machines learn from the data fed into them – supervised and unsupervised. What is the fundamental difference between the two, you ask? This article will help you understand both approaches and which one fits the nature of your machine learning problem, so you can choose a suitable learning algorithm.


What is Supervised Learning?

In supervised learning, we know in advance what the predicted output values should look like. Supervised learning algorithms use the features of a dataset to learn the relationship between a given input and the observable output. This is known as training the model. We aim to use the trained model to predict the output labels when new data is fed in.

[Diagram: labeled training data being fed to a supervised learner]

In the above example, we are using labeled data (aka training dataset) to train our supervised learner. 

[Diagram: the trained model predicting labels for new, unseen data]

Once our learner is trained and ready, we will apply the algorithm to predict the label of the new (testing) data as accurately as possible. 

The performance of the algorithm is determined by its accuracy in the prediction of the new data. Accuracy refers to how often the algorithm’s predictions are correct.
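In code, accuracy is just the fraction of matching predictions; a minimal sketch with made-up labels:

```python
# Toy ground-truth labels and model predictions (made up for illustration)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy = correct predictions / total predictions
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 6 of the 8 predictions match, so accuracy is 0.75
```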

Supervised Learning Techniques

Classification

Classification is a supervised learning technique that predicts the discrete class label to which a data element belongs.


Classification techniques can be applied to a wide range of problems, such as:

  • Spam email/text detection
  • Waste segregation
  • Disease detection
  • Image classification
  • Speech recognition

Regression

It is a supervised learning technique that predicts a continuous outcome variable based on the independent variable(s). 


Regression techniques can be applied to problems such as:

  • Fuel price prediction
  • Stock price prediction
  • Sales revenue prediction 
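As a minimal sketch of regression (the data points are made up, roughly following y = 2x + 1), scikit-learn's LinearRegression fits a line and then predicts a continuous value for a new input:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data that roughly follows y = 2x + 1
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

model = LinearRegression()
model.fit(X, y)                    # learn slope and intercept from the data

prediction = model.predict([[6]])  # continuous output for an unseen input
print(prediction)                  # close to 2*6 + 1 = 13
```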

Supervised Learning Algorithms

Examples of supervised learning algorithms include:

  • Linear Regression (Regression)
  • Logistic Regression (Classification)
  • K-Nearest Neighbours (Classification)
  • Support Vector Machines (Classification)
  • Naïve Bayes (Classification)
  • Decision Tree (Classification/Regression)
  • Random Forest (Classification/Regression)
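All of these algorithms are available in scikit-learn, and because its estimators share the same fit/predict interface, swapping one for another is usually a one-line change. A small sketch on the same wine dataset used in the demo below:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

wine = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    wine.data, wine.target, test_size=0.3, random_state=109)

# Any classifier with the fit/predict interface slots in here unchanged
for clf in (DecisionTreeClassifier(random_state=0), KNeighborsClassifier()):
    clf.fit(X_train, y_train)
    acc = clf.score(X_test, y_test)   # mean accuracy on the test set
    print(type(clf).__name__, acc)
```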

Demo: Building a Supervised Learning Model – Naïve Bayes Classifier

Let’s build a Gaussian Naïve Bayes Classifier model using the scikit-learn library in Python. We are going to make use of the wine dataset already present in the library.

Step 1 – Load the data and get its shape (number of rows and columns)

    from sklearn import datasets

    # Load the dataset
    wine = datasets.load_wine()

    # Shape of the data: (rows, columns)
    wine.data.shape

Output: (178, 13)

So, there are 13 features and 178 records in the data.

Step 2 – Print the feature and the target variables

    # Print the features of the data
    print("Features: ", wine.feature_names)

    # Print the label types of wine (class_0, class_1, class_2)
    print("Labels: ", wine.target_names)

Let’s check out the values of our target column:

    print(wine.target)

Output: an array of 178 class labels, each 0, 1, or 2

Step 3 – Split the data into training and testing sets

    from sklearn.model_selection import train_test_split

    # Split the dataset into a 70% training set and a 30% testing set
    X_train, X_test, y_train, y_test = train_test_split(
        wine.data, wine.target, test_size=0.3, random_state=109)

Step 4.1 – Create a Gaussian Naïve Bayes Classifier

    from sklearn.naive_bayes import GaussianNB

    # Create a Gaussian Naive Bayes classifier
    gnb = GaussianNB()

Step 4.2 – Train the GaussianNB Classifier

    # Train the model using the training set
    gnb.fit(X_train, y_train)

Step 4.3 – Predict the outcome

    # Predict the response for the testing set
    y_pred = gnb.predict(X_test)

Step 5 – Get the accuracy of the GaussianNB Classifier

    from sklearn import metrics

    # Model accuracy
    print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

Hence, our classifier predicts accurate outcomes over 90% of the time. This is great!
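A single accuracy number hides which classes get confused with each other; a confusion matrix makes that visible. A self-contained sketch that rebuilds the same model:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

wine = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    wine.data, wine.target, test_size=0.3, random_state=109)

gnb = GaussianNB().fit(X_train, y_train)
y_pred = gnb.predict(X_test)

# Rows are true classes, columns are predicted classes;
# off-diagonal entries are the misclassifications
print(confusion_matrix(y_test, y_pred))
```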


What is Unsupervised Learning?

In unsupervised learning, we work with data that isn’t explicitly labeled. It is suited to ML problems that require an algorithm to identify some underlying structure in the data. Unlike supervised learning, there is no labeled training phase; the algorithm is applied directly to the data.

Unsupervised learning is commonly used in text mining and dimensionality reduction. A very common unsupervised learning method is clustering, which identifies similar inputs and groups them into clusters.

[Diagram: fruits grouped into clusters by color, without any labels]

In the example above, the fruits are clustered together without using any labels to train the machine. Clustering, in this case, is based on shared physical characteristics (color). The output depends on the input values, i.e., it will change as the input changes.

Unsupervised learning can be applied to a wide range of problems, such as:

  • Fraud detection
  • Movie recommendation
  • Product promotion

Unsupervised Learning Algorithms

Examples of unsupervised learning algorithms include:

  • Clustering
    • K-means clustering
    • Mean-shift clustering
    • Hierarchical clustering
  • Association
    • Apriori algorithm
  • Dimensionality Reduction
    • Principal Component Analysis (PCA)
    • Singular Value Decomposition (SVD)
  • Neural Networks (e.g., autoencoders, self-organizing maps)

Types of Unsupervised Learning

Clustering 

Clustering involves grouping unlabeled data based on similarities or differences such as shape, size, color, or price. Clustering algorithms are helpful for market segmentation, image compression, etc. Common clustering algorithms are –

  • K-Means Clustering
  • Mean-Shift Clustering
  • Hierarchical Clustering
  • Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
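A minimal K-Means sketch (the 2-D points are made up so that two groups are obvious): no labels are supplied, and the algorithm still discovers the grouping on its own:

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up 2-D points forming two visibly separated groups
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
              [8.0, 8.0], [8.5, 9.0], [9.0, 8.5]])

# Ask K-Means for 2 clusters; no labels are given
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(labels)  # the first three points share one label, the last three the other
```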

Association Rule Mining

Association rule mining discovers frequently co-occurring items and the associations between them. A good example is Amazon, which makes recommendations based on your shopping behavior – look for “Customers Who Bought This Item Also Bought” or “Products related to this item”.


Association rule mining algorithms include the Apriori, Eclat, and FP-Growth algorithms.
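As a toy illustration of the idea (not a full Apriori implementation; the baskets are made up), simply counting how often item pairs co-occur already surfaces a “bought together” pattern:

```python
from collections import Counter
from itertools import combinations

# Made-up shopping baskets
baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "jam"},
]

# Count how often each pair of items appears in the same basket
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support of a pair = fraction of baskets containing both items
for pair, count in pair_counts.most_common(3):
    print(pair, "support =", count / len(baskets))
```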

Dimensionality Reduction

Dimensionality reduction involves transforming data from a high-dimensional space into a low-dimensional space while preserving as much meaningful structure as possible. It is used when a dataset has a very large number of features. Some popular dimensionality reduction methods are –

  • Principal Components Analysis
  • Singular Value Decomposition
  • Non-Negative Matrix Factorization
  • Locally Linear Embedding
  • Multidimensional Scaling
  • Spectral Embedding

Demo: Unsupervised Learning – Implementing PCA

Principal Component Analysis (PCA) can be used to speed up the fitting of an ML algorithm: if a learning model is too slow because the input has many dimensions, we first reduce the dimensionality with PCA.

Let’s see how we implement PCA in Python. We are going to make use of the iris dataset for this.

Step 1 – Load the data

    import pandas as pd

    data = pd.read_csv('Iris.csv')
    data.head()

Step 2 – Standardize the data 

We need to scale the features in the data before applying PCA. We use StandardScaler to standardize the features onto a unit scale (mean = 0 and variance = 1), which many ML algorithms need to perform well.

    from sklearn.preprocessing import StandardScaler

    features = ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']

    # Separate the features
    x = data.loc[:, features].values

    # Separate the target variable
    y = data.loc[:, ['Species']].values

    # Standardize the features
    x = StandardScaler().fit_transform(x)

Step 3 – Project PCA to 2D

The original dataset has 4 features: sepal length, sepal width, petal length, and petal width. Now, we will project our data into 2 dimensions – PC1 and PC2. These new components are the two main dimensions of variation.

    from sklearn.decomposition import PCA

    pca = PCA(n_components=2)
    components = pca.fit_transform(x)
    df = pd.DataFrame(data=components, columns=['PC1', 'PC2'])

    df.head()

Step 4 – Concatenate the target column

Now, we will concatenate the Species column to the principal components DataFrame along axis = 1.

    final_df = pd.concat([df, data[['Species']]], axis=1)
    final_df.head()

So, we have performed dimensionality reduction on our data: 4 features have been compressed into 2 principal components.
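It is also worth checking how much of the original variance the two components keep. A sketch using scikit-learn's built-in copy of the iris data (rather than the Iris.csv file above, so it runs standalone):

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

iris = load_iris()
x = StandardScaler().fit_transform(iris.data)  # standardize, as before

pca = PCA(n_components=2)
pca.fit(x)

# Fraction of the total variance captured by each principal component
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())  # the 2 components retain most of it
```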

Step 5 – Visualize the 2D Projection

    import matplotlib.pyplot as plt

    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_xlabel('Principal Component 1', fontsize=15)
    ax.set_ylabel('Principal Component 2', fontsize=15)
    ax.set_title('2 component PCA', fontsize=20)

    targets = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
    colors = ['red', 'green', 'orange']

    # Plot each species in its own color on the PC1-PC2 plane
    for target, color in zip(targets, colors):
        indicesToKeep = final_df['Species'] == target
        ax.scatter(final_df.loc[indicesToKeep, 'PC1'],
                   final_df.loc[indicesToKeep, 'PC2'],
                   c=color, s=50)
    ax.legend(targets)
    ax.grid()

[Scatter plot: the three iris species form well-separated clusters in the PC1–PC2 plane]

Can you notice on the graph above how the classes seem well-separated?

Difference between Supervised and Unsupervised Learning

Let’s find out the main differences between supervised and unsupervised learning.

Use of Labeled Data Sets 

This is the key factor that distinguishes supervised and unsupervised learning. Labeled data means that input data is already tagged with the desired output. To clarify, supervised learning uses labeled data, and unsupervised learning uses unlabeled data.

Applications

Supervised learning has applications in tasks like –

  • Spam detection and classification
  • User sentiment analysis
  • Weather forecasting
  • Stock market predictions
  • Sales forecasting
  • Demand and supply analysis
  • Fraud identification

Unsupervised learning has applications in –

  • Anomaly detection
  • Recommendation engines
  • Market segmentation
  • Medical imaging

Functions

In supervised learning, algorithms learn functions to predict the output associated with new inputs. On the other hand, unsupervised learning systems focus on finding new patterns in the given unlabeled data.

Challenges

Supervised learning models are conceptually simple yet time-consuming to train, and labeling the input and output variables requires a certain level of expertise. Unsupervised learning can produce inaccurate output if the results are not validated by a human.

Now let’s take a quick look at the differences between supervised and unsupervised learning:

Supervised Learning                                        | Unsupervised Learning
The input data is labeled.                                 | The input data is not labeled.
The data is classified based on the training dataset.      | Properties of the data are used to categorize it.
There is a training phase to learn the input–output map.   | There is no training phase.
Used for prediction problems.                              | Used for analysis problems.
Includes classification and regression.                    | Includes clustering and association.

Endnotes

We have discussed how machine learning problems can be solved using two fundamental learning approaches. Supervised algorithms learn from data features and labels, whereas unsupervised algorithms use only the features, without labels. The choice of algorithm depends on the problem and the kind of data we have in our hands.

