
# Types of Sorting Algorithm

Updated on Jan 2, 2024 17:14 IST

Sorting algorithms are an essential part of computer science and programming. They help to efficiently organize data in a specific order, making it easier to perform various computational tasks such as searching, analyzing, and optimizing algorithms. There are various types of sorting algorithms available, each with its advantages and disadvantages.

In this article, we will discuss six of the most commonly used sorting algorithms - Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, and Heap Sort - along with their working principles, advantages, and disadvantages.


## What is a Sorting Algorithm?

Sorting algorithms in data structures and algorithms are methods to rearrange a set of data into a specific order (typically numerical). These algorithms organize data efficiently for various computational tasks such as searching for items, analyzing data, and optimizing other algorithms that require sorted data as input.

Real-life examples of sorting:

• Library Book Arrangement
• E-Commerce Product Listing
• School Report Card
• Hospital Record Management

## Types of Sorting Algorithm

There are many different sorting algorithms, but here we will discuss the six that are most commonly used:

• Bubble Sort Algorithm
• Selection Sort Algorithm
• Insertion Sort Algorithm
• Merge Sort Algorithm
• Quick Sort Algorithm
• Heap Sort Algorithm

Now, we will discuss them individually in complete detail with their advantages and disadvantages.

## Bubble Sort Algorithm

It is a simple sorting technique to arrange a list of elements (like numbers or letters) in a specific order, usually ascending or descending. The process involves repeatedly stepping through the list, comparing adjacent elements, and swapping them if they are in the wrong order.

Let's understand with the help of an example.

Imagine an array of numbers: [5, 3, 8, 4, 1]

First Pass Through the Array:

• Compare the first two elements (5 and 3). Since 5 is greater than 3, swap them. The array becomes [3, 5, 8, 4, 1].
• Next, compare 5 and 8. No swap is needed as 5 is less than 8.
• Then compare 8 and 4. Swap them since 8 is greater. The array becomes [3, 5, 4, 8, 1].
• Finally, compare 8 and 1. Swap them. The array is now [3, 5, 4, 1, 8].

Second Pass:

• Repeat the process, ignoring the last element (as it's already in its correct position).
• After this pass, the next highest number, 5, will be in its correct position.

Subsequent Passes:

• Continue this process, with each pass requiring one less comparison than the previous.
• The array is sorted once a complete pass requires no swaps.
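The passes described above can be sketched in Python. This is a minimal illustration; the function name `bubble_sort` and the `swapped` flag (which implements the early exit for an already-sorted list) are our own choices, not from the original article.

```python
def bubble_sort(arr):
    """Sort a list in ascending order using bubble sort."""
    a = list(arr)  # work on a copy; the caller's list is untouched
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end,
        # so the last i elements are already in their final positions.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in a full pass: the list is sorted
            break
    return a

print(bubble_sort([5, 3, 8, 4, 1]))  # [1, 3, 4, 5, 8]
```

Running it on the example array above reproduces the passes shown: the largest value reaches the end first, and the loop stops as soon as a pass makes no swaps.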

Advantages:

• Simplicity: It is easy to understand, making it a good educational tool for beginners in programming.
• No Extra Space Needed: Bubble sort is an in-place sorting algorithm, meaning it doesn't require extra space for sorting.
• Detection of Sorted List: It can detect if the list is already sorted, making it efficient for nearly sorted arrays.

Disadvantages:

• Inefficient for Large Lists: It has a time complexity of O(n^2), making it inefficient for large datasets.
• Slow and Less Efficient: Compared to more advanced sorting algorithms, bubble sort is generally slower and less efficient.
• Too Many Swaps: It performs poorly because it makes too many swaps, potentially leading to high processing time.

## Selection Sort Algorithm

It is a straightforward, in-place, comparison-based sorting technique. The idea behind it is to divide the list into two parts: the sorted part at the start of the list and the unsorted part comprising the rest of it.

Initially, the sorted part is empty, and the unsorted part is the entire list. The algorithm repeatedly selects the smallest (or largest, depending on the desired order) element from the unsorted part and moves it to the end of the sorted part.

Let's understand with the help of an example.

Consider an array of numbers: [29, 10, 14, 37, 13].

First Pass:

• Find the smallest element in the array, which is 10.
• Swap it with the first element. The array becomes [10, 29, 14, 37, 13].

Second Pass:

• Find the next smallest element in the remaining unsorted part, which is 13.
• Swap it with the first element of the unsorted part (29). The array is now [10, 13, 14, 37, 29].

Subsequent Passes:

• Continue this process for each element, excluding the last one (as it will already be in its correct place).
• After each pass, the next smallest element is placed in its correct position.
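The passes above can be written as a short Python sketch. The function name `selection_sort` is illustrative; note the single swap per pass, which is the point made in the advantages below.

```python
def selection_sort(arr):
    """Sort a list in ascending order using selection sort."""
    a = list(arr)  # work on a copy
    n = len(a)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part a[i:].
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # One swap per pass moves it to the end of the sorted part.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```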

Advantages:

• Simplicity: Like Bubble Sort, it is easy to understand and implement, making it suitable for educational purposes.
• Memory Efficiency: It is an in-place sort (doesn’t require additional temporary arrays).
• Performance on Small Lists: For small arrays it can perform reasonably well.
• Minimized Swaps: Unlike Bubble Sort, it minimizes the number of swaps, as it only swaps once per pass.

Disadvantages:

• Inefficient on Large Lists: With a time complexity of O(n^2), it becomes inefficient for sorting large datasets.
• Not Adaptive: It does not adapt to the existing order of elements; hence, its performance is constant irrespective of the initial order of elements.
• No Early Termination: Unlike Bubble Sort, it doesn’t stop early if the array becomes sorted before all passes are complete.

## Insertion Sort Algorithm

It is a simple and intuitive sorting algorithm that builds the final sorted list one item at a time. It is much like the way you might sort playing cards in your hands.

Let's take an example to get a better understanding.

Imagine you have an array of numbers: [12, 11, 13, 5, 6].

First Element (Initial Step):

The first element (12) is considered sorted by itself.

Second Element:

• Look at the second element (11). Compare it with elements in the sorted part (12).
• Since 11 is smaller than 12, move 12 one position to the right and place 11 before 12. The array becomes [11, 12, 13, 5, 6].

Third Element:

Look at the third element (13). It’s already in the correct position relative to the first two. No changes are made.

Fourth Element:

Now, consider the fourth element (5). Compare it with each element in the sorted part and shift them one position to the right until the correct position for 5 is found. Insert 5 there. The array becomes [5, 11, 12, 13, 6].

Fifth Element:

Finally, place the last element (6) in its correct position.
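The card-sorting idea above translates directly into Python. This is a minimal sketch; the name `insertion_sort` and the variable `key` (the element being inserted) are our own.

```python
def insertion_sort(arr):
    """Sort a list in ascending order using insertion sort."""
    a = list(arr)  # work on a copy
    for i in range(1, len(a)):
        key = a[i]       # element to insert into the sorted prefix a[:i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements one position right
            j -= 1
        a[j + 1] = key   # drop the element into its correct slot
    return a

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```

Because shifting stops as soon as a smaller-or-equal element is found, a nearly sorted list needs very few moves, which is why the algorithm is called adaptive below.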

Advantages:

• Simple Implementation: It’s straightforward and easy to implement.
• Efficient for Small Data Sets: It works well for small or nearly sorted lists.
• Adaptive: It can adapt to the existing order of elements, performing well when the list is partially sorted.
• Stable: It does not change the relative order of elements with equal keys.
• In-Place: Requires only a constant amount of O(1) of additional memory space.

Disadvantages:

• Inefficient for Large Lists: With a time complexity of O(n^2), it becomes inefficient and slow for larger datasets.
• More Complex Than Other Simple Sorts: Though straightforward, it's more complex to understand and implement than Bubble Sort.
• Many Movements: If the unsorted part has a smaller element towards the end, it requires many shifts.

## Merge Sort Algorithm

It is a highly efficient, comparison-based, divide-and-conquer sorting algorithm. It divides the input array into two halves, calls itself recursively on each half, and then merges the two sorted halves.

Let's take an example to better understand how it works.

Consider an array of numbers: [38, 27, 43, 3, 9, 82, 10].

Divide the Array:

• First, divide the array into two halves. In our example: [38, 27, 43] and [3, 9, 82, 10].
• Keep dividing each half until you have arrays with single elements. Single elements are considered sorted.

Merge Halves:

• Start merging these arrays back together. Merge each pair of single-element arrays into sorted two-element arrays.
• Continue merging these sorted arrays, maintaining the sorted order, until you get back to the full length of the array.
• For instance, merge [38] and [27] into [27, 38], and so on.

Final Sorted Array:

After the final merge, you will have a completely sorted array: [3, 9, 10, 27, 38, 43, 82].
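The divide-and-merge steps can be sketched in Python. This is a minimal recursive version; the name `merge_sort` and the inline merge loop are our own, and the `<=` comparison is what keeps the sort stable, as noted below.

```python
def merge_sort(arr):
    """Sort a list in ascending order using recursive merge sort."""
    if len(arr) <= 1:              # single elements are already sorted
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # sort each half recursively
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves, always taking the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:    # <= preserves the order of equal keys
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])        # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```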

Advantages:

• Efficient and Consistent: It has a time complexity of O(n log n) in all cases, making it more efficient than algorithms like Bubble, Selection, or Insertion Sort, especially for large datasets.
• Stable: Does not change the relative order of elements with equal keys.
• Parallelizable: Merge Sort can be easily parallelized due to its divide-and-conquer nature.
• Good for Large Data Sets: Performs well with large datasets or datasets that do not fit entirely in RAM (External Merge Sort).

Disadvantages:

• Space Complexity: Requires additional space (O(n)) for the temporary array used for merging. This can be a drawback for very large arrays.
• Slower for Small Tasks: For smaller tasks, simpler algorithms like Insertion Sort may be faster due to lower overhead.
• Complex Implementation: More complex to understand and implement compared to more straightforward sorting algorithms like Bubble or Selection Sort.

## Quick Sort Algorithm

It is a highly efficient sorting algorithm based on the divide-and-conquer principle. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot.

Let's take an example to get a better understanding of how it works.

Consider an array of numbers: [10, 80, 30, 90, 40, 50, 70].

Selecting a Pivot:

• Choose a pivot element from the array. Various methods exist for this; a common one is to pick the last element, so here, 70 is the pivot.
• Rearrange the array so that elements smaller than the pivot come before it, while elements greater come after.

Partitioning:

• After the first pass, the array might look like this: [10, 30, 40, 50, 70, 90, 80].
• Here, elements less than 70 are positioned before it, and those greater are after.

Recursive Sort:

• Apply the same process to the sub-arrays formed (elements before and after the pivot).
• Continue this recursively, resulting in a sorted array.
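The pivot-and-partition scheme above can be sketched in Python using the last element as pivot, the same choice as in the example (this is the Lomuto partition scheme; the names `quick_sort` and `partition` are our own).

```python
def quick_sort(a, low=0, high=None):
    """In-place quick sort using the last element as the pivot."""
    if high is None:
        high = len(a) - 1
    if low < high:
        p = partition(a, low, high)   # pivot lands at its final index p
        quick_sort(a, low, p - 1)     # sort elements before the pivot
        quick_sort(a, p + 1, high)    # sort elements after the pivot
    return a

def partition(a, low, high):
    """Rearrange a[low:high+1] around the pivot a[high]; return its index."""
    pivot = a[high]
    i = low - 1                       # boundary of the "less than pivot" zone
    for j in range(low, high):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]  # put the pivot between the zones
    return i + 1

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # [10, 30, 40, 50, 70, 80, 90]
```

Note that this version sorts the list in place, unlike the copying sketches above, which is exactly the space-efficiency point made in the advantages below.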

Advantages:

• Time Efficiency: On average, it has a time complexity of O(n log n), making it one of the fastest sorting algorithms for large datasets.
• Space Efficiency: Being an in-place sort, it does not require additional storage, which is advantageous for large arrays.
• Highly Optimizable: Many variations of Quick Sort can be optimized for specific types of data.

Disadvantages:

• Worst-Case Performance: In the worst-case scenario, such as when the smallest or largest element is always chosen as the pivot, its time complexity degrades to O(n^2).
• Unstable: The relative positioning of equal elements might change, which is not ideal for certain applications.
• Pivot Selection: The choice of pivot can greatly affect performance. Bad pivot selection can lead to poor performance.

## Heap Sort Algorithm

It is a comparison-based sorting algorithm that uses a binary heap data structure. It works by building a heap from the input data, then repeatedly extracting the maximum element from the heap and rebuilding the heap until all elements are sorted.

Let's take an example to get a better understanding of how it works.

Consider an array of numbers: [12, 11, 13, 5, 6, 7].

Building a Max Heap:

• First, transform the array into a max heap (a complete binary tree where each parent node is greater than its child nodes).
• For our array, after building the max heap, it may look like this: [13, 11, 12, 5, 6, 7].

Sorting the Array:

• Swap the root element (13), which is the largest, with the last element in the heap (7). Now, the heap is [7, 11, 12, 5, 6], and the sorted part is [13].
• Rebuild the heap with the remaining elements. After rebuilding, the heap might be [12, 11, 7, 5, 6].

Repeat this process until all elements are removed from the heap and placed in the sorted section. The final sorted array is [5, 6, 7, 11, 12, 13].
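The build-heap and extract-max steps can be sketched in Python with the heap stored inside the array itself, so no extra storage is needed. The names `heap_sort` and `sift_down` are our own; `sift_down` is the iterative "rebuild the heap" step from the walkthrough.

```python
def heap_sort(arr):
    """Sort a list in ascending order using heap sort (max heap in the array)."""
    a = list(arr)  # work on a copy
    n = len(a)

    def sift_down(root, size):
        # Restore the max-heap property for the subtree rooted at `root`,
        # considering only the first `size` elements as part of the heap.
        while True:
            largest, left, right = root, 2 * root + 1, 2 * root + 2
            if left < size and a[left] > a[largest]:
                largest = left
            if right < size and a[right] > a[largest]:
                largest = right
            if largest == root:
                return
            a[root], a[largest] = a[largest], a[root]
            root = largest

    # Build a max heap from the bottom up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Repeatedly move the root (the maximum) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```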

Advantages:

• Time Complexity: Heap Sort has a time complexity of O(n log n) for all cases, which is better than algorithms like Bubble Sort or Insertion Sort, especially for large datasets.
• Space Efficiency: It's an in-place sorting algorithm requiring no additional storage space.
• No Stack Overflow Risk: Unlike typical recursive implementations of Quick Sort or Merge Sort, Heap Sort can be implemented without recursion, so it doesn’t risk stack overflow errors.