
Quick Sort: A Deep Dive

  • Writer: Samvar Shah
  • Oct 21, 2024
  • 3 min read

Updated: Nov 13, 2024




What is Quick Sort?


Quick Sort is a foundational comparison-based, divide-and-conquer sorting algorithm. Developed by C. A. R. Hoare in 1960, it works by recursively partitioning a dataset into smaller subsets that are then sorted independently. Quick Sort achieves an average time complexity of O(n log n) and is particularly efficient on large datasets in practice because it sorts in place, has cache-friendly memory access patterns, and carries less memory overhead than many other O(n log n) algorithms.


How Does Quick Sort Work?


  • Pivot Selection: A crucial aspect of Quick Sort is the choice of a pivot element within the array, which serves as the reference point for ordering other elements. There are multiple strategies for pivot selection:

    • First or Last Element: Using either the initial or final element as the pivot.

    • Median-of-Three: Selecting the median of the first, middle, and last elements as the pivot, which helps avoid worst-case scenarios in already ordered arrays.

    • Random Element: Randomly selecting a pivot element, which can statistically reduce the likelihood of poor pivot choices and avoid worst-case performance.


  • Partitioning Operation: Given the pivot, Quick Sort rearranges the elements of the array into two partitions: values less than the pivot and values greater than the pivot. This is achieved in O(n) operations:

    • Two pointers are initialized at the array's start and end and moved toward each other, comparing elements against the pivot so that smaller values end up on the left and larger values on the right.

    • Elements that do not meet the partitioning condition are swapped across the left and right sections.

    • Once completed, the pivot is placed in its final position, leaving two subarrays to the left and right of the pivot.


  • Recursive Decomposition: Quick Sort is then recursively applied to each subarray. Each recursion independently selects a pivot, partitions the subset, and further decomposes until subarrays of size zero or one remain:

    • The recursive subdivision, or divide-and-conquer, continues until reaching base cases of single-element or empty subarrays, which are inherently sorted.


  • Base Case: Recursion terminates when the subarrays reduce to size one or zero, which require no additional sorting. The recursive calls then unwind, yielding a fully sorted array. (A short code sketch of the whole process follows this list.)
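
To make the steps above concrete, here is a minimal Python sketch of the algorithm. It uses the Lomuto partition scheme with the last element as the pivot, which scans once from left to right instead of using the two converging pointers described above; it is one common variant rather than the only way to implement Quick Sort, and the function names are just illustrative.

    def partition(arr, low, high):
        """Partition arr[low..high] around the last element; return the pivot's final index."""
        pivot = arr[high]
        i = low - 1                     # boundary of the region holding values <= pivot
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place the pivot in its final position
        return i + 1

    def quick_sort(arr, low=0, high=None):
        """Sort arr in place by recursively partitioning around a pivot."""
        if high is None:
            high = len(arr) - 1
        if low < high:                    # sub-arrays of size 0 or 1 are the base case
            p = partition(arr, low, high)
            quick_sort(arr, low, p - 1)   # sort everything left of the pivot
            quick_sort(arr, p + 1, high)  # sort everything right of the pivot

    data = [3, 6, 8, 10, 1, 2, 1]
    quick_sort(data)
    print(data)   # [1, 1, 2, 3, 6, 8, 10]

The whole sketch works in place: the only extra memory is the recursion stack, which ties directly into the efficiency points discussed later.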


Detailed Steps of Quick Sort


To illustrate, let's walk through an example:

Example Array: [3, 6, 8, 10, 1, 2, 1]


  1. Choose a Pivot: Suppose we choose the last element, 1, as the pivot.

  2. Partitioning:

    • Initialize two pointers: one at the start (low) and one at the end (high) of the array.

    • Move the low pointer to the right and the high pointer to the left, comparing elements to the pivot.

    • Swap elements to ensure all elements less than the pivot are on the left and those greater are on the right.

    • Place the pivot in its correct position.

    After partitioning, the array might look like [1, 1, 8, 10, 3, 2, 6], with the pivot (one of the 1s) now at index 1, its final sorted position. (The short snippet after these steps reproduces this partition.)

  3. Recursively Apply:

    • Apply Quick Sort to the left sub-array [1] (everything before the pivot) and the right sub-array [8, 10, 3, 2, 6] (everything after it).

    • Repeat the process of choosing a pivot, partitioning, and recursively sorting.

  4. Base Case:

    • Continue recursively sorting until all sub-arrays are of size one or zero.
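
For readers who want to verify step 2, the short self-contained snippet below applies one last-element (Lomuto-style) partition to the example array; it repeats the partition function from the sketch above so it can run on its own.

    def partition(arr, low, high):
        """One Lomuto-style partition around arr[high]; returns the pivot's final index."""
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1

    example = [3, 6, 8, 10, 1, 2, 1]
    p = partition(example, 0, len(example) - 1)
    print(example, "pivot index:", p)   # [1, 1, 8, 10, 3, 2, 6] pivot index: 1

The sub-arrays to sort next are everything before index 1, namely [1], and everything after it, namely [8, 10, 3, 2, 6].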


Why is Quick Sort Efficient?


  • Average-Case Time Complexity: The average-case time complexity of Quick Sort is O(n log n), where n is the number of elements: each level of partitioning does O(n) work, and there are O(log n) levels of recursion on average (a rough empirical check follows this list).

  • In-Place Sorting: Quick Sort is an in-place sorting algorithm; it needs no auxiliary array, only the recursion stack, which is O(log n) deep on average.

  • Cache Efficiency: Quick Sort often performs well with modern computer architectures due to good cache locality.
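
One rough way to see the "O(log n) levels of recursion" claim is to instrument the sort and record the deepest recursion level reached on random input. The sketch below does exactly that; the input sizes and the use of random floats are arbitrary choices for the demonstration, and exact depths will vary from run to run.

    import math
    import random

    def quick_sort_depth(arr, low, high, depth=0):
        """Lomuto quicksort that reports the deepest recursion level it reaches."""
        if low >= high:
            return depth
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1
        left = quick_sort_depth(arr, low, p - 1, depth + 1)
        right = quick_sort_depth(arr, p + 1, high, depth + 1)
        return max(left, right)

    for n in (1_000, 10_000, 100_000):
        data = [random.random() for _ in range(n)]
        deepest = quick_sort_depth(data, 0, len(data) - 1)
        print(f"n={n:>7}  deepest level={deepest:>3}  log2(n)={math.log2(n):.1f}")

On typical runs the deepest level stays within a small multiple of log2(n), which is what keeps the total work near n log n on average.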


Comparing Quick Sort to Other Algorithms


  • Merge Sort: Merge Sort also has a time complexity of O(n log n) but requires additional space for merging. It is stable, meaning equal elements maintain their relative order (a small demonstration of the difference follows this list).

  • Bubble Sort: Bubble Sort has a time complexity of O(n^2) and is significantly slower than Quick Sort, especially on large datasets.
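
The stability difference is easy to observe with keyed records. In the sketch below, items are compared by their numeric key only; Python's built-in sorted (which, like Merge Sort, is stable) keeps the two key-2 records in their original order, while a last-element-pivot quicksort can reorder them. The records themselves are made up purely for illustration.

    def quick_sort_by_key(arr, low=0, high=None):
        """In-place quicksort comparing only item[0]; the last element is the pivot."""
        if high is None:
            high = len(arr) - 1
        if low < high:
            pivot_key = arr[high][0]
            i = low - 1
            for j in range(low, high):
                if arr[j][0] <= pivot_key:
                    i += 1
                    arr[i], arr[j] = arr[j], arr[i]
            arr[i + 1], arr[high] = arr[high], arr[i + 1]
            p = i + 1
            quick_sort_by_key(arr, low, p - 1)
            quick_sort_by_key(arr, p + 1, high)

    records = [(2, "a"), (2, "b"), (1, "c")]      # two records share the key 2
    print(sorted(records, key=lambda r: r[0]))    # stable: [(1, 'c'), (2, 'a'), (2, 'b')]

    quick_records = [(2, "a"), (2, "b"), (1, "c")]
    quick_sort_by_key(quick_records)
    print(quick_records)                          # not stable here: [(1, 'c'), (2, 'b'), (2, 'a')]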


Practical Considerations


While Quick Sort is efficient, there are some considerations:

  • Worst-Case Time Complexity: In the worst case, Quick Sort degrades to O(n^2). This happens when pivot choices are consistently poor, for example a first- or last-element pivot on an already sorted array, where every partition peels off just one element. It can be mitigated by choosing random pivots or using techniques like "median-of-three" (a randomized sketch follows this list).

  • Not Stable: Quick Sort is not a stable sort, so it does not necessarily preserve the order of equal elements.
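
A minimal way to add random pivots to the earlier sketch is to swap a randomly chosen element into the last position before partitioning, so the rest of the code is unchanged. This is one sketch of the idea, not the only way to randomize.

    import random

    def randomized_quick_sort(arr, low=0, high=None):
        """Quicksort that swaps a random element to the end, then partitions as usual."""
        if high is None:
            high = len(arr) - 1
        if low < high:
            r = random.randint(low, high)              # random pivot position
            arr[r], arr[high] = arr[high], arr[r]      # move the pivot to the end
            pivot = arr[high]
            i = low - 1
            for j in range(low, high):
                if arr[j] <= pivot:
                    i += 1
                    arr[i], arr[j] = arr[j], arr[i]
            arr[i + 1], arr[high] = arr[high], arr[i + 1]
            p = i + 1
            randomized_quick_sort(arr, low, p - 1)
            randomized_quick_sort(arr, p + 1, high)

    already_sorted = list(range(20))                   # a worst case for a fixed last-element pivot
    randomized_quick_sort(already_sorted)
    print(already_sorted == sorted(already_sorted))    # True

With a random pivot, no fixed input ordering can trigger the quadratic behaviour consistently, and the expected running time remains O(n log n).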


Conclusion


Quick Sort remains one of the most widely used sorting algorithms due to its average-case efficiency and in-place sorting capabilities. By dividing the problem into smaller sub-problems and solving them recursively, Quick Sort efficiently sorts arrays with minimal additional memory usage.
