|Data Structures and Algorithms|
|Comparing Quick and Heap Sorts|
Empirical studies show that, on average, quicksort is considerably faster than heapsort.
Counts of compare and exchange operations made for three different sorting algorithms running on the same data bear this out.
Thus, when an occasional "blowout" to O(n²) is tolerable, we can expect that, on average, quicksort will provide considerably better performance, especially if one of the modified pivot choice procedures is used.
Most commercial applications would use quicksort for its better average performance: they can tolerate an occasional long run (which just means that a report takes slightly longer to produce on full moon days in leap years) in return for shorter runs most of the time.
However, quicksort should never be used in applications which require a guarantee of response time, unless it is treated as an O(n²) algorithm when calculating the worst-case response time. If you have to assume O(n²) time, then, if n is small, you are better off using insertion sort, which has simpler code and therefore smaller constant factors.
And if n is large, you should obviously be using heapsort, for its guaranteed O(n log n) time. Life-critical software (medical monitoring, life support in aircraft and spacecraft) and mission-critical software (monitoring and control in industrial and research plants handling dangerous materials, control for aircraft, defence, etc.) will generally have a response time as part of the system specifications. In all such systems, it is not acceptable to design based on average performance: you must always allow for the worst case, and thus treat quicksort as O(n²).
So far, our best sorting algorithm has O(n log n) performance: can we do any better?
In general, the answer is no.
However, if we know something about the items to be sorted, then we may be able to do better.
But first, we should look at squeezing the last drop of performance out of quicksort.