Quicksort

Summary: In recent reading, you explored merge sort, a comparatively efficient algorithm for sorting lists or vectors. In this reading, we consider one of the more interesting sorting algorithms, which its inventor called Quicksort.
Alternative Strategies for Dividing Lists

As you may recall, the two key ideas in merge sort are: (1) use the technique known as divide and conquer to divide the list into two halves (and then sort the two halves); (2) merge the sorted halves back together. As we saw in both the reading and the corresponding lab, we can divide the list in almost any way.

Are there better sorting algorithms than merge sort? If our primary activity is to compare values, we cannot do better than some constant times nlog2n steps in the sorting algorithm. However, that hasn't stopped computer scientists from exploring alternatives to merge sort. (In the next course, you'll learn some reasons that we might want such alternatives.)

One way to develop an alternative to merge sort is to split the values in the list in a more sensible way. For example, instead of splitting into about half the elements and the remaining elements, we might choose to divide into the smaller elements and the larger elements. Why would this strategy be better? Well, if we know that every small element precedes every large element, then we can significantly simplify the merge step: we just need to append the two sorted lists together.

```scheme
(define new-sort
  (lambda (lst may-precede?)
    (if (or (null? lst) (null? (cdr lst)))
        lst
        (let ((smaller (small-elements lst may-precede?))
              (larger (large-elements lst may-precede?)))
          (append (new-sort smaller may-precede?)
                  (new-sort larger may-precede?))))))
```

But that's a bit difficult. How do we identify the smaller and larger elements? Typically, we start with the median and segment out those that are less than the median and those that are greater than the median.

```scheme
(define new-sort
  (lambda (lst may-precede?)
    (if (or (null? lst) (null? (cdr lst)))
        lst
        (let ((median (find-median lst)))
          (let ((smaller (small-elements lst may-precede? median))
                (medians (equal-elements lst may-precede? median))
                (larger (large-elements lst may-precede? median)))
            (append (new-sort smaller may-precede?)
                    medians
                    (new-sort larger may-precede?)))))))
```

You'll note that instead of two parts, we're segmenting the list into three parts: the small elements, the median elements, and the large elements. Why do we treat the median separately? It turns out that it makes the algorithm a bit easier. Why a list of medians, rather than a single median? Because we want to support lists with repeated values, and all the values equal to the median should be treated the same. Other variants of this sorting algorithm might include the median elements in one side or the other. However, such versions require consideration of some subtleties that we'd like to avoid.
Identifying Small and Large Elements

It sounds like a great idea, doesn't it? Instead of split and merge, we can sort by writing small-elements, equal-elements, and large-elements. So, how do we write those procedures? It seems fairly straightforward: we just compare each element of the list in turn to the median, keeping it in the appropriate group according to whether it is less than, equal to, or greater than the median. Actually, to keep things general, let's write these procedures to select the elements less than (equal to, greater than) any reasonable chosen value.

Now, how do we find the median? Usually, we sort the values and look at the middle position. Whoops. If we need the median to sort, and we need to sort to find the median, we're stuck in a circular situation. So, what do we do? A computer scientist named C. A. R. Hoare had an interesting suggestion: randomly pick some element of the list and use it as a simulated median. That is, anything smaller than that element is small and anything larger than that element is large. Because it's not the median, we need another name for that element. Traditionally, we call it the pivot. Is using a randomly selected pivot a good strategy? You need more statistics than most of us know to prove formally that it works well. However, practice suggests that it works very well indeed.

You may note that equal-elements does not use equal? to compare the pivot to the car. Why not? Because our ordering may be more subtle. For example, we may be comparing people by height (one of the many values we store for each person), and two non-equal people (e.g., with different names) can still be equal in terms of height.

Once we've defined these three procedures, we then have to update our main sorting algorithm to find the pivot and to use it in identifying small, equal, and large elements. Since we use the sublists only once, we won't even bother naming them. We'll call the new algorithm Quicksort, since that's what Hoare called it.
(Hoare's version was a bit different, but it had the same general theme.)
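The reading describes these procedures without showing them, so here is one way they might look, together with the assembled algorithm. This is a sketch: the exact signatures are assumptions, and random-element (which picks a random element of a list) is the subject of the next section. Note that equal-elements applies may-precede? in both directions rather than using equal?:

```scheme
;; small-elements: select the elements of lst that strictly precede value.
(define small-elements
  (lambda (lst may-precede? value)
    (cond ((null? lst) '())
          ((not (may-precede? value (car lst)))  ; value may not precede it
           (cons (car lst) (small-elements (cdr lst) may-precede? value)))
          (else (small-elements (cdr lst) may-precede? value)))))

;; equal-elements: select the elements equivalent to value.  "Equivalent"
;; means each may precede the other, not that they are equal?.
(define equal-elements
  (lambda (lst may-precede? value)
    (cond ((null? lst) '())
          ((and (may-precede? (car lst) value)
                (may-precede? value (car lst)))
           (cons (car lst) (equal-elements (cdr lst) may-precede? value)))
          (else (equal-elements (cdr lst) may-precede? value)))))

;; large-elements: select the elements that strictly follow value.
(define large-elements
  (lambda (lst may-precede? value)
    (cond ((null? lst) '())
          ((not (may-precede? (car lst) value))  ; it may not precede value
           (cons (car lst) (large-elements (cdr lst) may-precede? value)))
          (else (large-elements (cdr lst) may-precede? value)))))

;; quicksort: sort lst, using a randomly chosen pivot in place of the median.
(define quicksort
  (lambda (lst may-precede?)
    (if (or (null? lst) (null? (cdr lst)))
        lst
        (let ((pivot (random-element lst)))
          (append
           (quicksort (small-elements lst may-precede? pivot) may-precede?)
           (equal-elements lst may-precede? pivot)
           (quicksort (large-elements lst may-precede? pivot) may-precede?))))))
```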
Selecting a Pivot

How do we select a random element from the list? We've done so many times before that the code should be self-explanatory.
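One way to write it (a sketch using the standard random and list-ref procedures):

```scheme
;; random-element: select an element of lst uniformly at random.
(define random-element
  (lambda (lst)
    (list-ref lst (random (length lst)))))
```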
Refactoring

Are we done? In one sense, yes: we have a working sorting procedure. However, good design practice suggests that we look for ways to simplify or otherwise clean up our code. What are the main principles? (1) Don't name anything you don't need to name. (2) Don't duplicate code.

The only thing we've named is the pivot. We've used it three times, which argues for naming it. More importantly, since the pivot is a randomly chosen value, our code will not behave the same (nor will it work correctly) if we substitute `(random-element lst)` for each of the instances of `pivot`. Hence, the naming is a good strategy.

Do we have any duplicated code? Yes, small-elements, equal-elements, and large-elements are very similar. Each scans a list of values and selects those that meet a particular predicate (in the first case, those that precede the pivot; in the second, those equal to the pivot; in the third, those that follow the pivot). Hence, we might want to write a higher-order select procedure that generalizes and parameterizes the similar code. Now, we can write Quicksort without relying on the helpers small-elements, equal-elements, and large-elements.

Can we make this even shorter? Well, the selection of large elements looks a lot like a use of `left-section`, although the result is then negated. Hence, we'll need a higher-order helper, negate, that, given a predicate, returns the opposite predicate. Suppose we use `not-pred?` to name `(negate pred?)`. If `(pred? x)` is `#t`, `(not-pred? x)` is `#f`. Similarly, if `(pred? x)` is `#f`, then `(not-pred? x)` is `#t`. But that's not enough for the equal elements. In that case, we want to combine two predicates into a single predicate. We'll call that combiner both. Now we can write an even more concise definition of Quicksort.
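That refactoring might be sketched as follows. These definitions of select, negate, both, left-section, and right-section are reconstructions rather than the course library's exact code; right-section, which fixes the second argument of a two-argument procedure, is an assumed companion to left-section:

```scheme
;; select: keep the elements of lst for which pred? holds.
(define select
  (lambda (pred? lst)
    (cond ((null? lst) '())
          ((pred? (car lst))
           (cons (car lst) (select pred? (cdr lst))))
          (else (select pred? (cdr lst))))))

;; negate: given a predicate, return the opposite predicate.
(define negate
  (lambda (pred?)
    (lambda (x) (not (pred? x)))))

;; both: combine two predicates into one that requires each to hold.
(define both
  (lambda (pred1? pred2?)
    (lambda (x) (and (pred1? x) (pred2? x)))))

;; left-section / right-section: fix the first (resp. second) argument
;; of a two-argument procedure.
(define left-section
  (lambda (proc left)
    (lambda (right) (proc left right))))
(define right-section
  (lambda (proc right)
    (lambda (left) (proc left right))))

;; The more concise Quicksort.  An element is small when the pivot may
;; not precede it, large when it may not precede the pivot, and equal
;; when each may precede the other.
(define quicksort
  (lambda (lst may-precede?)
    (if (or (null? lst) (null? (cdr lst)))
        lst
        (let ((pivot (random-element lst)))
          (append
           (quicksort (select (negate (left-section may-precede? pivot)) lst)
                      may-precede?)
           (select (both (left-section may-precede? pivot)
                         (right-section may-precede? pivot))
                   lst)
           (quicksort (select (negate (right-section may-precede? pivot)) lst)
                      may-precede?))))))
```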
An Alternate Strategy for Building the Lists of Small, Equal, and Large Values

Some designers might focus not on the duplication of code among small-elements, equal-elements, and large-elements, but instead on the observation that all we're really doing is splitting lst into three lists. They might suggest that instead of writing select, we write a partition procedure that breaks a list into three parts. Here's yet another version of Quicksort that uses this procedure.
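Such a partition procedure, and the Quicksort built on it, might be sketched as follows (the three-list return convention is an assumption). A named let lets us scan the list just once:

```scheme
;; partition: split lst into (list smaller equal larger), relative to
;; pivot, in a single scan.  Each sublist comes back in reverse order,
;; which does not matter, since each is sorted (or all-equal) afterward.
(define partition
  (lambda (lst may-precede? pivot)
    (let kernel ((rest lst) (smaller '()) (equal '()) (larger '()))
      (cond ((null? rest)
             (list smaller equal larger))
            ((not (may-precede? pivot (car rest)))  ; strictly precedes pivot
             (kernel (cdr rest) (cons (car rest) smaller) equal larger))
            ((not (may-precede? (car rest) pivot))  ; strictly follows pivot
             (kernel (cdr rest) smaller equal (cons (car rest) larger)))
            (else                                   ; equivalent to pivot
             (kernel (cdr rest) smaller (cons (car rest) equal) larger))))))

;; quicksort, rebuilt on partition.
(define quicksort
  (lambda (lst may-precede?)
    (if (or (null? lst) (null? (cdr lst)))
        lst
        (let ((parts (partition lst may-precede? (random-element lst))))
          (append (quicksort (car parts) may-precede?)
                  (cadr parts)
                  (quicksort (caddr parts) may-precede?))))))
```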
Which Quicksort is Best?

You've now seen four versions of Quicksort. Which one is best? The last one is probably the most efficient, since it scans the list only once to identify the small, equal, and large elements; the other three scan the list three times, once per category. Different readers find different versions clearer or less clear. You'll need to decide which you like the most from that perspective (and you might even want to think about how you'd express the criteria by which you make your decision).