In the past few months I have spent some time on the in-depth analysis of Quicksort. Even the average-case analysis as it’s laid out in Knuth’s book *The Art of Computer Programming* took me a little while to digest, and now I have looked at a very closely related algorithm.

Consider the problem of finding the $t$-th largest element in a set of $n$ distinct real numbers. We shall use the ideas of the Quicksort algorithm to solve this task, and ask the question: how many comparisons are needed on the average? The corresponding algorithm is sometimes called “Quickfind” or “Quickselect”.

This problem is taken from an exercise in Knuth’s book (Sect. 5.2.2, Ex. 32). The exercises are rated according to the effort needed to solve them: an exercise rated “00” is immediately clear, while “50” is a research problem. The interpolation takes place on a logarithmic scale, meaning that the exercises between “20” and “30” are challenging to me, but can be done with a little effort. The exercise solved here is rated “40”, and rightly so, as we shall see. The solution to the exercise in Knuth’s book is quite brief: the basic idea of the recursion is stated and then there is the final result. For a reference, Knuth cites his own paper “Mathematical Analysis of Algorithms” from the Proceedings of the IFIP Congress 71 (1972). It has been reprinted in Knuth’s “Selected Papers on Analysis of Algorithms”. In what follows, we will rely heavily on this paper, even though there is a slight mismatch in how Quickfind is programmed.

First, let’s have a brief look at the Quicksort algorithm. It was designed by Hoare around 1960 as a sorting algorithm. You take a certain element of the set of real numbers and call it the pivot. You scan the set from left to right looking for elements greater than the pivot, then you scan from right to left looking for elements less than the pivot. When you find two such elements, you swap them and continue scanning. When you have seen each element of your set, you stop and put the pivot at the point where your scans have crossed. Thus, you have placed the pivot in its correct position, having partitioned the set into two parts, one of which only contains elements less than the pivot, the other of which only contains elements greater than the pivot. Then you can continue in the new, smaller parts of the set. In the end, you have sorted the entire set (quite efficiently in most cases).

In our present task, we can save some of the trouble of Quicksort. We don’t want to sort the entire set; we’re only looking for the $t$-th largest element. So, after the partitioning phase, we have to look at the position where the pivot has come up. If the pivot position is smaller than $t$, we’ll have to look at the right-hand side of our set; if it is larger than $t$, we’ll look at the left-hand side. The other side is unimportant for the task at hand. That saves, heuristically, half the effort of Quicksort.
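To fix ideas, here is a minimal sketch of Quickfind in Python (my own illustration, not taken from Knuth’s paper; I use the simpler Lomuto-style partition instead of Hoare’s two crossing scans, so the exact comparison counts differ slightly, but the recursive structure is the one analyzed below):

```python
def quickselect(xs, t):
    """Return the t-th largest element (t = 1 means the maximum)
    of a list of distinct numbers, Quickfind-style."""
    xs = list(xs)
    lo, hi = 0, len(xs) - 1
    k = len(xs) - t                       # index of the t-th largest in ascending order
    while True:
        pivot = xs[hi]
        i = lo
        for j in range(lo, hi):           # partition around the pivot
            if xs[j] < pivot:
                xs[i], xs[j] = xs[j], xs[i]
                i += 1
        xs[i], xs[hi] = xs[hi], xs[i]     # pivot lands in its final position i
        if i == k:
            return xs[i]
        elif i < k:
            lo = i + 1                    # continue in the right-hand part
        else:
            hi = i - 1                    # continue in the left-hand part
```

For instance, `quickselect([3, 1, 4, 9, 5], 1)` returns `9`, and `quickselect([3, 1, 4, 9, 5], 3)` returns `4`.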

It is well-known that Quicksort takes about $2n\ln n$ comparisons on the average to sort a set of $n$ elements. Quickfind, however, actually does better: it’s $O(n)$. You can shallowly argue like this: assuming that you can split the set in halves each time, there will be roughly $n + \tfrac n2 + \tfrac n4 + \cdots \approx 2n$ operations. Of course, the set will not split properly like this each time, and you cannot simply expect $O(n)$ operations on a set of $n$ elements (that’s what we want to prove, mind you). So, we’ll look a little deeper right away.

Before we start, we note that in special cases one can obviously do without Quickfind: if you look for the largest element (which amounts to the case $t = 1$), a single scan keeping track of the running maximum will do, with $n-1$ comparisons and without the overhead of Quickfind. There will be a break-even point for your engine, where repeated scans are superior to Quickfind for finding the $t$-th largest element. Besides, note the symmetry: whatever we say about maxima applies to minima in exactly the same way.
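Such a scan is a one-liner in spirit; a sketch (again my own illustration):

```python
def find_max(xs):
    """The case t = 1: a single left-to-right scan with a running maximum,
    using exactly len(xs) - 1 comparisons."""
    it = iter(xs)
    best = next(it)
    for x in it:
        if x > best:   # one comparison per remaining element
            best = x
    return best
```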

Let us designate the average number of comparisons needed to find the $t$-th largest element in a set of $n$ real numbers by $C_{n,t}$. Obviously, we need $1 \le t \le n$, and there is $C_{1,1} = 0$. In what follows, we will prove:

$$C_{n,t} = 2\Bigl((n+1)H_n - (n+2-t)H_{n+1-t} - (t+1)H_t + n\Bigr) + \frac{10}{3} - \frac13\bigl(\delta_{t,1} + \delta_{t,n}\bigr) - \frac23\,\delta_{n,1}.$$

Here, $H_n = \sum_{k=1}^{n} \frac1k$ is the $n$-th harmonic number, and $\delta_{i,j}$ is Kronecker’s symbol: $\delta_{i,j} = 1$ if $i = j$, and $\delta_{i,j} = 0$ otherwise.

Remember, we have $1 \le t \le n$. In the end, we shall have a particular look at the case $t = n/2$ and consider a Taylor expansion of this.

Let us start with a recursive expression for $C_{n,t}$. During the first partitioning phase, you will have to compare every element of the set against the pivot (that makes $n-1$ comparisons, because you won’t compare the pivot to itself), and when the scans cross each other, you will inevitably scan two elements twice. Hence, this first partitioning yields $n+1$ comparisons. So, you can put the pivot in its proper position, and now three cases are possible:

- the pivot is in position $t$, so we are done, with no more comparisons necessary.
- the pivot is in a position $k < t$, say. So, we have to continue with the elements in positions $k+1, \dots, n$. There are $n-k$ of them, and we will look for the $(t-k)$-th largest in this subset. There will be $C_{n-k,t-k}$ further comparisons.
- the pivot is in a position $k > t$, say. So, we have to continue with the elements in positions $1, \dots, k-1$. There are $k-1$ of them, and we will look for the $t$-th largest in this subset. There will be $C_{k-1,t}$ further comparisons.

Beforehand, any of the $n$ real numbers had equal chances of being the pivot, so the recursion reads as follows:

$$C_{n,t} = n + 1 + \frac1n \left( \sum_{k=1}^{t-1} C_{n-k,t-k} + \sum_{k=t+1}^{n} C_{k-1,t} \right), \qquad n \ge 2.$$
As a warm-up, we start with the special case $t = 1$. As we have already seen, $C_{1,1} = 0$; the recursion immediately gives $C_{2,1} = 3$. Now, let $n \ge 3$:

$$n\,C_{n,1} - (n-1)\,C_{n-1,1} = n(n+1) - (n-1)n + C_{n-1,1} = 2n + C_{n-1,1},$$

which tells us

$$C_{n,1} = C_{n-1,1} + 2.$$

Keeping in mind that this step only works for $n \ge 3$ (to avoid the eerie case of vacuous sums and the special value $C_{1,1}$ in the recursion above), we still need the base case $C_{2,1} = 3$, and so, by induction, we arrive at

$$C_{n,1} = 2n - 1 \qquad (n \ge 2).$$
For reasons of symmetry, we can handle the case $t = n$ by considering the case $t = 1$:

$$C_{n,n} = C_{n,1} = 2n - 1 \qquad (n \ge 2).$$
Now, let’s look at the recursion formula for general $t$. By divine intuition, we have a look at:

$$n\,C_{n,t} - (n-1)\,C_{n-1,t} - (n-1)\,C_{n-1,t-1} + (n-2)\,C_{n-2,t-1}.$$

Plugging in the recursion four times, almost all of the sums cancel. That leads to:

$$n\,C_{n,t} - (n-1)\,C_{n-1,t} - (n-1)\,C_{n-1,t-1} + (n-2)\,C_{n-2,t-1} = 2 + C_{n-1,t} + C_{n-1,t-1} - 2\,C_{n-2,t-1}$$

and to

$$C_{n,t} = C_{n-1,t} + C_{n-1,t-1} - C_{n-2,t-1} + \frac2n.$$

Note that we can only write this down if $t \ge 2$, since $C_{n-2,t-1}$ won’t make sense otherwise. The recursion applies to any $t \le n-1$ and $n \ge 4$, because the basic reasoning behind the recursion is only possible for these values ($t = n$ is always a very special case, as we have already seen above, and so are the small base cases).

Let us ignore additive constants for now, and in particular let us ignore the (quite nasty) base cases. The recursion will tell us how to go on from these base cases, so they only serve as an additive offset to the closed-form expression that we want to achieve. We iterate, first in $n$ for fixed $t$:

$$C_{n,t} - C_{n-1,t-1} = C_{n-1,t} - C_{n-2,t-1} + \frac2n = \cdots = 2 + 2\bigl(H_n - H_t\bigr).$$

That means, iterating along the diagonal down to the known case $t = 1$ (where $C_{n-t+1,1} = 2(n-t) + 1$),

$$C_{n,t} = 2n - 1 + 2\sum_{j=2}^{t}\bigl(H_{n-t+j} - H_j\bigr).$$
Let us now look at

$$\sum_{j=2}^{t} H_{n-t+j} = \sum_{m=n-t+2}^{n} H_m = (n+1)H_n - (n+2-t)H_{n+1-t} - t + 1, \qquad \sum_{j=2}^{t} H_j = (t+1)H_t - t - 1.$$

Here, we have used the useful relation $\sum_{k=1}^{n} H_k = (n+1)H_n - n$, which can be proved without much effort by an interchange of summations.
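For completeness, the interchange of summations runs as follows:

```latex
\sum_{k=1}^{n} H_k
  = \sum_{k=1}^{n} \sum_{j=1}^{k} \frac{1}{j}
  = \sum_{j=1}^{n} \frac{1}{j} \sum_{k=j}^{n} 1
  = \sum_{j=1}^{n} \frac{n+1-j}{j}
  = (n+1) H_n - n.
```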

Then we get

$$C_{n,t} = 2\Bigl((n+1)H_n - (n+2-t)H_{n+1-t} - (t+1)H_t + n\Bigr) + 3.$$
Be aware that this is not yet the whole truth: we have not been concerned with the base cases. Let us therefore compute several particular values of $C_{n,t}$ to determine the additive constants that connect the recursion to its base cases.

By definition (that is, straight from the defining recursion):

$$C_{3,2} = 4 + \tfrac13\bigl(C_{2,1} + C_{2,2}\bigr) = 6,$$

while by the closed-form expression just derived:

$$2\bigl(4H_3 - 3H_2 - 3H_2 + 3\bigr) + 3 = \tfrac83 + 3 = \tfrac{17}3.$$

Hence, we have to add $\tfrac13$ to our solution to the recursion for any of the “standard” values of $t$ and $n$. The boundary cases $t = 1$ and $t = n$ are different, and so is $n = 1$.

For $t = 1$, we already know $C_{n,1} = 2n - 1$, which should equal

$$2\bigl((n+1)H_n - (n+1)H_n - 2H_1 + n\bigr) + 3 = 2n - 4 + 3 = 2n - 1.$$

Hence, in this case, no additive constant is necessary. For $t = n$, the same holds because of symmetry.

Finally, for $n = 1$, we already know $C_{1,1} = 0$. On the other hand,

$$2\bigl(2H_1 - 2H_1 - 2H_1 + 1\bigr) + 3 = -2 + 3 = 1,$$

thus we need the constant $-1$ in this case.

We shall write this down from the perspective of the “standard” values, yielding a constant of $3 + \tfrac13 = \tfrac{10}3$. The other cases are considered the special ones and are done away with using Kronecker’s delta. Altogether, we have found what we claimed at the beginning:

$$C_{n,t} = 2\Bigl((n+1)H_n - (n+2-t)H_{n+1-t} - (t+1)H_t + n\Bigr) + \frac{10}3 - \frac13\bigl(\delta_{t,1} + \delta_{t,n}\bigr) - \frac23\,\delta_{n,1}.$$
This can in retrospect easily be verified by numerical computations. Note the symmetry of this expression with respect to $t$ and $n+1-t$: it is unimportant whether you are looking for the $t$-th largest or the $t$-th smallest number in your set.
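Such a numerical check is quickly done in Python with exact rational arithmetic (my own script, not from Knuth: `C` implements the defining recursion, `closed_form` the formula with the delta corrections; the two agree for every $1 \le t \le n$ in the tested range):

```python
from fractions import Fraction

def H(n):
    """n-th harmonic number, as an exact fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def C(n, t, memo={}):
    """Average comparison count of Quickfind, straight from the recursion
    C(n,t) = n + 1 + (1/n) (sum_{k<t} C(n-k, t-k) + sum_{k>t} C(k-1, t))."""
    if n == 1:
        return Fraction(0)
    if (n, t) not in memo:
        s = sum(C(n - k, t - k) for k in range(1, t)) \
          + sum(C(k - 1, t) for k in range(t + 1, n + 1))
        memo[(n, t)] = n + 1 + s / n
    return memo[(n, t)]

def closed_form(n, t):
    """The closed-form expression, including the Kronecker-delta corrections."""
    delta = Fraction(1, 3) * ((t == 1) + (t == n)) + Fraction(2, 3) * (n == 1)
    return (2 * ((n + 1) * H(n) - (n + 2 - t) * H(n + 1 - t)
                 - (t + 1) * H(t) + n) + Fraction(10, 3) - delta)
```

Checking `C(n, t) == closed_form(n, t)` for all $1 \le t \le n \le 12$ confirms the formula exactly, e.g. $C_{5,3} = \tfrac{57}{5}$.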

In some way, I’m tempted to borrow a notion from Pisier (Lecture Notes in Mathematics 770, pp. 312–327): ending a rough proof with a heartfelt “Oof!” I do acknowledge, though, that Pisier’s proof is way more complicated than what I did here…

The intuitive heuristics in the beginning have told us that the Quickfind algorithm should take linear time in $n$ on the average. In our closed-form expression, we can see terms of the form $(n+1)H_n$, which is approximately equal to $n\ln n$; but when you have a closer look at the Taylor expansion of the whole expression, the $n\ln n$-terms will cancel. In particular, any $t$ you may want to choose will take linear time in $n$. The naive algorithm of having $t$ passes over your set and kicking the maximum out each time will take about $tn$ comparisons; this may be superior for some small $t$, but when you are interested in the median, say, this yields on the order of $n^2$ comparisons. Now, what have we found out for the important special case of finding the median of $n$ real numbers?

We are interested in the case $t = n/2$ for sufficiently large $n$ and find

$$C_{n,n/2} = 2\,(1+\ln 2)\,n - 4\ln n + O(1) \approx 3.39\,n.$$
It seems hard to think of an algorithm picking the median of a set that is more efficient on the average than this one: even the constant $2(1+\ln 2) \approx 3.39$ is rather small. Note, however, that the worst-case analysis will yield a much larger number of necessary comparisons, and we have said nothing about higher moments (indicating the variation around the average case). Here, the closely related “median of medians” algorithm can be used, and one can select the pivot in Quickselect at random, for instance. But anyway, this seems like a good place to stop.