Quicksort and k-th smallest elements

Finding the minimum element of a list can be implemented by first sorting the list and then taking the first element. Thanks to lazy evaluation, this is still efficient and takes time O(n)! We also discuss how this generalizes to finding the k-th smallest element.

This article is a bunch of mailing list posts for now.

Introduction

In a lazy functional language, finding the minimum element of a list can be implemented as

minimum :: Ord a => [a] -> a
minimum = head . sort

If the sort function is lazy enough, this will take O(n) time instead of the expected O(n log n) that would be needed to first sort the list and then extract the first element. Moreover, we can also query the k least elements

smallest k = take k . sort

in O(n + k log n) time.
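
For a first taste, here is a quick GHCi session (GHC's Data.List.sort happens to be a lazy-friendly bottom-up mergesort, so these calls really are fast; the session is just an illustration):

  ghci> import Data.List (sort)
  ghci> head (sort [7,3,9,1,5])
  1
  ghci> take 3 (sort [7,3,9,1,5])
  [1,3,5]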

The following three messages to the general Haskell mailing list from March 2007 explain and explore this. The message thread was started by Steffen Mazanek, who asked whether the above would work.

Message 1 - Mergesort

Steffen Mazanek wrote:

say, we want to find the k’th element in a sorted list.

I assume that you want to find the k’th smallest element of an unsorted list by sorting it?

[…]

However, I was wondering whether it might be possible to implement quicksort in a way that quicksearch is done out of the box by lazy evaluation. Is there any way to do this?

Yes, that can be done. It’s a small extension of the folklore (?) example

minimum = head . mergesort

that extracts the smallest element of a list. Given a proper mergesort, laziness will ensure that this function actually runs in O(n) time instead of the expected O(n log n). Apparently, the first k smallest elements can now be obtained by

take k . mergesort

which may run in O(n + k*log n) time with a laziness-friendly mergesort.

From my understanding for small k’s lazy evaluation already does the trick for the naive quicksort algorithm (quicksort smaller ++ [x] ++ quicksort greater), doesn’t it? Is there a search algorithm that makes better use of lazy evaluation out of the box?

No, as far as I thought about quicksort, it needs O(n log n) to produce the first element. But even the mergesort implementation has to be chosen carefully as I’ve seen many that still need O(n log n) just to return the smallest element.

Alas, here is a laziness-friendly mergesort:

mergesort []  = []
mergesort xs  = foldtree1 merge $ map return xs  -- map return yields singleton lists

foldtree1 f [x] = x
foldtree1 f xs  = foldtree1 f $ pairs xs
    where
    -- combine adjacent elements with f
    pairs []        = []
    pairs [x]       = [x]
    pairs (x:x':xs) = f x x' : pairs xs

merge []     ys     = ys
merge xs     []     = xs
merge (x:xs) (y:ys) =
    if x <= y then x:merge xs (y:ys) else y:merge (x:xs) ys
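
A quick sanity check of these definitions in GHCi:

  ghci> head (mergesort [4,2,7,1,3])
  1
  ghci> take 2 (mergesort [4,2,7,1,3])
  [1,2]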

Why does this work? The function ‘foldtree1’ folds the elements of a list as if they were in a binary tree:

  foldtree1 f [1,2,3,4,5,6,7,8]
 ==>
  ((1 `f` 2) `f` (3 `f` 4)) `f` ((5 `f` 6) `f` (7 `f` 8))

The important point is that this expression is constructed in O(n + n/2 + n/4 + n/8 + ..) = O(n) time. Now, given such a binary tree of merge operations, getting the next list element of the result takes O(log n) time. Why? Because the merge in the middle gets the next element either from its left or its right subexpression which in turn gets its element from its left or its right subexpression and so on. Clearly, we only need logarithmically many steps to hit the bottom. Putting both together, we see that we have to pay O(n + k*log n) steps to build the expression and to fetch the first k elements.

Making this argument rigorous with a suitable debit invariant is left to the attentive reader :)

Regards,

apfelmus

Message 2 - Quicksort

apfelmus wrote:

Steffen Mazanek wrote:

From my understanding for small k’s lazy evaluation already does the trick for the naive quicksort algorithm (quicksort smaller ++ [x] ++ quicksort greater), doesn’t it? Is there a search algorithm that makes better use of lazy evaluation out of the box?

No, as far as I thought about quicksort, it needs O(n log n) to produce the first element. But even the mergesort implementation has to be chosen carefully as I’ve seen many that still need O(n log n) just to return the smallest element.

Apparently, I did not think enough about quicksort :) as it is well capable of delivering the minimum element in O(n) time. In fact, given proper pivot choice,

take k . qsort

is the optimal O(n + k log k) algorithm for returning the first k smallest elements in sorted order! See also

http://en.wikipedia.org/wiki/Selection_algorithm

Let’s assume that the quicksort in

qsort []     = []
qsort (x:xs) = qsort ls ++ [x] ++ qsort rs
    where
    ls = filter (<  x) xs
    rs = filter (>= x) xs

always uses a good pivot x, i.e. one for which ls and rs have around n/2 elements each. As long as this n/2 is greater than k, taking the first k elements does not evaluate the second recursive call to qsort. In other words, it takes

  O(n)   -- for filtering xs during the initial call to qsort
+ O(n/2) -- same as above but with the first half of the list
+ O(n/4) -- filtering the half of the half
+ ...
+ O(k)
________
< O(n + n/2 + n/4 + n/8 + ...) = O(n)

time until ls has fewer than k elements. When this becomes the case, the argument list to the recursive call to qsort has a size of at most 2*k, and it takes at most O(k log k) time to finish sorting it completely and take the first k elements of it. This gives a total of O(n + k log k).
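
One way to watch this evaluation order is to instrument the recursion with Debug.Trace; the traced variant qsort' below is a throwaway sketch of mine, not code from the original thread:

import Debug.Trace (trace)

-- same algorithm as qsort, but every sublist that actually
-- gets sorted announces its size first
qsort' :: Ord a => [a] -> [a]
qsort' xs
    | trace ("qsort on " ++ show (length xs) ++ " elements") False = undefined
qsort' []     = []
qsort' (x:xs) = qsort' ls ++ [x] ++ qsort' rs
    where
    ls = filter (<  x) xs
    rs = filter (>= x) xs

Demanding take 1 (qsort' ys) for a shuffled list ys prints sizes that shrink roughly geometrically (n, ~n/2, ~n/4, ...) rather than one line per recursive call, because the qsort' rs thunks to the right of each pivot are never forced.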

Regards,

apfelmus

Message 3 - Just one k-th element

Steffen Mazanek wrote:

ok, but this still means that you have to rewrite the algorithm to get an efficient qsearch for arbitrary k.

You mean the case when you only want the k-th minimum but not the others? Well, you can make this work with qsort, too.

The fundamental solution would be to change the data type returned by qsort to better support random access to the k-th element. A binary search tree that only gets partially evaluated is well suited for that.
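
As a minimal sketch of that idea (my own illustration, not code from the thread; the names Tree, qtree and index are made up), quicksort can return a lazily built search tree that stores the size of each left subtree, so that indexing forces only the path to the requested element:

data Tree a = Leaf | Node Int (Tree a) a (Tree a)  -- Int = size of left subtree

qtree :: Ord a => [a] -> Tree a
qtree []     = Leaf
qtree (x:xs) = Node (length ls) (qtree ls) x (qtree rs)
    where
    ls = filter (<  x) xs
    rs = filter (>= x) xs

-- k-th smallest element, counting from 0; only the path
-- from the root down to it is ever evaluated
index :: Tree a -> Int -> a
index Leaf           _ = error "index: out of range"
index (Node n l x r) k
    | k <  n    = index l k
    | k == n    = x
    | otherwise = index r (k - n - 1)

With good pivots, index (qtree xs) k evaluates the filters along a single root-to-element path only, so it runs in O(n) for the same reason as the analysis in Message 2.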

But I think you can get the desired behavior without changing the type of qsort, simply by explicitly introducing the invariant that qsort doesn’t change the length of the list.

qsort []     = []
qsort (x:xs) =
    zip2nd ls (qsort ls) ++ [x] ++ zip2nd rs (qsort rs)
    where
    ls = filter (<  x) xs
    rs = filter (>= x) xs

    -- forces the spine of the result to have the same length
    -- as the first argument, without touching the elements
    zip2nd []     _       = []
    zip2nd (_:xs) ~(y:ys) = y : zip2nd xs ys

Now (assuming proper pivot again),

qsort xs !! k

only takes O(n) time to retrieve the k-th minimum because only the relevant recursive calls to qsort will be evaluated. The curious reader may recognize what should probably be coined the “blueprint technique”.
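
For instance (indexing with !! is 0-based, so !! 2 asks for the third smallest element):

  ghci> qsort [5,1,4,2,3] !! 2
  3

Thanks to zip2nd, the spine of each half is available without sorting it, so (!!) can skip over the half that does not contain the answer.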

Are there any popular algorithms whose overall complexity is improved by laziness? Or where you get such special cases as a free lunch?

Well, it’s highly unlikely that algorithms get faster by introducing laziness. I mean, lazy evaluation means to evaluate only those things that are really needed, and any good algorithm will be formulated in a way such that the unnecessary things have already been stripped off. But laziness allows us to simplify and compose algorithms. Sometimes, seemingly different algorithms turn out to be two sides of the same coin when formulated with lazy evaluation. Isn’t it great that finding the k-th minimum is not only an adaptation of quicksort but can readily be obtained from it by composing it with (!! k)?

Regards,

apfelmus
