**Quick sort**

We have a list of items that can be sorted. Their type is an instance of the `Ord` typeclass. And now, we want to sort them! There’s a very cool algorithm for sorting called `quicksort`. It’s a very clever way of sorting items. While it takes upwards of 10 lines to implement quicksort in imperative languages, the implementation is much shorter and more elegant in Haskell. `quicksort` has become a sort of poster child for Haskell. Therefore, let’s implement it here, even though implementing `quicksort` in Haskell is considered really cheesy because everyone does it to showcase how elegant Haskell is.

So, the type signature is going to be `quicksort :: (Ord a) => [a] -> [a]`. No surprises there. The edge condition? Empty list, as is expected. A sorted empty list is an empty list. Now here comes the main algorithm: a sorted list is a list that has all the values smaller than (or equal to) the head of the list in front (and those values are sorted), then comes the head of the list in the middle and then come all the values that are bigger than the head (they’re also sorted). Notice that we said *sorted* two times in this definition, so we’ll probably have to make the recursive call twice! Also notice that we defined it using the verb *is* to define the algorithm instead of saying *do this, do that, then do that* …. That’s the beauty of functional programming! How are we going to filter the list so that we get only the elements smaller than the head of our list and only elements that are bigger? List comprehensions. So, let’s dive in and define this function.

**qs.hs**

```haskell
quicksort :: (Ord a) => [a] -> [a]
quicksort [] = []
quicksort (x:xs) =
    let smallerSorted = quicksort [a | a <- xs, a <= x]
        biggerSorted  = quicksort [a | a <- xs, a > x]
    in  smallerSorted ++ [x] ++ biggerSorted
```

Let’s give it a small test run to see if it appears to behave correctly.

```
Prelude> :l qs
[1 of 1] Compiling Main             ( qs.hs, interpreted )
Ok, modules loaded: Main.
*Main> quicksort [10,2,5,3,1,6,7,4,2,3,4,8,9]
[1,2,2,3,3,4,4,5,6,7,8,9,10]
*Main> quicksort "the quick brown fox jumps over the lazy dog"
"        abcdeeefghhijklmnoooopqrrsttuuvwxyz"
```

Booyah! That’s what I’m talking about! So if we have, say, `[5,1,9,4,6,7,3]` and we want to sort it, this algorithm will first take the head, which is `5`, and then put it in the middle of two lists that are smaller and bigger than it. So at one point, you’ll have `[1,4,3] ++ [5] ++ [9,6,7]`. We know that once the list is sorted completely, the number 5 will stay in the fourth place since there are 3 numbers lower than it and 3 numbers higher than it. Now, if we sort `[1,4,3]` and `[9,6,7]`, we have a sorted list! We sort the two lists using the same function. Eventually, we’ll break it up so much that we reach empty lists and an empty list is already sorted in a way, by virtue of being empty. Here’s an illustration:
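The splits described above can be sketched as an expansion trace. The definition is the one from qs.hs, repeated here so the file is self-contained; the intermediate lists in the comments are the ones from the paragraph above.

```haskell
-- quicksort from qs.hs, repeated so this sketch compiles on its own.
quicksort :: (Ord a) => [a] -> [a]
quicksort [] = []
quicksort (x:xs) =
    let smallerSorted = quicksort [a | a <- xs, a <= x]
        biggerSorted  = quicksort [a | a <- xs, a > x]
    in  smallerSorted ++ [x] ++ biggerSorted

-- How the first couple of steps unfold for [5,1,9,4,6,7,3]:
--   quicksort [5,1,9,4,6,7,3]
-- = quicksort [1,4,3] ++ [5] ++ quicksort [9,6,7]
-- = ([] ++ [1] ++ quicksort [4,3]) ++ [5] ++ (quicksort [6,7] ++ [9] ++ [])
-- and so on, until every sublist is empty.

main :: IO ()
main = print (quicksort [5,1,9,4,6,7,3])  -- prints [1,3,4,5,6,7,9]
```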

An element that is in place and won’t move anymore is represented in *orange*. If you read them from left to right, you’ll see the sorted list. Although we chose to compare all the elements to the heads, we could have used any element to compare against. In `quicksort`, an element that you compare against is called a pivot. They’re in *green* here. We chose the head because it’s easy to get by pattern matching. The elements that are smaller than the pivot are *light green* and elements larger than the pivot are *dark green*. The *yellowish* gradient thing represents an application of `quicksort`.
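To see that any element works as a pivot, here is a sketch (not from the original) of the same algorithm with the last element as the pivot instead of the head; the name `quicksortLast` is ours. It needs `last` and `init` instead of a simple pattern match, which is exactly why the head is the more convenient choice.

```haskell
-- Same partitioning idea, but pivoting on the last element of the list.
-- quicksortLast is a hypothetical name, not part of qs.hs.
quicksortLast :: (Ord a) => [a] -> [a]
quicksortLast [] = []
quicksortLast xs =
    let p    = last xs  -- the pivot: last element instead of the head
        rest = init xs  -- everything except the pivot
        smallerSorted = quicksortLast [a | a <- rest, a <= p]
        biggerSorted  = quicksortLast [a | a <- rest, a > p]
    in  smallerSorted ++ [p] ++ biggerSorted

main :: IO ()
main = print (quicksortLast [10,2,5,3,1,6,7,4,2,3,4,8,9])
```

The result is the same sorted list as before; only the shape of the recursion tree changes with the pivot choice.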

**Thinking recursively**

We did quite a bit of recursion so far and as you’ve probably noticed, there’s a pattern here. Usually you define an edge case and then you define a function that does something between some element and the function applied to the rest. It doesn’t matter if it’s a list, a tree or any other data structure. A sum is the first element of a list plus the sum of the rest of the list. A product of a list is the first element of the list times the product of the rest of the list. The length of a list is one plus the length of the tail of the list. Etcetera, etcetera …
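The pattern described above can be sketched directly in Haskell. The primed-style names `mySum`, `myProduct` and `myLength` are ours, chosen to avoid clashing with the Prelude’s `sum`, `product` and `length`:

```haskell
-- Each function does something with the head and recurses on the tail.
mySum :: (Num a) => [a] -> a
mySum []     = 0                      -- edge case
mySum (x:xs) = x + mySum xs           -- head plus sum of the rest

myProduct :: (Num a) => [a] -> a
myProduct []     = 1                  -- edge case
myProduct (x:xs) = x * myProduct xs   -- head times product of the rest

myLength :: [a] -> Int
myLength []     = 0                   -- edge case
myLength (_:xs) = 1 + myLength xs     -- one plus length of the tail

main :: IO ()
main = print (mySum [1,2,3], myProduct [1,2,3], myLength [1,2,3])
```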

Of course, these also have edge cases. Usually the edge case is some scenario where a recursive application doesn’t make sense. When dealing with lists, the edge case is most often the empty list. If you’re dealing with trees, the edge case is usually a node that doesn’t have any children.

It’s similar when you’re dealing with numbers recursively. Usually it has to do with some number and the function applied to that number modified. We did the factorial function earlier and it’s the product of a number and the factorial of that number minus one. Such a recursive application doesn’t make sense with zero, because factorials are defined only for positive integers. Often the edge case value turns out to be an identity. The identity for multiplication is 1 because if you multiply something by 1, you get that something back. Also when doing sums of lists, we define the sum of an empty list as 0 and 0 is the identity for addition. In quicksort, the edge case is the empty list and the identity is also the empty list, because if you add an empty list to a list, you just get the original list back.
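Putting those two observations together, the factorial function from earlier looks like this (repeated here as a self-contained sketch): the edge case is zero, and its value is 1, the identity for multiplication.

```haskell
-- The edge case stops the recursion, and its value is the identity
-- for the operation being repeated (multiplication).
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

main :: IO ()
main = print (factorial 5)  -- 5 * 4 * 3 * 2 * 1 * 1 = 120
```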

So when trying to think of a recursive way to solve a problem, try to think of when a recursive solution doesn’t apply and see if you can use that as an edge case, think about identities and think about whether you’ll break apart the parameters of the function (for instance, lists are usually broken into a head and a tail via pattern matching) and on which part you’ll use the recursive call.

