Lots of people in this thread saying that it's just an unoptimised Bubble Sort.
No, it really isn't.
In bubble sort you see if two things are the wrong way round and if they are then you swap them. This does the comparison (sort of) the "wrong way round".
In Bubble Sort your indices are always i<j ... here they aren't.
In Bubble Sort you only ever compare adjacent items. Here you don't.
This really, really isn't Bubble Sort, unoptimised or otherwise.
Also, with Bubble Sort it's really easy to prove that it works. If you think this is "just unoptimised Bubble Sort" then I'd be interested to see your proof that this algorithm works.
So my feeling is that when people are dismissive of this, either they've not understood this at all, or they've understood it at a really, really deep level that I haven't. In that latter case I'd really appreciate a better explanation for what you think is going on, and why you think this is "just an unoptimised Bubble Sort".
The problem is that you can look at this and, after a few seconds, think you've formed an intuition on how it must work, but your intuition is wrong. It seems like the only way to really show how weird this is would be by creating an animation that shows how it shuffles entries around.
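For anyone who wants to poke at it directly, the algorithm under discussion is (as far as I can tell from the paper) just this, here as a 0-indexed Python translation:

```python
def cant_believe_it_can_sort(a):
    """Both loops run over the *whole* array, and the swap fires when
    a[i] < a[j] -- which looks like it should sort descending, or not
    at all. It sorts ascending."""
    n = len(a)
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
    return a

print(cant_believe_it_can_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Stepping through a small input by hand (or printing `a` after each outer pass) is the quickest way to see why the first-glance intuition is wrong.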
very beautiful simple animation. Good visualization of the proof of the algo; namely, that at each step i of the outer loop the first i elements are sorted (this is easy to show via induction)
I agree. At first I thought it was doing something very simple but I am not so sure anymore. When you check the condition for the swap, you realize it almost works counterproductively while i<j because it keeps pushing smaller numbers in the wrong direction, but then it starts "fixing" those mistakes when i>j. I don't find this intuitive at all.
> In Bubble Sort you only ever compare adjacent items. Here you don't.
This is the first thing I noticed and it pretty much says it all. It is not bubble sort because there are no bubbles bubbling to the top. Instead it compares every possible element pair combination.
> In that latter case I'd really appreciate a better explanation for what you think is going on, and why you think this is "just an unoptimised Bubble Sort".
I can try to explain that. The algorithm really is doing a bubble sort, with some unnecessary extra work added. It also does a bit of necessary extra work.
Here is the useful work done by the algorithm:
1. First, preprocess the list so that the maximal element occurs first. (It's a 1-indexed list, so position 1 holds the highest element of the list.)
2. Now, we consider an invariant: the part of the list up to (and including) the maximal element is always sorted. Right now that consists of indices 1-1, and since that range contains only one value it's necessarily sorted. We will use a variable ("i") to remember where the maximal element is.
3. Now, the outer loop: we start at the index immediately after the maximal element, index i+1. Whatever value is in there, we bubble it to the left (this is the inner loop), in the manner of a standard bubble sort: we compare it to index i, and if it's lower (which it always is, since index i holds the maximal element), we swap. Then we compare index i to index i-1, and so on. Once we find an element that is smaller than the one we've been bubbling to the left, we stop, and our invariant is restored: the list up to the maximal element is still sorted. But the space covered by the invariant has increased by one slot. We increment i.
4. The loop continues until i points to the last slot in the list, at which point we're done. The invariant tells us that the list up to i is sorted, and since i points to the last value in the list, the whole list is sorted.
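If it's useful, here's a Python sketch of steps 1-4 (my own translation; a single swap stands in for the preprocessing pass, which differs in detail from the paper's first pass but establishes the same "max at the front" starting state):

```python
def sketch(a):
    if not a:
        return a
    # Step 1: preprocess so the maximal element sits at index 0.
    m = a.index(max(a))
    a[0], a[m] = a[m], a[0]
    # Steps 2-4: invariant -- a[0..i] is sorted and a[i] holds the maximum.
    for i in range(len(a) - 1):
        # Bubble a[i+1] leftwards until it meets a smaller-or-equal element.
        j = i + 1
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a
```

The max element always swaps first (step 3), so after each outer iteration it has moved one slot right and the sorted prefix has grown by one.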
-----
The algorithm I've described sends i forward from the beginning to the end of the list and j backwards from i+1 to the beginning of the list. The algorithm in the paper does all the same work, but it also does more work, instead having j cross the entire list every time. (It also sends j forwards, like i, but this doesn't matter.†) The only reason for this is to make the algorithm simpler to write down. This is why it's "unoptimized bubble sort". Note that, in the algorithm of the paper, once the maximum value has been swapped into i, the check A[i] < A[j] can never succeed, but we ignore that and keep advancing j instead of just terminating the j loop. This looks especially stupid because the maximal value will always occur before i (except when i = 1, which is the stage that, in my algorithm, is called "preprocessing").
In a bubble sort, if your cursor is at 5, and you want to move A[6] to index 3, you need to make 3 swaps and 3 comparisons. In the paper's sort, when your cursor is at 5 and you want to move A[6] to index 3, you need to make 3 swaps and n comparisons, where n is the length of the full list.
-----
If you're interested in why the j loop is the same thing as a bubble sort, here's an expansion on the example above:
We're in the process of bubble sorting [1 4 8 11 24 5], looking at the 5. Because 5 is more than 4 but less than 8, we want it to ultimately end up at index 3. Everything else will shift up by one index to accommodate this.
In a bubble sort, we swap the 5 down into index 5, then we swap the 5 down into index 4, and then we swap it down into index 3.
swap(&a6, &a5);
swap(&a5, &a4);
swap(&a4, &a3);
In the paper's sort, we do exactly the same thing from the other direction:
int temp = a6; /* we want to put this into its correct position */
swap(&a3, &temp); /* done, but we need to remember what was in a3 so we can swap it up to a4 */
swap(&a4, &temp); /* done, but now we need to remember what was in a4 so we can swap it up to a5 */
swap(&a5, &temp); /* take a guess... */
swap(&a6, &temp);
but there's a slight optimization where instead of using a temporary variable to hold the values we're swapping, we just use the space after the maximal element (which makes `swap(&a6, &temp)` redundant - after the optimization, those are the same address).
† It matters a little bit - a bubble sort is free to terminate the inner loop as soon as the value being bubbled has found its correct position. But in the paper's algorithm, we have to find the correct position first, which is done by scanning every index that contains a lower value until we eventually find the first one that doesn't. That work is optimized out of a bubble sort. However, since the paper's sort is committed to scanning the entire list in the inner loop anyway, the fact that the order of bubbling is wrong doesn't require any more extra scanning beyond what they've already committed to.
This is a strange argument to me, because the single most critical aspect of a bubble sort is the locality.
Bubble sort only ever compares adjacent elements. It only ever swaps adjacent elements. This is what makes bubble sort optimal (within constant factors) for an external sort on a tape drive. This is why anything else isn't a bubble sort.
The number of swaps is nearly the same between Bubble Sort and Can't Believe Sort. For a random list with 100 elements, it's 2505 swaps vs. 2513 swaps.
The number of comparisons is of course about twice as large, because Bubble Sort always does exactly n(n-1)/2 comparisons while Can't Believe Sort always does exactly n*n comparisons.
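This is easy to check empirically. A quick instrumented harness (my own code, not from the paper) counting comparisons and swaps for both:

```python
import random

def bubble_sort_counts(a):
    # Plain bubble sort with a shrinking inner loop, no early exit.
    n, comps, swaps = len(a), 0, 0
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return comps, swaps

def cant_believe_counts(a):
    # The paper's double loop, instrumented the same way.
    n, comps, swaps = len(a), 0, 0
    for i in range(n):
        for j in range(n):
            comps += 1
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
                swaps += 1
    return comps, swaps

data = [random.random() for _ in range(100)]
print(bubble_sort_counts(data[:]))   # always 4950 comparisons: n(n-1)/2
print(cant_believe_counts(data[:]))  # always 10000 comparisons: n*n
```

The comparison counts are fixed by n; only the swap counts depend on the input, and those come out nearly identical for random lists.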
Yeah, this makes sense to me. I started developing some of these similar intuitions while watching a visualization another commenter posted.
Here are some things I realized:
1. The first pass (I think you call it the preprocessing step) is special in that it finds the maximum number and puts it in the first slot. The rest of the algorithm revolves around this max value (or at least I find it helpful to think of it that way). This is also the only time where j going above i makes any difference. After this pass we could stop the j loop when i == j but that would obviously complicate the code.
2. The second pass will always swap the items in the first and second slots, since the first item is bigger than the second (if they are equal, nothing happens). As noted in 1., j will keep going past i but it won't do anything, since i is now pointing to the largest value in the array and the swap condition will never be met.
3. On the third and all subsequent passes, the value that a[i] points to will be slotted into its right place in the sorted list that's being formed at the beginning of the array. This might require multiple swaps as j goes up and larger values go into the ith slot. The last thing that happens before j reaches i (so when j == i - 1) is always that the max value gets swapped from i-1 to i. This is basically what the second pass in 2. did.
4. The max value from 1. conceptually serves almost as a sentinel of sorts since it always represents the end of the list we've sorted so far and when j gets to i, it prevents the rest of j loop from doing anything. That's why the code can be so clean, without having to do any index arithmetic or short circuiting or any other complications. It's clean but definitely not simple, since it takes a while to wrap one's head around what's actually happening.
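These observations can be asserted directly in code. Here's my own instrumented version of the double loop (`trace` is a name I made up), checking after every outer pass that the prefix is sorted and that the max sits at position i:

```python
def trace(a):
    n = len(a)
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
        # Invariant after each outer pass: a[:i+1] is sorted
        # and the "sentinel" max value sits at index i.
        assert a[:i + 1] == sorted(a[:i + 1])
        assert a[i] == max(a)
        print(i, a)
    return a

trace([5, 2, 7, 1, 6, 3])
```

Printing the array after each pass makes the "max marches right, dragging a sorted prefix behind it" behaviour easy to see.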
There is nothing good about this algorithm. It is slow – the algorithm obviously runs in Θ(n²) time, whether worst-case, average-case or best-case. It unnecessarily compares all pairs of positions, twice (but see Section 3). There seems to be no intuition behind it, and its correctness is not entirely obvious. You certainly do not want to use it as a first example to introduce students to sorting algorithms. It is not stable, does not work well for external sorting, cannot sort inputs arriving online, and does not benefit from partially sorted inputs. Its only appeal may be its simplicity, in terms of lines of code and the “symmetry” of the two loops.
I used to teach this to beginners all the time. I called it "exchange sort", though I'm not sure where I ever got that name from.
Beginners to sorting are often still somewhat struggling with for loops (especially the three-part C kind), struggling with indexing an array, struggling with swaps. It's overwhelming.
This is kind-of like pseudocode for a simple sort -- it has the same rough outline, but no tricky bits. It allows them to get their brains around the basic algorithm, and only after that you trace through an execution on the board to show them that it wastes a lot of time moving things around.
Then you can show them insertion sort as an optimization, and from that move on to fancier ones.
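For comparison, the insertion-sort step of that progression might look like this (standard textbook version, not from any particular course materials):

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right, then drop key into the gap.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

Same "grow a sorted prefix" shape as the simple sort, but each element is moved directly into place instead of being repeatedly swapped through slots it doesn't belong in.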
Can you give a reference to your teaching materials? The way this sort works is bizarre enough that I'm curious what you said about it. (Also to verify that this is actually the algorithm you taught, given how many other people thought it was something else.)
It isn't surprising to me in the least that this works.
In the first pass (i=1), A[1] will end up with the largest value and a later pass cannot displace it.
In the second pass (i=2), A[2] will end up with the 2nd largest value (which might be the same as the largest value, if it appeared more than once in the list). And so on.
> In the first pass (i=1), A[1] will end up with the largest value and a later pass cannot displace it.
Nah, it will get displaced later. Notice that in the inner loop j goes from 1, and not from i. After i=1, A[1] will be the largest element, but after the next big step, after i=2, A[2] will be the largest element. (And after i=n, A[n] will be the largest element, as expected.)
> In the second pass (i=2), A[2] will end up with the 2nd largest value
The algorithm sorts the array ascending, and not descending. There is an example run linked in a grandkid comment, by jmholla.
It is not, it is actually closer to an insertion sort.
The interesting part is that it often comes up when the one writing the algorithm doesn't know what he is doing, and yet, it works. It may look like a BubbleSort because it is a common wrong implementation of BubbleSort.
I love this paper; it's extremely easy to read, but still gets into rigorous proof.
Despite their explanation, I still had to hack together something in Java and step through it with a debugger for it to actually make sense to me. Once I did it worked fine, and after looking through each iteration I more or less got it.
Here's the code if anyone wants to look at it [1], though just a disclaimer that I wrote this in like ten minutes just to hack something so it's not beautiful or anything like that. It's so simple that I don't think I made any mistakes though obviously if I did let me know and I'll be slightly embarrassed.
1) After an outer loop of i is finished, the first i elements are in ascending order. (For example, in the fifth line above after i=3 is finished, the first 3 elements are "124")
2) At some point, the outer loop will point to "1", and "1" will be compared to the 1st position. Then, "1" will be placed in the 1st position and remain there forever. (For example, this happens in the 3rd line above)
3) After the outer loop points to "1", at some point it will point to "2" which will be compared to the 2nd position. (Proof: when the outer loop pointed to "1", if "2" was on the right of "1", then the statement is obvious; if "2" was on the left of "1", then it would be in the 1st position by (1), so again the statement holds.) Then, "2" will be placed in the 2nd position and remain there forever.
4) After the outer loop points to "2", at some point it will point to "3" which will be compared to the 3rd position...
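Claim (2) – that the smallest value, once it lands in the 1st position, is never displaced – can be checked mechanically. A quick harness of my own around the double loop:

```python
def check_min_settles(a):
    n, lo = len(a), min(a)
    settled = False
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
        if a[0] == lo:
            settled = True
        if settled:
            # Once the minimum reaches slot 1, a later swap there would
            # need a[i] < a[1] -- impossible, so it stays put forever.
            assert a[0] == lo
    return a

print(check_min_settles([4, 2, 9, 1, 7]))
```

The same argument then repeats for "2" against the 2nd position, "3" against the 3rd, and so on, which is the induction in the proof above.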
Question on paper writing... There is only one author listed, but the wording keeps using "we" as in "we show that..." Is it considered poor practice to use "I" in a one-person paper? Or is "we" including the reader in the exposition?
The “royal we” is common for single author academic work. I’m honestly unsure why. StackExchange seems to think it refers to both author and reader together. I think it conveys a sort of distance from the author, giving the writing less of a personal and more general tone.
It's assumed you at some point talked to someone in the lab, or your advisor, or your advisee, or literally anyone about the paper. If you didn't, that's probably problematic. So "we" gives non-formal credit to your community.
This is a sort I use all the time. Especially in coding challenges (AoC) because I'm usually lazy. It's nice to see it formally evaluated, it feels very intuitive to me.
Would you be willing ... without looking at this article ... to provide exactly the code you "use all the time"?
I'd like to compare what you write with the code in the article, because code that I write as a quick and dirty sort is definitely not this.
In particular, this seems to take items that are already the right way round, and swaps them. I definitely don't do that. To be specific, I just knocked this out:
#!/usr/bin/python
a = [ 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9 ]
N = len(a)
for i in range(0, N - 1):
    for j in range(i + 1, N):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
print(a)
Specifically, note that I ask if a[i] is bigger than a[j] and if so then I swap them. The code in the article appears to do exactly the wrong thing, and I'd be surprised if your "go to" code does it the same way as the code in the article.
(Edit) I gave it a shot: https://jsfiddle.net/odwh6cL5/45/
https://www.youtube.com/watch?v=8raee-rvBw0
https://www.youtube.com/watch?v=bydMm4cJDeU
An insertion op inserts one element into a sorted list. A bubble op bubbles the maximum (or minimum) element to the end of an unsorted list.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
For easier exposition in the proof later on, the array is 1-based, so the elements are A[1]...A[n].
(This comment was entirely rewritten.)
Specifically, in each pass where i=2 or higher, j will still start at 1, so A[1] will still get referenced.
And, if so: they're writing Arxiv papers on BubbleSort now?
EDIT: insertion, not selection
I actually think it's pretty cool; it happens occasionally that I just need a quick and dirty sort, and this is easier to get right.
[1] https://gist.github.com/Tombert/993ada32270f809bdaa6452065f8...
Is this the simplest (and most surprising) sorting algorithm? - https://news.ycombinator.com/item?id=28758106 - Oct 2021 (318 comments)
https://hsm.stackexchange.com/questions/2002/how-did-royal-w...