
Sleep Sort

Superbird

You've heard of Bubble Sort. And Quick Sort. And merge sort, insertion sort, selection sort, and possibly lucky sort and taco sort. But this sort exceeds them all.

Sleep Sort
Sleep Sort is an interesting sorting algorithm. Unlike bubble sort, selection sort, or insertion sort, it is not iterative, and unlike quicksort and merge sort, it is not recursive. Instead, Sleep Sort uses the multithreading capacity of the processor to sort the elements of an array. Given an input array (of integers, for example), a small routine is defined that waits for an amount of time proportional to the value it is passed and then appends that value to a return array. For every value in the input array, a new thread is created to run this routine on that value. All the threads are started at (roughly) the same time, and the program waits for every thread to finish before returning the (now full) return array. Because the threads begin essentially simultaneously, the smaller values finish waiting first and are appended before the larger values, so the return array comes back in ascending order: sorted.

An analogy: Sleep Sort’s execution is similar to a method of sorting physical objects of identical volumes by their density. If many such objects were to be placed in a body of water (without any currents) all at the same time, those with higher densities would fall through it faster and hit the bottom first (because, though the buoyant force on each object would be the same, the force of gravity would be greater on the denser objects, as they would be more massive; so, the more massive objects’ net downwards force would be higher). Sleep Sort works similarly: the smaller values in the input array “fall” faster than the larger values.

Benefits
Sleep Sort performs only O(n) operations, fewer than any comparison-based sorting algorithm. Additionally, its runtime, while much longer than that of most sorting algorithms, is determined mostly by the range of the inputs and barely changes as the input array grows (if passed a random array of integers between 0 and 3000, for example, the runtime is always about three seconds regardless of the size of the array). For exceptionally large data sets that would take less efficient sorting algorithms a very long time to handle, Sleep Sort is, at least on paper, much more efficient.

Drawbacks
In practice, Sleep Sort is rather inconsistent: it sometimes makes mistakes if the CPU is particularly overloaded. This is more of a problem with large data sets, since the sheer number of threads handed to the CPU can overwhelm the scheduler and some threads end up delayed, so some elements land in the wrong place. This flaw can theoretically be fixed by running another sorting algorithm, one tailored to cleaning up small flaws, immediately afterwards. Gnome Sort, in particular a high-visibility bi-directional gnome sort, is excellent for this, as any errors Sleep Sort makes are unlikely to be far from their correct positions.

Another drawback is Sleep Sort's time inefficiency on small data sets. While Sleep Sort performs fewer operations than any other sorting algorithm, its wall-clock runtime is far longer. In cases where a sort needs to finish immediately, that makes Sleep Sort a bad choice; if the time needed to sort is irrelevant and Sleep Sort is allowed to take, say, several seconds, this ceases to be an issue. But since operation counts are negligible for small data sets anyway, Sleep Sort is particularly inadvisable for them.

Large data sets present another issue, this one a limitation of the hardware itself. There is a limit to how many threads a process can have alive at once, and that limit caps the size of Sleep Sort's input array. On my personal machine, with several other processes running in the background, the limit is somewhere around 2400 threads, so Sleep Sort's input is capped at about 2400 elements. This is not a limitation of the algorithm itself, though, and on a theoretical infinitely powerful machine it would cease to be a limitation at all.
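If you want to find that ceiling on your own machine, here's a rough sketch of a probe (probe_thread_limit is just a name made up for this illustration, not part of the sort): it starts idle threads until the OS refuses to create another one, then cleans them all up. Run it with care, since it deliberately pushes against the per-process thread limit.
Code:
import threading

def probe_thread_limit(max_probe=10000):
    # Start idle threads until Thread.start() fails with RuntimeError
    # ("can't start new thread"), then report how many we managed.
    stop = threading.Event()
    threads = []
    try:
        for _ in range(max_probe):
            t = threading.Thread(target=stop.wait)
            t.start()
            threads.append(t)
    except RuntimeError:
        pass
    finally:
        stop.set()  # release every waiting thread
        for t in threads:
            t.join()
    return len(threads)

print(probe_thread_limit())  # however many threads were allowed, capped at max_probe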

Code Examples
Sleep Sort
Written in Python because that's the most convenient language, and also the only one in which I know how to multithread.
Code:
import threading
import time

def sleepsort(arr):
    # Create [semi-global] return variable
    ret = []
    # Internal method to handle sleeping correctly.
    def sleepAdd(var, duration=-1):
        if duration < 0:
            duration = var
        sleepTime = duration / 1000.0  # treat each value as a number of milliseconds
        time.sleep(sleepTime)
        ret.append(var)
    # Now do the sleep sort
    threads = []
    # First, create all of our threads
    for eachValue in arr:
        threads.append(threading.Thread(target=sleepAdd, args=(eachValue,)))
    # Then, start them all at once
    for eachThread in threads:
        eachThread.start()
    # Finally, wait for them all to finish and put them back together,
    # hopefully in the right order.
    for eachThread in threads:
        eachThread.join()
    # possibly gnome sort to fix any small errors
    return ret
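Here's a quick, untested sketch of how you'd call it, and a way to check the claim above that the runtime tracks the largest value rather than the length of the list (the roughly-three-second figure assumes values below 3000, as in the Benefits section):
Code:
import random
import time

# Sort a small list
print(sleepsort([3, 1, 4, 1, 5, 9, 2, 6]))  # hopefully [1, 1, 2, 3, 4, 5, 6, 9]

# Runtime should sit near max(value) / 1000 seconds no matter how long the list is
for size in (10, 100, 1000):
    data = [random.randrange(3000) for _ in range(size)]
    start = time.perf_counter()
    sleepsort(data)
    print(size, "elements:", round(time.perf_counter() - start, 2), "seconds")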

Gnome Sort
Gnome Sort is used here to “clean up” after Sleep Sort, fixing any errors Sleep Sort may have made. Gnome Sort is chosen because it excels at cleaning up mostly-sorted lists, with best-case Ω(n) and worst-case O(n^2) performance, and on Sleep Sort's nearly-sorted output it stays close to the former. (Insertion Sort would perform comparably here, since Gnome Sort is essentially an insertion sort that swaps its way backwards, but Selection Sort takes O(n^2) no matter how sorted the input already is, and recursive algorithms carry extra space overhead and gain nothing on a nearly-sorted list: even Quicksort is Ω(n log n).) The version of Gnome Sort presented here is far from optimized, however, and could theoretically be improved a little bit.

Code:
def gnomesort(arr):
    # Basic swap function.
    def swap(arr, a, b):
        arr[a], arr[b] = arr[b], arr[a]
    # The iterator variable
    count = 0
    # Max size
    ms = len(arr)
    # Using a while loop so the index can move backwards as well as forwards
    while count < (ms - 1):
        # Check to see if an element is out of place
        if arr[count + 1] < arr[count]:
            # If so, move it backwards
            swap(arr, count, count + 1)
            # Then step back to re-check the previous pair (never past the start)
            count = max(count - 1, 0)
        else:
            # Keep moving
            count += 1
    return arr
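And here's roughly how the two fit together, the way the Drawbacks section suggests (sleepsort_cleaned is just a throwaway name for the combination, not part of either algorithm):
Code:
def sleepsort_cleaned(arr):
    # Sleep sort first, then gnome sort to repair any misplaced elements
    return gnomesort(sleepsort(arr))

print(sleepsort_cleaned([37, 5, 12, 99, 1, 42]))  # should print [1, 5, 12, 37, 42, 99]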

Conclusion
Sleep Sort is an interesting thought experiment. It performs remarkably few operations, but its runtime is very long. Its saving grace is that most of that runtime is spent simply waiting, so it takes up very little processing power; a system running it can execute other operations in the meantime without the sort noticeably dragging on performance. This is in contrast to other algorithms: even quicksort, for example, keeps the processor busy for as long as it runs (and maintains a recursion stack on top of that), a cost Sleep Sort avoids. While excellent in certain contexts in an ideal setting, in practice Sleep Sort is not very useful, though running Gnome Sort afterwards does improve its accuracy.

Currently I'm experimenting with how Sleep Sort's number of errors and its computation time behave as the number of inputs increases. I'm also going to check whether Gnome Sort is actually more efficient here than bubble sort, cocktail sort, insertion sort, and quicksort. I'll add to this when I get more conclusive data on that front. For the time being, please discuss.
 
An analogy: Sleep Sort’s execution is similar to a method of sorting physical objects of identical volumes by their density. If many such objects were to be placed in a body of water (without any currents) all at the same time, those with higher densities would fall through it faster and hit the bottom first

Or is it more like if you dropped lead balls from the top of the Leaning Tower of Pisa? Surely the heavier ones will hit the ground first!
 
Or is it more like if you dropped lead balls from the top of the Leaning Tower of Pisa? Surely the heavier ones will hit the ground first!

They actually would! But not because gravity pulls harder on them; rather, because the force of gravity on them outweighs the upwards buoyant force by a larger factor.

I'm still working on getting a distribution going, but in the time since I posted this I learned about Counting Sort, which is basically this except done intelligently.
 
and here is that distribution!

[Graph: mean number of misplacements vs. input size over 100 trials, with lines at 1 and 2 standard deviations from the mean]


As you can see, Sleep Sort makes almost no errors at first, but starts to make more and more as the sample size increases. Shown above is the mean number of misplacements for each input size over 100 trials, along with lines marking 1 and 2 standard deviations from that mean. In general, these numbers were quite consistent: for anything over about 30 elements, Sleep Sort makes about half as many misplacements as there are elements, which makes it really no better than random. So yeah, don't use Sleep Sort. This was a fun experiment, though, and if you happen to take interest in this topic, Counting Sort works on the same principle except it uses linear data structures rather than trusting the processor to get everything done on time.
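If you want to see that principle without any threads, here's a minimal sketch of Counting Sort for non-negative integers (counting_sort is just my own illustration, not code from the experiment above): instead of one sleeping thread per element, it keeps one counter per possible value.
Code:
def counting_sort(arr):
    # One counter slot per possible value, instead of one thread per element
    if not arr:
        return []
    counts = [0] * (max(arr) + 1)
    for value in arr:
        counts[value] += 1
    # Read the counters back out in order; no timing involved, so no errors
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]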
 