All Sorts of Sorts

Bob Sander-Cederlof

From “AppleGram”, August, 1980, pages 6-12
Monthly newsletter of “The Dallas Apple Corps”

You can run the programs in this article on a real Apple ][ Plus, Apple //e, Apple //c, etc. Or you can run them in a web browser using Applesoft BASIC in JavaScript.


My first exposure to the sorting problem was back in college days [circa 1963]. The Computer Logic Design class was over-subscribed, so we ran out of textbooks. While waiting for books, the prof decided to teach us programming for the Bendix G-15 (a drum-memory, diode-logic, vacuum-tube machine...the size of a refrigerator, but a lot hotter!). As a challenge for our first program, he assigned us the monumental task of sorting a list of three (3) numbers into ascending order. Part of the challenge was to see how few instructions it could be done with. Speed was not a concern, just memory. After all, the computer only had 2183 29-bit words!

My first attempt was just slightly under 100 instructions, and I thought that I was winning. Some fellow students were over 250! (That is an exclamation mark, not a factorial sign.) However, the prof told us that the record was 25, so I kept working. I finally did win, with 23 instructions. But I kept at it through the year, and got it down to 18 with some wild use of side effects and the like. I used to be pretty proud of this accomplishment, until I learned that I had succeeded in re-inventing the world's slowest sorting technique, the Bubble Sort.

Bubble Sort: This algorithm is very simple, which may account for its widespread popularity in spite of its turtle nature. The first and second items of a list are compared; if they are out of order, they are interchanged. Then the second and third items are compared, and if out of order they are interchanged. This is repeated for overlapping adjacent pairs until the end of the list is reached. You can see that we will have succeeded in moving the item which should come last to the end of the list. Then the process is repeated, again starting with the first and second items, but stopping this time with the next-to-last item. After each pass, the largest remaining item has been moved to its correct position in the list. That is, it has bubbled into place. If one complete pass is made with no interchanges required, the procedure is terminated, because the list is completely sorted. Otherwise, the process is repeated until the group to be passed over is only one item.

The bubble sort does take advantage of pre-ordering. If the list is already sorted, only one pass will be required. There will be N-1 comparisons, and no interchanges, for a list of N items. But if the list is in random order beforehand, we are in for a long wait. It could take as many as (N^2 - N)/2 comparisons and interchanges.

It doesn't take much to write the bubble sort program. Here is a three-line version, which assumes an array D of N items.

110 M = N
120 M = M - 1: SW = 0: FOR I = 1 TO M: IF D(I + 1) < D(I) THEN T = D(I): D(I)=D(I + 1): D(I + 1) = T: SW = 1
130 NEXT : IF SW THEN 120
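For readers following along in a modern language, here is a sketch of the same bubble sort in Python. It is my own rendering, not part of the original listing; the `swapped` flag plays the role of SW, and `m` shrinks the pass just as the BASIC variable M does.

```python
def bubble_sort(d):
    """Sort the list d in place, ascending, with the early-exit test."""
    m = len(d)
    while m > 1:
        m -= 1                       # shrink the pass by one each time
        swapped = False
        for i in range(m):           # compare overlapping adjacent pairs
            if d[i + 1] < d[i]:
                d[i], d[i + 1] = d[i + 1], d[i]
                swapped = True
        if not swapped:              # one clean pass: list is sorted
            break
    return d
```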

Sort Demonstrations: It is a lot more interesting to try out a sort than it is to read about one. Therefore, I have put all the sorts from this article on the September Disk of the Month. They are all embedded in the same skeleton driver program, which creates a list of random values to be sorted, calls the sort subroutine, and offers to print the sorted list. Convenient aids for selecting the length of the list and for measuring the time taken are provided. Here is the bubble sort again, this time embedded in the skeleton.

10 S$ = "BUBBLE": GOTO 1000
100 REM SORT SUBROUTINE
110 M = N
120 M = M - 1: SW = 0: FOR I = 1 TO M: IF D(I + 1) < D(I) THEN T = D(I): D(I)=D(I + 1): D(I + 1) = T: SW = 1
130 NEXT : IF SW THEN 120
999 RETURN
1000 TEXT : HOME : PRINT S$" SORT DEMONSTRATION": INPUT "HOW MANY ITEMS? ";N: DIM D(N):R = RND (- 1): FOR I = 1 TO N:D(I) = INT ( RND (1) * 1000): NEXT I
1010 GOSUB 1500: PRINT "HIT SPACE BAR WHEN READY TO START": GET A$ : PRINT "SORTING": GOSUB 100: PRINT CHR$ (7)"FINISHED" : GOSUB 1500: END
1400 HTAB 1: PRINT Q$" (Y/N)? ";: GET A$ : IF A$ <> "N" AND A$ <> "Y" THEN 1400
1410 PRINT A$: YES = (A$ = "Y"): RETURN
1500 Q$ = "LIST THE VALUES": GOSUB 1400: IF NOT YES THEN RETURN
1510 IF A$ <> "Y" THEN 1500
1520 N1 = INT ((N + 3) / 4): FOR I = 1 TO N1
1530 FOR J = 0 TO 3: I$(J) = "  ": D$(J) = "     "
1540 IJ = I + J * N1: IF IJ < = N THEN I$(J) = RIGHT$("  " + STR$(IJ) + ": ",5): D$(J) = RIGHT$ ("  " + STR$(D(IJ)),3)
1550 NEXT J
1570 PRINT I$(0);D$(0)"  ";I$(1);D$(1)"  ";I$(2);D$(2)"  ";I$(3);D$(3): NEXT I
1580 RETURN

Line 10 identifies the kind of sort algorithm, and branches to the main line routine at 1000. The sort algorithm is always from line 100 through 999. It is written as a subroutine, called from line 1010.

The main line program first requests the length of the list, and then generates a series of random numbers to fill up the list. The random number generator is seeded with -1 each time, so that every run of the program will use the same sequence of randomly ordered values. This assures that the timing comparisons between different algorithms are comparing apples with apples. Since the list is random, it will be neither worst case nor best case for any algorithm. After the list is generated, the option to print it out is given. Then the list is sorted, and the option to print is again given. You must start the sorting by hitting the space bar; you can start a stop watch at the same time. Then a bell rings when the sort is finished, to help you punch the stop watch again. Lines 1400-1410 are my famous YESNO subroutine. Lines 1500-1580 print out the list in four columns, with right-justification within the columns.


Search-for-Smallest Sort: Another very simple yet very slow sorting algorithm is one I call "search-for-smallest". A pair of nested loops compares every element in the list with the first element in the list. At the end of the first pass, the smallest value is placed in the first slot. After the second pass, the smallest remaining item is placed in the second slot. And so on. This one always makes the maximum number of comparisons, but the minimum number of interchanges. It can be written with only two lines of Applesoft code (lines 110 and 120 below):

10 S$ = "SEARCH FOR SMALLEST": GOTO 1000
100 REM SORT SUBROUTINE
110 FOR I = 1 TO N - 1: FOR J = I + 1 TO N: IF D(J) < D(I) THEN T = D(I): D(I) = D(J): D(J) = T
120 NEXT : NEXT : RETURN
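The same idea sketched in Python (a rendering of lines 110-120 with names of my own choosing). Note that it swaps immediately whenever a smaller item is found, just as the BASIC does.

```python
def search_for_smallest(d):
    """After pass i, the smallest remaining item sits in slot i."""
    n = len(d)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if d[j] < d[i]:          # swap at once, as line 110 does
                d[i], d[j] = d[j], d[i]
    return d
```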

Insertion Sort: The insertion sort is similar to the bubble sort, in that overlapping adjacent pairs are compared. However, when an out of order item is found, a search upward is done to find the position the item should be in. Then sorting is resumed at the point we were at before we started searching upwards. After one complete downward pass, the whole list is sorted.

Since, when an out-of-order item is found, all the preceding items in the list are in order, it is possible to use a binary search to find the correct position for the out-of-order item. However, the overhead is too great for small lists. With a 100-item list, the fancy version takes longer than the simple one given here.

10 S$ = "INSERTION": GOTO 1000
100 REM SORT SUBROUTINE
110 IF N < 2 THEN RETURN
120 FOR J = 2 TO N: IF D(J - 1) <= D(J) THEN 160
130 D = D(J): I = J - 1
140 D(I + 1) = D(I): I = I - 1: IF D < D(I) THEN 140
150 D(I + 1) = D
160 NEXT
999 RETURN
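A Python sketch of the same insertion sort (my rendering). The BASIC relies on D(0) defaulting to 0 to act as a sentinel in line 140, which works because all the test values are non-negative; this version uses an explicit bounds test instead.

```python
def insertion_sort(d):
    """Shift larger items upward to open a slot for each misplaced item."""
    for j in range(1, len(d)):
        if d[j - 1] <= d[j]:         # already in order; move on (line 120)
            continue
        value = d[j]
        i = j - 1
        while i >= 0 and value < d[i]:
            d[i + 1] = d[i]          # shift up (line 140)
            i -= 1
        d[i + 1] = value             # drop into the opened slot (line 150)
    return d
```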

Shell Sort: One of the most popular sorting techniques in the various micro magazines has been the Shell Sort. This is a diminishing increment sort, named after its creator, Donald L. Shell. Items a distance M apart in the list are compared, where M has an initial value of half the length of the list. After every pair a distance M apart has been compared and possibly interchanged, the value M is halved again. The process is repeated until a pass with M=1 has been performed. The particular series of values for M used is not important, just so that the final value is 1. Hibbard improved on the Shell sort by choosing a better initial distance, and Donald Knuth has written an excellent chapter on the subject in his Art of Computer Programming, Volume 3, pages 84-95. Knuth gives several better M-series than even Hibbard's.

10 S$ = "SHELL": GOTO 1000
100 REM SORT SUBROUTINE
110 M = N
120 M = INT (M / 2): IF M = 0 THEN RETURN
130 J = 1: K = N - M
140 I = J
150 L = I + M: IF D(I) > D(L) THEN T=D(I):D(I)=D(L):D(L)=T: I = I - M: IF I > 0 THEN 150
160 J = J + 1: IF J > K THEN 120
170 GOTO 140
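A Python sketch of the halving-gap Shell sort above (my rendering, not from the article); the inner loop swaps and backs up by the gap exactly as line 150 does.

```python
def shell_sort(d):
    """Diminishing-increment sort with the simple halving gap sequence."""
    n = len(d)
    m = n // 2
    while m > 0:
        for j in range(m, n):        # compare items a distance m apart
            i = j - m
            while i >= 0 and d[i] > d[i + m]:
                d[i], d[i + m] = d[i + m], d[i]   # swap and back up (line 150)
                i -= m
        m //= 2                      # halve the gap; the last pass has m = 1
    return d
```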

Hibbard Sort: As I mentioned above, the Hibbard sort is just an improved Shell sort based on a better choice of initial comparison distance. T. N. Hibbard chose a starting distance of one less than the largest power of two not exceeding the number of items in the list. The sample program determines this value by taking the base two logarithm of N, raising 2 to that power, and subtracting 1. The result is a speed-up of about 10%. [ONLY CHANGED LINES 110 AND 120]

10 S$ = "HIBBARD": GOTO 1000
100 REM SORT SUBROUTINE
110 M = 2 ^ INT ( LOG (N) / LOG (2)) - 1
120 M = INT (M / 2): IF M = 0 THEN RETURN
130 J = 1: K = N - M
140 I = J
150 L = I + M: IF D(I) > D(L) THEN T=D(I):D(I)=D(L):D(L)=T:I = I - M: IF I > 0 THEN 150
160 J = J + 1: IF J > K THEN 120
170 GOTO 140
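Line 110 is the only substantive change from the Shell sort. In a language with integer bit operations, the same starting distance can be computed without floating-point logarithms (a sketch of my own; it also sidesteps any rounding surprises LOG might give at exact powers of two):

```python
def hibbard_initial_gap(n):
    """One less than the largest power of two not exceeding n; e.g. 100 -> 63."""
    return (1 << (n.bit_length() - 1)) - 1 if n >= 1 else 0
```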

Faster Shell Sort: There is an even better choice for the initial comparison distance, and for the algorithm for the subsequent distances. Knuth shows that a distance of the form 3*M+1 is much better. The following program creates the largest number of this form which is less than the length of the list, and uses that value for the first distance. Subsequent distances are then computed from the inverse of that, (M-1)/3. The result is an improvement into the speed range of Quick Sort.

10 S$ = "FASTER SHELL": GOTO 1000
100 REM SORT SUBROUTINE
110 M = 1
120 M = 3 * M + 1: IF M < N THEN 120
130 M = (M - 1) / 3: IF M < 1 THEN RETURN
140 FOR J = M + 1 TO N: I = J - M: D = D(J)
150 IF D(I) > D THEN D(I + M) = D(I): I = I - M: IF I > 0 THEN 150
160 D(I + M) = D: NEXT J: GOTO 130
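A Python sketch of the Faster Shell Sort (my rendering of lines 110-160). Like the BASIC, it shifts items rather than swapping them, and steps the gap through the sequence ..., 13, 4, 1.

```python
def faster_shell_sort(d):
    """Shell sort with Knuth's 3*m+1 gap sequence (1, 4, 13, 40, ...)."""
    n = len(d)
    m = 1
    while m < n:                     # grow past n, as lines 110-120 do
        m = 3 * m + 1
    m = (m - 1) // 3                 # step back once before the first pass
    while m >= 1:
        for j in range(m, n):
            value = d[j]
            i = j - m
            while i >= 0 and d[i] > value:
                d[i + m] = d[i]      # shift instead of swap, as in line 150
                i -= m
            d[i + m] = value
        m = (m - 1) // 3             # next smaller gap (line 130)
    return d
```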

Knuth also suggests that it is better to choose an initial distance no larger than 1/3 of the list length. Adding the following line to the Faster Shell Sort makes it even faster, shaving the 189 seconds to sort 800 items down to 181 seconds.

125 IF M > 1 THEN M = (M-1)/3

Quick Sort: This is a "divide and conquer" approach, first developed by C. A. R. Hoare in 1962. The list is first split into two pieces, so that all the items in the first piece are smaller than all the items in the other piece. Then one of the pieces is in turn split the same way. Each piece is split into two pieces, with one being stacked while the other is in turn split. The splitting and stacking continues until at last a segment with only one item in it results. This single item is of course by this time in its proper position in the list. The last piece stacked can now be unstacked and split, and so on. Here is a little program to implement the process.

10 S$ = "QUICK": GOTO 1000
100 REM SORT SUBROUTINE
130 P = 0: L = 1: R = N
140 I = L: J = R: S = 0
150 IF D(I) > D(J) THEN T=D(I):D(I)=D(J):D(J)=T: S = NOT S
160 I = I + S: J = J - NOT S: IF I < J THEN 150
170 IF I + 1 < R THEN P = P + 1: L(P) = I + 1: R(P) = R
180 R = I - 1: IF L < R THEN 140
190 IF P THEN L = L(P): R = R(P): P = P - 1: GOTO 140
999 RETURN
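The split-and-stack scheme can be sketched in Python as follows. This is my own rendering, not a line-by-line transcription: the BASIC walks two pointers in from both ends, toggling which one moves with the flag S, while this sketch uses a simpler fixed-pivot partition. The explicit stack of (left, right) pairs stands in for the arrays L(I) and R(I).

```python
def quick_sort(d):
    """Quicksort with an explicit stack of (left, right) pairs."""
    stack = [(0, len(d) - 1)]
    while stack:
        left, right = stack.pop()
        if left >= right:                # one item or fewer: nothing to split
            continue
        pivot = d[right]                 # fixed pivot (a simplification of
        i = left                         # the BASIC's two-ended walk)
        for j in range(left, right):
            if d[j] < pivot:             # smaller items go to the left piece
                d[i], d[j] = d[j], d[i]
                i += 1
        d[i], d[right] = d[right], d[i]  # the pivot lands in its final slot
        stack.append((left, i - 1))      # stack one piece...
        stack.append((i + 1, right))     # ...and split the other next
    return d
```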

The stack of pointers to the first and last items of each piece is kept in the two arrays L(I) and R(I). The stack index is P, and it is initially 0, meaning nothing is on the stack. The simple variables L and R point to the left and right ends of the current list or sub-list about to be split.

I and J are traveling pointers, that point to the current items being compared. These start at the two ends of the current sub-list, and proceed toward the middle as we move smaller items towards the left and larger ones toward the right. When they meet, the sub-list has been split.

Line 170 stacks one of the pieces and advances the stack pointer. Line 180 prepares to split the other piece; if it is at least two items long, it transfers back to the splitting loop. Otherwise, it unstacks the last thing stacked, and goes to split it. When the stack is empty, the sort is finished.

Quicker Sort: The original Quick Sort works very well for random lists. However, it is very poor on lists that are already sorted, or nearly sorted. If the list is already sorted, each split produces one piece of one item, and another piece which is the rest of the items. Hoare suggested several ways to get around this problem. Quicker Sort does so by using the median of three items as the partitioning point for the splitting operation. The result is to nearly divide each sub-list in half at each split. Another savings is realized by always stacking the longer of the two pieces after a split. This minimizes the depth of the stack. The stack will never contain more than the log2 of N entries. That is, for 1000 items, the stack need only have room for nine pointer pairs. In the programs given here, the automatic dimensioning ability of Applesoft is used. The first reference to an array which has not been DIMmed causes it to be automatically DIMmed to eleven items, 0 through 10.


Another improvement in Quicker Sort is to stop the splitting when a segment is less than M items long. The value of M is determined by testing, and the optimum for this particular program was M=10. Segments of 10 or fewer items can be more rapidly sorted by the insertion method than they can by Quicker Sort, due to the overhead in Quicker Sort. Each piece could be sorted as soon as it is isolated, but it is even faster to wait until after all the Quicker Sort work is finished. Then a final pass using the insertion method sorts all the items within each small piece. (One danger in deferring the insertion sort is that a bug in the Quicker Sort portion would be covered over by the insertion sort. Only an unreasonably long sorting time would reveal that the Quicker Sort had any bugs.) I debugged the Quicker Sort using M=0, to be sure it was working properly. Then I tried timing a sort of 200 items with values of M from 1 to 12, and chose 10 as the best.

10 S$ = "QUICKER": GOTO 1000
100 REM SORT SUBROUTINE
110 P = 0: I = 1: J = N: M = 10
120 IF I >= J THEN 220
130 K = I: L = J: IJ = INT ((I + J) / 2): IF D(I) > D(IJ) THEN T=D(I): D(I)=D(IJ): D(IJ)=T
140 IF D(IJ) > D(J) THEN T=D(J):D(J)=D(IJ):D(IJ)=T: IF D(I) > D(IJ) THEN T=D(I):D(I)=D(IJ):D(IJ)=T 
143 D = T
145 IF K = IJ THEN D = D(L)
147 IF L = IJ THEN D = D(K)
150 L = L - 1: IF D < D(L) THEN 150
160 K = K + 1: IF D(K) < D THEN 160
170 IF K < L THEN T = D(K): D(K) = D(L): D(L) = T: GOTO 145
180 IF K = L THEN K = K + 1: L = L - 1
190 P = P + 1: S = (L - I) < (J - K): IF S THEN L(P) = K: R(P) = J : J = L
200 IF NOT S THEN L(P) = I: R(P) = L: I = K
210 IF J - I > M THEN 130
220 IF P THEN I = L(P): J = R(P): P = P - 1: GOTO 210
230 REM STRAIGHT INSERTION SORT
240 FOR J = 2 TO N: IF D(J - 1) <= D(J) THEN 300
250 D = D(J): I = J - 1
260 D(I + 1) = D(I): I = I - 1: IF D < D(I) THEN 260
270 D(I + 1) = D
300 NEXT
999 RETURN
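As a sketch of the same ideas in Python (my rendering rather than a line-by-line transcription): the median of three is moved to the middle slot, the longer piece is stacked while the shorter is split next, segments of M or fewer items are skipped, and a final straight-insertion pass finishes the job.

```python
def quicker_sort(d, m=10):
    """Median-of-three quicksort with an explicit stack; pieces of m or
    fewer items are left for a final straight-insertion pass."""
    stack = [(0, len(d) - 1)]
    while stack:
        left, right = stack.pop()
        while right - left > m:
            mid = (left + right) // 2
            # order d[left], d[mid], d[right] so the median sits at mid
            if d[left] > d[mid]:
                d[left], d[mid] = d[mid], d[left]
            if d[mid] > d[right]:
                d[mid], d[right] = d[right], d[mid]
                if d[left] > d[mid]:
                    d[left], d[mid] = d[mid], d[left]
            pivot = d[mid]
            i, j = left, right
            while i <= j:                    # split around the median value
                while d[i] < pivot:
                    i += 1
                while d[j] > pivot:
                    j -= 1
                if i <= j:
                    d[i], d[j] = d[j], d[i]
                    i += 1
                    j -= 1
            if j - left < right - i:         # stack the longer piece,
                stack.append((i, right))     # keep splitting the shorter
                right = j
            else:
                stack.append((left, j))
                left = i
    # final straight-insertion pass cleans up all the short pieces
    for j in range(1, len(d)):
        value, i = d[j], j - 1
        while i >= 0 and value < d[i]:
            d[i + 1] = d[i]
            i -= 1
        d[i + 1] = value
    return d
```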

Singleton Sort: Richard C. Singleton developed the Quicker Sort algorithm a little more, making it even faster. I believe some of the improvements in the program above are due to his work, also. The difference in the last version is that the loops are handled a little differently. The optimum cutoff for segment size turned out to be at M=7 in this program.

10 S$ = "SINGLETON": GOTO 1000
100 REM SORT SUBROUTINE
110 P = 0: I = 1: J = N: M = 7
120 IF I >= J THEN 220
130 K = I: L = J: IJ = INT ((I + J) / 2): T = D(IJ): IF D(I) > T THEN D(IJ)=D(I): D(I)=T: T=D(IJ)
140 IF T > D(J) THEN D(IJ)=D(J): D(J)=T: T=D(IJ): IF D(I) > T THEN D(IJ)=D(I): D(I)=T: T=D(IJ)
150 L = L - 1: IF T < D(L) THEN 150
160 K = K + 1: IF D(K) < T THEN 160
180 IF K <= L THEN D = D(L): D(L) = D(K): D(K) = D: GOTO 150
190 P = P + 1: S = (L - I) < (J - K): IF S THEN L(P) = K: R(P) = J : J = L
200 IF NOT S THEN L(P) = I: R(P) = L: I = K
210 IF J - I > M THEN 130
220 IF P THEN I = L(P): J = R(P): P = P - 1: GOTO 210
230 REM STRAIGHT INSERTION SORT
240 FOR J = 2 TO N: IF D(J - 1) <= D(J) THEN 300
250 D = D(J): I = J - 1
260 D(I + 1) = D(I): I = I - 1: IF D < D(I) THEN 260
270 D(I + 1) = D
300 NEXT
999 RETURN

Timing the Sorts: I got out my trusty stopwatch and ran each sort program for 50, 100, 200, 400, and 800 items. Even though I used random numbers, I used the same seed every time so that the same list is being sorted by each method. Here are the results, in seconds.

Sort Algorithm          50   100   200   400   800
--------------------------------------------------
Bubble                  21    89   326   ---   ---
Search-for-Smallest     16    68   240   ---   ---
Insertion               10    41   135   612   ---
Shell                    8    19    48   114   291
Hibbard                  7    17    43   106   246
Quick                    6    15    35    88   194
Faster Shell             5    13    35    78   189
Quicker                  6    13    31    67   140
Singleton              4.5    10    24    54   118

All the times above should be taken as merely representative, because they will be different for different lists of values. Also, they will be longer if the sort subroutine is not near the front of your program, because of Applesoft's line number searching time.

All of the sort methods can easily be changed from ascending to descending order, by just changing "less than" to "greater than" and vice versa.

All of the sorts can also be modified to sort strings rather than floating point values. They can also be changed to carry along other parts of a "record" while you sort on a "key" within the record. So have fun modifying them! Happy sorting to you!
