Edit: Having reread this, I should emphasize that I really was talking about the worst case (i.e., naive tree code with no balancing whatsoever). It took me a few minutes to realize what I was referring to.
If you wish to use an array-based binary search tree in assembly, your worst case (on n elements) is an array of 2^n − 1 slots: a fully right-leaning chain pushes the deepest element to the last index of level n − 1. So for a trivial number of elements (n = 100) you need an array of roughly 1.3 × 10^30 slots, i.e., on the order of a thousand billion billion billion, if I did the conversion correctly (2^100 ≈ 1.27 × 10^30; 30 / 9 = 3 r 3).
And that’s just not happy. I can see why we climbed the ladder of abstraction. Its best case is 2^7 − 1 = 127 slots for that n (a perfectly balanced tree of 100 elements is 7 levels deep). Doable with a heap structure. Problem there is that for a green assembly programmer such as myself, a heap would be much more difficult to write than the BST would’ve been.
So that leaves slower sorts such as insertion sort, which requires a ton of shuffling elements around, or something along the lines of radix sort with buckets (a.k.a. bins). Still a pain. A pure bucket sort would work if you track the min and max as elements are added to the initial array, but I’m not keen on dynamically allocating the second array (of |max − min| + 1 items).
Insertion sort is the only one of those that isn’t too hairy to get done, even if it’s agonizing to resort to it for the entire job. Just be glad that, if it came down to it, gcc could convert a good implementation of radix or heap or even BST sort to assembly for you/me. That, and hope that Java and other high-level languages will keep improving speed-wise (especially as Java’s source gets tweaked by the FOSS community).