
Old Computer Architectures

Gosh, this thread brought back memories.

My first machine language was 8088. I liked it (I used to be weird that way) but I always thought it sad that there were not more registers with which to work.

From (very long-term) memory, which oft fails:

Code Segment, Data Segment, Extra Segment... there's another I forget
AX, BX, CX (which doubled as a counter), and DX. (I thought an additional four would have made my life easier.)
Oh, and the Instruction Pointer

I know I'm forgetting several. It was fun as I remember it. I never got very good at it but I had a good time trying.
 
Of course, the top of the stack on desktops (Intel/AMD) is almost always in the L1 cache, and on those rare occasions when it isn't, it is almost always in the L2 cache.

Further, the stack already has special addressing modes: a displacement from the current stack pointer (the ESP register) as an 8-bit, 16-bit, or 32-bit signed value, plus a second, implicit stack-relative base, the frame pointer (the EBP register), with its own displacement forms.
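
For instance (32-bit, NASM-style syntax, just to illustrate):

    mov eax, [esp + 8]          ; stack-pointer relative, 8-bit signed displacement
    mov eax, [esp + 0x12340]    ; the same mode with a 32-bit displacement
    mov eax, [ebp - 4]          ; frame-pointer relative (defaults to the SS segment)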

And the most important point is that on the x86 derivatives the stack has absolutely no implicit size. It's up to the software to decide that sort of thing (i.e., the be-all, end-all generic solution).

Off the top of my head, the current x86 has these basic addressing schemes:

....
And I know I left out the fact that all of these can be prefixed by a segment/selector override (most instructions presume the DS segment/selector, while ES, FS, GS, and SS are also options with the proper override prefix).
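
Roughly, the kinds of operands I have in mind look like this (NASM-style syntax, written from memory, so treat it as a sketch):

    mov eax, 42                   ; immediate
    mov eax, ebx                  ; register to register
    mov eax, [0x00400000]         ; direct (absolute) address
    mov eax, [ebx]                ; register indirect
    mov eax, [ebx + 8]            ; base plus displacement
    mov eax, [ebx + esi*4]        ; base plus scaled index
    mov eax, [ebx + esi*4 + 12]   ; base plus scaled index plus displacement
    mov eax, [es:ebx + 8]         ; any of the above with a segment override prefix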

I don't see how a registerless system can mimic all these addressing modes without A) using more than one instruction to do what can currently be done with one instruction, or B) being just as complex as the current system, in which case it's just a transformation of the current situation.

Edited to add:

On top of it all, under the hood a modern CISC processor is really a RISC processor. The transformation step is virtually 'free' in terms of performance because the transformations take place on separate circuitry, in parallel, while previously transformed instructions are being executed. The only time it matters is when the instruction pipeline has to be flushed, and in those cases the penalty is already so large (think of an 8-man bucket brigade) that the extra clock cycle (using 8 men instead of 7) to do the transform is only a minor issue. Ideally you never flush the instruction pipeline, and that's the real solution.

The big RISC thing was that all addressing was just direct to memory, as the many addressing modes of an architecture like the x86 and 68xxx are just not used often enough to make the overhead, and the chip real estate, of having them worthwhile. RISC then went to the other extreme of having lots of registers, which were just as hard to manage as redundant addressing modes. The register-free architecture would actually have registers, or a level-0 cache, but these would be purely the top of the stack, say. Addressing into the stack could be made quite flexible, with offsets hard-coded in the instruction, say 8 bits, which would give you quick access to the top 256 words of the stack, effectively 256 registers.
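
Concretely, an instruction on such a machine might look something like this (made-up mnemonics, purely to sketch the idea; tos stands for the hardware top-of-stack pointer):

    add  [tos + 3], [tos + 7]    ; add the word 7 slots down into the word 3 slots down
    mov  [tos + 0], [tos + 9]    ; shuffle values around near the top of the stack

with every offset an 8-bit field in the instruction, so the top 256 stack slots behave like one big register file.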
 
There were also SS (stack segment), SP (stack pointer), BP (base pointer), SI (source index) and DI (destination index).

AX-DX were 16-bit compositions of 8-bit pairs (e.g., AH and AL); the others were 16-bit only.
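
For instance, you could build up a 16-bit value a byte at a time:

    mov al, 0x34    ; the low byte of AX
    mov ah, 0x12    ; the high byte; AX is now 0x1234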

In compiler generated code and some assembly-language, BP usually pointed to a function's call frame, so regardless of how much gunk you pushed on the stack, you could always find parameters and local variables relative to BP. Also tremendously useful for debugging when trying to figure out where the hell you were. SI and DI were used for indirect addressing modes in the data segment (as was BX; BP defaulted to the stack segment) as well as "string" operations like movs and scas.
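
A typical frame, plus a string copy, looked something like this (16-bit code, roughly MASM-flavored; my_proc, src, and dst are just placeholder labels):

    my_proc:
        push bp
        mov  bp, sp          ; establish the call frame
        sub  sp, 4           ; room for locals
        mov  ax, [bp + 4]    ; first parameter, above the saved BP and the return address
        mov  [bp - 2], ax    ; a local variable, below the frame pointer

        cld                  ; string ops count upward
        mov  si, offset src  ; DS:SI is the source
        mov  di, offset dst  ; ES:DI is the destination
        mov  cx, 100
        rep  movsb           ; copy CX bytes

        mov  sp, bp
        pop  bp
        ret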

I did a lot of x86 assembly language, and it's stuck in there for good unless I get some serious brain damage.
 
I read Adam Osborne's books on microprocessor architecture from cover to cover, and the only one that stood out in my mind as being insanely stupid was the 8086. Just about everything else had some sort of logic or elegance to it: 68000, TMS9900, PDP-11, 8080, 6502, etc.
 
I disagree that an 8-bit offset should give access to the top 256 machine words, because that's too restrictive and it is a built-in backward-compatibility bomb. It should give access to the top 256 units of memory at its fundamental granularity, which is the machine byte (8 bits on most processors, although there are exceptions such as the PDP). An offset measured in words spans 512 bytes on a 16-bit machine but 2,048 bytes on a 64-bit one, so the same encoded instruction reaches different memory on every generation; an offset measured in bytes never changes meaning.

Machine words have changed size three times (8 -> 16 -> 32 -> 64) in the last 30 years and there is no reason to believe that this trend won't continue.

Now consider this situation:

The processor, due to its deep pipeline, could have 7 or 8 instructions "in transit," all somewhere between being fetched and being finalized.

Now imagine that one of those instructions makes a change to the stack pointer!!

Is your head spinning yet?
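
Just to show how ordinary that is, a completely mundane sequence touches ESP on nearly every instruction:

    push ebp              ; implicitly: esp = esp - 4, then store
    mov  ebp, esp
    sub  esp, 16          ; explicit change, making room for locals
    mov  eax, [esp + 4]   ; this operand depends on whatever esp is by the time it executes
    push eax              ; esp changes again
    add  esp, 20          ; and again when the stack is cleaned up

Every stack-relative operand in flight depends on the running value of the stack pointer, which is exactly the kind of serial dependency a deep pipeline hates.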

Edited to add:

Intel is pushing hard to move away from the stack-based FPU on its machines, recommending that compilers use the SSE registers and instructions for floating-point work (SSE isn't just a SIMD extension).
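
The difference in flavor, for a single double-precision multiply (NASM-style; a, b, and c are just stand-in memory locations):

    ; x87: everything goes through the 8-deep register stack
    fld   qword [a]       ; push a onto the x87 stack
    fmul  qword [b]       ; st0 = st0 * b
    fstp  qword [c]       ; pop the result into c

    ; scalar SSE2: flat registers, no stack juggling
    movsd xmm0, [a]
    mulsd xmm0, [b]
    movsd [c], xmm0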
 
Although 8 and 16 bits are pretty small for integer sizes, the main motivation to go 16->32->64 bits is pointer size, not integer size. 4GB of memory is too small for some applications, just as 64KB was (even with the various hacks to stretch it) in previous generations.
 
There are plenty of other advantages to 64-bit integers, and by my estimation those advantages are more widely leveraged than addressing beyond 4GB. For example, one 64-bit integer can hold eight 8-bit integers, and these can be manipulated in parallel (similar to SIMD, except not explicit). On top of that, modern machines can execute multiple integer instructions per cycle, so you can actually manipulate sixteen or even twenty-four 8-bit integers simultaneously. Completely blows MMX out of the water.
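
Here is the flavor of the trick: a bytewise add of eight packed 8-bit values held in 64-bit registers (x86-64, NASM-style; the usual mask-and-patch approach, sketched from memory):

    ; in:  rax = a, rbx = b, each holding eight packed bytes
    ; out: rax = the per-byte sums (wrapping), with no carries leaking between bytes
    mov  r8,  0x8080808080808080   ; the top bit of every byte
    mov  rcx, rax
    xor  rcx, rbx                  ; a XOR b; only its top bits are needed later
    and  rcx, r8
    mov  rdx, r8
    not  rdx                       ; 0x7f7f... mask for the low 7 bits of every byte
    and  rax, rdx
    and  rbx, rdx
    add  rax, rbx                  ; the adds happen per byte; no carry can cross a byte boundary
    xor  rax, rcx                  ; fold the top bits (and the carries into them) back in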

Now jump up to 128-bit integers... 32 to 48 bytes processed per clock cycle! I drool for that day, and I suspect so do many low-level programmers.
 
