
The Processor Thread

steve_bank

The ARM 64-bit instruction set. All processors have the same basic kinds of instructions (a rough C illustration follows the list):

Logic
Comparison
Bit operations
Arithmetic
Branching
Memory access
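
As a rough illustration (my own example, not from the original post), each statement in this small C function compiles down to one or more of those categories, whether the target is ARM64, x86, or 68k:

#include <stdint.h>

uint32_t checksum(const uint32_t *data, int n)
{
    uint32_t sum = 0;
    for (int i = 0; i < n; i++) {    /* comparison and branching      */
        uint32_t word = data[i];     /* memory access (load)          */
        word ^= 0xA5A5A5A5u;         /* logic (exclusive OR)          */
        word <<= 1;                  /* bit operation (shift)         */
        sum += word;                 /* arithmetic (add)              */
    }
    return sum;                      /* register move and branch back */
}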


What separated Intel x86 and Motorola 68k was memory architecture.

The 68k was flat: any user had full access to all memory.

x86 used segmented memory. I don't know about current processors, but originally you had 64k segments, and in the early days 64k was a large program. It was intended to support a multi-user OS where users or apps could be quickly switched in and out. This was called context switching: the entire state of an app was stored and another switched in.
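
A minimal sketch of that idea (hypothetical register names and structures, not the actual 286/386 task-state layout): the OS copies out the complete CPU state of the running app, then copies in the saved state of the next one.

#include <string.h>

struct cpu_context {
    unsigned long regs[16];   /* general-purpose registers */
    unsigned long pc;         /* program counter           */
    unsigned long sp;         /* stack pointer             */
    unsigned long flags;      /* status flags              */
};

/* Save the outgoing app's state and load the incoming app's state. */
void context_switch(struct cpu_context *hw_state,
                    struct cpu_context *outgoing,
                    const struct cpu_context *incoming)
{
    memcpy(outgoing, hw_state, sizeof *outgoing);   /* store the entire state of the app */
    memcpy(hw_state, incoming, sizeof *hw_state);   /* switch the next one in            */
}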

With the 386, the instructions to manage apps became too complicated to code directly and required tools.

Real/Protected Mode. Early 286 and 386 users used them as fast 8086 processors; protected mode was complicated.

 
An interesting feature of computer architectures is that pre-chip computers had much more variation in data sizes than chip ones. A chip computer is one with a Turing-complete CPU on a single computer chip.

 Word (computer architecture) - "In computing, a word is the natural unit of data used by a particular processor design."

Some early ones used decimal data, but most pre-chip computers used binary data or both binary and decimal data, and all chip computers use binary data.

I'll do fixed integer sizes - some architectures support varying numbers of decimal digits.
  • Decimal ones: 1, 2, 6, 8, 10, 12, 15, 16, 23, 24, (50) - the last decimal one was in 1960
  • Binary ones: 4, 6, 8, 9, 11, 12, 15, 16, 18, 20, 24, 25, 26, 27, 30, 31, 32, 33, 34, 36, 39, 40, 48, 50, 60, 64, 65, 72, 75, 79, 100
All the chip computers have power-of-2 sizes: 4, 8, 16, 32, 64

 Signed number representations - Pre-chip computers used all three main possibilities:
  • Sign-magnitude: one bit the sign and the rest the magnitude
  • Ones complement: for negative, flip all the bits
  • Twos complement: for negative, flip all the bits and add 1 in unsigned fashion
Chip computers use only twos complement.
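
A quick sketch of those three formats (my own 8-bit example, showing the value -5):

#include <stdio.h>

int main(void)
{
    unsigned char magnitude = 5;

    unsigned char sign_mag  = 0x80 | magnitude;                 /* set the sign bit, keep the magnitude */
    unsigned char ones_comp = (unsigned char)~magnitude;        /* flip all the bits                    */
    unsigned char twos_comp = (unsigned char)(~magnitude + 1);  /* flip all the bits and add 1          */

    printf("sign-magnitude : 0x%02X\n", sign_mag);   /* 0x85 */
    printf("ones complement: 0x%02X\n", ones_comp);  /* 0xFA */
    printf("twos complement: 0x%02X\n", twos_comp);  /* 0xFB */
    return 0;
}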

 Floating-point arithmetic - all the more recent chip computers use the IEEE 754 standard, which was established in 1985. I won't discuss that here, but it is various conventions for the sizes of the exponent part and the fractional part, combined with various conventions for behavior under arithmetic.
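
For the single-precision format those parts are a 1-bit sign, an 8-bit exponent (biased by 127), and a 23-bit fraction. A small sketch (my own example) of pulling them apart:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = -6.25f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);          /* reinterpret the float's bit pattern */

    uint32_t sign     = bits >> 31;          /* 1 bit                  */
    uint32_t exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127  */
    uint32_t fraction = bits & 0x7FFFFF;     /* 23 bits of mantissa    */

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           sign, exponent, (int)exponent - 127, fraction);
    return 0;
}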
 
Early computers were binary and decimal, and there was one ternary computer: Setun (1959–1965). Yes, base-3. But after 1960, they were all binary, with some of them optionally supporting decimal numbers.

As I'd posted earlier, pre-chip computers had a variety of integer sizes and the three main signed-integer formats, but chip computers all use power-of-2 integer sizes and twos-complement signed integers.

Early computers were one-off designs, and the first computers to share instruction-set compatibility were IBM's System/360 ones, introduced in 1964. Its successors are the System/370 series, introduced in 1970, the System/390 series, in 1990, and now the Z series, in 2000.

There were some other pre-chip architecture families, like the VAX.

Early chip computers were also one-offs, though later ones formed families like the Motorola 68K, the Intel x86/x64, the SPARC, the DEC Alpha, the POWER/PowerPC, and the ARM.
 
Don't forget the Z80, 8051 and 6502. They were important.

The Commodore computers used the 6502.

The DEC VAX was an important computer. So was the PDP line.

The first company I worked at used a VAX for software development; it had an open bus architecture and 3rd-party plug-in boards. You might call it the PC of the day.


 
Miscellaneous musings about processors in the 1960's.

The IBM 1620 -- the first machine I ever programmed -- was a decimal machine. It did multiplications and even additions via lookup tables in low memory! Those tables were loaded by the run deck for every job in case the previous job inadvertently (or maliciously!) over-wrote those tables.
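
A toy simulation of that idea (my own, not the real 1620 memory layout): single-digit addition done by fetching the answer from a precomputed table instead of using an adder.

#include <stdio.h>

static int add_table[10][10];        /* the tables the run deck would load */

static void load_tables(void)
{
    for (int a = 0; a < 10; a++)
        for (int b = 0; b < 10; b++)
            add_table[a][b] = a + b;
}

int main(void)
{
    load_tables();                   /* reload for every job, in case the last job clobbered them */
    int a = 7, b = 8;
    int sum = add_table[a][b];       /* table lookup instead of an add instruction */
    printf("%d + %d = digit %d, carry %d\n", a, b, sum % 10, sum / 10);
    return 0;
}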

The CDC 6600 was the premier supercomputer of that era. (The CDC 6400 was a slower-speed variant.) It had a CPU to do the number-crunching and ten PPUs to handle I/O and job control. The CPU used 60-bit words; the PPU used 12-bit words; so ten or two 6-bit characters could be fit into the respective words. There was only one physical PPU, slow and primitive: It rotated through ten register sets to provide ten virtual PPUs. The CPU had 18-bit address registers and up to 2^17 60-bit words of main memory could be attached: that's almost 1 megabyte, very large for that era. Even more memory could be attached via an ECS (Extended Core Storage) option, but its contents had to be transferred to the main memory before use.

The CDC machines used one's-complement arithmetic, so there was both a positive zero and a negative zero. The only way to get negative zero as an integer arithmetic result was to add negative zero to negative zero (or equivalently to subtract positive zero from negative zero.)
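
A sketch of how that arises, using a 16-bit ones'-complement add with end-around carry (a simplified model of my own; the real CDC adder differed in detail):

#include <stdio.h>
#include <stdint.h>

/* Ones'-complement addition: add, then wrap any carry out of the top bit
   back into the bottom (the "end-around carry"). */
static uint16_t ones_add(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;
    if (sum > 0xFFFF)
        sum = (sum & 0xFFFF) + 1;
    return (uint16_t)sum;
}

int main(void)
{
    uint16_t neg_zero = 0xFFFF;                    /* all ones: negative zero      */
    uint16_t five = 5, neg_three = (uint16_t)~3u;  /* ~3 = -3 in ones' complement  */

    printf("-0 + -0 = 0x%04X\n", ones_add(neg_zero, neg_zero));   /* 0xFFFF: negative zero */
    printf(" 5 + -3 = 0x%04X\n", ones_add(five, neg_three));      /* 0x0002: plus two      */
    return 0;
}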

The format of floating point numbers had nice properties. If the numbers were normalized they could be compared as integers and give the correct floating-point compare. The results of operating on normalized numbers were always normalized. The floating-point multiply could also be used for integer multiply. Like IEEE-754 numbers, CDC could represent and generate ±Infinity and ±Indefinite.
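
The same trick works to a degree for IEEE 754: for non-negative values, comparing the raw bit patterns as unsigned integers gives the same ordering as a floating-point compare. A quick sketch (my own example):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Return the raw bit pattern of a single-precision float. */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    float a = 1.5f, b = 2.25f;
    printf("float compare  : %d\n", a < b);                         /* 1 */
    printf("integer compare: %d\n", float_bits(a) < float_bits(b)); /* 1 */
    return 0;
}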

During the mid 1970's I became very familiar with IBM mainframes from the 370/135 all the way up to the 370/3033. If there's interest I may summarize those models.
 
The 68k and x86 had the same basic instructions. If you learned one processor you learned them all.

Intel won the battle with Motorola for PC processor dominance because they maintained software compatibility.

Those using the 6800-09 had to rewrite code from scratch for the 68k.
 