
The Processor Thread

steve_bank

The ARM 64-bit instruction set. All processors have the same basic kinds of instructions (a C sketch follows the list):

Logic
Comparison
Bit operations
Arithmetic
Branching
Memory access
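
Here is a minimal sketch in C, one line per category. The comments name a typical AArch64 instruction a compiler might emit for each line; the exact instructions are my assumption, real code generation varies.

Code:
#include <stdint.h>

/* One C line per basic instruction category; comments name a typical
   AArch64 instruction a compiler might emit (actual codegen varies). */
uint64_t demo(uint64_t a, uint64_t b, uint64_t *mem)
{
    uint64_t x = a & b;   /* logic:          AND */
    uint64_t y = a + b;   /* arithmetic:     ADD */
    uint64_t z = a << 3;  /* bit operation:  LSL */
    if (a < b)            /* comparison:     CMP */
        x = y;            /* branching:      B.HS skips this when a >= b */
    *mem = z;             /* memory access:  STR (store) */
    return x + *mem;      /* memory access:  LDR (load) */
}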


What separated the Intel x86 from the Motorola 68k was memory architecture.

The 68k was flat: any user had full access to all memory.

x86 used segmented memory. I don't know about current processors, but originally you had 64k segments, and in the early days 64k was a large program. Segmentation was intended to support a multi-user OS where users or apps could be quickly switched in and out. That was called context switching: the entire state of one app was stored and another was switched in. (The real-mode address calculation is sketched below.)
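
For concreteness, a minimal sketch of the original 8086 real-mode address calculation: a 16-bit segment register is shifted left 4 bits and added to a 16-bit offset, giving a 20-bit physical address and 64k-long segments.

Code:
#include <stdint.h>
#include <stdio.h>

/* 8086 real mode: physical address = segment * 16 + offset.
   Each segment spans 64k, and many segment:offset pairs alias
   the same physical address. */
uint32_t phys(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    printf("%05X\n", phys(0xB800, 0x0000)); /* B8000 */
    printf("%05X\n", phys(0xB000, 0x8000)); /* also B8000 */
    return 0;
}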

With the 386, the instructions to manage apps became too complicated to handle by direct coding and required tools.

Real/Protected Mode. Early 286 and 386 users ran them as fast 8086 processors; protected mode was complicated.

 
An interesting feature of computer architectures is that pre-chip computers had much more variation in data sizes than chip ones. A chip computer is one with a Turing-complete CPU on a single computer chip.

 Word (computer architecture) - "In computing, a word is the natural unit of data used by a particular processor design."

Some early ones used decimal data, but most pre-chip computers used binary data or both binary and decimal data, and all chip computers use binary data.

I'll do fixed integer sizes - some architectures support varying numbers of decimal digits.
  • Decimal ones: 1, 2, 6, 8, 10, 12, 15, 16, 23, 24, (50) - the last decimal one was in 1960
  • Binary ones: 4, 6, 8, 9, 11, 12, 15, 16, 18, 20, 24, 25, 26, 27, 30, 31, 32, 33, 34, 36, 39, 40, 48, 50, 60, 64, 65, 72, 75, 79, 100
All the chip computers have power-of-2 sizes: 4, 8, 16, 32, 64

 Signed number representations - pre-chip computers used all three main possibilities:
  • Sign-magnitude: one bit the sign and the rest the magnitude
  • Ones complement: for negative, flip all the bits
  • Twos complement: for negative, flip all the bits and add 1 in unsigned fashion
Chip computers use only twos complement.
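
A quick sketch in C of all three encodings at 8 bits. This is illustrative only: real machines applied these rules at their full word width, and the negation here overflows for the most negative value.

Code:
#include <stdint.h>
#include <stdio.h>

/* Encode a small negative int in each 8-bit signed format. */
uint8_t sign_mag(int v)  { return v < 0 ? 0x80 | (uint8_t)-v : (uint8_t)v; }
uint8_t ones_comp(int v) { return v < 0 ? (uint8_t)~(uint8_t)-v : (uint8_t)v; }
uint8_t twos_comp(int v) { return v < 0 ? (uint8_t)(~(uint8_t)-v + 1) : (uint8_t)v; }

int main(void)
{
    printf("-5 sign-magnitude:  %02X\n", sign_mag(-5));  /* 85 */
    printf("-5 ones complement: %02X\n", ones_comp(-5)); /* FA */
    printf("-5 twos complement: %02X\n", twos_comp(-5)); /* FB */
    return 0;
}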

 Floating-point arithmetic - all the more recent chip computers use the IEEE 754 standard, which was established in 1985. I won't discuss it in detail here, but it combines conventions for the sizes of the exponent part and the fractional part with conventions for behavior under arithmetic.
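
As a taste of it, this sketch pulls apart the 32-bit single-precision layout: 1 sign bit, 8 exponent bits biased by 127, and 23 fraction bits with an implicit leading 1.

Code:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;                    /* -1.5625 * 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      /* reinterpret the bit pattern */
    uint32_t sign = bits >> 31;
    uint32_t expo = (bits >> 23) & 0xFF; /* biased exponent */
    uint32_t frac = bits & 0x7FFFFF;     /* fraction, implicit leading 1 */
    printf("sign=%u exponent=%u (unbiased %d) fraction=%06X\n",
           sign, expo, (int)expo - 127, frac);
    return 0;
}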
 
Early computers were binary and decimal, and there was one ternary computer: Setun (1959–1965). Yes, base-3. But after 1960, they were all binary, with some of them optionally supporting decimal numbers.
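
Setun used balanced ternary, with digits -1, 0, +1 rather than 0, 1, 2. A small sketch of the conversion, my own illustration rather than Setun's actual method:

Code:
#include <stdio.h>

/* Print a nonnegative integer in balanced ternary:
   '+' = +1, '0' = 0, '-' = -1, most significant digit first. */
void balanced_ternary(int n)
{
    char digits[32];
    int len = 0;
    if (n == 0) digits[len++] = '0';
    while (n != 0) {
        int r = n % 3;
        n /= 3;
        if (r == 2) { r = -1; n += 1; }  /* digit 2 becomes -1, carry 1 */
        digits[len++] = (r == -1) ? '-' : (r ? '+' : '0');
    }
    while (len--) putchar(digits[len]);
    putchar('\n');
}

int main(void)
{
    balanced_ternary(8);  /* prints +0- : 9 + 0 - 1 = 8 */
    return 0;
}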

As I'd posted earlier, pre-chip computers had a variety of integer sizes and the three main signed-integer formats, but chip computers all use power-of-2 integer sizes and twos-complement signed integers.

Early computers were one-off designs, and the first computers to share instruction-set compatibility were IBM's System/360 ones, introduced in 1964. Its successors are the System/370 series, introduced in 1970, the System/390 series, in 1990, and now the Z series, in 2000.

There were some other pre-chip architecture families, like the VAX.

Early chip computers were also one-offs, though later ones formed families like the Motorola 68K, the Intel x86/x64, the SPARC, the DEC Alpha, the POWER/PowerPC, and the ARM.
 