• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Encoding Numbers

steve_bank

A Gray code is a binary code in which only one bit changes at a time. One modern use is in rotary position encoders. Reading a disk punched with straight binary by shining light through the holes is problematic: it is hard to unambiguously resolve multiple bits changing at the same time.

A simple math technique with wide usage.



The reflected binary code (RBC), also known as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).

For example, the representation of the decimal value "1" in binary would normally be "001" and "2" would be "010". In Gray code, these values are represented as "001" and "011". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.

Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice.[3]
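The conversion between straight binary and Gray code is a pair of one-line XOR tricks. A minimal sketch in Python (the function names are mine):

```python
def binary_to_gray(n: int) -> int:
    # Each Gray bit is the XOR of two adjacent binary bits.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Undo the XOR cascade from the top bit downward.
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

# Successive values differ in exactly one bit:
for i in range(8):
    print(i, format(binary_to_gray(i), '03b'))
```

Running the loop reproduces the example above: 1 -> 001 and 2 -> 011.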


BCD Binary Coded Decimal

You can do arithmetic directly in BCD. Some early processors implemented BCD arithmetic in hardware.


In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow).

In byte-oriented systems (i.e. most modern computers), the term unpacked BCD[1] usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3).
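A sketch of packed BCD, two digits per byte (the helper names are mine, and this simple version handles only non-negative numbers with no sign nibble):

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD, two decimal digits per byte."""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:
        digits.insert(0, 0)   # pad to an even digit count
    return bytes((digits[i] << 4) | digits[i + 1]
                 for i in range(0, len(digits), 2))

def from_packed_bcd(b: bytes) -> int:
    """Decode packed BCD back to an integer, one byte (two digits) at a time."""
    n = 0
    for byte in b:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

print(to_packed_bcd(1234).hex())   # '1234' - the hex digits ARE the decimal digits
```

A nice property of packed BCD is visible in the output: the hex dump of the bytes reads directly as the decimal number.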


Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string.
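The place-value rule above is just a polynomial in the base, and Horner's method evaluates it with one multiply and one add per digit. A quick illustration (hypothetical helper):

```python
def digits_to_value(digits, base):
    # Horner's method: value = ((d0*base + d1)*base + d2) ...
    value = 0
    for d in digits:
        value = value * base + d
    return value

# 555 = 5*100 + 5*10 + 5
print(digits_to_value([5, 5, 5], 10))
```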
 
There are lots of ways to do encoding of numerical values.

Integers are the easiest to start on, and other representations are composed of integers.

Unsigned integers are easy: 0, 1, 2, 3, ... in whatever number base one wants. Two-state circuitry is the easiest multistate circuitry to make, and that is why binary encoding is the most common kind. But some ternary computers have been built, like  Setun in the Soviet Union in 1958.

There are various ways of encoding decimal digits with binary numbers. A naive way is to have ten bits, with one bit on and all the other bits off. A related system is  Bi-quinary coded decimal - break up the digits into 0 to 4: (0, 0 to 4) and 5 to 9: (1, 0 to 4), and then use two bits for the fives part and five bits for the ones part. But a common system is  Binary-coded decimal - use 4 bits and encode the digits as binary 0 to 9.
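A sketch of the one-hot bi-quinary encoding described above (the bit ordering here is an assumption; real machines such as the IBM 650 had their own conventions):

```python
def biquinary(d: int) -> str:
    # Two one-hot "bi" bits (selecting 0-4 vs 5-9) followed by
    # five one-hot "quinary" bits (selecting the remainder 0-4).
    assert 0 <= d <= 9
    bi = '01' if d < 5 else '10'
    quinary = ['0'] * 5
    quinary[4 - d % 5] = '1'   # assumed order: quinary value 0 is the rightmost bit
    return bi + ''.join(quinary)

for d in range(10):
    print(d, biquinary(d))
```

Every digit has exactly two bits set, which is what makes single-bit errors easy to detect.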

For binary itself, a naive system is to use two bits, one for 0 and one for 1, but that is hardly ever done. All one needs is one bit.
 
Should one use fixed or variable numbers of bits for each number? Fixed numbers of bits make the circuit design MUCH easier. But which fixed numbers?  Word (computer architecture)

Pre-CPU-chip computers had a lot of variation in bits per integer, involving not only powers of two but also lots of other prime factors: 3, 5, 11, 13, 31. But chip CPUs universally use powers of 2: 2^3 = 8, 2^4 = 16, 2^5 = 32, 2^6 = 64.

How should one represent negative numbers?  Signed number representations notes several ways of doing that. Here also, pre-CPU-chip computers had a lot of variation, while chip CPUs universally use twos complement.

Sign-magnitude is the easiest to picture, with a sign bit and with the rest of the bits being the magnitude:

-3: 111, -2: 110, -1: 101, 0: 000, 1: 001, 2: 010, 3: 011

There is also a "minus zero": 100. Some pre-CPU-chip computers used it.

Ones complement has the negative numbers represented by positive numbers with the bits flipped:

-3: 100, -2: 101, -1: 110, 0: 000, 1: 001, 2: 010, 3: 011

A minus zero appears there also: 111. Some pre-CPU-chip computers used it.

Twos complement is like ones complement, but with 1 added to the negative numbers' representations, and with an additional negative number:

-4: 100, -3: 101, -2: 110, -1: 111, 0: 000, 1: 001, 2: 010, 3: 011

Some pre-CPU-chip computers used it, and all chip CPUs do.

Offset binary:

-4: 000, -3: 001, -2: 010, -1: 011, 0: 100, 1: 101, 2: 110, 3: 111

This is twos complement with the top bit flipped. It is mainly used for exponents in floating-point numbers.
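The four 3-bit schemes above can be tabulated with a few lines of Python (a sketch; the function names are mine):

```python
BITS = 3
MASK = (1 << BITS) - 1

def sign_magnitude(n):
    # Top bit is the sign, remaining bits are |n|.
    return n if n >= 0 else (1 << (BITS - 1)) | -n

def ones_complement(n):
    # Negatives are positives with all the bits flipped.
    return n if n >= 0 else ~(-n) & MASK

def twos_complement(n):
    # Masking a Python int gives the wraparound representation directly.
    return n & MASK

def offset_binary(n):
    # Shift the range so the most negative value encodes as all zeros.
    return n + (1 << (BITS - 1))

# -4 is representable only in twos complement and offset binary,
# so the shared table runs from -3 to 3.
for n in range(-3, 4):
    print(n, format(sign_magnitude(n), '03b'),
          format(ones_complement(n), '03b'),
          format(twos_complement(n), '03b'),
          format(offset_binary(n), '03b'))
```

The printed table matches the four listings above, minus zeros excepted.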
 
 Floating-point arithmetic

Floating-point numbers are universally represented in a binary form of scientific notation, with the bits divided up as

(sign) (exponent) (fractional part or mantissa or significand)

Here also, pre-CPU-chip computers had a lot of variation, while chip CPUs almost universally use the  IEEE 754 specifications.
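One can pull the three fields out of a 64-bit IEEE 754 double with Python's struct module (a sketch; note that the exponent comes out in biased/offset form, with a bias of 1023):

```python
import struct

def decompose(x: float):
    # Reinterpret the double's 64 bits as an unsigned integer.
    (bits,) = struct.unpack('>Q', struct.pack('>d', x))
    sign = bits >> 63                    # 1 bit
    exponent = (bits >> 52) & 0x7FF      # 11 bits, stored with a bias of 1023
    fraction = bits & ((1 << 52) - 1)    # 52 bits; normal numbers have an implicit leading 1
    return sign, exponent, fraction

print(decompose(1.0))   # (0, 1023, 0): 1.0 = +1.0 * 2^(1023-1023)
```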


That covers hardware-supported numbers. Software can go much further, of course.


Bignums -  Arbitrary-precision arithmetic - let integers grow arbitrarily large and allow a floating-point precision much greater than what the hardware supports.
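Python illustrates both halves of this: its built-in integers are already bignums, and its standard decimal module provides floating point at a user-chosen precision:

```python
from decimal import Decimal, getcontext

# Integers grow as large as memory allows.
print(2 ** 200)

# Floating point with 50 significant digits, far beyond a hardware double's 15-16.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))
```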

Fractions can be handled as pairs of integers, the numerator and the denominator, with arithmetic defined appropriately.
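A sketch of that pair-based rational arithmetic (Python's standard fractions.Fraction does this for real; the helper names here are mine):

```python
from math import gcd

def frac_add(a, b):
    # p1/q1 + p2/q2 = (p1*q2 + p2*q1) / (q1*q2), then reduce by the gcd.
    p = a[0] * b[1] + b[0] * a[1]
    q = a[1] * b[1]
    g = gcd(p, q)
    return (p // g, q // g)

def frac_mul(a, b):
    # (p1/q1) * (p2/q2) = (p1*p2) / (q1*q2), then reduce by the gcd.
    p, q = a[0] * b[0], a[1] * b[1]
    g = gcd(p, q)
    return (p // g, q // g)

print(frac_add((1, 2), (1, 3)))   # (5, 6)
```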

Complex numbers can be handled as pairs of whatever numbers go into them, the real part and the imaginary part, with arithmetic defined appropriately.
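The same pair idea works for complex numbers, with multiplication following (x1 + i*y1)(x2 + i*y2) = (x1*x2 - y1*y2) + i*(x1*y2 + y1*x2). A sketch with hypothetical helper names:

```python
def cadd(a, b):
    # Add real parts and imaginary parts componentwise.
    return (a[0] + b[0], a[1] + b[1])

def cmul(a, b):
    # (x1 + i*y1)*(x2 + i*y2) = (x1*x2 - y1*y2) + i*(x1*y2 + y1*x2)
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

print(cmul((0, 1), (0, 1)))   # (-1, 0): i*i = -1
```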

Other algebraic extensions of the integers or the rational numbers can be handled similarly.
 
 Word (computer architecture) - hardware integer sizes:

4, 6, 8, 9, 11, 12, 15, 16, 18, 20, 22, 24, 25, 26, 27, 30, 31, 32, 33, 34, 36, 39, 40, 48, 50, 60, 64, 65, 72, 75, 100

Factoring:

2^(2 to 6)
3 * 2^(1 to 4)
3^2 * 2^(0 to 3)
3^3
5 * 2^(2 to 3)
5 * 3 * 2^(0 to 2)
5^2 * 2^(0 to 2)
5^2 * 3
11 * 2^(0 to 1)
11 * 3
13 * 2
13 * 3
13 * 5
17 * 2
31

A big mess.
 