
Calculability

Mathematicians recognize three different kinds of infinity:
  • The limit of arbitrarily large finite numbers.
  • The cardinality, or number of members of a set.
  • The ordinal, or position in a sequence.

Models of computation: from memoryless to Turing machines
  • Memoryless system: no internal state. Output = f(input).
  • Finite-state machine: internal state register. New value = f(input, old value).
  • Pushdown automaton: a FSM with a pushdown stack, a LIFO (last-in-first-out) memory.
  • Turing machine: a FSM with a memory tape that can be moved arbitrarily in either direction.
Turing machines are equivalent to some other models of computation, the lambda calculus and the "general recursive" functions; see "Doing everything with functions: the lambda calculus".
 Turing completeness
To show that something is Turing-complete, it is enough to show that it can be used to simulate some Turing-complete system. For example, an imperative language is Turing-complete if it has conditional branching (e.g., "if" and "goto" statements, or a "branch if zero" instruction; see one-instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of data items). Of course, no physical system can have infinite memory; but if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete.
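As a concrete illustration (my own sketch, not drawn from any of the systems below): the "one-instruction set computer" mentioned there can be simulated in a few lines. The subleq machine here has exactly the two required ingredients, conditional branching and modifiable memory; the program layout and values are just a made-up demo.

```python
# Minimal sketch of a subleq one-instruction computer.
# Each instruction is three cells (a, b, c):
#   mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
# Given unbounded memory, this single instruction is Turing-complete.

def subleq(mem, pc=0):
    while 0 <= pc < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Demo: add x (cell 9) into y (cell 11) via a scratch cell (cell 10).
prog = [
    9, 10, 3,     # scratch -= x  (scratch becomes -x)
    10, 11, 6,    # y -= scratch  (i.e. y += x)
    0, 0, -1,     # halt by jumping out of memory
    7, 0, 5,      # x = 7, scratch = 0, y = 5
]
subleq(prog)
print(prog[11])   # -> 12
```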
I must say that I haven't seen Turing-completeness assessments for any of these systems, but these systems are credited with being the first Turing-complete systems of their kind:
  • Proposed hardware: Charles Babbage's Analytical Engine
  • Actual hardware: ENIAC
  • Integrated-circuit chip: Intel 4004
 
An interesting curiosity of finite-state machines is their relationship with abstract algebra.

A FSM's operation can be expressed as an input causing a change of internal state: (final state) = f(initial state)

Applying input f and then input g gives (final) = g(f(initial)). The composite can itself be interpreted as a kind of input: the function composition h = f*g, reading left to right (first f, then g).

Since function composition is associative, FSM inputs form an abstract-algebra entity called a semigroup, and one can use the mathematics of semigroups to analyze FSMs. See: Automata theory, Semiautomaton, Semigroup, Krohn–Rhodes theory, Green's relations.

There is a special kind of operation that one can add, a do-nothing operation or identity e: e(state) = state for every state. If it is present, then this semigroup is a monoid.
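Here's a tiny sketch of that algebra in action (mine, purely illustrative): represent each FSM input by the state-transition function it induces, and check associativity and the identity directly.

```python
# FSM inputs as a transition monoid: states are 0..2, and each input
# is the function it induces on states, stored as a tuple t with
# t[s] = new state.

def compose(f, g):
    """The product f*g: apply f first, then g."""
    return tuple(g[f[s]] for s in range(len(f)))

f = (1, 2, 0)    # transition function of one input
g = (0, 0, 2)    # transition function of another
h = (2, 2, 1)
e = (0, 1, 2)    # the do-nothing input: e(state) = state

# Function composition is associative...
assert compose(compose(f, g), h) == compose(f, compose(g, h))
# ...and e is a two-sided identity, so the inputs form a monoid.
assert compose(e, f) == compose(f, e) == f
```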
 
Arithmetic with infinite cardinal numbers is fun!

Suppose that a and b are both cardinal numbers; 2 ≤ b ≤ a; and a is infinite. Then
  • a + b = b + a = a
  • a × b = b × a = a
  • b^a = 2^a
  • |a!| = 2^a // there are as many subsets of an infinite set a as there are permutations of a

The fourth of these propositions may be the least obvious. We can confirm it for the simplest case (a = ℵ₀) by constructing an injection from the permutations of ℕ = {1,2,3,4,...} to the subsets of ℕ. I don't see how to do that off-hand, so will leave it as an exercise.

An earlier post defines injective function, etc.

A bijection is a function \(f: D \rightarrow E\) which is one-to-one and onto. This means
f must be
  • a proper function, i.e.
    \(\forall x \in D, f(x) \in E\)
  • injective, i.e.
    \(\forall x, x' \in D, f(x) = f(x') \Rightarrow x = x'\)
  • surjective, i.e.
    \(\forall y \in E, \exists x, f(x) = y\)


When D and E are finite, a bijection exists if and only if D and E have the same size (cardinality). In that case any injection f will be a bijection, with inverse \(f^{-1}: E \rightarrow D\). The craft is to find an elegant such bijection, e.g. as in the solution to the problem posed.

If D and E are infinite, the existence of injections in both directions still guarantees there is a bijection, though neither injection need itself be a bijection; from the two injections a bijection can be constructed using König's constructive version of CSB. The guarantee is made by the very famous and important Cantor-Schröder-Bernstein (CSB) Theorem ...

... the Schröder-Bernstein Theorem's very name is confused. Dedekind and Cantor each proved it before either Bernstein or Schröder did; and Schröder's proof wasn't even correct. Bernstein does deserve good credit: His was the first proof not to use the Axiom of Choice. (This was in the days before AC was well recognized.)

Find a bijection from the positive rationals Q+ to the counting numbers N.
\(p/q \rightarrow 2^p3^q\) (p,q in lowest terms) is one injection, and \(m \rightarrow m/1\) an injection the other way, but their König composite is grotesque. Find a more elegant and straightforward bijection between rationals and integers. (As an example of some of the Hundred Greatest Theorems being too easy, this one is #3 !)
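One elegant answer I know of, offered here as a sketch rather than *the* intended solution, is the Calkin–Wilf enumeration: a single recurrence that visits every positive rational exactly once, in lowest terms.

```python
from fractions import Fraction
from math import floor

def calkin_wilf():
    """Enumerate every positive rational exactly once (Calkin-Wilf order)."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)   # the Calkin-Wilf recurrence

gen = calkin_wilf()
print([str(next(gen)) for _ in range(8)])
# -> ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']
```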

Another exercise, harder than it may seem at first glance, is to construct a bijection from the subsets of ℕ to
the real numbers or an interval thereof. Use the real-valued points in (0,1) or [0,1) or all the reals, whichever is convenient.
 
On the topic of Turing-completeness, cellular automata with that property are interesting. IIRC, Conway's Game of Life was shown to be computationally complete. Stephen Wolfram has done a lot of work with such automata.

I once had fun constructing a Turing-complete automaton on the hexagon grid. It used three states; the rules were rotationally invariant. Briefly:
  • Since a computer can be built from Nand-gates and wires, all you have to do is implement those.
  • The wires need to bend and fan-out, so are actually much harder to implement than the Nand-gates! In my design, one of the three states denoted a plus-valued signal on a wire; another state did double-duty: minus-valued signal or marking at wire-bends; the third state did double-duty: insulation and plus-to-minus transition (needed to give signals directionality). There were enough left-over combinations to implement the Nand-gate.
  • A memory cell is constructed by cross-wiring two Nand-gates.
  • Computers often have or need oscillators, but these can be built from an invertor and a wire.
  • In a circuit diagram, a wire often needs to be shown crossing another wire without touching it. How do you implement such "crossovers" in a cellular automaton drawn on the plane?

The cross-over problem looks severe. But decades ago Martin Gardner showed, in Scientific American, how to implement a cross-over in the plane using Nand-gates!
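The construction I've seen (which may or may not be Gardner's exact one) builds the crossover from three Xor gates, each of which can in turn be made from four Nands. A quick truth-table check that the Xor diamond really swaps the signals:

```python
# Planar crossover from three XOR gates: the middle gate computes
# a ^ b, and XORing that with each input recovers the *other* input,
# so the two signals swap sides without any wires crossing.
for a in (0, 1):
    for b in (0, 1):
        mid = a ^ b
        out_right = a ^ mid    # lands where b's wire continues: equals b
        out_left = b ^ mid     # lands where a's wire continues: equals a
        assert (out_left, out_right) == (a, b)
```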
 
Suppose that a and b are both cardinal numbers; 2 ≤ b ≤ a; and a is infinite. Then
  • |a!| = 2^a // there are as many subsets of an infinite set a as there are permutations of a

The fourth of these propositions may be the least obvious. We can confirm it for the simplest case (a = ℵ₀) by constructing an injection from the permutations of ℕ = {1,2,3,4,...} to the subsets of ℕ. I don't see how to do that off-hand, so will leave it as an exercise.


A permutation f is a set of ordered pairs (x, f(x)). The set of integers {2^x·3^f(x) | x ∈ ℕ} defines the desired injection.
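A quick check of that encoding (my own sketch, on a finite prefix of the permutation): nothing is lost, because x and f(x) can be read back off each element by counting factors of 2 and 3, so distinct permutations give distinct sets.

```python
# Encode a permutation f as the set {2^x * 3^f(x)}; decode by factoring.

def encode(perm):
    """perm is a dict x -> f(x), a finite prefix of the permutation."""
    return {2**x * 3**fx for x, fx in perm.items()}

def decode(s):
    """Recover the ordered pairs (x, f(x)) from the encoded set."""
    pairs = {}
    for n in s:
        x = fx = 0
        while n % 2 == 0:
            n //= 2
            x += 1
        while n % 3 == 0:
            n //= 3
            fx += 1
        pairs[x] = fx
    return pairs

p = {1: 2, 2: 1, 3: 3}           # a permutation of {1, 2, 3}
assert decode(encode(p)) == p     # the encoding loses no information
```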

 
To be pedantic, any logic can be constructed by NAND-NOR combinations, or AND-OR combinations. The truth tables are the same.

Look at DeMorgan's Theorem. Also Mealy and Moore finite state machines. Basic models for sequential logic.

The question of how to find the minimum number of logic gates needed to implement a Boolean-algebra equation was once a hot theoretical research topic; it has since been reduced to software algorithms.
 
To be pedantic, any logic can be constructed by NAND-NOR combinations, or AND-OR combinations. The truth tables are the same.

Not quite.
Nand gates by themselves are enough, as are Nors. (You don't need both Nands and Nors.)

On the other hand, And's or Or's are not enough, even if you combine them: Ands and Ors. (How do you build an invertor? :) )

ETA: Four-plus decades ago I designed high-speed digital logic. We used the MECL 10K family (NPN bipolar). Ands and Nands were slower (3 nanosec on one input leg, 4.3 nanosec for the other) and wanted 5.2 volts, while Ors and Nors were 3 nanosec for either leg, and could get by with 4.2 volt power supply.
 
To be pedantic, any logic can be constructed by NAND-NOR combinations, or AND-OR combinations. The truth tables are the same.

Not quite.
Nand gates by themselves are enough, as are Nors. (You don't need both Nands and Nors.)

On the other hand, And's or Or's are not enough, even if you combine them: Ands and Ors. (How do you build an invertor? :) )

ETA: Four-plus decades ago I designed high-speed digital logic. We used the MECL 10K family (NPN bipolar). Ands and Nands were slower (3 nanosec on one input leg, 4.3 nanosec for the other) and wanted 5.2 volts, while Ors and Nors were 3 nanosec for either leg, and could get by with 4.2 volt power supply.

Yes I should have said and, or, not.

Interesting, I designed high speed digital video systems and serial video distribution systems. SMPTE256. Are you familiar with FPGAs like Xilinx or Cypress?


How would you implement the following truth table with only nor?

a, b, c inputs and d out
a b c d
0 0 0 1
1 0 1 1
1 0 0 1
0 1 1 0
1 1 1 0
1 1 1 0
 
To be pedantic, any logic can be constructed by NAND-NOR combinations, or AND-OR combinations. The truth tables are the same.

Not quite.
Nand gates by themselves are enough, as are Nors. (You don't need both Nands and Nors.)

On the other hand, And's or Or's are not enough, even if you combine them: Ands and Ors. (How do you build an invertor? :) )
Good question. Good answer: you carry around every signal in the design in both polarities. Whether a given logical signal is a 1 or a 0 is represented by whether the two physical nets embodying it are set to 0-1 or 1-0. Then you can build an invertor simply by crossing the wires. Steve was perfectly right.
 
Yes, I should have said and, or, not.

{and, or, not} and {nand, nor, not} are functionally equivalent in terms of implementing logic at the Boolean-equation level; they are connected through DeMorgan's Theorem. The theorem is foundational in digital logic.

https://en.wikipedia.org/wiki/De_Morgan's_laws



b = !a not
a b
0 1
1 0

(a&b) = c and
(a|b) = c or
!(a&b) = c nand
!(a|b) = c nor

DeMorgan’s Theorem

!(a&b) = (!a|!b) = nand
!(a|b) = (!a&!b) = nor



a b a&b !(a&b) (a|b) !(a|b) (!a&!b) (!a|!b)
0 0 0 1 0 1 1 1
0 1 0 1 1 0 0 1
1 0 0 1 1 0 0 1
1 1 1 0 1 0 0 0
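That table can also be checked mechanically; a brute-force sweep over the four input pairs (my own two-liner, for illustration):

```python
# Verify both DeMorgan laws over all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert (not (a and b)) == ((not a) or (not b))   # !(a&b) = !a|!b
        assert (not (a or b)) == ((not a) and (not b))   # !(a|b) = !a&!b
```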
 
Turing machines are constructed of wires and logic gates, as microprocessors are. However, one cannot be built from combinational logic alone; there has to be memory, such as the stack of a pushdown automaton.


When I was at Intel it was said the 8008 was actually breadboarded.
 
To be pedantic, any logic can be constructed by NAND-NOR combinations, or AND-OR combinations. The truth tables are the same.

Not quite.
Nand gates by themselves are enough, as are Nors. (You don't need both Nands and Nors.)

On the other hand, And's or Or's are not enough, even if you combine them: Ands and Ors. (How do you build an invertor? :) )
Good question. Good answer: you carry around every signal in the design in both polarities. Whether a given logical signal is a 1 or a 0 is represented by whether the two physical nets embodying it are set to 0-1 or 1-0. Then you can build an invertor simply by crossing the wires. Steve was perfectly right.
How do you carry all signals with both polarities? Consider an And-gate which develops (A and B). Is it your intent that (NOT(A and B)) also be carried around? Then you have just incorporated a Nand-gate to get your (NOT(A and B)).
 
Four-plus decades ago I designed high-speed digital logic. We used the MECL 10K family (NPN bipolar). Ands and Nands were slower (3 nanosec on one input leg, 4.3 nanosec for the other) and wanted 5.2 volts, while Ors and Nors were 3 nanosec for either leg, and could get by with 4.2 volt power supply.

Interesting, I designed high speed digital video systems and serial video distribution systems. SMPTE256. Are you familiar with FPGAs like Xilinx or Cypress?
In the 1970's there were no FPGA's suitable for our needs. A few burnable Gate Arrays appeared in some of our boards. (I forget the acronym; they were a little like very small FPGA's but could only be programmed once.)

I was involved with high speed imaging systems in the 1990's, though not working on detailed circuit design. One project involved a huge array of Xilinx(?) FPGA's.

How would you implement the following truth table with only nor?
(A Nor A) provides (Not A); (Not_A Nor Not_B) provides (A And B); thus brute-force quickly leads you to any logic you want. (Minimizing the gate count may make it more challenging.)
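For what it's worth, on the rows actually listed in that truth table the output d agrees with (not b), so a single (b Nor b) suffices, which illustrates the minimization point. A quick check of that shortcut, along with the Nor identities above (my own sketch):

```python
# Nor as the sole primitive, plus a check of the posted truth table.

def nor(a, b):
    return int(not (a or b))

def not_(a):
    return nor(a, a)                       # (A Nor A) = Not A

def and_(a, b):
    return nor(not_(a), not_(b))           # (!A Nor !B) = A And B

# The rows listed in the earlier post: (a, b, c) -> d.
rows = [(0, 0, 0, 1), (1, 0, 1, 1), (1, 0, 0, 1),
        (0, 1, 1, 0), (1, 1, 1, 0)]
for a, b, c, d in rows:
    assert nor(b, b) == d                  # d = Not b fits every listed row
```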
 
Good question. Good answer: you carry around every signal in the design in both polarities. Whether a given logical signal is a 1 or a 0 is represented by whether the two physical nets embodying it are set to 0-1 or 1-0. Then you can build an invertor simply by crossing the wires. Steve was perfectly right.
How do you carry all signals with both polarities? Consider an And-gate which develops (A and B). Is it your intent that (NOT(A and B)) also be carried around?
Yes.

Then you have just incorporated a Nand-gate to get your (NOT(A and B)).
Not necessary. NOT(A and B) == NOT(A) or NOT(B). Since you were already carrying NOT(A) around and likewise NOT(B), the extra gate you need to generate the inverse polarity is an Or gate instead of a Nand gate.
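Here's a small simulation of that dual-rail idea (my sketch of the scheme, not Bomb#20's actual design): every signal travels as a (true, complement) pair, the And on the true rails is paired with an Or on the complement rails, and inversion is just swapping rails.

```python
# Dual-rail logic: a signal is the pair (v, not_v); no inverters anywhere.

def const(bit):
    return (bit, 1 - bit)        # both polarities generated at the source

def and2(x, y):
    """And on the true rails; Or on the complement rails."""
    return (x[0] & y[0], x[1] | y[1])     # NOT(A and B) == NOT A or NOT B

def not2(x):
    return (x[1], x[0])          # inversion = crossing the wires

for a in (0, 1):
    for b in (0, 1):
        v, nv = and2(const(a), const(b))
        assert v == (a & b) and nv == 1 - v   # both rails stay consistent
```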
 
So you'd be duplicating (providing both polarities of) all the registers, and all main memory! Wouldn't you need to duplicate all your disk drives as well? I suppose maybe it could "work." Any plan on how to implement an oscillator for clock generation?
 
First: bijection, injection and surjection

Consider a function f that maps elements a of set A into elements b of set B: f(a) = b.
  • Injection: every element of B has at most one element of A that maps onto it; equivalently, distinct elements of A map onto distinct elements of B.
  • Surjection: every element of B has at least one element of A that maps onto it.
  • Bijection: both injection and surjection.

Swammerdami's argument about Z+ and Q+ (positive integers and positive rational numbers):

Z+ injects into Q+: integer a maps onto the rational a/1.
Q+ injects into Z+: a/b with a, b in lowest terms maps onto 2^a*3^b.

I'll now turn to the power set 2^A vs. the permutation set A!.

For finite sizes, listing n: 2^n, n!:
  • 0: 1 1
  • 1: 2 1
  • 2: 4 2
  • 3: 8 6
  • 4: 16 24
  • n > 4: n! > 2^n

Injection from the permutation set into the power set. For permutation P, construct the set
{2^k*3^P(k) for k a positive integer}

It is obviously a member of the power set of the positive integers. Thus |N!| <= |2^N| for the set of positive integers N = Z+.

For infinite sets, is that true in general?

Going the opposite way, consider the power set of set A. One of that set's elements is a set where some elements of A are present and others are absent. This maps onto a permutation with two cycles, one for the present elements and one for the absent elements. That overcounts the permutations by a factor of 2, but that is no problem for infinite sets.

Thus, |2^A| <= |A!| for any infinite set A.

For N at least, |2^N| = |N!| = cardinality of the real numbers.
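In symbols (my paraphrase of the two directions above), the Cantor–Schröder–Bernstein theorem turns the pair of injections into an equality of cardinals:

\[ |\mathbb{N}!| \le |2^{\mathbb{N}}| \quad\text{and}\quad |2^{\mathbb{N}}| \le |\mathbb{N}!| \quad\Rightarrow\quad |\mathbb{N}!| = |2^{\mathbb{N}}| = 2^{\aleph_0} = \mathfrak{c}. \]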
 
So you'd be duplicating (providing both polarities of) all the registers, and all main memory! Wouldn't you need to duplicate all your disk drives as well?
Yep, or duplicate the bits within one disk drive, or whatever. But then nobody claimed you could build a disk drive out of Nand-gates either.

I suppose maybe it could "work." Any plan on how to implement an oscillator for clock generation?
Ooh, that's a toughie! If you do the transformation in the straightforward way you get a nasty race condition and the oscillator will almost certainly slip into a non-oscillating mode. Best bet is to generate the clocks externally. (Which is what real computers do anyway since you can't really control the frequency of a ring oscillator.)

(Of course you don't really need a clock to do computation; but whether you could set up all the acknowledge signals needed for asynchronous computation without using any inverters is something I haven't explored...)
 
The threads here get very divergent! Cardinal numbers and digital logic are both topics of interest to me, but these two topics seem rather unrelated to each other.

Thanks to Bomb#20 for this diversion! Until his posts, I'd never thought about building logic with no inverting gates; I still question the possibility.

I was NOT suggesting that disk drives needed to be made from And-gates or Nand-gates, but if the circuitry which senses disk bits has no inversion capability, you'll need to write and read the data at both polarities.

I suppose maybe it could "work." Any plan on how to implement an oscillator for clock generation?
Ooh, that's a toughie! If you do the transformation in the straightforward way you get a nasty race condition and the oscillator will almost certainly slip into a non-oscillating mode. Best bet is to generate the clocks externally. (Which is what real computers do anyway since you can't really control the frequency of a ring oscillator.)

(Of course you don't really need a clock to do computation; but whether you could set up all the acknowledge signals needed for asynchronous computation without using any inverters is something I haven't explored...)

I'm not sure what you mean about computers using external oscillators. The main ingredients in a computer clock generator were inverters and a piece of quartz crystal, I think. Are you giving those circuits a dispensation to use inverters?

As for asynchronous logic, in a dual-polarity system one signal will make a low-to-high transition at about the same time its twin makes the high-to-low transition. BUT the precise timing will be unknown: the former can occur before or after the latter. I think this will make "self-clocking" circuitry VERY difficult. Glitches, some rather long, will abound!


One common circuit which does NOT actually require inverters is a flip-flop, a memory cell. It can be built from one And and one Or. To show this, assume three signals: +set, +keep, +data. ('+keep' could also be called '-reset'.)

Flip-flop with only And and Or:
. . . +data = +set OR (+data AND +keep)

I don't know how common this simple And-Or flip-flop is. It was used for an important, needs-to-be-fast 72-bit bus in a system I worked with, where the 72 latches were called "zero prop-delay latches." (The OR was actually just a wire-or in our NPN emitter-coupled system.)
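To see the latch hold its bit, the equation can be iterated directly (a quick simulation of the equation as given; the Python names are mine):

```python
# Simulate the And-Or latch: +data = +set OR (+data AND +keep).

def latch_step(data, set_, keep):
    return set_ | (data & keep)

data = 0
data = latch_step(data, set_=1, keep=1)   # pulse +set: the latch captures a 1
assert data == 1
data = latch_step(data, set_=0, keep=1)   # +set released: feedback holds it
assert data == 1
data = latch_step(data, set_=0, keep=0)   # drop +keep (assert -reset): cleared
assert data == 0
```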
 
Another exercise, harder than it may seem at first glance, is to construct a bijection from the subsets of ℕ to the real numbers or an interval thereof. Use the real-valued points in (0,1) or [0,1) or all the reals, whichever is convenient.

Constructing two injections A → B and B → A guarantees the existence of a bijection A ↔ B, but constructing a nice such bijection may be a challenge, even when the injections are constructed easily.

This was brought home to me by a letter (which can be read on-line) written from Georg Cantor to Richard Dedekind before Cantor published his discovery of cardinal numbers. He complained that he hadn't yet been able to construct a bijection from the power-set of integers to the reals, while correctly reporting this to be an unimportant detail. What makes the bijection tricky to construct is

identities like 0.2 = 0.1999999....

! :)
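In symbols (my restatement of the difficulty): the natural map

\[ S \subseteq \mathbb{N} \;\mapsto\; \sum_{n \in S} 2^{-n} \in [0, 1] \]

is onto but not one-to-one, since every dyadic rational has two preimages; e.g. \(\{1\}\) and \(\{2, 3, 4, \ldots\}\) both map to \(1/2\), the binary version of 0.2 = 0.1999.... Patching up those countably many collisions is the fiddly part.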
 
My brain needs a major overhaul. :(

One common circuit which does NOT actually require inverters is a flip-flop, a memory cell. It can be built from one And and one Or. To show this, assume three signals: +set, +keep, +data. ('+keep' could also be called '-reset'.)

Flip-flop with only And and Or:
. . . +data = +set OR (+data AND +keep)

... were called "zero prop-delay latches." (The OR was actually just a wire-or in our NPN emitter-coupled system.)

While the circuit shown "works", it wasn't the "zero prop-delay latch" we used 44 years ago. I should have written
. . . +data = +data OR (+data AND +keep)
The +data signal itself functions as +set, as clocked by +keep.
 