• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Doing everything with functions: the lambda calculus

lpetrich

I was inspired by this video: Lambda Calculus - Computerphile - YouTube

It was invented by mathematician Alonzo Church in his investigations into the foundations of mathematics. Its name comes from the Greek letter λ commonly used in writing expressions in it. I'll use L for convenience. It essentially expresses everything as a function:

f(x) = operations on x
f = Lx.(operations on x)

The possible valid return values of a function are defined recursively:

(Valid return value) = (a variable) or (a function of valid return values)

Thus, one can do a function of a function of a function... though it must be a finite number of function evaluations.

Multiple arguments can be expressed similarly:
f(x,y) = operations on x and y
f = Lx.Ly.(operations on x and y), sometimes abbreviated as Lxy

Multiple-argument functions can thus be expressed as chains of single-argument functions, each returning a function that remembers the arguments supplied so far, a process called currying.
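Python's lambda expressions are a direct descendant of this notation, so currying can be sketched there directly (the names `add_curried` and `add_three` are my own illustration):

```python
# A two-argument function...
def add2(x, y):
    return x + y

# ...rewritten as a chain of one-argument functions (curried form),
# mirroring f = Lx.Ly.(x + y):
add_curried = lambda x: lambda y: x + y

# Supplying one argument at a time; add_three "remembers" x = 3:
add_three = add_curried(3)
```

Here `add_curried(2)(5)` and `add2(2, 5)` compute the same value, but the curried form can be partially applied.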

Much of that video discusses the lambda calculus rather abstractly, but a later part introduces definitions of some Boolean operations: true, false, not. Alonzo Church himself showed that one can define true and false in the lambda calculus by treating them as if-then-else selections:

TRUE = Lx.Ly.x
FALSE = Lx.Ly.y
or
TRUE(x,y) = x
FALSE(x,y) = y

What makes the lambda calculus mathematically useful is the ability to pass functions to functions. That enables us to define NOT:

NOT = Lp.p FALSE TRUE
NOT(p) = p(FALSE,TRUE)

It should be easy to show that NOT(TRUE) = FALSE and NOT(FALSE) = TRUE.

One can continue with more Boolean-algebra operations:
AND = Lp.Lq.p q p
OR = Lp.Lq.p p q
XOR = Lp.Lq.p (NOT q) q
EQUIV = Lp.Lq.p q (NOT q)
IMPLIES = Lp.Lq.OR (NOT p) q
IFTHENELSE = Lp.Lx.Ly.p x y
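These Church booleans transcribe directly into Python lambdas. A sketch (the helper `as_bool`, which reads a Church boolean back out by having it select between Python's True and False, is my own addition):

```python
TRUE  = lambda x: lambda y: x           # Lx.Ly.x
FALSE = lambda x: lambda y: y           # Lx.Ly.y

NOT = lambda p: p(FALSE)(TRUE)          # Lp.p FALSE TRUE
AND = lambda p: lambda q: p(q)(p)       # Lp.Lq.p q p
OR  = lambda p: lambda q: p(p)(q)       # Lp.Lq.p p q
XOR = lambda p: lambda q: p(NOT(q))(q)  # Lp.Lq.p (NOT q) q

# A Church boolean applied to two arguments just selects one of them,
# so applying it to Python's True and False converts it:
as_bool = lambda p: p(True)(False)
```

For example, `as_bool(AND(TRUE)(FALSE))` evaluates to False, matching the truth table.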
 
What might one do with the lambda calculus? Prove computability theorems. Lambda-calculus functions turn out to be equivalent to Turing machines, something called the Church-Turing thesis. What is computable in one is computable in the other, and vice versa. Alonzo Church also showed that there is no algorithm that can decide whether or not two lambda-calculus functions are equivalent. That is, there is no completely general version of EQUIV. It also turns out that various programming-language constructs can be expressed in the lambda calculus.

lambda.pdf
The Lambda Calculus (Stanford Encyclopedia of Philosophy)
t2n.dvi - geuvers.pdf
Chapter5.pdf
The Lambda Calculus
 
One can also do arithmetic with the lambda calculus, and one does that by using Peano's axioms. One starts with "number", "zero", and a function, "successor", with these properties:
  • Zero is a number
  • The successor of every number is also a number
  • Zero is not a successor of any number
  • If the successors of two numbers are equal, then those two numbers are equal.
  • If something is true for zero, and its truth for a number implies its truth for that number's successor, then it is true for all numbers (mathematical induction)

Thus,
0 = Lf.Lx.x
1 = Lf.Lx.f x
2 = Lf.Lx.f (f x)
3 = Lf.Lx.f (f (f x))
etc.

and then defining all the familiar operations.

Addition:
x + zero = x
x + (successor of y) = successor of (x + y)

Multiplication:
x * zero = zero
x * (successor of y) = (x * y) + x

Ordering:
x <= y if there is some a such that x + a = y, where a is a natural number as defined here.
 
Some more lambda-calculus results.

The successor operation, accepting (n) and returning (n+1):
SUCC = Ln.Lf.Lx.f (n f x)

Addition of m and n:
PLUS = Lm.Ln.Lf.Lx.m f (n f x)
or
PLUS = Lm.Ln.m SUCC n

Multiplication of m and n:
TIMES = Lm.Ln.Lf.m (n f)
or
TIMES = Lm.Ln.m (PLUS n) 0

Exponentiation of b by e:
POWER = Lb.Le.e b

The predecessor function is much more difficult:
PRED = Ln.Lf.Lx.n (Lg.Lh.h (g f)) (Lu.x) (Lu.u)

For 0, it returns 0.

Subtraction of n from m:
MINUS = Lm.Ln.n PRED m

Returns m - n if m >= n, 0 otherwise.

Test if a number n is zero:
ISZERO = Ln.n (Lx.FALSE) TRUE

Is m <= n?
LEQ = Lm.Ln.ISZERO (MINUS m n)

Is m == n?
EQUAL = Lm.Ln.AND (LEQ m n) (LEQ n m)

with other inequalities being defined from these ones.
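The numerals and arithmetic above can be transcribed into Python as well; strict evaluation is harmless here because every function involved is total. A sketch (the helper `to_int`, which counts applications of f, is my own addition):

```python
ZERO  = lambda f: lambda x: x
SUCC  = lambda n: lambda f: lambda x: f(n(f)(x))          # Ln.Lf.Lx.f (n f x)
PLUS  = lambda m: lambda n: m(SUCC)(n)                    # Lm.Ln.m SUCC n
TIMES = lambda m: lambda n: lambda f: m(n(f))             # Lm.Ln.Lf.m (n f)
POWER = lambda b: lambda e: e(b)                          # Lb.Le.e b
PRED  = (lambda n: lambda f: lambda x:                    # the tricky one
         n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u))
MINUS = lambda m: lambda n: n(PRED)(m)                    # Lm.Ln.n PRED m

# Convert a Church numeral to a Python int by applying "add one" n times to 0:
to_int = lambda n: n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
FIVE  = PLUS(TWO)(THREE)
```

For instance, `to_int(MINUS(FIVE)(THREE))` gives 2, and `to_int(PRED(ZERO))` gives 0, matching the truncated behavior described above.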

Note that both booleans and numbers each require special algorithms for equality.

-

One can define data structures, like pairs:

PAIR = Lx.Ly.Lp.p x y
FIRST = Lp.p TRUE
SECOND = Lp.p FALSE

One can then construct linked lists:
{a b c d e} = {a {b {c {d {e {}}}}}}
where
{a b} = PAIR a b

The empty list {}:
EMPTY = Lx.TRUE
ISEMPTY = Lp.p (Lx.Ly.FALSE)
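The same transcription works for pairs and lists. A sketch (`TRUE`/`FALSE` are the Church booleans from earlier; `as_bool` is my own helper for reading results back out):

```python
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
as_bool = lambda p: p(True)(False)

PAIR   = lambda x: lambda y: lambda p: p(x)(y)   # Lx.Ly.Lp.p x y
FIRST  = lambda p: p(TRUE)                       # Lp.p TRUE
SECOND = lambda p: p(FALSE)                      # Lp.p FALSE

EMPTY   = lambda x: TRUE                         # Lx.TRUE
ISEMPTY = lambda p: p(lambda x: lambda y: FALSE) # Lp.p (Lx.Ly.FALSE)

# The list {1 2} as PAIR 1 (PAIR 2 EMPTY):
lst = PAIR(1)(PAIR(2)(EMPTY))
```

`FIRST(lst)` gives 1, and `ISEMPTY` applied to a pair yields FALSE because the pair feeds its two components to the constant-FALSE function.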

-

One can even do recursion and fixed points.

The simplest fixed-point function is
Y = Lg.(Lx.g (x x)) (Lx.g (x x))

It satisfies Y g = g (Y g).

For example, one can do the factorial function on n, n!, by doing
(Y FAC) n

where
FAC = Lr.Ln.(ISZERO n) 1 (TIMES n (r (MINUS n 1)))

For n > 0,
(Y FAC) n = FAC (Y FAC) n = n * ((Y FAC) (n-1))
For n = 0,
(Y FAC) 0 = FAC (Y FAC) 0 = 1
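Python evaluates arguments eagerly, so the Y combinator as written loops forever there; the usual fix is its eta-expanded cousin, the Z combinator. A sketch (ordinary Python integers and if/else stand in for the Church numerals and ISZERO, to keep it short):

```python
# Z combinator: like Y, but (x x) is wrapped in lambda v: ... so the
# self-application only unfolds when applied (needed under strict evaluation).
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

# FAC = Lr.Ln.(ISZERO n) 1 (TIMES n (r (MINUS n 1))), with native ints:
FAC = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)

factorial = Z(FAC)
```

`factorial(5)` unfolds exactly as in the text: FAC (Z FAC) 5 = 5 * ((Z FAC) 4), and so on down to the n = 0 base case.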
 
lambda.pdf has a list of projects for its readers.
1. Define the functions "less than" and "greater than" of two numerical arguments.
2. Define the positive and negative integers using pairs of natural numbers.
3. Define addition and subtraction of integers.
4. Define the division of positive integers recursively.
5. Define the function n! = n * (n-1) * ... * 1 recursively.
6. Define the rational numbers as pairs of integers.
7. Define functions for the addition, subtraction, multiplication and division of rationals.
8. Define a data structure to represent a list of numbers.
9. Define a function which extracts the first element from a list.
10. Define a recursive function which counts the number of elements in a list.
11. Can you simulate a Turing machine using λ calculus?
My attempts:

1. If one knows about <=, one can define all the others:
x >= y : y <= x
x < y: not (x >= y)
x > y: not (x <= y)
x == y: (x <= y) and (x >= y)
x != y: not (x == y)

2. Integers are natural numbers (a,b) with value a - b, extended for b > a.

3. Addition: (a1,a2) + (b1,b2) = (a1+b1, a2+b2)
Negation: -(b1,b2) = (b2,b1), so subtraction is (a1,a2) - (b1,b2) = (a1+b2, a2+b1)
Multiplication: (a1,a2) * (b1,b2) = (a1*b1+a2*b2, a1*b2+a2*b1)

Ordering: (a1,a2) <= (b1,b2) if there is some natural number c such that a1+b2+c = a2+b1
Equality if c = 0

4. The recursive step for dividing n by m. The base-case test must be whether m <= n, not whether n is zero, since subtraction on naturals is truncated:
DIV = Lr.Ln.Lm.(LEQ m n) (SUCC (r (MINUS n m) m)) 0
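In plain Python, the same division by repeated subtraction looks like this. A sketch; note the base case is n < m, which is what truncated subtraction on naturals effectively requires:

```python
def div(n, m):
    """Integer quotient of n by m (m > 0), by repeated subtraction."""
    if n < m:                 # base case: nothing more to subtract
        return 0
    return 1 + div(n - m, m)  # one more copy of m fits into n
```

For example, `div(7, 2)` subtracts 2 three times before falling below 2, giving 3.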

5. That's the factorial function, which was already done above.

6. Rational numbers are pairs of integers (a,b) with value a/b, extended to cover a not evenly divisible by b.

7. Addition: (a1,a2) + (b1,b2) = (a1*b2+a2*b1, a2*b2)
Negation: -(a1,a2) = (-a1, a2)
Multiplication: (a1,a2) * (b1,b2) = (a1*b1, a2*b2)
Reciprocal: 1/(a1,a2) = (a2, a1)
Equality: (a1,a2) = (b1,b2) if a1*b2 = a2*b1
Sign: (a1,a2) > 0 if a1*a2 > 0, and likewise for <.

8. The linked list I did earlier, made of pairs (list element, rest of the list). It ends with {}, the empty list.

9. First element of a pair. For list x, FIRST x

10. The recursive step for list x:

COUNT = Lr.Lx.(ISEMPTY x) 0 (SUCC (r (SECOND x)))

11. For Turing completeness, one needs some way of conditional repetition and some way of accessing an arbitrarily-long array. One can do both with the lambda calculus, thus the lambda calculus is Turing complete.
 
The lambda calculus is, in a sense, nothing but functions and uninterpreted variables. The simplest lambda-calculus functions are

IDENTITY = Lx.x
TRUE = Lx.Ly.x
FALSE = Lx.Ly.y


The first hints of incompleteness of mathematical and logical systems were recognized in antiquity: versions of the liar paradox, and what happens when an irresistible force meets an immovable object. Then in the late 19th century, Georg Cantor worked on infinite sets, showing that there is an infinity of infinities, with no largest one. That was very controversial, with some mathematicians denying the validity of some of his arguments, but the controversy has since abated.

Two problems appeared with the set of all sets.

The first one was with sets that do not contain themselves. The set of all ideas is an idea, but the set of all physical objects is not a physical object. Let us call the set of all sets that do not contain themselves

N: every set S such that S is not in S

Is N in N? If it is, then by N's definition N is not an element of itself, so N is not in N. But if N is not in N, then it satisfies the defining condition, so N is in N. Thus, a contradiction. This paradox was discovered by Bertrand Russell while he was studying Cantor's work.

The second one is with power sets, sets of all subsets of a set. Cantor showed that the cardinality, or number of elements, of the power set P(S) of a set S is greater than that of S itself. To see why, consider a mapping function m(a) from each element a of S to an element of P(S). Now consider the set of all elements of S that map onto subsets that do not contain them.

M: all a in S such that a is not in m(a)

Suppose some b in S had m(b) = M. If b is in M, then by M's definition b is not in m(b) = M; but if b is not in M = m(b), then by M's definition b is in M. Either way there is a contradiction, so no element of S maps onto M, and m cannot cover all of P(S). If S is finite, it is easy to prove that if S has cardinality n, then P(S) has cardinality 2^n, a quantity greater than n. But Cantor's proof also applies to infinite sets. Thus, there can be no set of all sets: it would have to contain its own power set, which this result forbids.
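Cantor's diagonal argument can be checked by brute force for a small finite S. A sketch, entirely my own illustration: for every function m from S into P(S), the diagonal set M is missing from m's image.

```python
from itertools import combinations, product

def diagonal_never_hit(S):
    """For every m: S -> P(S), verify that M = {a in S : a not in m(a)}
    is not in the image of m (Cantor's diagonal argument, brute-forced)."""
    subsets = [frozenset(c) for r in range(len(S) + 1)
               for c in combinations(S, r)]
    # Each function m is a choice of one subset per element of S:
    for images in product(subsets, repeat=len(S)):
        m = dict(zip(S, images))
        M = frozenset(a for a in S if a not in m[a])
        if M in images:
            return False
    return True
```

For S = {0, 1, 2} this checks all 8^3 = 512 possible maps, and the diagonal set escapes every one of them.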
 
In 1928, mathematician David Hilbert posed the Entscheidungsproblem (German: "decision problem"): is there an algorithm that can decide whether a given mathematical statement is universally valid?

Then in 1931, mathematician Kurt Gödel published his incompleteness theorems. Those theorems state that if a consistent formal system can express the arithmetic of the natural numbers, then one can construct a true statement in it that cannot be proved within the system. In essence,

Theorem T: "T is not provable"

thus being a version of the liar paradox, with provability in place of truth.

Part of his proof involved Gödel numbering, the translation of mathematical statements into numbers. How computers work with data can be interpreted as a form of Gödel numbering.

In 1936, Alonzo Church showed that there is no algorithm that can decide whether two lambda-calculus expressions are equivalent. His proof closely follows Gödel's incompleteness theorems, complete with a Gödel numbering of lambda-calculus expressions.

Also in 1936, Alan Turing published his proof of the unsolvability of the halting problem: whether it is possible to devise a Turing-machine algorithm that can decide whether a given Turing machine will halt after a finite number of steps or continue indefinitely. The proof involves feeding the algorithm to itself and finding a contradiction.

Try to find a computable function h(i,x) that is 1 if program i halts on input x, and 0 otherwise.

For any computable total function f(i,j), one can define a computable partial function g, computed by some program e:
g(i) = 0 if f(i,i) = 0, otherwise undefined (like not halting)

Does h(e,e) have the same value as f(e,e)?

If f(e,e) = 0, then g(e) = 0, so program e halts on input e, and h(e,e) = 1
If f(e,e) != 0, then g(e) is undefined, so e does not halt on e, and h(e,e) = 0

So h differs from every computable total function f, and h cannot be computable. This argument is much like Cantor's diagonal argument.

-

Combined, these results give the Church-Turing thesis, that (lambda-calculus computability) == (Turing-machine computability) == (general recursive functions)
 
Douglas Hofstadter wrote about these ideas and related ones in his book Gödel, Escher, Bach: An Eternal Golden Braid. At one point, he introduces certain sorts of functions with the help of the simplified programming languages BlooP, FlooP, and GlooP. These "are not trolls, talking ducks, or the sounds made by a sinking ship," and they are intended for illustrating algorithm concepts rather than for everyday programming use.

BlooP does primitive recursive functions (Primitive recursive function).

FlooP does general recursive functions (μ-recursive function), a superset of BlooP's.

GlooP is a hypothetical superset of FlooP that could calculate anything, like whether two lambda-calculus expressions are equivalent or whether a Turing machine will halt.


Primitive recursive functions:
  • Constant: returns a constant value
  • Successor: S(k) = k+1
  • Projection: P(i, x1, ..., xn) = xi
  • Composition: (f ∘ g)(x) = f(g(x)), with multi-argument extensions
  • Primitive recursion
That last one is a bit complicated. Given a function f with n args and g with (n+2) args, one can define a function h with (n+1) args in this fashion:
h(0,x) = f(x)
h(S(k),x) = g(k,h(k,x),x)
with easy generalization for n > 1.
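The primitive-recursion schema can be sketched in Python (for n = 1), with addition built from it: add(0, x) = x, so f is the identity, and add(S(k), x) = S(add(k, x)), so g ignores k and x and just applies the successor. The names here are my own:

```python
def prim_rec(f, g):
    """Build h from base case h(0, x) = f(x)
    and step h(S(k), x) = g(k, h(k, x), x)."""
    def h(n, x):
        acc = f(x)
        for k in range(n):        # unrolls the recursion n times
            acc = g(k, acc, x)
        return acc
    return h

# Addition as a primitive recursive function:
add = prim_rec(lambda x: x,                 # add(0, x) = x
               lambda k, acc, x: acc + 1)   # add(S(k), x) = S(add(k, x))
```

`add(3, 4)` applies the successor three times to 4.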

General recursive functions include a minimization function:
mu(f)(x) = z if f(z,x) = 0 and f(k,x) > 0 for all k < z (these are all natural numbers, none less than zero)

It's a "mu loop", thus the name "mu-recursive".
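The minimization operator sketched in Python (my own illustration): the search loops forever when no zero exists, which is exactly why general recursive functions need not be total.

```python
def mu(f):
    """Return the function x -> smallest z with f(z, x) == 0.
    Diverges (loops forever) if no such z exists."""
    def search(x):
        z = 0
        while f(z, x) != 0:   # the "mu loop": unbounded search
            z += 1
        return z
    return search

# Example: the least z with z*z >= x (a ceiling integer square root):
isqrt_ceil = mu(lambda z, x: 0 if z * z >= x else 1)
```

`isqrt_ceil(10)` returns 4, since 3*3 < 10 <= 4*4.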


BlooP only allows finite upper limits on the number of repetitions in its loops. Thus, a BlooP program always halts. Not surprisingly, BlooP is not Turing complete.

FlooP relaxes that restriction, thus having while(true) {} loops. This means that a FlooP program is not guaranteed to halt, though it also means that FlooP is Turing-complete.


GlooP would require super-Turing and super-lambda abilities, and it does not seem to be feasible. Three hypothetical instructions could implement much of GlooP's extra capabilities:
  • Pimc (Parallel Infinite MapCar) (mapcar is the name of a Lisp function that applies a function to each member of a list and returns the list of results -- a map operation)
  • Pifl (Parallel Infinite FiLter) (makes a list from all members of a list that make some predicate function true)
  • Pire (Parallel Infinite REduce) (repeatedly applies a two-argument function to members of a list to come up with a single value)
Pire folds a list the way multi-argument addition or multiplication is handled:
a+b+c+d = ((a+b)+c)+d
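The mapcar/filter/reduce trio has ordinary finite counterparts in everyday Python; only GlooP's "parallel infinite" versions are impossible. A sketch:

```python
from functools import reduce

xs = [1, 2, 3, 4]

mapped   = list(map(lambda a: a * a, xs))         # mapcar: square each element
filtered = list(filter(lambda a: a % 2 == 0, xs)) # filter: keep the evens
total    = reduce(lambda a, b: a + b, xs)         # reduce: ((1+2)+3)+4
```

`reduce` applies the two-argument function left to right, exactly like the chained addition above.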
 
lpetrich said:
"What might one do with the lambda calculus? Prove computability theorems. Lambda-calculus functions turn out to be equivalent to Turing machines, something called the Church-Turing thesis. What is computable in one is computable in the other, and vice versa."
That's not the Church-Turing thesis. The Church-Turing thesis is the unprovable opinion that there's no method of expressing algorithms that's more powerful than Turing machines and lambda-calculus -- that anything that can't be computed by those formalisms, can't be computed period.
 
Back in the 90s I took a continuing-ed course in Theory of Computation. You start with graphs, trees, and language generators.

As I remember it, Hilbert's question was whether there is an algorithm that could prove all mathematical truths. The underlying issue was whether or not there were ambiguities lurking in the foundations of math. I believe in the 90s there was a comprehensive review looking for overlooked problems.

You come up against problems that cannot be solved with those simpler machines. One I remember is a universal parser: nested parentheses cannot be recognized by a finite-state machine alone.

Some problems require what CS calls pushdown automata, or in programming terms, a stack.

At the end of the book it was stated, without proof, that for a problem to be solvable it must be Turing computable.

No time limit and no memory limit, i.e. an infinite paper tape in the original context.

For the Incompleteness Theorem I think of Euclidean Geometry.

One of its bases is a point being an infinitely small, massless object, and a line being an infinite number of points. Not provable within geometry.

Geometry is logically consistent. That means, practically speaking, that no matter how you construct a problem in geometry, out of many possible approaches, you get only one solution. Solutions are unique and not path dependent.

It is also related to software testing and validation. You cannot, in a general way, test an algorithm with an algorithm. Beyond simple evaluation, there is no algorithm to validate software. Software is only as good as the test cases run against it.
 