• Welcome to the Internet Infidels Discussion Board.

The Halting Problem

Even though the halting problem in general is insoluble, one can construct algorithms that give results in special cases.

For example, this halting tester will work on the above problem: "If there are no control-flow statements, then the program will halt. Otherwise, this tester cannot say whether it will halt or not." Note that this tester can return "can't say", something not allowed for Turing's halting tester.


If control-flow statements are present, then here is some pseudocode for a halting tester:

Attempt to turn control-flow statements into repeat blocks that may be nested in other repeat blocks.
If that is not possible (spaghetti code), then return "can't say".

Are there repeat blocks?
Yes:
- In every repeat block, is there an upper limit on how many repeats that it can do?
- Yes:
- - Return "yes"
- No:
- - Return "can't say"
No:
- Return "yes"
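The tester above can be sketched in Python using the standard `ast` module. This is only an illustrative sketch under some assumptions not in the original post: programs are Python source, a "repeat block with an upper limit" means a for-loop over a literal `range`, and anything else (while-loops, recursion via function definitions, iteration over arbitrary objects) makes the tester give up.

```python
import ast

def halting_verdict(src):
    """Conservative halting tester for straight-line Python plus
    for-loops over literal ranges. Returns "halts" or "can't say".
    A sketch only: while-loops, function definitions (possible
    recursion), and non-literal iterables all make it give up."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, (ast.While, ast.FunctionDef, ast.Lambda)):
            # Spaghetti territory for this tester: refuse to guess.
            return "can't say"
        if isinstance(node, ast.For):
            it = node.iter
            # Bounded only if it is range(...) with constant arguments.
            bounded = (isinstance(it, ast.Call)
                       and isinstance(it.func, ast.Name)
                       and it.func.id == "range"
                       and all(isinstance(a, ast.Constant) for a in it.args))
            if not bounded:
                return "can't say"
    return "halts"
```

Note that, exactly as in the pseudocode, the tester is sound but not complete: it never wrongly says "yes", but it answers "can't say" for plenty of programs that do in fact halt.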
 
Church, who gets dual credit for the theorem, showed the result in the context of lambda calculus. His calculus, as a model for logic, is broken: it proves a contradiction. The actual proof involves the Y-combinator, a lambda term that gives you arbitrary recursion and so infinite looping, and which can be used to create a term that switches back and forth between true and false.
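The self-application trick at the heart of the Y-combinator can be demonstrated in Python. Since Python is strict, the Y-combinator itself diverges immediately, so the sketch below uses its eta-expanded variant (often called the Z-combinator); the recursion still comes from nothing but the self-application `x(x)`.

```python
# Z-combinator: the strict-evaluation variant of the Y-combinator.
# The self-application x(x) is exactly what simple types forbid.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Arbitrary recursion conjured out of pure lambda terms:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
# fact(5) == 120
```

Feed it a body that never reaches a base case and you get the infinite looping described above.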

The lambda calculus, as a logic, gets fixed by the introduction of types. Church's types give the simply typed lambda calculus, which forbids terms like the Y-combinator. It's possible, as first shown by Hindley and Milner, to write an algorithm that takes arbitrary untyped lambda terms and, if they can be simply typed, to assign them a type. This algorithm is present in a number of typed functional programming languages, including Standard ML, Haskell, OCaml and F#. Simply typed terms are always strongly normalising: they never go into infinite loops. (As Milner put it, "well-typed programs cannot go wrong" — though the full languages above add general recursion back in, so their well-typed programs can still loop.)

The Hindley-Milner type inference algorithm is therefore possibly the most famous algorithm for showing that a class of programs halt, but as lpetrich hints, there are infinitely many.

I need to get round to writing a blog post about these connections between logic and computation, but I'm still trying to figure out how to pitch it.
 

If you are posting off the top of your head I am impressed.
 
I take the inference from the problem that you cannot write an algorithm to validate an algorithm: it becomes an infinite regress. How do you validate the validation algorithm? Which is why software validation became such a big problem as size and complexity grew.
 
I couldn't quite put my finger on exactly why you can't solve the Halting Problem, though the fact that such a program, if it could be efficiently implemented, would give one absolutely unbelievable computational power is a reason to suspect intuitively that it doesn't exist. Damn thing would be too good to be true.

The proof against it utilises self-reference, and involves "diagonalisation", a style of argument that's commonly used to get negative results in logic, set theory, computability theory, and complexity theory.

The basic proof of the Halting problem requires that you already know how to do universal computation. That is, you have a program "eval" that inputs a program P (its source code or some other encoding), an input x, and then outputs P(x) provided that P actually halts on x.

The diagonalisation is then formed by considering a program "diag" such that diag(x) = 1 - x(x) when x halts on x and outputs 0 otherwise. This program exists on the supposition that the Halting Problem is solvable. You get an immediate contradiction because

diag(diag) = 1 - diag(diag) and thus 0 = 1.
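The argument can be made concrete in Python: given any candidate halting tester, we can mechanically build a program that does the opposite of whatever the tester predicts about it, so the tester must be wrong somewhere. (The names here are illustrative; `halts` is any caller-supplied predicate.)

```python
def make_counterexample(halts):
    """Given a candidate tester halts(p) (True iff p() halts),
    build the diagonal program: it consults the tester about
    itself and then does the opposite."""
    def diag():
        if halts(diag):
            while True:       # tester said "halts", so loop forever
                pass
        # tester said "loops", so halt immediately
    return diag

# Any total tester is refuted. E.g. one that always answers "loops":
def claims_loops(p):
    return False

diag = make_counterexample(claims_loops)
diag()   # returns at once, so claims_loops was wrong about diag
```

A tester that instead claims `diag` halts is refuted the other way: `diag()` then loops forever, which is also why the test for that branch can't be run.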

The diagonalisation lies in that self application "x(x)". You see it all over the place in these diagonal arguments. For example, Russell's paradox is basically just considering

Russell(x) = ¬(x(x))
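Translated naively into Python, where evaluation is strict, that self-application can never settle on a truth value, and the interpreter's recursion limit turns the paradox into a `RecursionError`. A toy illustration:

```python
# Russell(x) = not x(x): applied to itself, the answer would have to
# be its own negation, so evaluation just recurses forever.
russell = lambda x: not x(x)

try:
    russell(russell)
except RecursionError:
    print("no consistent truth value: the self-application diverges")
```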

Cantor's diagonal argument has a similar form: given a list of real numbers xs, consider the real number whose n-th digit is 1 - xs[n][n], i.e. one minus the n-th digit of the n-th number in the list.
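For a finite table the construction can be run directly. A small Python sketch, flipping binary digits along the diagonal:

```python
def diagonalise(xs):
    """Given a list of binary digit sequences (each at least as long
    as the list itself), return a sequence that differs from the n-th
    entry at position n -- so it appears nowhere in the list."""
    return [1 - xs[n][n] for n in range(len(xs))]

table = [[0, 1, 0],
         [1, 1, 1],
         [0, 0, 0]]
diagonalise(table)  # [1, 0, 1]: differs from row n at digit n
```

Cantor's point is that this works for any list, finite or infinite, so no list of reals can be complete.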

You see it in the proof that the untyped lambda calculus is inconsistent, and you see it in the fact that self-application is key to the Y-combinator and precisely what is ruled out by type systems.

It's called "diagonal" because if you tabulate all possible applications, then the self-applications are on the diagonal.
 