I take this to mean you don't object to the principles of first-order logic as it's taught in universities today.
And, presumably, that you're unlikely to find there's some other system that would somehow be better. Except the system of doing less of it.
Sorry if I'm not clear. I think systems of higher-order logic are better.
Yes, I would agree that the history of logical thought is particularly messy and confusing compared to, for example, quantum physics of broadly the same period (see whether you agree with the presentation given here:
http://mcps.umn.edu/philosophy/11_4Moore.pdf).
Yeah, that Moore paper's decent, in my opinion.
So, I happily concede your point that my formulation in the OP was very approximate and potentially misleading. Still, I wasn't trying to credit Frege with the authorship of first-order logic. Rather, I wanted to identify the subject matter, and, in my experience, Frege, along with Russell, is invariably taken in commercially available material as the main starting point of modern first-order logic.
So, talking of first-order logic, it seems to me you're saying that it is not only useful, but properly fundamental in mathematics. Or did I misunderstand what you said?
And if you think it's fundamental, would you agree with Skolem that first-order logic is "the proper and natural framework for mathematics"? From the Moore paper (The Minnesota Center for Philosophy of Science, http://mcps.umn.edu/philosophy/11_4Moore.pdf):

1. Introduction
To most mathematical logicians working in the 1980s, first-order logic is the proper and natural framework for mathematics. Yet it was not always so. In 1923, when a young Norwegian mathematician named Thoralf Skolem argued that set theory should be based on first-order logic, it was a radical and unprecedented proposal.
Yet there is the limitation you've mentioned: the undecidability of first-order logic. So I have a rather difficult question for you: would the incompleteness theorems apply, ipso facto, to any new system of first-order logic?
EB
The Incompleteness Theorems apply to any system that's expressive enough to talk about the Halting Problem. For those systems, you can write a program that enumerates their theorems, and then throw that program into the jaws of the Halting Problem. The achievement of Gödel's proof is that this actually happens for pretty damn simple systems for doing arithmetic. So any logical system that's good for mathematics is going to suffer from incompleteness issues.
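The shape of that reduction can be sketched in a few lines. This is purely illustrative, not a real prover: `enumerate_theorems` is a hypothetical generator of a sound system's theorems, and the strings `HALTS(...)`/`NOT_HALTS(...)` stand in for formalised halting statements.

```python
def decide_halting(p, x, enumerate_theorems):
    """If the system were sound AND complete, this would decide the
    Halting Problem: search its theorems until either 'p halts on x'
    or 'p does not halt on x' appears. Completeness guarantees one of
    them shows up; soundness guarantees it's true. Since no halting
    decider can exist (Turing), no such system can be both sound and
    complete."""
    for theorem in enumerate_theorems():
        if theorem == f"HALTS({p!r}, {x!r})":
            return True
        if theorem == f"NOT_HALTS({p!r}, {x!r})":
            return False


# Toy "theorem enumerator" for illustration only: pretend the system
# proves that the empty program halts on empty input.
def toy_theorems():
    yield "HALTS('', '')"


print(decide_halting("", "", toy_theorems))  # prints True
```

The point of the sketch is just that completeness plus soundness plus enough expressiveness would hand you a halting decider, which is impossible.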
As for first-order logic, I think variations of its set theories are an expressive enough target language to translate pretty much all the mathematical theorems and proofs we care about. This is cool, and I can't say I'm not impressed by the translation power. However, there are a few things you'd notice if you actually tried to do it: it would involve translations that mathematicians would consider hacky and artificial, and they might find themselves sympathising with linguists trying to do the same with their beloved natural languages. Furthermore, a lot of the theorems would turn out not to translate into theorems exactly, but would instead turn into classes of theorems fitting some pattern defined outside the system. First-order logic is not capable of generalising over such a pattern. This situation is already present in the axioms of ZF set theory, which features two "axioms" that are not axioms at all, but schemas: patterns describing two infinite classes of axioms. Generic derivations from these axiom classes produce yet more infinite classes, potentially defined by complex algorithms that exist only in the head of the person doing the translation, and are not captured anywhere in the actual logic.
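To make the schema point concrete, here is one of the two, the Separation schema, in the notation it's usually given (I'm leaving Replacement aside for brevity): "the axiom" is really one axiom for each first-order formula φ, an infinite family, and the quantification over φ happens outside the logic itself.

```latex
% Separation: for EACH first-order formula \varphi(x, \bar{p}),
% the following is an axiom -- an infinite family of axioms, since
% first-order logic cannot quantify over \varphi itself.
\forall \bar{p}\; \forall a\; \exists b\; \forall x\,
  \bigl( x \in b \leftrightarrow ( x \in a \wedge \varphi(x, \bar{p}) ) \bigr)
```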
This means that first-order logic fails to capture the full suite of reasoning practices of working mathematics, which is a clear weakness. Higher-order logics do much better, and their ever-growing extensions can be regarded as attempts to build systems that capture as much of a working mathematician's reasoning practice as possible. Indeed, if you just look at the basic logical vocabulary of higher-order logic, it's obvious that it aligns better with the vocabulary of ordinary maths. More important, for me, is the fact that these logics are far more effective when it comes to formalising mathematics on a computer: the infinite classes of theorems I mention above mean that translations are possible in principle but intractable in practice. They need to be removed if you want to do formalised mathematics for real.
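The contrast shows up neatly in arithmetic, to pick a standard example: first-order Peano arithmetic needs an induction schema, one instance per formula, while second-order logic can quantify over properties and state induction as a single axiom.

```latex
% First-order PA: induction is a schema, one axiom per formula \varphi
\bigl( \varphi(0) \wedge \forall n\, ( \varphi(n) \rightarrow \varphi(n+1) ) \bigr)
  \rightarrow \forall n\, \varphi(n)

% Second-order logic: a single axiom, quantifying over all properties P
\forall P\, \Bigl( \bigl( P(0) \wedge \forall n\, ( P(n) \rightarrow P(n+1) ) \bigr)
  \rightarrow \forall n\, P(n) \Bigr)
```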
Still, to reiterate, the idea that first-order logic is enough for maths in principle is pretty cool. But if that's our benchmark, then maybe we don't even need first-order logic. Maybe we can get away with something simpler.
Finitists, starting with Hilbert, think we can get away with very simple systems of arithmetic. The translations here are even more work than they are in the first-order case, but if they work in principle, that's friggin' awesome. The full claim is yet to be established, and might require that the mathematicians playing in the more exotic parts of set theory pare back what they consider the scope of their subject, but what's already been achieved in showing just how much of mathematics can be translated into quantifier-free systems of arithmetic is extremely impressive.
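For a flavour of what "quantifier-free" means here: in primitive recursive arithmetic, the usual formal stand-in for Hilbert-style finitism, functions are defined by equations with free variables only, no quantifiers anywhere. Addition, for instance:

```latex
% Primitive recursive arithmetic: addition defined by two
% quantifier-free equations (S is the successor function)
a + 0 = a
\qquad
a + S(b) = S(a + b)
```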
In summary, from one perspective, I think first-order set theory is all you need to translate, in principle, all of mathematics, though I suspect an even simpler system would do as well. On the other hand, I don't think first-order set theory tells us much about the actual logic of mathematical practice: higher-order logics have a much better story here, and we ultimately need them if we want to do the translations for real.