
Usefulness of the standard system of classical logic

Bilby, what I take you to be saying is that the following expressions would never be considered well-formed by speakers of Australian and British dialects of English. That is, they would always be heard as coming from some other dialect of English, e.g. American English. My impression is that they are universal in most, if not all, established English dialects.

1) I have concluded that everyone in this room is not an Australian. Some must be Americans.
2) All of the men did not bring guns. Some brought knives.
3) Every farmer did not beat his donkey. Some treated them with kindness.
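
To make the ambiguity concrete, sentence (2) has two available readings, which can be rendered in standard first-order notation (my own rendering, using obvious predicate names):

```latex
% Surface scope: "none of the men brought guns"
\forall x\,\bigl(\mathit{man}(x) \rightarrow \lnot\,\mathit{broughtGun}(x)\bigr)

% Inverse scope: "not all of the men brought guns"
\lnot\,\forall x\,\bigl(\mathit{man}(x) \rightarrow \mathit{broughtGun}(x)\bigr)
```

The question at issue is whether the second, inverse-scope reading is available to British and Australian speakers.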

Unfortunately, I don't possess the tools any longer to search online British and Australian corpora to test your claim, but I cannot recall hearing or reading of a British or Australian linguist denying that such scope ambiguities were naturally occurring in their dialects. They are extremely well-studied. I'll try to remember to ask some of my British or Australian colleagues about your claim and get back to you in a pm, but I also urge you to be sensitive to these constructions and be on the lookout for such usage yourself. One of the first rules of doing linguistics is not to rely strictly on one's own intuitions of well-formedness alone, because people have a tendency to make mistakes when thinking about such expressions outside of contexts. In the absence of evidence from independent sources, we'll just have to disagree for now.

ETA: I have found at least one counterexample to your claim, Bilby. This sentence is from the British newspaper The Telegraph. (link: Has our nation got the politicians it deserves?):

...Of course, just as all politicians are not bad, neither are all our people...

And then there is this passage from The Oxford Handbook of Information Structure. (See the paragraph under "Scope Inversion", which claims that the sentence "All politicians are not corrupt" means "Not all politicians are corrupt" except under certain conditions, called CT-marking in the text.)

And here is another from a Guardian article (See Theresa May is latest victim of Britain's newest sport: booing ministers):

And in the view of the Paralympic crowd, all politicians are not the same.

All of the examples you give strike me as jarringly wrong, and a very strong indication that the speaker or writer is American.

They certainly wouldn't have been thought acceptable by any of my teachers at school in Yorkshire in the 1970s and '80s.

The influence of American English on the British and Australian dialects, largely through television shows, might well have made this form more common in recent decades, but it certainly strikes me as very poor English, and is enough to derail my train of thought when I come across it.
 

All of the examples you give strike me as jarringly wrong, and a very strong indication that the speaker or writer is American.
Nonsense. All three were from British sources that had gone through an editing process in British publications. And I just found them from a very superficial google search on the string of words "all politicians are not..." The middle citation was authored by a British linguist and was part of that linguist's research into the very construction that you are denying exists in British and Australian English. Your response was apparently to deny the simplest explanation--that they represent counterexamples to your claim--and jump to the unsupported conclusion that the authors in all three cases were Americans who had somehow snuck through American expressions that their British editors had missed.

They certainly wouldn't have been thought acceptable by any of my teachers at school in Yorkshire in the 1970s and '80s.
I don't know what your teachers would have said, but that is beside the point. Their job was to teach the class to write in a style that others would accept as scholarly, not to tell you how British speakers actually speak and write English. That is, they saw their role as prescriptive, not descriptive. You made an empirical generalization about British and Australian usage, and I was able to come up with counterexamples rather quickly from a very superficial search. I don't think that that bodes well for your general claim. I suspect that a more rigorous search of British and Australian corpora would come up with an overwhelming load of counterexamples. I'm not faulting your teachers for trying to teach you guidelines for writing formal English, but they should not be taken as authorities on usage patterns in English. That likely was never part of their training.

The influence of American English on the British and Australian dialects, largely through television shows, might well have made this form more common in recent decades, but it certainly strikes me as very poor English, and is enough to derail my train of thought when I come across it.
There is no evidence that those constructions were in any way influenced by the American dialect, and your cognitive dissonance over these constructions may have more to do with how you were trained to write formally than in what you have seen and heard people saying. There are very good reasons why traditional school guidelines on English writing mislead people so badly. Such grammars came into being at a time when the middle class was expanding rapidly, and there was a high demand to train large numbers of people to write in an educated style. So a lot of the works gave some rather poor advice. For example, the so-called bible of American prescriptive grammarians, Strunk and White's The Elements of Style was written by well-meaning, but confused, amateurs. They actually gave examples of active sentences that they called "passive", because they didn't really know what passive sentences were. See 50 Years of Stupid Grammar Advice by Geoffrey K. Pullum, a renowned British linguist. He is quite brutal in his review of Strunk and White, but you have to understand that this book has done more damage than just about any other style guide in the history of education.

BTW, for those who are interested in a detailed example of how formal logic is used by linguists to analyze quantifier scope ambiguities, I recommend a glance at Stanford professor Chris Potts' handout on the subject: Quantifier scope. He represents the semantics of scope ambiguities with lambda expressions. I have avoided going into such details, because most folks here would be totally unfamiliar with such logical notation.
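
For those who want to play with the idea without learning lambda notation, here is a toy sketch (mine, not Potts'; the domain and the "corrupt" set are invented for illustration) that models the two scope readings with ordinary lambdas:

```python
# Toy model of the two scope readings of "All politicians are not corrupt",
# with the quantifier and negation as higher-order functions (lambdas).
politicians = {"alice", "bob", "carol"}
corrupt = {"bob"}  # invented: only bob is corrupt

every = lambda dom: lambda p: all(p(x) for x in dom)  # universal quantifier
neg = lambda t: not t                                 # negation

# Surface scope: every politician is such that he/she is not corrupt.
reading_all_not = every(politicians)(lambda x: neg(x in corrupt))

# Inverse scope: it is not the case that every politician is corrupt.
reading_not_all = neg(every(politicians)(lambda x: x in corrupt))

print(reading_all_not)  # False -- bob is corrupt
print(reading_not_all)  # True  -- but not everyone is
```

The two readings come apart exactly when some, but not all, of the domain has the property, which is why the continuations in my examples ("Some brought knives") force the inverse-scope reading.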
 
Wandering around with this damn lantern looking for reasons to justify rational as empirical, and finding none, I conclude there is absolutely no use for the standard system of classical logic.

What took you so long, Babe? :confused:
EB
 
All of the examples you give strike me as jarringly wrong, and a very strong indication that the speaker or writer is American.

They certainly wouldn't have been thought acceptable by any of my teachers at school in Yorkshire in the 1970s and '80s.

The influence of American English on the British and Australian dialects, largely through television shows, might well have made this form more common in recent decades, but it certainly strikes me as very poor English, and is enough to derail my train of thought when I come across it.

And yet.

"All that glitters is not gold".

Shakespeare.

An English dramatist. :D

Published 1596.

All that glitters is not gold
https://www.phrases.org.uk/meanings/all-that-glitters-is-not-gold.html

Shakespeare is the best-known writer to have expressed the idea that shiny things aren't necessarily precious things. The original editions of The Merchant of Venice, 1596, have the line as 'all that glisters is not gold'. 'Glister' is usually replaced by 'glitter' in modern renditions of the play:

O hell! what have we here?
A carrion Death, within whose empty eye
There is a written scroll! I'll read the writing.
All that glitters is not gold;
Often have you heard that told:
Many a man his life hath sold
But my outside to behold:
Gilded tombs do worms enfold.
Had you been as wise as bold,
Young in limbs, in judgment old,
Your answer had not been inscroll'd:
Fare you well; your suit is cold.

Would that be good enough?



Still, I myself learned my very British English mostly through the 1990s, largely by listening to BBC Radio 4, and I also find the "all..not" structure to be substandard. I had to restrain myself from commenting on Copernicus' examples before bilby did.

However, since broadly 2000, I have noticed more and more similarly substandard usages on air. I concluded that the prescriptive will of the no-longer-so-upper-class British establishment was just weakening beyond repair. That's still my "theory" (rather than just the influence of American sitcoms).
EB
 
BTW, for those who are interested in a detailed example of how formal logic is used by linguists to analyze quantifier scope ambiguities, I recommend a glance at Stanford professor Chris Potts' handout on the subject: Quantifier scope. He represents the semantics of scope ambiguities with lambda expressions. I have avoided going into such details, because most folks here would be totally unfamiliar with such logical notation.

Thanks.

Some hard work there.
EB
 
I think that all logical expressions are just notational variants of mathematical ones. At least, that was what Principia Mathematica tried to show.

The real problem with logical representations is that logicians have no way to formalize the conversion of natural language expressions into logical notation, although all textbooks on logic imply such a methodology. To some extent, generative semanticists in the 1970s tried to formalize such a methodology with their concept of "natural logic". (See Jim McCawley's Everything that Linguists have Always Wanted to Know about Logic...But Were Ashamed to Ask). McCawley was a friend and a mentor of mine.

The problem was that he could never figure out how exactly to deal with presuppositions, which logical expressions are designed to exclude. Generative semanticists took the position that the meaning of an expression was essentially its "deep structure" and that all "surface" expressions were derived from the underlying logic by derivational rules. Before Jim died, I recall asking him what his thoughts were on handling presupposition, and he told me that I would be rich and famous if I could come up with a solution.

What happened, though, was that his school of Generative Semantics essentially collapsed because of that problem (and others). (Jim was one of the founders of that school, along with George Lakoff and John R Ross.) What came to replace it was the school of Cognitive Linguistics, especially as developed by a number of linguists at Berkeley. Basically, what happened was that a large number of semanticists came to accept that the linguistic signal was defective as a "container" of the meaning of a sentence. There had to be some way to interpret sentence meaning within the framework of a full-fledged discourse. Hence, a number of new semantic theories evolved, e.g. Charles Fillmore's Frame Semantics.

Yes, to me it's clear that the best that can be done would be to give a comprehensive account of how logical relations are standardly expressed in a given language. That's definitely not the same idea as generative grammar but that would still be very useful.
EB
 
What would you say is the usefulness of the system of formal logic proposed initially by Gottlob Frege and Bertrand Russell, and later developed in the 20th century to become the de facto standard system of classical logic?
I don't think I'd agree with a history that says that Gottlob Frege or Bertrand Russell did this. I also don't think either of them had very much to say about the so-called "three classical laws" (but I'd love to hear otherwise!)

Frege and Russell both worked on higher-order classical logics which were supposed to be sufficiently expressive to get you most of mathematics. Frege's system didn't work. Russell's appears to work, with the introduction of the idea of types.

The average modern-day working mathematician who knows about logical foundations isn't likely to know much about Russell's system, and will instead tell you that the appropriate logic is not higher order, but is instead first-order, together with the axioms of Zermelo-Fraenkel set theory. This axiom system had a development parallel to Frege and Russell's, and the two are not particularly easy to compare. That said, those mathematicians are not actually carrying out the projects started by Russell and Frege.

The true inheritors of Frege and Russell's research programmes now work in a mixture of logics, including numerous constructive logics which do not admit the law of excluded middle. One justification given for working in such logics is that they are more useful. One argument that gets proposed is that, when you try to carry out a project like Russell's or Frege's, you will need a computer, and if you need a computer, you will need a logic that foremost respects the needs of computers, and that will be a constructive logic where proofs all have a computational interpretation. This argument has its detractors.
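
To illustrate what "proofs with a computational interpretation" means (my example, under the Curry-Howard reading): a constructive proof of (A ∧ B) → (B ∧ A) is literally a program that swaps the components of a pair:

```python
# Curry-Howard in miniature: the proposition (A and B) -> (B and A)
# corresponds to the type Tuple[A, B] -> Tuple[B, A], and a proof of it
# corresponds to a program of that type.
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def swap(proof: Tuple[A, B]) -> Tuple[B, A]:
    """Given evidence for both conjuncts, return evidence in the other order."""
    a, b = proof
    return (b, a)

print(swap((1, "two")))  # ('two', 1)
```

Running the proof is just running the program, which is why such logics sit so naturally on computers.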
 
What would you say is the usefulness of the system of formal logic proposed initially by Gottlob Frege and Bertrand Russell, and later developed in the 20th century to become the de facto standard system of classical logic?
I don't think I'd agree with a history that says that Gottlob Frege or Bertrand Russell did this. I also don't think either of them had very much to say about the so-called "three classical laws" (but I'd love to hear otherwise!)

I'm not entirely clear what it is you're saying you disagree with here.

Still, I'm no specialist in the history of logic, so it's fair to say my notions in this area are probably a bit fuzzy, but if what I say in my OP shows I am somehow mistaken, I don't seem to be the only one around, as shown by quick probes into Wikipedia.
EB

Classical logic
https://en.wikipedia.org/wiki/Classical_logic

Examples of classical logics
- Aristotle's theory of syllogisms
- George Boole's algebraic reformulation of logic, "Boolean logic";
- The first-order logic found in Gottlob Frege's Begriffsschrift.
First-order logic
https://en.wikipedia.org/wiki/First-order_logic

The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[4] For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).
Gottlob Frege
https://en.wikipedia.org/wiki/Gottlob_Frege

Frege's work in logic had little international attention until 1903 when Russell wrote an appendix to The Principles of Mathematics stating his differences with Frege. The diagrammatic notation that Frege used had no antecedents (and has had no imitators since). Moreover, until Russell and Whitehead's Principia Mathematica (3 vols.) appeared in 1910–13, the dominant approach to mathematical logic was still that of George Boole (1815–64) and his intellectual descendants, especially Ernst Schröder (1841–1902). Frege's logical ideas nevertheless spread through the writings of his student Rudolf Carnap (1891–1970) and other admirers, particularly Bertrand Russell and Ludwig Wittgenstein (1889–1951).
 
What would you say is the usefulness of the system of formal logic proposed initially by Gottlob Frege and Bertrand Russell, and later developed in the 20th century to become the de facto standard system of classical logic?

Frege and Russell both worked on higher-order classical logics which were supposed to be sufficiently expressive to get you most of mathematics. Frege's system didn't work. Russell's appears to work, with the introduction of the idea of types.

The average modern-day working mathematician who knows about logical foundations isn't likely to know much about Russell's system, and will instead tell you that the appropriate logic is not higher order, but is instead first-order, together with the axioms of Zermelo-Fraenkel set theory. This axiom system had a development parallel to Frege and Russell's, and the two are not particularly easy to compare. That said, those mathematicians are not actually carrying out the projects started by Russell and Frege.

The true inheritors of Frege and Russell's research programmes now work in a mixture of logics, including numerous constructive logics which do not admit the law of excluded middle. One justification given for working in such logics is that they are more useful. One argument that gets proposed is that, when you try to carry out a project like Russell's or Frege's, you will need a computer, and if you need a computer, you will need a logic that foremost respects the needs of computers, and that will be a constructive logic where proofs all have a computational interpretation. This argument has its detractors.

I'm only interested in classical logic and, more particularly, first-order logic.

Still, thanks for the insight on constructive logics.
EB
 
I'm only interested in classical logic and, more particularly, first-order logic.

Still, thanks for the insight on constructive logics.
EB
Okay. Well, I'd be happy to talk about why I think first-order logic isn't so useful for formalising mathematics, which would seem relevant to your OP given that formalising mathematics was the goal for Russell and Frege. But if that's not what you had in mind, fair enough.

PS: I find Wikipedia pretty unreliable on this stuff.
 
Well, I'd be happy to talk about why I think first-order logic isn't so useful for formalising mathematics

Isn't that a straightforward consequence of Gödel's incompleteness theorems?

PS: I find Wikipedia pretty unreliable on this stuff.

Sure, but for any question about which you're not a specialist, Wikipedia has to be good enough until proven otherwise. Personally, I couldn't fault the few links I provided.
EB
 
Well, I'd be happy to talk about why I think first-order logic isn't so useful for formalising mathematics

Isn't that a straightforward consequence of Gödel's incompleteness theorems?
No. Higher-order logics are just as susceptible to Goedel's theorems. In fact, they suffer harder: first-order logic is undecidable but at least axiomatisable; second-order and higher logic isn't even axiomatisable!

My complaints about first-order logic for formalising mathematics are more practical. You end up, in practice, having to repeat yourself and run the same proofs over and over, because you can't abstract over the bits you're interested in.
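
A concrete instance of the abstraction problem (my example, not specific to any one post above): first-order Peano arithmetic cannot quantify over properties, so induction has to be stated as an axiom schema, one instance per formula, whereas second-order logic states it once:

```latex
% First-order PA: an induction schema, one instance for each formula \varphi
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr)
  \rightarrow \forall n\,\varphi(n)

% Second-order logic: a single axiom, quantifying over the property P itself
\forall P\,\Bigl[\bigl(P(0) \land \forall n\,(P(n) \rightarrow P(n+1))\bigr)
  \rightarrow \forall n\,P(n)\Bigr]
```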

PS: I find Wikipedia pretty unreliable on this stuff.

Sure, but for any question about which you're not a specialist, Wikipedia has to be good enough until proven otherwise. Personally, I couldn't fault the few links I provided.
EB
Oh, I'm not slagging off wikipedia in general. In fact, on pure technical stuff, I generally haven't found much fault with the site. But this history business is a bit messy.

I will just repeat that Frege's formal system was not first-order, so the idea that he laid the foundations of first-order logic is pretty odd. The dominance, in certain spheres, of first-order logic involves a careful separation of ideas that Frege was (unknowingly) conflating.
 
Isn't that a straightforward consequence of Gödel's incompleteness theorems?

No. Higher-order logics are just as susceptible to Goedel's theorems. In fact, they suffer harder: first-order logic is undecidable but at least axiomatisable; second-order and higher logic isn't even axiomatisable!

My complaints about first-order logic for formalising mathematics are more practical. You end up, in practice, having to repeat yourself and run the same proofs over and over, because you can't abstract over the bits you're interested in.

I take this to mean you don't object to the principles of first-order logic as it's taught in universities today.

And, presumably, that you're unlikely to find there's some other system that would somehow be better. Except the system of doing less of it. :p

PS: I find Wikipedia pretty unreliable on this stuff.

Sure, but for any question about which you're not a specialist, Wikipedia has to be good enough until proven otherwise. Personally, I couldn't fault the few links I provided.
EB
Oh, I'm not slagging off wikipedia in general. In fact, on pure technical stuff, I generally haven't found much fault with the site. But this history business is a bit messy.

I will just repeat that Frege's formal system was not first-order, so the idea that he laid the foundations of first-order logic is pretty odd. The dominance, in certain spheres, of first-order logic involves a careful separation of ideas that Frege was (unknowingly) conflating.

Yes, I would agree that the history of logical thought is particularly messy and confusing compared to, for example, quantum physics at broadly the same period (see if you would agree with the presentation given here http://mcps.umn.edu/philosophy/11_4Moore.pdf).

So, I happily concede your point that my formulation in the OP was very approximate and potentially misleading. Still, I wasn't trying to credit Frege with the authorship of first-order logic. Rather, I wanted to identify the subject matter and, in my experience, Frege, with Russell, is invariably taken, in commercially available material, as the main starting point of modern first-order logic.

So, talking of first-order logic, it seems to me you're saying that it is not only useful, but properly fundamental in mathematics. Or did I misunderstand what you said?

And if you think it's fundamental, would you agree with Skolem that first-order logic is "the proper and natural framework for mathematics":
The Minnesota Center for Philosophy of Science
http://mcps.umn.edu/philosophy/11_4Moore.pdf

1. Introduction
To most mathematical logicians working in the 1980s, first-order logic is the proper and natural framework for mathematics. Yet it was not always so. In 1923, when a young Norwegian mathematician named Thoralf Skolem argued that set theory should be based on first-order logic, it was a radical and unprecedented proposal.

Yet, there is the limitation you've mentioned of undecidability of first-order logic. So I have a rather difficult question for you: Would the incompleteness theorems apply ipso facto to any new system of first-order logic?
EB
 
I take this to mean you don't object to the principles of first-order logic as it's taught in universities today.

And, presumably, that you're unlikely to find there's some other system that would somehow be better. Except the system of doing less of it. :p
Sorry if I'm not clear. I think systems of higher-order logic are better.

Yes, I would agree that the history of logical thought is particularly messy and confusing compared to, for example, quantum physics at broadly the same period (see if you would agree with the presentation given here http://mcps.umn.edu/philosophy/11_4Moore.pdf).
Yeah, that Moore paper's decent, in my opinion.

So, I happily concede your point that my formulation in the OP was very approximate and potentially misleading. Still, I wasn't trying to credit Frege with the authorship of first-order logic. Rather, I wanted to identify the subject matter and, in my experience, Frege, with Russell, is invariably taken, in commercially available material, as the main starting point of modern first-order logic.

So, talking of first-order logic, it seems to me you're saying that it is not only useful, but properly fundamental in mathematics. Or did I misunderstand what you said?

And if you think it's fundamental, would you agree with Skolem that first-order logic is "the proper and natural framework for mathematics":
The Minnesota Center for Philosophy of Science
http://mcps.umn.edu/philosophy/11_4Moore.pdf

1. Introduction
To most mathematical logicians working in the 1980s, first-order logic is the proper and natural framework for mathematics. Yet it was not always so. In 1923, when a young Norwegian mathematician named Thoralf Skolem argued that set theory should be based on first-order logic, it was a radical and unprecedented proposal.

Yet, there is the limitation you've mentioned of undecidability of first-order logic. So I have a rather difficult question for you: Would the incompleteness theorems apply ipso facto to any new system of first-order logic?
EB
The Incompleteness Theorems apply to any system that's expressive enough to talk about Halting problems. For those systems, you can write a program that searches through its theorems, and then throw that program into the jaws of the Halting Problem. The achievement of Goedel's proof is that this actually happens when considering pretty damn simple systems for doing arithmetic. So any logical system that's good for mathematics is going to suffer from incompleteness issues.
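
The argument above can be sketched in code (a hypothetical sketch: the theorem enumeration is a stand-in, and the toy "theory" below just settles two invented halting claims):

```python
# If a sound theory proved every true statement of the form "P halts" or
# "P does not halt", then enumerating its theorems would decide the
# Halting Problem -- which Turing showed is impossible.

def theorems():
    """Stand-in for an effective enumeration of a theory's theorems.
    A real one would interleave all derivations from the axioms."""
    yield "halts(P1)"
    yield "not halts(P2)"

def decide_halting(program_id):
    """Search the theorem stream for a verdict. With a genuinely complete
    theory this would be a halting decider; in reality, for some inputs
    the search can simply never find an answer."""
    for t in theorems():
        if t == f"halts({program_id})":
            return True
        if t == f"not halts({program_id})":
            return False

print(decide_halting("P1"))  # True
print(decide_halting("P2"))  # False
```

The contradiction with the Halting Problem is what forces any such theory to leave some true statements unproved.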

As for first-order logic, I think variations of its set theories are an expressive enough target language to translate pretty much all the mathematical theorems and proofs we care about. This is cool, and I can't say I'm not impressed by the translation power. However, there are a few things you'd notice if you actually tried to do it: it would involve translations that mathematicians would consider hacky and artificial, and they might find themselves sympathising with linguists trying to do the same with their beloved natural languages. Furthermore, a lot of the theorems would turn out not to translate into theorems exactly, but would instead turn into classes of theorems that fit some pattern defined outside the system. First-order logic would not be capable of generalising over that pattern. This situation is already present if you look at the axioms of ZF set theory, which features two axioms that are not axioms at all, but instead two patterns or schemas that describe two infinite classes of axiom. All generic derivations from these axiom classes produce yet more infinite classes, potentially defined by complex algorithms that only exist in the heads of the person doing the translation, and not captured anywhere in the actual logic.
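
For reference, the standard example of such a schema in ZF is Separation (Replacement is the other one): it is not a single axiom but one axiom for every first-order formula φ, so the "axiom" is really an infinite family:

```latex
% Axiom schema of Separation: one instance per formula \varphi(x, \vec{p})
\forall \vec{p}\;\forall a\;\exists b\;\forall x\,
  \bigl(x \in b \;\leftrightarrow\; x \in a \land \varphi(x, \vec{p})\bigr)
```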

This means that first-order logic fails to capture the full suite of reasoning practices of working mathematics, which is a clear weakness. Higher-order logics do much better, and their ever growing extensions can be regarded as trying to build systems that capture as much of a working mathematician's reasoning practices as possible. Indeed, if you just look at the basic logical vocabulary of higher-order logic, it's obvious that it better aligns with the vocabulary of ordinary maths. More importantly, for me, is the fact that these logics are far more effective when it comes to formalising mathematics on a computer: those infinite classes of theorems I mention above mean that translations are possible, in principle, but intractable in practice. They need to be removed if you want to do formalised mathematics for real.

Still, to reiterate, the idea that first-order logic is enough for maths in principle is pretty cool. But if that's our benchmark, then maybe we don't even need first-order logic. Maybe we can get away with something simpler. Finitists, starting with Hilbert, think we can get away with very simple systems of arithmetic. The translations here are even more work than they are in the first order case, but, if they work in principle, that's friggin' awesome. The full claim is yet to be established, and might require that the mathematicians playing in the more exotic parts of set theory pare back on what they consider the scope of their subject, but what's already been achieved in showing just how much of mathematics can be translated to quantifier free systems for arithmetic is extremely impressive.

In summary, from one perspective, I think first-order set theory is all you need to translate, in principle, all of mathematics, though I suspect an even simpler system would do as well. On the other hand, I don't think first-order set theory tells us much about the actual logic of mathematical practice; higher-order logics have a much better story here, and we ultimately need them if we want to do the translations for real.
 
PS: I'm also a constructivist, so I have beef with classical logic in general. I think classical logic is a fairly recent mistake we made in mathematics.
 
I just want to point out in an aside here that natural languages are very different from formal languages, i.e. artificial languages, such as symbolic logical notation or programming languages. Mathematical and logical expressions are essentially conduits of information. They transform input values into output. Linguists have historically treated natural languages as if they were also conduits of information. That treatment has come to be known as the Conduit Metaphor in the linguistic literature. That is, sentences fully "contain" their meaning.

The Conduit Metaphor has led to a lot of advances in our understanding of natural language semantics, but its weaknesses began to be exposed in the 1970s, when theoreticians were unable to explain meaning (and even some aspects of syntactic structure) outside of discourse or narrative contexts. Although logical expressions seem isomorphic with natural language expressions, they are very different types of information conveyors. The school of Generative Semantics collapsed in the 1970s precisely because the vision of its proponents was to transform first-order logical structure into natural language expressions. Generative semantics was grounded in the Conduit Metaphor, so it was unable to deal with elements of linguistic meaning that were external to the linguistic signal.
 
Sorry if I'm not clear. I think systems of higher-order logic are better.

Ok, and I'm not going there!

Yeah, that Moore paper's decent, in my opinion.

Thanks.

The Incompleteness Theorems apply to any system that's expressive enough to talk about Halting problems. For those systems, you can write a program that searches through its theorems, and then throw that program into the jaws of the Halting Problem. The achievement of Goedel's proof is that this actually happens when considering pretty damn simple systems for doing arithmetic. So any logical system that's good for mathematics is going to suffer from incompleteness issues.

As for first-order logic, I think variations of its set theories are an expressive enough target language to translate pretty much all the mathematical theorems and proofs we care about. This is cool, and I can't say I'm not impressed by the translation power. However, there are a few things you'd notice if you actually tried to do it: it would involve translations that mathematicians would consider hacky and artificial, and they might find themselves sympathising with linguists trying to do the same with their beloved natural languages. Furthermore, a lot of the theorems would turn out not to translate into theorems exactly, but would instead turn into classes of theorems that fit some pattern defined outside the system. First-order logic would not be capable of generalising over that pattern. This situation is already present if you look at the axioms of ZF set theory, which features two axioms that are not axioms at all, but instead two patterns or schemas that describe two infinite classes of axiom. All generic derivations from these axiom classes produce yet more infinite classes, potentially defined by complex algorithms that only exist in the heads of the person doing the translation, and not captured anywhere in the actual logic.

This means that first-order logic fails to capture the full suite of reasoning practices of working mathematics, which is a clear weakness. Higher-order logics do much better, and their ever growing extensions can be regarded as trying to build systems that capture as much of a working mathematician's reasoning practices as possible. Indeed, if you just look at the basic logical vocabulary of higher-order logic, it's obvious that it better aligns with the vocabulary of ordinary maths. More importantly, for me, is the fact that these logics are far more effective when it comes to formalising mathematics on a computer: those infinite classes of theorems I mention above mean that translations are possible, in principle, but intractable in practice. They need to be removed if you want to do formalised mathematics for real.

Still, to reiterate, the idea that first-order logic is enough for maths in principle is pretty cool. But if that's our benchmark, then maybe we don't even need first-order logic. Maybe we can get away with something simpler. Finitists, starting with Hilbert, think we can get away with very simple systems of arithmetic. The translations here are even more work than in the first-order case, but if they work in principle, that's friggin' awesome. The full claim is yet to be established, and might require that the mathematicians playing in the more exotic parts of set theory pare back what they consider the scope of their subject, but what's already been achieved in showing just how much of mathematics can be translated into quantifier-free systems of arithmetic is extremely impressive.
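As a toy illustration of the quantifier-free style the finitists had in mind: in primitive recursive arithmetic, addition is fixed by two equations in free variables, and each numerical instance then follows by substitution and rewriting alone, with no quantifiers anywhere.

```latex
% Defining equations for addition in primitive recursive
% arithmetic: free variables only, no quantifiers.
\begin{align*}
  x + 0    &= x \\
  x + S(y) &= S(x + y)
\end{align*}
```

General laws like commutativity then live at the level of schemata over numerals rather than as quantified theorems inside the system.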

In summary, from one perspective I think first-order set theory is all you need to translate, in principle, all of mathematics, though I suspect an even simpler system would do as well. On the other hand, I don't think first-order set theory tells us much about the actual logic of mathematical practice; higher-order logics have a much better story there, and we ultimately need them if we want to do the translations for real.

Thanks, it helps a lot even if I wouldn't swear I understand everything you say in here.

Still, aren't you just saying that algorithmic processes can't deal with infinite sets?

Aren't you also saying that the kind of machines we have now can't do some of the things the human brain does just using intuition?

Me, I certainly don't see how that could be done.
EB
 
I just want to point out, as an aside here, that natural languages are very different from formal languages, i.e. artificial languages such as symbolic logical notation or programming languages. Mathematical and logical expressions are essentially conduits of information: they transform input values into output. Linguists have historically treated natural languages as if they, too, were conduits of information. That treatment has come to be known as the Conduit Metaphor in the linguistic literature. That is, sentences fully "contain" their meaning.

The Conduit Metaphor has led to a lot of advances in our understanding of natural language semantics, but its weaknesses began to be exposed in the 1970s, when theoreticians were unable to explain meaning (and even some aspects of syntactic structure) outside of discourse or narrative contexts. Although logical expressions seem isomorphic with natural language expressions, they are very different types of information conveyors. The school of Generative Semantics collapsed in the 1970s precisely because the vision of its proponents was to transform first-order logical structure into natural language expressions. Generative Semantics was grounded in the Conduit Metaphor, so it was unable to deal with elements of linguistic meaning that were external to the linguistic signal.

Yes, that's broadly how I see the problem. I think of it as the difference between the "closed" systems of formal languages and the "openness" of natural languages. Closed systems contain their semantics; open systems do not.

I would also assume that the potential for infinity in open systems requires some effective mechanism for dealing with it, a mechanism that would look intuitive to us, subjectively. Something we have yet to emulate in the design of our machines.

If I'm making sense.
EB
 
Thanks, it helps a lot even if I wouldn't swear I understand everything you say in here.
Cheers! That's as much as I hope from experts in other fields trying to explain stuff to me.

Still, aren't you just saying that algorithmic processes can't deal with infinite sets?

Aren't you also saying that the kind of machines we have now can't do some of the things the human brain does just using intuition?
I'm not saying either of those. I personally think our computers are about as capable of dealing with the infinite as we humans. As for the idea that computers can't do stuff that humans can, that's pretty obvious, right? No-one's claiming to have replicated human cognition in machines at this stage. Whether that'll always be the case, I couldn't say. I think formal systems are the appropriate language for expressing computerised mathematics. But the task of finding proofs has always required artificial intelligence, and the computers are only going to get better at that.
 
Computer programming languages all allow for iteration and recursion, both of which can serve to ground the concept of infinity. Human cognition is associative, and we ground new ideas in experiences. So what kinds of experiences might ground our concept of infinity? Those would be experiences of repetition that lack a termination condition. Numbering systems are a good example, but they are a subset of linguistic structures, which are riddled with iterative and recursive processes. (See Lakoff and Núñez's Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being.)
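A minimal Python sketch of the point about repetition without a termination condition: the generator below describes an unbounded process, yet any actual computation only ever samples a finite prefix of it, which is arguably how iteration grounds a notion of potential infinity.

```python
from itertools import islice

def naturals():
    """Iteration with no termination condition: a potential infinity."""
    n = 0
    while True:  # no stopping clause; the process is unbounded
        yield n
        n += 1

# We only ever observe finite prefixes of the unbounded process.
first_five = list(islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

The infinity lives in the description, not in any run of the program.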
 