• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Usefulness of the standard system of classical logic

Speakpigeon

Contributor
Joined
Feb 4, 2009
Messages
6,317
Location
Paris, France, EU
Basic Beliefs
Rationality (i.e. facts + logic), Scepticism (not just about God but also everything beyond my subjective experience)
I'm going to have a bit more time to invest on fundamental research on logic. I think some of you here have real expertise on the subject to share.

I'm only really interested in the kind of logic that normally intelligent human beings seem to be able to apply, or use, intuitively, what I would call our "logical sense", or "sense of logic", something I believe we have without having first to think about it in any formal way. If you disagree with that, please explain.

So, if you know of any theory of that kind of logic, beyond the one proposed initially by Gottlob Frege and Bertrand Russell, that you happen to like and value, I'd like to hear any reasons you may have for that.

And, additionally, I'm just curious to see how many people around here will be interested!
EB

So, eventually, I guess I have to admit this one didn't elicit much interest!

So, I'll try a different angle. Here it is:

What would you say is the usefulness of the system of formal logic proposed initially by Gottlob Frege and Bertrand Russell, and later developed in the 20th century to become the de facto standard system of classical logic?
EB

________________________

It may help to have a few of the relevant terms and characters pinned down here.

Logicians

Ludwig Wittgenstein
The early Wittgenstein was concerned with the logical relationship between propositions and the world and believed that by providing an account of the logic underlying this relationship, he had solved all philosophical problems.
The later Wittgenstein rejected many of the assumptions of the Tractatus, arguing that the meaning of words is best understood as their use within a given language-game.

Gottlob Frege
1848–1925. German logician and philosopher, who laid the foundations of modern formal logic and semantics in his Begriffsschrift (1879)

Bertrand Russell
1872–1970. British philosopher, mathematician, social critic and writer who had a profound influence on the development of symbolic logic <snip>

Terminology

logic
n.
1. The study of principles of reasoning, especially of the structure of propositions as distinguished from their content, and of method and validity in deductive reasoning.
2. a. A system of reasoning: Aristotle's logic.
3. any particular formal system in which are defined axioms and rules of inference.

deduction
n.
4. Logic
a. The process of reasoning in which a conclusion follows necessarily from the stated premises; inference by reasoning from the general to the specific.

formal system
n
(Logic) an uninterpreted symbolic system whose syntax is precisely defined, and on which a relation of deducibility is defined in purely syntactic terms. Also called: formal theory or formal calculus

formal logic
n
1. (Logic) Also called: symbolic logic the study of systems of deductive argument in which symbols are used to represent precisely defined categories of expressions. Compare philosophical logic
2. (Logic) a specific formal system that can be interpreted as representing a fragment of natural argument

philosophical logic
n
(Logic) the branch of philosophy that studies the relationship between formal logic and ordinary language, esp the extent to which the former can be held accurately to represent the latter
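The "formal system" entry is abstract, so here is a minimal sketch of one (a toy system of my own invention, not from any textbook): strings over two symbols, one axiom, one inference rule, with deducibility defined in purely syntactic terms, as the definition requires:

```python
# A toy formal system: uninterpreted strings over {"I", "U"},
# the single axiom "I", and the single inference rule
# "from any theorem s, infer s + 'U'".
# Deducibility is pure string manipulation; the symbols mean nothing.

AXIOM = "I"

def apply_rule(theorem: str) -> str:
    """The one inference rule, defined syntactically."""
    return theorem + "U"

def theorems(depth: int) -> set[str]:
    """Everything deducible in at most `depth` rule applications."""
    derived = {AXIOM}
    frontier = {AXIOM}
    for _ in range(depth):
        frontier = {apply_rule(t) for t in frontier}
        derived |= frontier
    return derived

print(sorted(theorems(3)))  # ['I', 'IU', 'IUU', 'IUUU']
```

Nothing here is "interpreted" as true or false; whether a string counts as a theorem is settled entirely by the rewriting rule, which is exactly the point of the dictionary entry.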

The three laws of logic

The three laws of logic
The law of non-contradiction, along with its complement, the law of excluded middle (the third of the three classic laws of thought), are correlates of the law of identity (the first of the three laws). Because the law of identity partitions its logical Universe into exactly two parts, it creates a dichotomy wherein the two parts are "mutually exclusive" and "jointly exhaustive". The law of non-contradiction is merely an expression of the mutually exclusive aspect of that dichotomy, and the law of excluded middle, an expression of its jointly exhaustive aspect.

1. The law of identity
In logic, the law of identity is the first of the three classical laws of thought. It states that "each thing is the same with itself and different from another". By this it is meant that each thing (be it a universal or a particular) is composed of its own unique set of characteristic qualities or features, which the ancient Greeks called its essence. Consequently, things that have the same essence are the same thing, while things that have different essences are different things.
In its symbolic representation, "A is A", the first element of the proposition represents the subject (thing) and the second element represents the predicate (its essence), with the copula "is" signifying the relation of "identity".
Further, since a definition is an expression of the essence of that thing with which the linguistic term is associated, it follows that it is through its definition that the identity of a thing is established. For example, in the definitive proposition: "A lawyer is a person qualified and authorized to practice law", the subject (lawyer) and the predicate (person qualified and authorized to practice law) are declared to be one and the same thing (identical). Consequently, the Law of Identity prohibits us from rightfully calling anything other than "a person qualified and authorized to practice law" a "lawyer".

2. The law of non-contradiction
In classical logic, the law of non-contradiction (LNC) (or the law of contradiction (PM) or the principle of non-contradiction (PNC), or the principle of contradiction) is the second of the three classic laws of thought. It states that contradictory statements cannot both be true in the same sense at the same time, e.g. the two propositions "A is B" and "A is not B" are mutually exclusive.

3. The law of excluded middle
For any proposition, either that proposition is true, or its negation is true.
The law of excluded middle (or the principle of excluded middle) is the third of the three classic laws of thought.
The earliest known formulation is in Aristotle's discussion of the principle of non-contradiction, first proposed in On Interpretation, where he says that of two contradictory propositions (i.e. where one proposition is the negation of the other) one must be true, and the other false. He also states it as a principle in the Metaphysics book 3, saying that it is necessary in every case to affirm or deny, and that it is impossible that there should be anything between the two parts of a contradiction.
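Under the two-valued semantics of classical logic, the three laws above can be verified mechanically by exhausting the truth values; a minimal sketch in Python:

```python
# Check the three classical laws of thought over both truth values
# of classical logic.
for p in (True, False):
    assert p == p             # 1. law of identity
    assert not (p and not p)  # 2. law of non-contradiction
    assert p or not p         # 3. law of excluded middle
print("All three laws hold for both truth values.")
```

Note that the check exhausts the whole two-element space of truth values, which is precisely the "mutually exclusive and jointly exhaustive" dichotomy described above.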
 
What is a proposition?

What does it mean when a sentence makes grammatical sense?

Is it because the sentence follows some man-made rules?

Or is it because some "program" attached to our minds can make sense of it? An instinct basically.
 
SP, regarding your question "What would you say is the usefulness of the system of formal logic proposed initially by Gottlob Frege and Bertrand Russell, and later developed in the 20th century to become the de facto standard system of classical logic?", I am not quite sure about how to construe your term "useful". The question is just too broad. One could just as well ask "What is the usefulness of science?" or "What is the usefulness of mathematics?"

As a linguist, I find formal logic extremely useful as a tool for exploring the nature of human thought and language. In fact, both Russell and Frege were intensely interested in the meaning of sentences in English and German. Russell, in particular, was obsessed with the nature of presupposition and how presupposition failure could generate paradoxes. So he devised a formal system that simply eliminated expressions that could have presuppositions. In a sense, he was constructing what he regarded as an "ideal language", and his solution to philosophical paradoxes was simply to make it impossible to express them. His early partner in this endeavor was Ludwig Wittgenstein, who later turned around and proposed that philosophers who worried about paradoxes were essentially misusing natural language. So he went from being one of the progenitors of the "ideal language" movement to being a progenitor of the "ordinary language" movement. Hence, some have characterized the 20th century as the heyday of so-called "linguistic philosophy".
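One concrete detail behind this: the device Russell used to eliminate presupposition-bearing expressions was his theory of definite descriptions ("On Denoting", 1905), which rewrites "The present king of France is bald" as an existence-plus-uniqueness claim, ∃x(King(x) ∧ ∀y(King(y) → y = x) ∧ Bald(x)). A sketch evaluating that expansion over a made-up finite model (the entities and predicate sets are purely illustrative):

```python
# Russell's expansion of "The present king of France is bald":
#   ∃x ( King(x) ∧ ∀y (King(y) → y = x) ∧ Bald(x) )
# evaluated over a small, invented model.

entities = ["macron", "charles", "bigfoot"]
king_of_france = set()   # nobody satisfies King(x)
bald = {"charles"}

def the_king_is_bald(domain, king, bald):
    return any(
        x in king                                         # existence
        and all(y not in king or y == x for y in domain)  # uniqueness
        and x in bald                                     # predication
        for x in domain
    )

# With no king, the sentence comes out plainly false -- not
# truth-valueless, as a presuppositional account would have it.
print(the_king_is_bald(entities, king_of_france, bald))  # False
```

That is the sense in which the paradox-generating presupposition is "made impossible to express": the existence claim becomes an ordinary conjunct that can simply be false.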

The so-called school of "Ordinary Language Philosophy" generated some truly brilliant work on the nature of presuppositions, starting with John Austin's "How to Do Things With Words". But I think that Grice's "Cooperative Principle" was one of the most brilliant outcomes of that work by OL philosophers, because it opened up the linguistic community to the systematic study of language discourse, as opposed to just the study of the semantics of phrases and clauses. Ultimately, both the "ideal" and "ordinary" language philosophers really changed the way we look at the nature of human language, so, speaking as a linguist, I have found their work to be tremendously useful.
 
SP, regarding your question "What would you say is the usefulness of the system of formal logic proposed initially by Gottlob Frege and Bertrand Russell, and later developed in the 20th century to become the de facto standard system of classical logic?", I am not quite sure about how to construe your term "useful". The question is just too broad. One could just as well ask "What is the usefulness of science?" or "What is the usefulness of mathematics?"

My real interest is in how accurate so-called "standard logic" is in formalising our intuitive sense of logic. However, that's a difficult question and most people would have little to say on that subject unless they had themselves an interest in logic, and I would guess that few people have. So, I'm instead looking for real examples of people applying modern standard logic to solve whatever problems they may have. So, in this sense, I'll take useful to mean whatever people see as useful from their own perspective. The example you give yourself about your use of logic seems to fit the bill.

As a linguist, I find formal logic extremely useful as a tool for exploring the nature of human thought and language.

Could you provide a few examples of your use of formal logic? Any example would be interesting, but I'm especially concerned about complex formulae, if you have any.

Keep in mind I know little about linguistics. You may have to explain.

In fact, both Russell and Frege were intensely interested in the meaning of sentences in English and German. Russell, in particular, was obsessed with the nature of presupposition and how presupposition failure could generate paradoxes. So he devised a formal system that simply eliminated expressions that could have presuppositions. In a sense, he was constructing what he regarded as an "ideal language", and his solution to philosophical paradoxes was simply to make it impossible to express them. His early partner in this endeavor was Ludwig Wittgenstein, who later turned around and proposed that philosophers who worried about paradoxes were essentially misusing natural language. So he went from being one of the progenitors of the "ideal language" movement to being a progenitor of the "ordinary language" movement. Hence, some have characterized the 20th century as the heyday of so-called "linguistic philosophy".

I would see myself as proponent of both the ideal and the ordinary languages. I see logic as a set of rules inherent to rational thinking but not mandatory to language. As I see it, ordinary language does two important things in that respect. First, people choose whether they express themselves in a strictly logical way or not, and most of the time they don't. Second, language has, and can only have, other rules beyond those of logic. Restraining ourselves to logical statements would make most linguistic communications very nearly useless. Whatever logical thinking comes into our linguistic communication is usually not formally expressed. Logic is essentially conveyed by, and inferred from, the semantics of our communications rather than by its syntax. It works because we all have logical minds. This makes whatever logic is conveyed dependent on the syntax and semantics of our communications, and therefore dependent on the meaning each one gives to specific words. A complicated, and by nature a cooperative, business.

People like Russell came at a time when we were only starting to realise how complex the human mind, and therefore language, is. We're now squarely in it, notably with the difficulties we face in producing AIs. I think it's significant that Russell was the older man and Wittgenstein the younger one in the pair they made at the time. Russell opened the way with Frege, whereas Wittgenstein at some point realised the problem was more complex than initially envisioned.

The so-called school of "Ordinary Language Philosophy" generated some truly brilliant work on the nature of presuppositions, starting with John Austin's "How to Do Things With Words". But I think that Grice's "Cooperative Principle" was one of the most brilliant outcomes of that work by OL philosophers, because it opened up the linguistic community to the systematic study of language discourse, as opposed to just the study of the semantics of phrases and clauses. Ultimately, both the "ideal" and "ordinary" language philosophers really changed the way we look at the nature of human language, so, speaking as a linguist, I have found their work to be tremendously useful.

We're not going to look into the whole thing here but a bit of how you use formal logic in your work as a linguist would certainly help.

Thanks.
EB
 
I may have to be brief, because I am about to leave Perth AU for a long trip back to Seattle and am pecking this out on an iPad.

Logical expressions are inherently unambiguous, so linguists typically use a kind of hybrid first-order predicate calculus to represent the different semantic representations that a natural language sentence can have. For example, consider the sentence:

All politicians are not crooked.

This sentence is ambiguous, and the ambiguity can be represented by two different semantic representations that mean roughly "It is true that not (all politicians are crooked)" (i.e. some are honest) OR "It is true that all politicians are (not crooked)" (i.e. all are honest). So we use logical notation to tease out the scope ambiguity between the semantic operators for "not" and "all". The meaning difference depends on which operator falls within the scope of the other. However, the natural language expression collapses that distinction. So the job of a linguist is to come up with a theory of how such semantic scope differences can collapse into a single natural language expression. Can we come up with a set of rules or principles that govern such ambiguities?

That is just a small example of how we use formal logic to explore natural language semantics, and it hardly begins to explain what we do. And I didn't bother to try to use formal expressions, since there are different notations that would suffice. However, logic is the tool that allows us to recognize and describe fine semantic distinctions that attach to linguistic expressions.

From the perspective of an AI researcher, one can use logic to analyze the meaning of text and sound expressions. However, there are severe limitations on how well first-order logics scale up to handle the needs of theoretical and computational linguists.
 
Only American English uses this ambiguous form.

In British English, one would say "Not all politicians are crooked", or "all politicians are not crooked"; and 'not' would ALWAYS be understood to negate the phrase that immediately follows it.

Only an American could say "All convicts are not guilty", without calling for the immediate release of every single prisoner in our jails.
 
Actually, these scope ambiguities are quite well-studied by linguists, and I believe that both British and Australians have essentially the same grammar. In fact, all languages have scope ambiguities of one sort or another. However, it is sometimes hard to see that in the absence of discourse contexts that resolve the ambiguity.
 
I may have to be brief, because I am about to leave Perth AU for a long trip back to Seattle and am pecking this out on an iPad.

Logical expressions are inherently unambiguous, so linguists typically use a kind of hybrid first-order predicate calculus to represent the different semantic representations that a natural language sentence can have. For example, consider the sentence:

All politicians are not crooked.

This sentence is ambiguous, and the ambiguity can be represented by two different semantic representations that mean roughly "It is true that not (all politicians are crooked)" (i.e. some are honest) OR "It is true that all politicians are (not crooked)" (i.e. all are honest). So we use logical notation to tease out the scope ambiguity between the semantic operators for "not" and "all". The meaning difference depends on which operator falls within the scope of the other. However, the natural language expression collapses that distinction. So the job of a linguist is to come up with a theory of how such semantic scope differences can collapse into a single natural language expression. Can we come up with a set of rules or principles that govern such ambiguities?

That is just a small example of how we use formal logic to explore natural language semantics, and it hardly begins to explain what we do. And I didn't bother to try to use formal expressions, since there are different notations that would suffice. However, logic is the tool that allows us to recognize and describe fine semantic distinctions that attach to linguistic expressions.

Thanks, this is definitely a good example of a useful application of logic. As I see it, however, the scope of logical operators isn't specifically a logical issue. It's a syntax issue you have to settle first, just as you have to settle lexical and semantic issues before you can address the properly logical question of what kind of logical calculus you're going to use to represent our sense of logic. Formal logic systems as languages are unlike ordinary languages in having only tight rules, but you could make up formal logic systems with the same lexical, semantic and syntactic rules as standard logic but with completely different rules as to calculus, and consequently different results as to which formulae would be regarded as true and false. Lexical, semantic and syntactic rules are not properly logical issues because they are only necessary because we use a language to represent our sense of logic. Logical calculus, however, is inherently a logical issue as a part of our sense of logic. I would even say, our sense of logic is entirely the default logical calculus our brain performs.

So, I'd be interested in more complex examples, where the issue becomes whether standard calculus works at all.

From the perspective of an AI researcher, one can use logic to analyze the meaning of text and sound expressions. However, there are severe limitations on how well first-order logics scale up to handle the needs of theoretical and computational linguists.

I would think those go beyond the scope of logic proper. It seems a matter for mathematical and algorithmic models.
EB
 
Actually, these scope ambiguities are quite well-studied by linguists, and I believe that both British and Australians have essentially the same grammar. In fact, all languages have scope ambiguities of one sort or another. However, it is sometimes hard to see that in the absence of discourse contexts that resolve the ambiguity.

Having spent over two decades speaking English in Britain, and a similar amount of time doing the same in Australia, I can assure you that misplacing 'not' so that it doesn't immediately precede the phrase it is intended to modify is not acceptable in either form of English.

In both nations, "All convicts are not guilty" would be said only as a declaration of the universal innocence of every convict; while an American would use it to mean that there is merely at least one innocent amongst their number.

There are certainly many ambiguities in British and Australian English, but the misplacement of the 'not' modifier isn't one of them.

Of course, in both countries, American television is sufficiently ubiquitous that we understand the meaning, and it is unlikely that many people would be impolite enough to correct the error; But few Englishmen or Australians would make that mistake themselves.
 
Only American English uses this ambiguous form.

In British English, one would say "Not all politicians are crooked", or "all politicians are not crooked"; and 'not' would ALWAYS be understood to negate the phrase that immediately follows it.

Only an American could say "All convicts are not guilty", without calling for the immediate release of every single prisoner in our jails.

TBH, I think it's the same in American English. If someone said "all politicians are not crooked" and meant "not all politicians are crooked" then they are saying the wrong thing. It's not ambiguous.
 
All language is not ambiguous, but when it is, I drink to exes.

 
From one of untermensche's infinite threads on infinity:

Stop this. This thread will never end.
From the answers it is obvious that untermensche is either a bot, mentally ill, performing a social experiment, or making an artistic installation/performance.

Why couldn't it be all four?

I think these are ambiguities in language where we'd definitely want an understanding of logical operators to be able to analyze what is meant.

"Get at least a 65 on the final or you'll fail the course"
"You must have passed Course A or Course B to enroll in this class"
"He couldn't read or write"
 
we'd definitely want an understanding of logical operators to be able to analyze what is meant.

No, we know what logical operators mean and we understand them well enough, I think.

What we may need to better understand are coordinating conjunctions, because that's what we will actually find in an English sentence.

Fortunately, we have dictionaries...

or
conj.
1.
a. Used to indicate an alternative, usually only before the last term of a series: hot or cold; this, that, or the other.
b. Used to indicate the second of two alternatives, the first being preceded by either or whether: Your answer is either ingenious or wrong. I didn't know whether to laugh or cry.
c. Archaic Used to indicate the first of two alternatives, with the force of either or whether.
2. Used to indicate a synonymous or equivalent expression: acrophobia, or fear of great heights.
3. Used to indicate uncertainty or indefiniteness: two or three.

and
conj.
1. Together with or along with; in addition to; as well as. Used to connect words, phrases, or clauses that have the same grammatical function in a construction.
2. Added to; plus: Two and two makes four.
3. Used to indicate result: Give the boy a chance, and he might surprise you.
4. (Informal) Used after a verb such as come, go, or try to introduce another verb describing the purpose of the action: come and see; try and find it.
5. Archaic If: and it please you.
EB
 
"You must have passed Course A or Course B to enroll in this class"

Penn State academic advisor "You technically have the prerequisites, but I don't think linear algebra was included with the one prerequisite when you took it."
Me "Linear algebra, isn't that just algebra dealing with lines? I learned that stuff in high school."
Penn State academic advisor "I don't know, you can take the course if you want."
Me "Sure, sounds good."

Professor, 1/3 way through the semester (past late pull out date), aced the first test: "Now we're back to the part you all know and love, linear algebra...."
Me, 2 minutes into that class... "I'm fucked."

Later that day (after talking to the prof, who said I was fucked if they let me take this course without LA), Penn State academic advisor "It's too late to pull out and reschedule. You're going to lose funding retroactively (and permanently) if you do. We don't have any tutors for math higher than calculus."

So I walked away from higher education with some debt, and a big chip on my shoulder. I learned that my unwillingness to be corrupt guaranteed me a life on the bottom, and that I wouldn't be able to do anything interesting or highly beneficial to mankind that I didn't discover on my own. Which led to the trap of creating useless fractals (which, given infinite time and energy, would be something actually VERY useful, but damnit, I'm human).
 
No, we know what logical operators mean and we understand them well enough, I think.

What we may need to better understand are coordinating conjunctions, because that's what we will actually find in an English sentence.

Fortunately, we have dictionaries...



and
conj.
1. Together with or along with; in addition to; as well as. Used to connect words, phrases, or clauses that have the same grammatical function in a construction.
2. Added to; plus: Two and two makes four.
3. Used to indicate result: Give the boy a chance, and he might surprise you.
4. (Informal) Used after a verb such as come, go, or try to introduce another verb describing the purpose of the action: come and see; try and find it.
5. Archaic If: and it please you.
EB

Which of those definitions is the one in "He couldn't read or write"?

Ask a random passerby in the street about the meaning of that sentence and I'd bet you'd get a pretty decent intuitive explanation of De Morgan's laws.
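That intuition can be checked mechanically: "He couldn't read or write" parses as ¬(Read ∨ Write), which De Morgan's laws equate with ¬Read ∧ ¬Write. A brute-force truth-table check:

```python
from itertools import product

# De Morgan's laws over every assignment to two propositions:
#   ¬(R ∨ W) ≡ ¬R ∧ ¬W   (the "couldn't read or write" pattern)
#   ¬(R ∧ W) ≡ ¬R ∨ ¬W
for r, w in product((True, False), repeat=2):
    assert (not (r or w)) == ((not r) and (not w))
    assert (not (r and w)) == ((not r) or (not w))
print("De Morgan's laws hold on all four assignments.")
```

So the passerby who hears "he couldn't read or write" as "he couldn't read AND couldn't write" is applying the first equivalence, whether or not they could name it.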
 
No, we know what logical operators mean and we understand them well enough, I think.

What we may need to better understand are coordinating conjunctions, because that's what we will actually find in an English sentence.

Fortunately, we have dictionaries...



and
conj.
1. Together with or along with; in addition to; as well as. Used to connect words, phrases, or clauses that have the same grammatical function in a construction.
2. Added to; plus: Two and two makes four.
3. Used to indicate result: Give the boy a chance, and he might surprise you.
4. (Informal) Used after a verb such as come, go, or try to introduce another verb describing the purpose of the action: come and see; try and find it.
5. Archaic If: and it please you.
EB

Which of those definitions is the one in "He couldn't read or write"?

???

Sorry, but I would rather use definitions for "or".

And here, the 1.a. will do:
1.a. Used to indicate an alternative, usually only before the last term of a series: hot or cold; this, that, or the other.

Ask a random passerby in the street about the meaning of that sentence and I'd bet you'd get a pretty decent intuitive explanation of De Morgan's laws.

I would hope so.
EB
 
Actually, these scope ambiguities are quite well-studied by linguists, and I believe that both British and Australians have essentially the same grammar. In fact, all languages have scope ambiguities of one sort or another. However, it is sometimes hard to see that in the absence of discourse contexts that resolve the ambiguity.

Having spent over two decades speaking English in Britain, and a similar amount of time doing the same in Australia, I can assure you that misplacing 'not' so that it doesn't immediately precede the phrase it is intended to modify is not acceptable in either form of English.

In both nations, "All convicts are not guilty" would be said only as a declaration of the universal innocence of every convict; while an American would use it to mean that there is merely at least one innocent amongst their number.

There are certainly many ambiguities in British and Australian English, but the misplacement of the 'not' modifier isn't one of them.

Of course, in both countries, American television is sufficiently ubiquitous that we understand the meaning, and it is unlikely that many people would be impolite enough to correct the error; But few Englishmen or Australians would make that mistake themselves.

Bilby, what I take you to be saying is that the following expressions would never be considered well-formed by speakers of Australian and British dialects of English. That is, they would always be heard as coming from some other dialect of English, e.g. American English. My impression is that they are universal in most, if not all, established English dialects.

1) I have concluded that everyone in this room is not an Australian. Some must be Americans.
2) All of the men did not bring guns. Some brought knives.
3) Every farmer did not beat his donkey. Some treated them with kindness.

Unfortunately, I don't possess the tools any longer to search online British and Australian corpora to test your claim, but I cannot recall hearing or reading of a British or Australian linguist denying that such scope ambiguities were naturally occurring in their dialects. They are extremely well-studied. I'll try to remember to ask some of my British or Australian colleagues about your claim and get back to you in a pm, but I also urge you to be sensitive to these constructions and be on the lookout for such usage yourself. One of the first rules of doing linguistics is not to rely on one's own intuitions of well-formedness alone, because people have a tendency to make mistakes when thinking about such expressions outside of contexts. In the absence of evidence from independent sources, we'll just have to disagree for now.

ETA: I have found at least one counterexample to your claim, Bilby. This sentence is from the British newspaper The Telegraph. (link: Has our nation got the politicians it deserves?):

...Of course, just as all politicians are not bad, neither are all our people...

And then there is this passage from The Oxford Handbook of Information Structure (see the paragraph under "Scope Inversion", which claims that the sentence "All politicians are not corrupt" means "Not all politicians are corrupt" except under certain conditions, called CT-marking in the text).

And here is another from a Guardian article (See Theresa May is latest victim of Britain's newest sport: booing ministers):

And in the view of the Paralympic crowd, all politicians are not the same.
 
I may have to be brief, because I am about to leave Perth AU for a long trip back to Seattle and am pecking this out on an iPad.

Logical expressions are inherently unambiguous, so linguists typically use a kind of hybrid first-order predicate calculus to represent the different semantic representations that a natural language sentence can have. For example, consider the sentence:

All politicians are not crooked.

This sentence is ambiguous, and the ambiguity can be represented by two different semantic representations that mean roughly "It is true that not (all politicians are crooked)" (i.e. some are honest) OR "It is true that all politicians are (not crooked)" (i.e. all are honest). So we use logical notation to tease out the scope ambiguity between the semantic operators for "not" and "all". The meaning difference depends on which operator falls within the scope of the other. However, the natural language expression collapses that distinction. So the job of a linguist is to come up with a theory of how such semantic scope differences can collapse into a single natural language expression. Can we come up with a set of rules or principles that govern such ambiguities?

That is just a small example of how we use formal logic to explore natural language semantics, and it hardly begins to explain what we do. And I didn't bother to try to use formal expressions, since there are different notations that would suffice. However, logic is the tool that allows us to recognize and describe fine semantic distinctions that attach to linguistic expressions.

Thanks, this is definitely a good example of a useful application of logic. As I see it, however, the scope of logical operators isn't specifically a logical issue. It's a syntax issue you have to settle first, just as you have to settle lexical and semantic issues before you can address the properly logical question of what kind of logical calculus you're going to use to represent our sense of logic. Formal logic systems as languages are unlike ordinary languages in having only tight rules, but you could make up formal logic systems with the same lexical, semantic and syntactic rules as standard logic but with completely different rules as to calculus, and consequently different results as to which formulae would be regarded as true and false. Lexical, semantic and syntactic rules are not properly logical issues because they are only necessary because we use a language to represent our sense of logic. Logical calculus, however, is inherently a logical issue as a part of our sense of logic. I would even say, our sense of logic is entirely the default logical calculus our brain performs.

So, I'd be interested in more complex examples, where the issue becomes whether standard calculus works at all.

From the perspective of an AI researcher, one can use logic to analyze the meaning of text and sound expressions. However, there are severe limitations on how well first-order logics scale up to handle the needs of theoretical and computational linguists.

I would think those go beyond the scope of logic proper. It seems a matter for mathematical and algorithmic models.
EB

I think that all logical expressions are just notational variants of mathematical ones. At least, that was what Principia Mathematica tried to show.

The real problem with logical representations is that logicians have no way to formalize the conversion of natural language expressions into logical notation, although all textbooks on logic imply such a methodology. To some extent, generative semanticists in the 1970s tried to formalize such a methodology with their concept of "natural logic". (See Jim McCawley's Everything that Linguists have Always Wanted to Know about Logic...But Were Ashamed to Ask). McCawley was a friend and a mentor of mine.

The problem was that he could never figure out how exactly to deal with presuppositions, which logical expressions are designed to exclude. Generative semanticists took the position that the meaning of an expression was essentially its "deep structure" and that all "surface" expressions were derived from the underlying logic by derivational rules. Before Jim died, I recall asking him what his thoughts were on handling presupposition, and he told me that I would be rich and famous, if I could come up with a solution. What happened, though, was that his school of Generative Semantics essentially collapsed because of that problem (and others). (Jim was one of the founders of that school, along with George Lakoff and John R Ross.)

What came to replace it was the school of Cognitive Linguistics, especially as developed by a number of linguists at Berkeley. Basically, what happened was that a large number of semanticists came to accept that the linguistic signal was defective as a "container" of the meaning of a sentence. There had to be some way to interpret sentence meaning within the framework of a full-fledged discourse. Hence, a number of new semantic theories evolved, e.g. Charles Fillmore's Frame Semantics.
 