

Now please tell me: how do you know that A is A and that A is not non-A?

I will but first I really want to know what's on your mind. Can you explain why you think it may be a problem to do what you just asked?
EB

I have asked you the question first; in all fairness, I think you should answer that question first.
 
That's pretty dumb as justifications go. My question was not the same as your question.

Anyway.

So I could go all inductive and notice that I can't seem to see (observe) any difference ever between whatever thing I want to consider and itself. After some time I'll decide that it must be because there's a general law A = A which applies across reality (and there, fromderinside, it will apply deductively). And I'll regard this law as valid until such a time as it's shown to be wrong. Not ideal but we can live with less than ideal, until we die, that is.

I could also go the formal route. A is A because, irrespective of what A may be used to refer to in the concrete world, the form "A is A" is obviously formally true, as in, I can see it's true just by looking at it. Specifically, I look at the first "A" and then I look at the second "A" and I see it's bloody obvious there's no difference between these two symbols, therefore they are the same, at least on a formal level, i.e. the first symbol is "A" and the second symbol is "A" and I can't see any difference between them, so I take them to be the same symbol. The rest I take to be trivial. For example, two instances of the same symbol refer to one thing and one thing only, so if A = A formally then it's also true that the thing referred to by the first symbol A is identical to the thing referred to by the second symbol A. That's true by convention, and without it we could not have any sensible conversation ever. Etc.
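For what it's worth, here is how standard first-order logic writes down the two laws the opening question asks about; the formulation is my gloss, not something EB committed to above:

```latex
% Law of identity: reflexivity of equality, taken as an axiom.
\forall x \, (x = x)
% Law of non-contradiction: no proposition holds together with its negation.
\neg (P \wedge \neg P)
```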

Your turn.
EB
 
The earliest moanings I read about it were in Quine's Mathematical Logic. It's not new, and it still trips students up, myself included back in 2004.

Whatever complaints you muster, you ultimately want something like the material conditional, whether or not it is the appropriate translation for the English "implies."

???

No, personally, I don't want to end up with the material conditional. Why should anybody?

Me, I just want a formal method for expressing logical relations such that calculation is not restricted by the complexity of the formulae, while the formal method is still clearly seen as entirely compatible with all our basic intuitions about logic. That's what modern logic hasn't done.


The truth table for implication is another matter. One basic requirement of implication is that it should be the "exponential" for the lattice of truth values. In the case of Boolean logic, where there are basically only two truth values, you're in this stupidly trivial world where you get the strange equivalence (p → q) ↔ (¬p ∨ q), which makes implication kind of pathetic. It has me suggesting to students that they should simply read classical implication "p → q" as nothing more than "either p is false or q is true."
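To see that "stupidly trivial world" spelled out, here is a quick check (my illustration, not part of the post) that the truth-table conditional and the "either p is false or q is true" reading agree on all four Boolean cases:

```python
# Compare the material conditional, defined by its truth table, with the
# reading "either p is false or q is true" on every Boolean valuation.
from itertools import product

def material_implication(p: bool, q: bool) -> bool:
    """False only when p is true and q is false; true otherwise."""
    return not (p and not q)

for p, q in product([False, True], repeat=2):
    assert material_implication(p, q) == ((not p) or q)
    print(f"p={p!s:<5} q={q!s:<5}  p -> q = {material_implication(p, q)}")
```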

As I see it, the intuitive concept of implication doesn't reduce to "either p is false or q is true" and I haven't seen any conclusive justification to the contrary. That is, I've read justifications but I found them lacking.

I don't think formal logic should strictly adhere to our intuitions, our psychology, or our language. All of modern mathematics starts at something intuitive and then abstracts towards some implied concept that can easily be a long distance from where you started. Logic is the same, IMO. What you end up with as the correct logic is probably not what you instantly thought of, and we shouldn't be trying to justify our informed end state based on how intuitive it is to the beginner.

I think we should start from our intuitions because that's all we have to start from that has any demonstrable value. Then, obviously, our little brain has its limitations so we want to go all formal to be able to deal with any complex formulae we like. But the formal method we select should be completely in line with our intuitions wherever we can have them.

And (2 = 3) → (10/2 = 5) isn't in line with my intuition. So, that's it. If you know of a formal method that says that this formula is false then you may have something.
EB
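For readers keeping score, this is the verdict the classical semantics actually gives on that formula; a sketch of mine, using Python's Boolean operators to stand in for the arrow:

```python
# EB's example: (2 = 3) -> (10/2 = 5). Under the material conditional a false
# antecedent makes the whole formula true, whatever the consequent says.
antecedent = (2 == 3)        # False
consequent = (10 / 2 == 5)   # True
print((not antecedent) or consequent)  # True -- precisely the verdict EB objects to
```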
 
???

No, personally, I don't want to end up with the material conditional. Why should anybody?
Because it's the exponential. Call it "the exponential" if you like, and reserve "conditional" for something you can find that keeps your intuition happy. Whatever alternative formula you come up with, I'll bet it'll be less useful than the exponential, which will retain its place as a fundamental connective.
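For anyone not fluent in the jargon, the property being invoked by "exponential" is, as I read it, the following residuation law for conjunction over the lattice of truth values (my gloss, not the poster's wording):

```latex
% The exponential (residual of conjunction): for every truth value x,
%   x <= (p -> q)   iff   (x AND p) <= q.
x \le (p \rightarrow q) \;\iff\; (x \wedge p) \le q
% In the two-element Boolean lattice the unique connective satisfying this
% is p -> q = (NOT p) OR q, i.e. the material conditional.
```

In richer lattices of truth values (Heyting algebras, for instance) the same law picks out intuitionistic implication instead, which is why the connective is regarded as fundamental rather than an arbitrary choice.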

As I see it, the intuitive concept of implication doesn't reduce to "either p is false or q is true" and I haven't seen any conclusive justification to the contrary. That is, I've read justifications but I found them lacking.
I didn't say it reduces. I ask students to read the formula "p → q" as "either p is false or q is true", and not as "p implies q", since it avoids just this sort of confusion. If we're doing intuitionistic logic, this reading isn't possible, however.

I think we should start from our intuitions because that's all we have to start from that has any demonstrable value.
What has demonstrable value are the formal results of mathematical logic and computer science, and their contributions to automated reasoning and formal verification, despite the offence caused to the odd student's intuition. Indeed, the demonstrable value suggests to me that we haven't missed anything here.

Then, obviously, our little brain has its limitations so we want to go all formal to be able to deal with any complex formulae we like. But the formal method we select should be completely in line with our intuitions wherever we can have them.
If that were our criterion, we'd lose most of mathematics.

Logicians get over this by the end of their first undergraduate logic course, and never think about it again, unless they have to teach undergrads and want to find the quickest way to deal with the odd student who struggles with it.

I'm happy to try helping you figure out "→", but it's not going to happen if you declare that your unspecified intuitions can trump any of my explanations. Judging by the other thread, you'll have similar problems trying to understand why 0.999...=1.
 
Speakpigeon said:
???

No, personally, I don't want to end up with the material conditional. Why should anybody?

Because it's the exponential. Call it "the exponential" if you like, and reserve "conditional" for something you can find that keeps your intuition happy. Whatever alternative formula you come up with, I'll bet it'll be less useful than the exponential, which will retain its place as a fundamental connective.

I'm concerned with the formalisation of human logic, not with some arbitrary calculus that no one can explain in terms of how it relates to the human mind.

As I see it, the intuitive concept of implication doesn't reduce to "either p is false or q is true" and I haven't seen any conclusive justification to the contrary. That is, I've read justifications but I found them lacking.
I didn't say it reduces. I ask students to read the formula "p → q" as "either p is false or q is true", and not as "p implies q", since it avoids just this sort of confusion. If we're doing intuitionistic logic, this reading isn't possible, however.

I'm not interested in reading "p implies q" as "either p is false or q is true" and still can't see why I should.

What has demonstrable value are the formal results of mathematical logic and computer science, and their contributions to automated reasoning and formal verification, despite the offence caused to the odd student's intuition. Indeed, the demonstrable value suggests to me that we haven't missed anything here.

I don't see how the Boolean calculus used in computers would exemplify the value of Modern Logic as representative of human logic, which is what I'm interested in. Computers are calculating machines and the rules they follow are not restricted to the rules of human logic.

And I also don't know what the value of mathematical logic is in terms of practical applications. I doubt very much that any value it may have proves anything as to whether Modern Logic properly accounts for human logic.

So unless you can explain briefly what it is, I will retain my point that material implication has nothing to do with human logic.

Then, obviously, our little brain has its limitations so we want to go all formal to be able to deal with any complex formulae we like. But the formal method we select should be completely in line with our intuitions wherever we can have them.
If that were our criterion, we'd lose most of mathematics.

No. There's no good reason to insist on using the same criteria for mathematics and logic.

Human logic is not a part of mathematics. If there's an analogy, I would use arithmetic as taught in primary schools. Arithmetic is a straightforward formal extension of what all humans do using their fingers, and personally I don't see how you can pooh-pooh this achievement. This kind of arithmetic works fine and is completely in line with our intuition. What could be wrong with that?

I understand that modern logicians, having given up on logic, have gone all mathematical, and that's their right, but if you read the textbooks, that distinction is never clearly admitted. It's not healthy, and I find it even unethical.

Logicians get over this by the end of their first undergraduate logic course, and never think about it again, unless they have to teach undergrads and want to find the quickest way to deal with the odd student who struggles with it.

And you think it's a good thing?! I certainly understand why undergraduates would choose to toe the line to get a meal, but there's no reason that this should also apply to me or anybody else.

I'm happy to try helping you figure out "→", but it's not going to happen if you declare that your unspecified intuitions can trump any of my explanations.

I didn't ask you to help me figure out what the implication is as this is already pretty clear to me, thank you.

What I asked you is whether you could justify Modern Logic's interpretation of the implication as material implication. I don't see where you've done that yet.

Then, again, I'm convinced it can't be done so no bother, really.

Anyway, thanks for the effort.

Judging by the other thread, you'll have similar problems trying to understand why 0.999...=1.

Good.
EB
 
I'm not interested in reading "p implies q" as "either p is false or q is true" and still can't see why I should.
Again, this is not what I'm asking!

I'm suggesting people read the three arbitrary symbols "p → q" as "either p is false or q is true." You don't need to mention "implies", or associate the arrow with the word.

I understand that modern logicians, having given up on logic, have gone all mathematical, and that's their right, but if you read the textbooks, that distinction is never clearly admitted. It's not healthy, and I find it even unethical.
I don't really know what you mean by "human logic." It might not even exist as far as I can tell.

I'm all for discouraging students from thinking that mathematical logic has anything to say about psychology or correct reasoning, and for emphasising it as a machine-readable calculus capable of expressing arbitrary properties of formal specifications. This isn't generally a great way to begin, though, since it is far too abstract, so I prefer to start teaching by bootstrapping on the idea that the formalism is inspired by elements of natural language or other intuitive concepts.

This is how we teach programming languages also, by starting with something intuitive even if the final word on the meaning of a programming language is in its abstract specification.

Even if intuition can continue to play a role in developing new ideas, the goal is not to say something psychologically meaningful, and I generally try to make clear that I don't think of logic this way. Besides which, I suggest intuition changes once you've grokked a system. I honestly see nothing counterintuitive in the fact that falsehoods imply anything. I mean, of course they do!
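Since "falsehoods imply anything" is usually where intuitions dig in, here is the standard two-step derivation behind it; this is my summary of the textbook argument, not something from the post itself:

```latex
% Anything follows from a contradiction (ex falso quodlibet):
%   1. p                 assumption
%   2. \neg p            assumption
%   3. p \vee q          from 1, by or-introduction (addition)
%   4. q                 from 2 and 3, by disjunctive syllogism
p,\ \neg p \;\vdash\; q
```

Conditionalising on the first assumption is what gives the classical arrow its "a false antecedent makes the conditional true" flavour.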

Philosophers of logic might disagree, but I don't have a lot of time for them anyway.
 
Logic is the study of types of inference. Knowing what kinds of inference are applicable is useful because we often have partial knowledge about a domain of discourse and yet we want to say something about the whole domain. I.e., what possible worlds are consistent with what I currently know about the world? At one extreme, you can believe that anything is consistent with what you know. If this is the case, then what you know tells you nothing about the rest of the domain/universe. The rules of classical logic or intuitionistic logic are well-thought-out proposals of things we can say given what we know. There are situations where you might prefer one logic over another; for example, maybe it's slightly easier to extract computer programs from intuitionistic proofs. People are studying new domains all the time, along with the inferences that are useful for reasoning about them, and coming up with new logics. Usually the starting point, though, is a well-studied logic such as classical logic. Most logics where you restrict classical logic, such as not believing things like A OR NOT A, will mean you believe more worlds are consistent with your current knowledge, so it is harder to infer anything. Logic is empirical in that you can look at a domain, compare two logics, and say, "hey, this logic allowed me to better predict the space of likely outcomes."
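The "possible worlds consistent with what I know" picture is easy to make concrete; here is a small sketch of mine (the helper name and the toy knowledge base are made up for illustration):

```python
# Enumerate all valuations of two propositional letters and keep the ones
# on which everything we know comes out true; whatever holds in all of the
# surviving "worlds" is what our knowledge lets us infer.
from itertools import product

def consistent_worlds(knowledge):
    """Return the (p, q) valuations that satisfy every known sentence."""
    return [(p, q) for p, q in product([False, True], repeat=2)
            if all(sentence(p, q) for sentence in knowledge)]

# What we know: "p or q" and "not p".
knowledge = [lambda p, q: p or q, lambda p, q: not p]
print(consistent_worlds(knowledge))  # [(False, True)]: only one world survives, so q follows

# Weakening the logic never rules out extra worlds, which is the sense in which
# a weaker logic makes it harder to infer anything.
```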
 
I'm sure it must be very interesting to study logic in the sense you suggest here, but then I wonder how it is shown that any of these various types of inference, including classical logic, is what human beings do when they reason logically. I can't fault the All-men-are-mortal kind of syllogism, and I haven't seen anybody manage to fault it either. The Frege-Russell solution, on the other hand, doesn't look much like what humans do, it seems to me. What do you think?
EB
 
A point I've probably made already in this thread is that, if you go back to Euclid, it's clear that most of his proofs are constructive (especially those that end QEF rather than QED), and that classical logic doesn't capture those reasoning practices.

Arguments over classical and intuitionist logic didn't really get going until the late 19th and early 20th century, when Kronecker's generation were complaining about what they saw as the abuse of the actual infinite (a notion criticised since antiquity) by the set theorists, and then the fallout from the paradoxes that emerged. You only really have to care about intuitionism versus classical logic when you start playing fast and loose with infinities, and suddenly, it's not entirely clear what the logic should be. The logic most mathematicians adopt is not fairly called "classical logic", but "classical set theory", and it's a thoroughly modern invention and not particularly well studied.
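A stock illustration of the non-constructive move at issue, added here by way of example rather than taken from the post: the classical proof that an irrational raised to an irrational power can be rational.

```latex
% Claim: there exist irrationals a, b with a^b rational.
% Either \sqrt{2}^{\sqrt{2}} is rational or it is not (excluded middle):
%   - if rational, take a = b = \sqrt{2};
%   - if irrational, take a = \sqrt{2}^{\sqrt{2}} and b = \sqrt{2}, since
%     a^b = \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{2} = 2.
% Either way the claim holds, yet the proof never tells us which pair works --
% exactly the sort of step an intuitionist declines to make.
\exists\, a, b \notin \mathbb{Q} \ \text{such that}\ a^{b} \in \mathbb{Q}
```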

The joke is that classical logic isn't really classical at all, at least in the sense of "classical period."
 