
Role of Logic

What role does DEDUCTIVE LOGIC play in the way you reason in your everyday life?

  • P - Now that you mention it I don't actually remember using it, ever.

    Votes: 0 0.0%
  • Q - You need to have a reality check here! It's been proven wrong time and time again so I wouldn't

    Votes: 0 0.0%
  • R - I'm working very hard to make money and I don't have any time to spend on wasteful moronic games

    Votes: 0 0.0%
  • T - Hey! I'm not a logician, Ok?

    Votes: 0 0.0%
  • H - It doesn't even exist at all. As a matter of that, human beings can't actually 'reason'. Only my

    Votes: 0 0.0%
  • S - Actually, I would use it if I knew what it's for.

    Votes: 0 0.0%
  • U - How would I know?

    Votes: 0 0.0%
  • J - I'm the only one I know who happens to reason properly. Other people somehow always get their pr

    Votes: 0 0.0%
  • V - I refuse to answer silly questions.

    Votes: 0 0.0%
  • I - It's too many answers. I can't possibly read them all.

    Votes: 0 0.0%
  • W - Same as Jesus.

    Votes: 0 0.0%
  • D - I don't know what you're talking about.

    Votes: 0 0.0%
  • B - It's very important. It's always there somehow. It has always a role, whatever my reasoning may be. I couldn't reason in any way without it.

    Votes: 0 0.0%
  • Y - It's illogical to use logic.

    Votes: 0 0.0%
  • K - Fortunately, I can choose when to use it and it's not very often because it's not very effective

    Votes: 0 0.0%
  • A - None whatsoever.

    Votes: 0 0.0%
  • N - It's a completely illogical answer.

    Votes: 0 0.0%
  • M - It's a completely illogical question.

    Votes: 0 0.0%
  • O - It's a delusion people have. It's an imaginary construct that doesn't exist. It's a philosophica

    Votes: 0 0.0%
  • L - I haven't studied it so I wouldn't know how to use it properly. My judgement is good enough to t

    Votes: 0 0.0%
  • X - All the answers above.

    Votes: 0 0.0%

  • Total voters
    6
  • Poll closed.
For the last decade, I've been using computer based programs for stating and verifying deductive arguments for mathematical theorems, and for checking the correctness of computer programs, so I work with deductive logic quite aggressively day-to-day. To be fair, the computer does a lot of the work, and is much better when it comes to fiddly details. I mostly think about the proofs a few levels up.
Ah, that's interesting. Any chance you could give us an example?

Specifically, I'm interested in the kind of typical deductive arguments you have to deal with.

Modern science and modern mathematics were already well developed when modern logic got fully formalised, say around the 1930s. So, basically, before that, scientists and mathematicians had to rely on their logical intuition, and they seem to have done really well. I haven't found in the literature any grand claim by contemporary logicians that formal logic would be crucial to the current development of science and maths (I found just one very modest claim, which suggests that's all I'll ever find). It's my understanding that mathematical theorems are usually not even fully formalised, let alone demonstrated formally. Rather, there is a lot of reliance on intuition. I also expect that all demonstrations come down to A & B & C etc. imply Z. So, basically, formal proof would consist largely in making explicit the conjunction of premises. I don't see why you'd need modern logic at all for that. Do you?

Concerning the correctness of computer programmes, as I see it, the critical point is that the computer does exactly what the programmer intended it to do. I'm not sure how you could ever check that this is the case other than by using stringent protocols for writing and testing programmes in real time, including identifying all possible states the programme could reach, and often there are too many of those and we can't even identify all of them in a reasonable amount of time. In other words, there's no formal method to prove a programme. Am I wrong on this?
EB
 
Deductive reasoning is inseparable from inductive reasoning. They are both critical elements of modeling, which is what we do when we need to override autopilot.
Strictly speaking, they are separable, since separating them is precisely what philosophers and logicians have done for more than two millennia. Also, there's no difficulty in making the distinction between the two explicit. Also, we can do one and not the other. I can determine empirically some general law and then stop there without making any deduction based on it. I can also make deductions from invented rules, not empirically determined ones; the logic will be perfectly good although the conclusion will be useless.

That being said, I agree, if that's what you mean, that anything useful we do requires both inductive and deductive logic.

Also, isn't logic in operation even when we're on autopilot? Why not?
EB
 
For the last decade, I've been using computer based programs for stating and verifying deductive arguments for mathematical theorems, and for checking the correctness of computer programs, so I work with deductive logic quite aggressively day-to-day. To be fair, the computer does a lot of the work, and is much better when it comes to fiddly details. I mostly think about the proofs a few levels up.
Ah, that's interesting. Any chance you could give us an example?

Specifically, I'm interested in the kind of typical deductive arguments you have to deal with.
It's pretty impenetrable, but here's a two line snippet of code that is typical of some of the code I've written, a snippet which advertises that it's about logic:

Code:
assume "between P1 X P2 /\ between Q1 X Q2" at [4]
hence "~(P1 = P2) /\ ~(Q1 = Q2)" at [5] by [BET_NEQS]

You'll have to excuse the ASCII art. The software was designed before Unicode. "/\" is ∧ and means "logical and".

The snippet deduces that P1 is not equal to P2, and Q1 is not equal to Q2, given that X lies (strictly) between the respective pairs of points, using a lemma called "BET_NEQS". I was mostly sad about having to write this code, since I consider the matter obvious, but when you use computerised theorem provers, you often have to spell stuff out.

Modern science and modern mathematics were already well developed when modern logic got fully formalised, say around the 1930s. So, basically, before that, scientists and mathematicians had to rely on their logical intuition, and they seem to have done really well. I haven't found in the literature any grand claim by contemporary logicians that formal logic would be crucial to the current development of science and maths (I found just one very modest claim, which suggests that's all I'll ever find). It's my understanding that mathematical theorems are usually not even fully formalised, let alone demonstrated formally. Rather, there is a lot of reliance on intuition. I also expect that all demonstrations come down to A & B & C etc. imply Z. So, basically, formal proof would consist largely in making explicit the conjunction of premises. I don't see why you'd need modern logic at all for that. Do you?
This paragraph is all correct. Maths is almost entirely unformalised (thank god, given the current state of formalised mathematics!). But we pretty much work on the philosophy that, even if we almost never formalise, we potentially could, if we were so inclined, and we assume that our informal proofs would be a reasonable high-level account of the fully formalised proof.

That said, there are now a few high profile cases where practising mathematicians have been forced to formalise. In these cases, the proofs are so subtle, and have so many case-splits, that you can't trust a human to account for them reliably. This was the situation with the proofs of the Four Colour Theorem and the Kepler Conjecture, proofs that rely on big computations to eliminate cases, and which are way too laborious to check by hand. Mathematicians make mistakes, and we've had big cases of erroneous proofs getting through review (the first "proof" of the Four Colour Theorem was floating around for a decade before someone pointed out that it was broken). By contrast, a machine-verified proof is pretty much beyond all possible reproach.

So I would stress that formal logic and verification here are used to help you check proofs. Otherwise, you mostly figure out the proof using your sloppy ape brain, with good-old-fashioned intuition. You just bring in the formal logic to vet your logical acumen. This, so I've read, is exactly how Frege intended formal logic: it wasn't the way to do maths. It was the way to vet maths. Indeed, my experience of working in formal logic tells me that formal logic is a really poor match for the way I actually think about maths problems. It's mostly for the computers, not for the humans.

Concerning the correctness of computer programmes, as I see it, the critical point is that the computer does exactly what the programmer intended it to do.
That almost never happens. What do you think a bug is?

I'm not sure how you could ever check that this is the case other than by using stringent protocols for writing and testing programmes in real time, including identifying all possible states the programme could reach, and often there are too many of those and we can't even identify all of them in a reasonable amount of time. In other words, there's no formal method to prove a programme. Am I wrong on this?
EB
For lots of programs, we already have engineering specifications. If you want to write a web server, there are a whole bunch of engineering documents which tell you what your program needs to do before it can be called a "conforming implementation." It's not that hard to rewrite these specifications in formal logic, after which you can use a machine verifier to prove, absolutely, that a given implementation is a conforming implementation of the specification.
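To give a flavour of what that looks like, here's a toy sketch (a made-up example in Lean-style syntax, nothing to do with the real web-server specs or with the prover I actually use): the specification is written as a predicate, the implementation is an ordinary program, and the machine checks a proof that the one satisfies the other.

Code:
-- Toy sketch (hypothetical example): a specification written as a predicate,
-- a candidate implementation, and a machine-checked proof of conformance.

-- Specification: the program must return the length of its input list.
def Conforming (f : List Nat → Nat) : Prop :=
  ∀ xs : List Nat, f xs = xs.length

-- Candidate implementation: count the elements by recursion.
def count : List Nat → Nat
  | []     => 0
  | _ :: t => count t + 1

-- Conformance proof: once the verifier accepts this, `count` provably meets
-- the specification for every possible input, not just the tested ones.
theorem count_conforms : Conforming count := by
  unfold Conforming
  intro xs
  induction xs with
  | nil => simp [count]
  | cons x t ih => simp [count, ih]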

There is a lot more code out there that could be matched to formal specifications, and it's usually the code that most needs it.
 
Ah, that's interesting. Any chance you could give us an example?

Specifically, I'm interested in the kind of typical deductive arguments you have to deal with.
It's pretty impenetrable, but here's a two line snippet of code that is typical of some of the code I've written, a snippet which advertises that it's about logic:

Code:
assume "between P1 X P2 /\ between Q1 X Q2" at [4]
hence "~(P1 = P2) /\ ~(Q1 = Q2)" at [5] by [BET_NEQS]

You'll have to excuse the ASCII art. The software was designed before Unicode. "/\" is ∧ and means "logical and".

The snippet deduces that P1 is not equal to P2, and Q1 is not equal to Q2, given that X lies (strictly) between the respective pairs of points, using a lemma called "BET_NEQS". I was mostly sad about having to write this code, since I consider the matter obvious, but when you use computerised theorem provers, you often have to spell stuff out.
Thanks, it's exactly what I needed to see.


Modern science and modern mathematics were already well developed when modern logic got fully formalised, say around the 1930s. So, basically, before that, scientists and mathematicians had to rely on their logical intuition, and they seem to have done really well. I haven't found in the literature any grand claim by contemporary logicians that formal logic would be crucial to the current development of science and maths (I found just one very modest claim, which suggests that's all I'll ever find). It's my understanding that mathematical theorems are usually not even fully formalised, let alone demonstrated formally. Rather, there is a lot of reliance on intuition. I also expect that all demonstrations come down to A & B & C etc. imply Z. So, basically, formal proof would consist largely in making explicit the conjunction of premises. I don't see why you'd need modern logic at all for that. Do you?
This paragraph is all correct. Maths is almost entirely unformalised (thank god, given the current state of formalised mathematics!). But we pretty much work on the philosophy that, even if we almost never formalise, we potentially could, if we were so inclined, and we assume that our informal proofs would be a reasonable high-level account of the fully formalised proof.
Excellent!

Still, what is the likelihood of a not-so-distant future with mostly formalised proofs in maths? I would assume the need for that has to become irresistible at some point in the near future.

That said, there are now a few high profile cases where practising mathematicians have been forced to formalise. In these cases, the proofs are so subtle, and have so many case-splits, that you can't trust a human to account for them reliably. This was the situation with the proofs of the Four Colour Theorem and the Kepler Conjecture, proofs that rely on big computations to eliminate cases, and which are way too laborious to check by hand. Mathematicians make mistakes, and we've had big cases of erroneous proofs getting through review (the first "proof" of the Four Colour Theorem was floating around for a decade before someone pointed out that it was broken). By contrast, a machine-verified proof is pretty much beyond all possible reproach.
Where could I find the official proof of the four colour theorem or that of Kepler's conjecture?

So I would stress that formal logic and verification here are used to help you check proofs. Otherwise, you mostly figure out the proof using your sloppy ape brain, with good-old-fashioned intuition. You just bring in the formal logic to vet your logical acumen. This, so I've read, is exactly how Frege intended formal logic: it wasn't the way to do maths. It was the way to vet maths. Indeed, my experience of working in formal logic tells me that formal logic is a really poor match for the way I actually think about maths problems. It's mostly for the computers, not for the humans.
So formal logic can be used to validate proofs in maths. Presumably, it could be used in the same way in science. Are you aware of any effort to try to use it to discover new theorems in maths? Somebody must be trying that, no?

Concerning the correctness of computer programmes, as I see it, the critical point is that the computer does exactly what the programmer intended it to do.
That almost never happens. What do you think a bug is?
I did come across a few of those a long time ago! I'm sure they're still around.

Do you think what you do is typical of university research? I would expect different organisations to try out any avenue of research someone can think of. It seems to me that using formal logic to discover new theorems, rather than just validate known ones, would prove very useful. What's keeping you?
EB
 
Excellent!

Still, what is the likelihood of a not-so-distant future with mostly formalised proofs in maths? I would assume the need for that has to become irresistible at some point in the near future.
Right now, the problem is that formalised mathematics is mostly done by research computer scientists, and computer science departments aren't that engaged with maths departments, and we're really bad at user-interfaces. The project I've just come from had an MO which was basically "design a machine formal logic verifier that the ordinary mathematician would use."

We didn't succeed.

Where could I find the official proof of the four colour theorem or that of Kepler's conjecture?
Kepler's Conjecture and Four Colour Theorem.

So formal logic can be used to validate proofs in maths. Presumably, it could be used in the same way in science. Are you aware of any effort to try to use it to discover new theorems in maths? Somebody must be trying that, no?
Oh sure. Lemma discovery has been a big deal since the get-go, and I've written code to do some myself. Getting machines to conjecture the theorems by themselves is a huge deal. It's something we all want automated.

Do you think what you do is typical of university research? I would expect different organisations to try out any avenue of research someone can think of. It seems to me that using formal logic to discover new theorems, rather than just validate known ones, would prove very useful. What's keeping you?
EB
Formal verification (using formal logic to vet) is a big deal in computer science globally, and certainly not peculiar to my uni.

The problem of conjecturing and automatically proving theorems is way more in its infancy than formal verification, but I expect that to change as we start tackling bigger projects. If you're faced with a huge verification task, you don't just want the computer doing your proofs. You want your computer suggesting the theorems. I mean, I want to get to a stage where I can write down a mathematical definition, and the machine spits out a load of interesting theorems about that definition without me having to intervene at all.

You might have spotted the technical issue with that last sentence: what, technically, is interesting? Most theorems that come out of formal logic are trivial, boring and worthless. Mathematics is about carving out a highly refined subset of logical consequences that somehow appeal to mathematicians. How do we impart the search for that subset to a computer? We've known for a century how to completely formalise the notion of "valid" using symbolic logic. I've no idea how we're supposed to formalise the notion of "mathematically interesting." And indeed, there's a cute Godel-style paradox saying that such a notion cannot be formalised.
 
Deductive logic plays an important functional role in cognition. It assures consistency of belief and allows us to weigh both sides of an argument.
 
Deductive logic plays an important functional role in cognition.
How would you know?

What do you mean 'cognition'? Do you think our beliefs are in effect cognitive ideas? Or that beliefs are logically derived? From what?

It assures consistency of belief
How would two distinct beliefs be consistent with each other if not because one follows logically from the other? If so, then your point seems circular: logic allows you to make sure your beliefs follow logically from each other. So, presumably, you're saying that logic is good because of that and then, what, that it's good that our beliefs be logically consistent with each other because logical consistency is good? That may be true but why is that such a good thing?

It <...> allows us to weigh both sides of an argument.
How so exactly?


Also, you had options B and C available but didn't select either of them:
B - It's very important. It's always there somehow. It has always a role, whatever my reasoning may be. I couldn't reason in any way without it.
C - It's fundamental. It's constitutive of the way humans happen to reason.

Why? Are you talking about logic used somehow by our brain outside reasoning proper? Something unconscious perhaps?
EB
 
Deductive logic plays an important functional role in cognition.
How would you know?

What do you mean 'cognition'? Do you think our beliefs are in effect cognitive ideas? Or that beliefs are logically derived? From what?
Those questions are too vague and open-ended for me to give a specific answer. I'll assume that you know what "cognition" means and can do your own research on that subject if you feel unsure. You can think of beliefs in terms of the propositions we use to describe them, but the role of deduction is not to guarantee truth, but consistency of truth across a set of beliefs. When you prove a conclusion through deduction, all you are doing is establishing that it must be true if the argument's premises are true. Therefore, if you trust the truth of the premises, you can trust the truth of the conclusion. Bear in mind that we don't have to arrive at beliefs logically, but we can test them through logic. That is, we can establish that they are consistent with what else we think is true.
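To put the same point in symbols (a throwaway example with made-up propositions, sketched in Lean-style syntax): deduction never certifies the premises themselves; it only transfers their truth to the conclusion, which is exactly what keeps a set of beliefs consistent.

Code:
-- Throwaway illustration (hypothetical propositions): deduction transfers
-- truth from premises to conclusion; it says nothing about whether the
-- premises themselves are true.
theorem conclusion_from_premises (P Q : Prop)
    (h1 : P)       -- premise: e.g. "it is raining"
    (h2 : P → Q)   -- premise: e.g. "if it is raining, the streets are wet"
    : Q :=         -- accepting both premises commits you to the conclusion:
  h2 h1            -- "the streets are wet"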

It assures consistency of belief
How would two distinct beliefs be consistent with each other if not because one follows logically from the other? If so, then your point seems circular: logic allows you to make sure your beliefs follow logically from each other. So, presumably, you're saying that logic is good because of that and then, what, that it's good that our beliefs be logically consistent with each other because logical consistency is good? That may be true but why is that such a good thing?
It would certainly be a bad thing if you could not resolve contradictory beliefs. I doubt you would live very long. Your brain builds causal predictive models that help you navigate in a chaotic environment. It allows you to consider alternative courses of action. To give a painfully obvious example, if someone holds a gun to your head and says "Your money or your life", it is very helpful to be able to figure out what course of action is consistent with those two things.

It <...> allows us to weigh both sides of an argument.
How so exactly?
I think that I just explained that. Deductive logic works in imaginary situations. So you can use logic to make predictions about future (imagined) events. You can also deduce actions, behavior, and consequences in fictional scenarios. That is, you can enjoy a book or a movie.

Also, you had options B and C available but didn't select either of them:
B - It's very important. It's always there somehow. It has always a role, whatever my reasoning may be. I couldn't reason in any way without it.
C - It's fundamental. It's constitutive of the way humans happen to reason.

Why? Are you talking about logic used somehow by our brain outside reasoning proper? Something unconscious perhaps?
Oh, I chose Z. It was hard to take your list seriously, especially with some of the frivolous options, and you left out the kitchen sink. :) Multiple choice quizzes are about simple answers to simple questions. The role of logic is an essay question.
 
Right now, the problem is that formalised mathematics is mostly done by research computer scientists, and computer science departments aren't that engaged with maths departments, and we're really bad at user-interfaces. The project I've just come from had an MO which was basically "design a machine formal logic verifier that the ordinary mathematician would use."

We didn't succeed.
I'm suspecting university mathematicians may not all have the same business model as most research computer scientists...

Where could I find the official proof of the four colour theorem or that of Kepler's conjecture?
Kepler's Conjecture and Four Colour Theorem.
Thanks.

So formal logic can be used to validate proofs in maths. Presumably, it could be used in the same way in science. Are you aware of any effort to try to use it to discover new theorems in maths? Somebody must be trying that, no?
Oh sure. Lemma discovery has been a big deal since the get-go, and I've written code to do some myself. Getting machines to conjecture the theorems by themselves is a huge deal. It's something we all want automated.
I'm wondering what would be the use of that beyond relieving mathematicians of some of their burden. Specifically, I'm thinking in terms of scientific and engineering applications, possibly even financial ones (I understand the financial sector is using quite a share of the mathematically-minded workforce). Possibly also things like strategy etc. Well, pretty much any economic sector where reasoning is required. Wait! Isn't that all of them?

If so, deduction would have to be considered seriously useful, yes? Or am I going too far here?

Do you think what you do is typical of university research? I would expect different organisations to try out any avenue of research someone can think of. It seems to me that using formal logic to discover new theorems, rather than just validate known ones, would prove very useful. What's keeping you?
EB
Formal verification (using formal logic to vet) is a big deal in computer science globally, and certainly not peculiar to my uni.

The problem of conjecturing and automatically proving theorems is way more in its infancy than formal verification, but I expect that to change as we start tackling bigger projects.
So, how long before that happens?

If you're faced with a huge verification task, you don't just want the computer doing your proofs. You want your computer suggesting the theorems. I mean, I want to get to a stage where I can write down a mathematical definition, and the machine spits out a load of interesting theorems about that definition without me having to intervene at all.
I'm suspecting something like "combinatorial explosion" here, so that the more interesting the result would be, the more explosion you'd have to deal with. And, by explosion, I mean way beyond the geometric increase in computer capacity and speed. Too bad. Maybe there's a reason that the human brain is content to apply itself to fairly modest conjectures. Or is there a way to rein in the explosive aspect of it?

You might have spotted the technical issue with that last sentence: what, technically, is interesting? Most theorems that come out of formal logic are trivial, boring and worthless. Mathematics is about carving out a highly refined subset of logical consequences that somehow appeal to mathematicians. How do we impart the search for that subset to a computer? We've known for a century how to completely formalise the notion of "valid" using symbolic logic. I've no idea how we're supposed to formalise the notion of "mathematically interesting." And indeed, there's a cute Godel-style paradox saying that such a notion cannot be formalised.
I have another theorem for you: The brain is a material system; if the brain can do it, it's necessarily possible to make computers that could do it too. Do you like that theorem?
EB
 
Isn't the entire subject of math, set theory in particular, built solely on deductive logic?

Also, I would say that language requires deductive logic, especially nomenclature.
 
I'm suspecting university mathematicians may not all have the same business model as most research computer scientists...
My department had little contact with industry, so I don't really think in those terms.

Oh sure. Lemma discovery has been a big deal since the get-go, and I've written code to do some myself. Getting machines to conjecture the theorems by themselves is a huge deal. It's something we all want automated.
I'm wondering what would be the use of that beyond relieving mathematicians of some of their burden. Specifically, I'm thinking in terms of scientific and engineering applications, possibly even financial ones (I understand the financial sector is using quite a share of the mathematically-minded workforce). Possibly also things like strategy etc. Well, pretty much any economic sector where reasoning is required. Wait! Isn't that all of them?
Science and engineering applications are still a way off. We know how to formalise mathematics, and have made good headway in doing so, but we have done comparatively little in terms of formalising science or engineering. Certain aspects of automated finance (such as smart contracts) are readily subject to formalisation, and there's money to be made in automatically verifying those.

If so, deduction would have to be considered seriously useful, yes? Or am I going too far here?
Oh certainly not. I'm the last person you'd have to convert if you suggested we formalise the whole world. :)

The problem of conjecturing and automatically proving theorems is way more in its infancy than formal verification, but I expect that to change as we start tackling bigger projects.
So, how long before that happens?
No idea. It's a completely different sort of tech. Formal verification is just computerised formal logic, and the basic ideas were laid down a hundred years ago. Much of the automation still builds off of extremely well understood ideas in symbolic artificial intelligence. I'd suggest formal verification is one of the most fruitful applications of old school AI.

That sort of AI hasn't had as much success outside of that narrow domain, nor specifically as much success in conjecturing, and I'd suggest it's just fundamentally a different domain. These days, the excitement would be directed at stuff like deep-learning algorithms to figure out conjectures, but that's definitely not my area.

I'm suspecting something like "combinatorial explosion" here, so that the more interesting the result would be, the more explosion you'd have to deal with. And, by explosion, I mean way beyond the geometric increase in computer capacity and speed. Too bad. Maybe there's a reason that the human brain is content to apply itself to fairly modest conjectures. Or is there a way to rein in the explosive aspect of it?
Well, again, you want to be smarter than just doing a search through a combinatorial space. The search spaces of Chess and Go explode too, but the machines now beat all the humans at them. And you can get some cool results without even doing any crazy black-box machine learning. For a lot of mathematical definitions, there's a catalogue of standard and very useful algebraic conjectures you can propose. It's still automatic conjecturing.
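As a toy illustration (a made-up definition, sketched in Lean syntax rather than any real conjecturing tool): hand the machine a new binary operation, run through the standard catalogue of algebraic properties, and let it settle each candidate mechanically, here by brute force over a small finite structure.

Code:
-- Toy illustration (hypothetical definition): propose the standard catalogue
-- of algebraic conjectures about a new operation and let the machine settle
-- each one, here by exhaustive checking over a small finite type.
def op (a b : Fin 5) : Fin 5 := a + b    -- the definition under study

-- Candidate conjectures from the catalogue:
theorem op_comm  : ∀ a b : Fin 5, op a b = op b a := by decide
theorem op_assoc : ∀ a b c : Fin 5, op (op a b) c = op a (op b c) := by decide
theorem op_id    : ∀ a : Fin 5, op a 0 = a := by decide

-- A real conjecturing tool generates the candidate statements itself and
-- keeps the ones it can prove (or at least fails to refute).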

You might have spotted the technical issue with that last sentence: what, technically, is interesting? Most theorems that come out of formal logic are trivial, boring and worthless. Mathematics is about carving out a highly refined subset of logical consequences that somehow appeal to mathematicians. How do we impart the search for that subset to a computer? We've known for a century how to completely formalise the notion of "valid" using symbolic logic. I've no idea how we're supposed to formalise the notion of "mathematically interesting." And indeed, there's a cute Godel-style paradox saying that such a notion cannot be formalised.
I have another theorem for you: The brain is a material system; if the brain can do it, it's necessarily possible to make computers that could do it too. Do you like that theorem?
EB
Not as much :) The theorem I'm referring to isn't a theorem which places limits on what machines can do. It's a theorem which suggests that the term "interesting" isn't formalisable. Instead, to a formalist, it's just a silly cooing word that dumb humans sometimes make when they see something that surprises them. We shouldn't be surprised that we're confused about how to get a computer to find things for humans to coo over.
 
You might have spotted the technical issue with that last sentence: what, technically, is interesting? Most theorems that come out of formal logic are trivial, boring and worthless. Mathematics is about carving out a highly refined subset of logical consequences that somehow appeal to mathematicians. How do we impart the search for that subset to a computer? We've known for a century how to completely formalise the notion of "valid" using symbolic logic. I've no idea how we're supposed to formalise the notion of "mathematically interesting." And indeed, there's a cute Godel-style paradox saying that such a notion cannot be formalised.
I have another theorem for you: The brain is a material system; if the brain can do it, it's necessarily possible to make computers that could do it too. Do you like that theorem?
EB

How do you know that you aren't creating a philosophical zombie, assuming that we don't have an exact definition of consciousness?
 
Not as much :) The theorem I'm referring to isn't a theorem which places limits on what machines can do. It's a theorem which suggests that the term "interesting" isn't formalisable. Instead, to a formalist, it's just a silly cooing word that dumb humans sometimes make when they see something that surprises them. We shouldn't be surprised that we're confused about how to get a computer to find things for humans to coo over.
If that were the case, then we would have to let go of the idea that formal logic could be useful for making conjectures. Instead, I would suggest we can revisit the meaning of the word 'interesting'. To me, what human beings are interested in is primarily useful stuff. Granted, lots of people indulge in frivolous activities and entire professions are dedicated to frivolous production, but most people are primarily interested in finding a way to have a decent way of life, reproduce themselves, eat, rest, etc., all useful things. So, our ability to experience interest seems to be calibrated for a focus on useful things. If so, there has to be an objective process that allows our interest to focus on useful things. If so, then it's at least conceivable that we could identify what this process is exactly and reproduce its mechanism in a computer. We may have no idea right now how that would work but that doesn't mean we can't do it.
EB
 
I have another theorem for you: The brain is a material system; if the brain can do it, it's necessarily possible to make computers that could do it too. Do you like that theorem?
EB

How do you know that you aren't creating a philosophical zombie, assuming that we don't have an exact definition of consciousness?
What would be the problem exactly with a philosophical zombie which would make himself useful working his ass off to give us new conjectures too difficult for us to find by ourselves?

The issue doesn't concern subjective experience but the brain as a calculating machine. We wouldn't need a machine used for making conjectures to possess subjective experience.
EB
 
Isn't the entire subject of math, set theory in particular, built solely on deductive logic?
As I see it, everything we do that's at all complicated requires deductive logic. So, maths certainly qualifies.

Yet, I think you're referring rather to the effort by Russell and Whitehead to rebuild the foundations of maths from logical principles alone. They apparently failed although it worked for parts of maths, possibly set theory. But maths didn't have to wait for Russell and Whitehead to get going, so this point is moot. What is relevant is that any piece of reasoning we do involves a certain amount of deductive logic, and that includes all maths.

Also, I would say that language requires deductive logic, especially nomenclature.
See above.
EB
 
If that were the case, then we would have to let go of the idea that formal logic could be useful for making conjectures.
Right. It's not what formal logic was ever intended for, and there's nothing about it that makes it look, to me, like a good place to get insights on conjecturing.

Instead, I would suggest we can revisit the meaning of the word 'interesting'. To me, what human beings are interested in is primarily useful stuff. Granted, lots of people indulge in frivolous activities and entire professions are dedicated to frivolous production, but most people are primarily interested in finding a way to have a decent way of life, reproduce themselves, eat, rest, etc., all useful things. So, our ability to experience interest seems to be calibrated for a focus on useful things. If so, there has to be an objective process that allows our interest to focus on useful things. If so, then it's at least conceivable that we could identify what this process is exactly and reproduce its mechanism in a computer. We may have no idea right now how that would work but that doesn't mean we can't do it.
EB
I'm not saying we can't. But I'm a pessimist, and right now, I'm not in a position to say when we'll be in a position to start seriously tackling whatever it is you're talking about in this paragraph.

Don't get me wrong. I kinda get what you're talking about in this paragraph, and I probably have similar intuitions. But I realise those intuitions are pretty pathetic in their technical content compared to what we currently use formal logic for, such as mechanically verifying mathematical arguments. And I'm not sure that'll get fixed any time soon. As in many cases, we might be asking the wrong questions to begin with.
 
They apparently failed although it worked for parts of maths, possibly set theory.
Their failure is wildly exaggerated.

Formal verification is just a continuation of Russell and Whitehead with the help of computers.
 