• Welcome to the Internet Infidels Discussion Board.

Role of Logic

What role does DEDUCTIVE LOGIC play in the way you reason in your everyday life?

  • P - Now that you mention it I don't actually remember using it, ever.

    Votes: 0 0.0%
  • Q - You need to have a reality check here! It's been proven wrong time and time again so I wouldn't

    Votes: 0 0.0%
  • R - I'm working very hard to make money and I don't have any time to spend on wasteful moronic games

    Votes: 0 0.0%
  • T - Hey! I'm not a logician, Ok?

    Votes: 0 0.0%
  • H - It doesn't even exist at all. As a matter of that, human beings can't actually 'reason'. Only my

    Votes: 0 0.0%
  • S - Actually, I would use it if I knew what it's for.

    Votes: 0 0.0%
  • U - How would I know?

    Votes: 0 0.0%
  • J - I'm the only one I know who happens to reason properly. Other people somehow always get their pr

    Votes: 0 0.0%
  • V - I refuse to answer silly questions.

    Votes: 0 0.0%
  • I - It's too many answers. I can't possibly read them all.

    Votes: 0 0.0%
  • W - Same as Jesus.

    Votes: 0 0.0%
  • D - I don't know what you're talking about.

    Votes: 0 0.0%
  • B - It's very important. It's always there somehow. It has always a role, whatever my reasoning may

    Votes: 0 0.0%
  • Y - It's illogical to use logic.

    Votes: 0 0.0%
  • K - Fortunately, I can choose when to use it and it's not very often because it's not very effective

    Votes: 0 0.0%
  • A - None whatsoever.

    Votes: 0 0.0%
  • N - It's a completely illogical answer.

    Votes: 0 0.0%
  • M - It's a completely illogical question.

    Votes: 0 0.0%
  • O - It's a delusion people have. It's an imaginary construct that doesn't exist. It's a philosophica

    Votes: 0 0.0%
  • L - I haven't studied it so I wouldn't know how to use it properly. My judgement is good enough to t

    Votes: 0 0.0%
  • X - All the answers above.

    Votes: 0 0.0%

  • Total voters: 6
  • Poll closed.
How do you know that you aren't creating a philosophical zombie, assuming that we don't have an exact definition of consciousness?
What would be the problem exactly with a philosophical zombie that would make itself useful, working its ass off to give us new conjectures too difficult for us to find by ourselves?

Nothing at all, it just may not generate conscious experiences like humans have. I was really just trying to give a possible example of something that a machine may not be able to do that a human can, even though it may not be useful in any way.

The issue doesn't concern subjective experience but the brain as a calculating machine. We wouldn't need a machine used for making conjectures to possess subjective experience.
EB

I think it absolutely does. Interest is very subjective. What is interesting to me is not necessarily interesting to you and vice versa. Moreover, what is interesting to me now may not be interesting to me in a year.
 
What would be the problem exactly with a philosophical zombie that would make itself useful, working its ass off to give us new conjectures too difficult for us to find by ourselves?

Nothing at all, it just may not generate conscious experiences like humans have. I was really just trying to give a possible example of something that a machine may not be able to do that a human can, even though it may not be useful in any way.
Right. So, that was a straightforward derail.

The issue doesn't concern subjective experience but the brain as a calculating machine. We wouldn't need a machine used for making conjectures to possess subjective experience.
EB

I think it absolutely does. Interest is very subjective.
You apparently still fail to understand the distinction between subjective experience and whatever cognitive content we happen to experience. Philosophical zombies wouldn't have subjective experience at all, but their brains would still be able to focus on things they cognitively identify as interesting, just as a computer can be programmed to identify and focus on certain patterns, such as people's faces or unattended bags on platforms.

What is interesting to me is not necessarily interesting to you and vice versa. Moreover, what is interesting to me now may not be interesting to me in a year.
Sure, but that's irrelevant. I wish you would stop making up lame arguments all the time. There are clear and important differences between Mars and Jupiter; do you think that's reason enough not to look for what they may have in common? Bravo, you've just justified stopping all scientific enquiry!


So, what is relevant is that there are something like 7 billion human beings, and on average they tend to have very similar interests, in a way that couldn't possibly be explained by the laws of probability. If you disagree with that, please explain.

And in this respect we're not even different from other animal species. Cockroaches tend to be interested in the same kinds of things, and surely there must be an objective process explaining this. Spiders all have similar interests. Cats are so similar to each other in terms of their interests that we find it difficult to regard each cat as an individual. With old age I now tend more and more to look at people, including people close to me, in generic terms, because they tend to be so similar in terms of their interests. There is obviously some variability in our interests, but there is a very similar variability in our physical phenotypes, which surely we want to explain in terms of objective differences in our DNA and in terms of the physical environment we live in and grew up in.

And you need me to explain that?
EB
 
Nothing at all, it just may not generate conscious experiences like humans have. I was really just trying to give a possible example of something that a machine may not be able to do that a human can, even though it may not be useful in any way.
Right. So, that was a straightforward derail.

The issue doesn't concern subjective experience but the brain as a calculating machine. We wouldn't need a machine used for making conjectures to possess subjective experience.
EB

I think it absolutely does. Interest is very subjective.
You apparently still fail to understand the distinction between subjective experience and whatever cognitive content we happen to experience. Philosophical zombies wouldn't have subjective experience at all, but their brains would still be able to focus on things they cognitively identify as interesting, just as a computer can be programmed to identify and focus on certain patterns, such as people's faces or unattended bags on platforms.

What is interesting to me is not necessarily interesting to you and vice versa. Moreover, what is interesting to me now may not be interesting to me in a year.
Sure, but that's irrelevant. I wish you would stop making up lame arguments all the time. There are clear and important differences between Mars and Jupiter; do you think that's reason enough not to look for what they may have in common? Bravo, you've just justified stopping all scientific enquiry!


So, what is relevant is that there are something like 7 billion human beings, and on average they tend to have very similar interests, in a way that couldn't possibly be explained by the laws of probability. If you disagree with that, please explain.

And in this respect we're not even different from other animal species. Cockroaches tend to be interested in the same kinds of things, and surely there must be an objective process explaining this. Spiders all have similar interests. Cats are so similar to each other in terms of their interests that we find it difficult to regard each cat as an individual. With old age I now tend more and more to look at people, including people close to me, in generic terms, because they tend to be so similar in terms of their interests. There is obviously some variability in our interests, but there is a very similar variability in our physical phenotypes, which surely we want to explain in terms of objective differences in our DNA and in terms of the physical environment we live in and grew up in.

And you need me to explain that?
EB

I really don't think you understand what subjectivity means, which is probably why my arguments seem "lame" to you. Here are the two main definitions from the Oxford Dictionary that I was using:

"The quality of being based on or influenced by personal feelings, tastes, or opinions."

"The quality of existing in someone's mind rather than the external world."

So when we talk about subjectivity, we aren't talking about its objective properties, such as its physicality.
 
Ryan, P-zombies are interesting philosophical constructs that may become less useful as our understanding of the neurological basis of consciousness advances. We can certainly build robotic systems that become "interested" in certain aspects of their surroundings, in the sense that those aspects are relevant to their programmed goals. Like animal brains, these systems build models of their environment, test them, and modify them as more data comes in. Nobody really thinks that these systems yet have subjective experiences in the same sense that animals do, but there is no reason to assume that we can't someday engineer robots that do. We will never know whether or not they are P-zombies, but the fact is, given how philosophers define P-zombies, we cannot know whether or not other human beings are P-zombies. P-zombies are indistinguishable from real humans to outside observers.
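The build/test/revise loop described here can be sketched as a simple recursive belief update. Everything below is illustrative: the sensor reliability numbers are invented, not taken from any real robotic system.

```python
# Toy illustration of the build/test/revise loop: a robot refining its
# belief that an obstacle is present from a stream of noisy readings.
# The sensor probabilities are made-up numbers, not measurements.

def bayes_update(prior, reading, p_hit=0.9, p_false=0.2):
    """Update belief that an obstacle is present after one sensor reading."""
    likelihood = p_hit if reading else 1 - p_hit      # P(reading | obstacle)
    alt = p_false if reading else 1 - p_false         # P(reading | no obstacle)
    return likelihood * prior / (likelihood * prior + alt * (1 - prior))

belief = 0.5  # start undecided
for reading in [True, True, False, True]:  # noisy sensor data
    belief = bayes_update(belief, reading)

print(round(belief, 3))  # belief rises toward certainty despite one miss
```

Nothing here requires subjective experience; the loop is just arithmetic over evidence, which is the sense in which such a system "tests and modifies" its model.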
 
Ryan, P-zombies are interesting philosophical constructs that may become less useful as our understanding of the neurological basis of consciousness advances. We can certainly build robotic systems that become "interested" in certain aspects of their surroundings, in the sense that those aspects are relevant to their programmed goals. Like animal brains, these systems build models of their environment, test them, and modify them as more data comes in. Nobody really thinks that these systems yet have subjective experiences in the same sense that animals do, but there is no reason to assume that we can't someday engineer robots that do.

Although, you could believe in theories like integrated information theory. I am at a point in my understanding of consciousness that practically forces me to accept some kind of panpsychism, such as integrated information theory. I really do trust that reductionism will hold for consciousness. So maybe a bug or even a tree has very few mental states; it may only experience one color, or maybe it has a few experiences that we don't have. The point is that the strong emergence of consciousness will probably one day be reduced to its parts, like all the other things that seemed emergent until their parts were better understood.

But that's just where I'm at with all of this.
 
I really don't think you understand what subjectivity means, which is probably why my arguments seem "lame" to you. Here are the two main definitions from the Oxford Dictionary that I was using:

"The quality of being based on or influenced by personal feelings, tastes, or opinions."

"The quality of existing in someone's mind rather than the external world."

So when we talk about subjectivity, we aren't talking about its objective properties, such as its physicality.
That's just irrelevant. I didn't discuss subjectivity in this thread. You did.

Your angle for doing that was to insist that 'interest' can only be considered from the perspective of our subjectivity. I addressed this point in my previous post to explain how 'interest' can be considered objectively, but you very conveniently elected to ignore my argument and kept digging around the irrelevant issue of subjectivity.

If you don't have anything relevant to say about the topic of this thread, please go somewhere else to do your monologue about subjectivity.
EB
 
I really don't think you understand what subjectivity means, which is probably why my arguments seem "lame" to you. Here are the two main definitions from the Oxford Dictionary that I was using:

"The quality of being based on or influenced by personal feelings, tastes, or opinions."

"The quality of existing in someone's mind rather than the external world."

So when we talk about subjectivity, we aren't talking about its objective properties, such as its physicality.
That's just irrelevant. I didn't discuss subjectivity in this thread. You did.

Your angle for doing that was to insist that 'interest' can only be considered from the perspective of our subjectivity. I addressed this point in my previous post to explain how 'interest' can be considered objectively, but you very conveniently elected to ignore my argument and kept digging around the irrelevant issue of subjectivity.

If you don't have anything relevant to say about the topic of this thread, please go somewhere else to do your monologue about subjectivity.
EB

But being interested in something is also psychological/subjective. A philosophical zombie will seem interested in something; its physicality will match ours, but it won't have the full experience of being interested the way humans do. That's all I am trying to say.
 
That's just irrelevant. I didn't discuss subjectivity in this thread. You did.

Your angle for doing that was to insist that 'interest' can only be considered from the perspective of our subjectivity. I addressed this point in my previous post to explain how 'interest' can be considered objectively, but you very conveniently elected to ignore my argument and kept digging around the irrelevant issue of subjectivity.

If you don't have anything relevant to say about the topic of this thread, please go somewhere else to do your monologue about subjectivity.
EB

But being interested in something is also psychological/subjective. A philosophical zombie will seem interested in something; its physicality will match ours, but it won't have the full experience of being interested the way humans do. That's all I am trying to say.
And how is that relevant to the question of identifying patterns!?

Never mind.
EB
 
Ryan, P-zombies are interesting philosophical constructs that may become less useful as our understanding of the neurological basis of consciousness advances. We can certainly build robotic systems that become "interested" in certain aspects of their surroundings, in the sense that those aspects are relevant to their programmed goals. Like animal brains, these systems build models of their environment, test them, and modify them as more data comes in. Nobody really thinks that these systems yet have subjective experiences in the same sense that animals do, but there is no reason to assume that we can't someday engineer robots that do.

Although, you could believe in theories like integrated information theory. I am at a point in my understanding of consciousness that practically forces me to accept some kind of panpsychism, such as integrated information theory. I really do trust that reductionism will hold for consciousness. So maybe a bug or even a tree has very few mental states; it may only experience one color, or maybe it has a few experiences that we don't have. The point is that the strong emergence of consciousness will probably one day be reduced to its parts, like all the other things that seemed emergent until their parts were better understood.
I am not very familiar with the literature on IIT, but it seems to approach the subject of consciousness from a top-down perspective rather than bottom-up. That is, it doesn't try to reduce the problem of mental activity to quantum events or the behavior of individual neurons, and I think that that is a very productive way to approach the problem. With the evolution of computing techniques (and cellular automata theory), simulation and modeling are very important tools in the development of scientific theory these days. So I see robotics as one of the main drivers in coming to understand how human cognition works, because it forces researchers to investigate high level cognitive behavior. Two very important topics in robotics are machine learning and self-awareness. Since robots are rudimentary "animals", i.e. they have moving bodies with actuators, sensors, and a "central nervous system" to make sense of their health and their surroundings, they are an ideal testing ground for high level cognitive behaviors.
 
Although, you could believe in theories like integrated information theory. I am at a point in my understanding of consciousness that practically forces me to accept some kind of panpsychism, such as integrated information theory. I really do trust that reductionism will hold for consciousness. So maybe a bug or even a tree has very few mental states; it may only experience one color, or maybe it has a few experiences that we don't have. The point is that the strong emergence of consciousness will probably one day be reduced to its parts, like all the other things that seemed emergent until their parts were better understood.
I am not very familiar with the literature on IIT, but it seems to approach the subject of consciousness from a top-down perspective rather than bottom-up. That is, it doesn't try to reduce the problem of mental activity to quantum events or the behavior of individual neurons, and I think that that is a very productive way to approach the problem. With the evolution of computing techniques (and cellular automata theory), simulation and modeling are very important tools in the development of scientific theory these days. So I see robotics as one of the main drivers in coming to understand how human cognition works, because it forces researchers to investigate high level cognitive behavior. Two very important topics in robotics are machine learning and self-awareness. Since robots are rudimentary "animals", i.e. they have moving bodies with actuators, sensors, and a "central nervous system" to make sense of their health and their surroundings, they are an ideal testing ground for high level cognitive behaviors.

By reducible, I don't mean reducible to the most fundamental physical parts; I mean reducible to simple systems like a single logic gate or neuron. And really, in the end, even the most fundamental constituents of reality would have had to emerge at some point, assuming the universe has a finite past.

The hard emergence of consciousness - which seems to be the most popular way to think of consciousness right now - is the idea that it emerges from a much, much more complex neurological process. It is a popular claim that this process is nothing less than the whole, or a large part, of the neurological activity of an entire typical human brain. I personally can't accept that. It seems less reasonable, in my opinion, because there are so many redundant processes (specifically, the activity of individual neurons) in the brain that make up what is supposed to be a whole, irreducible consciousness.

I like IIT because it reduces consciousness down to a neuron or logic gate, and quantity just adds conscious variety. Yet the "shape" of each quale is irreducible and correlates to a neuron or logic gate.
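A toy calculation can show the kind of irreducibility being pointed at here. To be clear, this is not IIT's actual Φ measure (which searches over partitions of a system's cause-effect structure); it's just the classic XOR example: the output carries one bit of information about the two inputs taken jointly, while each input alone tells you nothing about the output.

```python
# Toy illustration of "integration": an XOR gate's output carries
# information about its inputs jointly that neither input carries alone.
# This is NOT IIT's phi, just the flavor of irreducibility, in bits.

from itertools import product
from math import log2
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits for a uniform distribution over (x, y) pairs."""
    n = len(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    pxy = Counter(pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

states = [((a, b), a ^ b) for a, b in product([0, 1], repeat=2)]
whole = mutual_info(states)                                 # output vs both inputs
part_a = mutual_info([((a,), o) for (a, b), o in states])   # output vs input a alone
part_b = mutual_info([((b,), o) for (a, b), o in states])   # output vs input b alone

print(whole, part_a, part_b)  # the whole carries a full bit; each part alone carries none
```

That gap between what the whole carries and what the parts carry separately is the rough intuition behind calling a mechanism's information "integrated".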
 
From what I read, IIT does not reduce consciousness to a neuron or logic gate, so maybe I misunderstand what it is about. I don't think that it makes sense to engage in such radical reductionism, because consciousness strikes me more as an emergent property of much higher-level interactions. Individual neurons have no more to do with consciousness than a single water molecule does with a wave in the ocean. It is the collective activity of those molecules at a much higher level that actually causes the wave.
 
From what I read, IIT does not reduce consciousness to a neuron or logic gate, so maybe I misunderstand what it is about. I don't think that it makes sense to engage in such radical reductionism, because consciousness strikes me more as an emergent property of much higher-level interactions. Individual neurons have no more to do with consciousness than a single water molecule does with a wave in the ocean. It is the collective activity of those molecules at a much higher level that actually causes the wave.

The YouTube video by Giulio Tononi ( https://youtu.be/zAids7abnyw ) gives a simpler explanation that helped me understand it. But if you want to know much more, here is some really thorough information on IIT: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588#pcbi.1003588-Ascoli1 .

It's quite complicated, but it does seem to imply that consciousness scales down - not quite to a single neuron, but to a few neurons or two interacting 3-gate mechanisms. From the link I provided, click on the "Models" tab; it is shown and explained in the illustration at Figure 13.
 
From what I read, IIT does not reduce consciousness to a neuron or logic gate, so maybe I misunderstand what it is about. I don't think that it makes sense to engage in such radical reductionism, because consciousness strikes me more as an emergent property of much higher-level interactions. Individual neurons have no more to do with consciousness than a single water molecule does with a wave in the ocean. It is the collective activity of those molecules at a much higher level that actually causes the wave.

The YouTube video by Giulio Tononi ( https://youtu.be/zAids7abnyw ) gives a simpler explanation that helped me understand it. But if you want to know much more, here is some really thorough information on IIT: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588#pcbi.1003588-Ascoli1 .

It's quite complicated, but it does seem to imply that consciousness scales down - not quite to a single neuron, but to a few neurons or two interacting 3-gate mechanisms. From the link I provided, click on the "Models" tab; it is shown and explained in the illustration at Figure 13.
Thanks, Ryan. I found those references helpful, but I obviously lack the background to evaluate its overall merits. I like some things about the content of the video and paper, but I was unhappy with the lack of a clear definition of what the proponents think "consciousness" is. That term can mean many different things. Their "axioms" do not really explain what they think it is. Rather they just describe properties that they think consciousness has. Their experiments seem to show some very significant patterns of brain activity associated with coming to a waking state or a state of full awareness. So it makes sense to talk about "integration" and "irreducibility" if that is the intent of what they are measuring. I'm not sure that that will really lead to a comprehensive understanding of what makes consciousness work. How does that lead to what I call a "train of thought" that can be conveyed to others via a linearly formatted linguistic signal? What is going on when the brain shifts perspectives to foreground and background different information? How does the brain arrive at a decision, given conflicting information? All of these activities and others go into what we tend to put under the umbrella of "consciousness" when we use that term.

Anyway, these are just thoughts that come to mind, given a very superficial and preliminary look at IIT. I know enough about my ignorance to keep an open mind on the subject. It does look like very interesting work.
 
The YouTube video by Giulio Tononi ( https://youtu.be/zAids7abnyw ) gives a simpler explanation that helped me understand it. But if you want to know much more, here is some really thorough information on IIT: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588#pcbi.1003588-Ascoli1 .

It's quite complicated, but it does seem to imply that consciousness scales down - not quite to a single neuron, but to a few neurons or two interacting 3-gate mechanisms. From the link I provided, click on the "Models" tab; it is shown and explained in the illustration at Figure 13.
Thanks, Ryan. I found those references helpful, but I obviously lack the background to evaluate its overall merits. I like some things about the content of the video and paper, but I was unhappy with the lack of a clear definition of what the proponents think "consciousness" is. That term can mean many different things. Their "axioms" do not really explain what they think it is. Rather they just describe properties that they think consciousness has. Their experiments seem to show some very significant patterns of brain activity associated with coming to a waking state or a state of full awareness. So it makes sense to talk about "integration" and "irreducibility" if that is the intent of what they are measuring. I'm not sure that that will really lead to a comprehensive understanding of what makes consciousness work. How does that lead to what I call a "train of thought" that can be conveyed to others via a linearly formatted linguistic signal? What is going on when the brain shifts perspectives to foreground and background different information? How does the brain arrive at a decision, given conflicting information? All of these activities and others go into what we tend to put under the umbrella of "consciousness" when we use that term.

Anyway, these are just thoughts that come to mind, given a very superficial and preliminary look at IIT. I know enough about my ignorance to keep an open mind on the subject. It does look like very interesting work.

You might want to go to the "conscious" thread in the metaphysics section. There are some good conversations to be had there about consciousness. Or just start your own thread. Test your ideas or theories.

Your background in A.I. (even if it was just from a few books you read) will make you an interesting addition to the posters at TF discussing consciousness.
 