
Consciousness & intelligence

Speakpigeon

Consciousness and intelligence...

I see intelligence as the ability to conceive abstract models and use them in a purposeful and effective way.
A machine might be able to do that some day, although if someone already had one, I'm not sure we would know.
I don't know how to define consciousness. I just know what it's like to be conscious, when I am. And beyond my own, incontrovertibly private experience of it, it's supposed to be what all human beings normally experience subjectively whenever they're paying attention.
Self seems irrelevant here.

So, consciousness and intelligence... Does either require the other?

Intelligence is a capability. Consciousness seems more like a matter of fact. Is material existence necessary to intelligence? Who knows? Can you be intelligent without being conscious? Who knows?
I suspect that consciousness and intelligence have sort of fed off each other in the development of us modern human beings. Still, that may only be an effect of perspective. Intelligence certainly allows us to discuss consciousness and thereby contributes to our understanding of it, and perhaps gives a false sense that we're more conscious for being more intelligent. Hard to tell.
I think it's probably all connected with this notion of the Cartesian Theatre, as a by-product, as I think of it, of evolution. I'm not conscious of the unconscious activity of my brain, obviously. So, either consciousness is compartmentalised, with different parts of the brain being conscious for themselves, so to speak, and ignoring each other; or maybe there's only one small module that got to become conscious as a result of our evolution. Hard to know.
Still, we know we have this small module at least that's conscious. In a way that module is really us. The one thing we're conscious of because it's a conscious thing and it's us. So, this small module may well also be uniquely gifted with intelligence, at least if the gift of abstract thinking and planning is only really worth paying for in the module in charge of supervising what we do in the world. And that seems to be the usefulness of the Cartesian Theatre, which is broadly a model of whatever is necessary to perform that supervision, broadly all the functions you have to have in a control centre, only much more integrated. So, basically, what we are conscious of is really just that, our Cartesian Theatre. We literally take it for the real world. We can't even refrain from believing this. So, it's literally the stage where we can use and evolve our intelligence, our own kind of intelligence, even if machines may perhaps be thought of as having intelligence of a different sort.
So, clearly, our kind of intelligence does require our Cartesian Theatre, which in turn seems to be closely connected to our ability to be conscious, since it's all we're conscious of. However, we should be aware, and if you're not just yet you very soon will be upon further reading, that a lot of our conscious thoughts are produced through various unconscious processes of our brain and indeed of our body. The beautiful view of the world we can enjoy is essentially produced outside our consciousness, inside our brain and outside of it, since this process requires things we call eyes, and eyes are not made only of neurons. That much we all know, I guess. But most of our thoughts come through some unconscious processes. For example, if we have the impression that two mechanical parts should fit together neatly, well, that impression, though itself conscious, will be produced by some unconscious process somewhere inside our brain. And there's a lot like this besides impressions, although we do have an awful lot of them.
Still, as I see it, intelligence is about abstract thinking, and abstract thinking does seem to take place essentially in our conscious mind. Presumably, for each abstract thought we have, there will be a lot of unconscious caretaking going on in the shadows of our brain. Yet what we think of as intelligence does seem to be associated with consciousness. Now, here is one clear example of an unconscious process that's necessary to our conscious intelligent thoughts (when we have them): logical intuitions. We can certainly have conscious logical thoughts, somewhat on the model of syllogisms, but our brain is also capable of assisting our intelligence by providing discreet but very effective logical intuitions whenever necessary. So, it's fair to say that our intelligence seems to rely a lot on unconscious processes. Still, it would seem pointless to deny that intelligence is closely associated with our conscious Cartesian Theatre. It's its playground, so to speak. It's where it will be most useful and efficient. It's where thinking abstract thoughts and conceiving abstract ideas will prove most effective and productive.
So, without going on and on about it, it seems to me that consciousness and intelligence are closely linked. However, it may be that it's entirely incidental. And maybe the parts of our brain that couldn't be said to be properly intelligent are nonetheless conscious, if only for themselves, so to speak. Who would know except themselves?
And again, it might be that machines one day will be made really intelligent but perhaps without being conscious. Again, who could possibly know?
And, maybe, they might let us ask them the question.
EB
 
Consciousness is a general reference to the attributes and features of brain cognition, the ability to see, hear, touch, smell, think, feel, reason, etc., but not all of these attributes and features are necessarily present in any given brain (other species, lesions, genetic defects, etc.), or, if present, active at any given time.
 
Consciousness and intelligence...

I see intelligence as the ability to conceive abstract models and use them in a purposeful and effective way.
A machine might be able to do that some day, although if someone already had one, I'm not sure we would know.
I don't know how to define consciousness. I just know what it's like to be conscious, when I am. And beyond my own, incontrovertibly private experience of it, it's supposed to be what all human beings normally experience subjectively whenever they're paying attention.
Self seems irrelevant here.

So, consciousness and intelligence... Does either require the other?
...

I agree with your basic definition of intelligence. The ability to create abstract models and use them in a purposeful and effective way. But I think there are machines today that have this ability, and in their area of expertise they exceed human intelligence. For instance, having the ability to defeat grandmasters at Go with tactics that leave their opponents baffled. So basically any animal with a brain has some level of intelligence.

I don't agree that the concept of the self is irrelevant to consciousness. Any reference to the self is also based on abstractions. It's also a model created by the brain. Therefore it seems to me any experiences associated with the self must also be abstractions, and from my own experience consciousness always occurs with respect to some aspect of the self. Task-specific machine intelligence isn't as yet sufficiently generalized for consciousness to emerge. But I imagine they will someday be designed to integrate an image of self with some manner of purpose and achieve consciousness. Needless to say it will be quite different from human consciousness.
 
I agree with your basic definition of intelligence. The ability to create abstract models and use them in a purposeful and effective way. But I think there are machines today that have this ability, and in their area of expertise they exceed human intelligence. For instance, having the ability to defeat grandmasters at Go with tactics that leave their opponents baffled. So basically any animal with a brain has some level of intelligence.

I would be prepared to accept that but I'm not entirely sure you really mean that. Are we really talking about machines that can conceive abstract models? New abstract models? That's a very tall order in my view. I suspect you have something else entirely in mind. I would expect machines to find solutions within the limits of a predefined analysis of the problem. They just apply preexisting models that have already been approved by a human brain. That they may wrong-foot their human opponents, sure, but that's no evidence of an ability to conceive abstract models. Even limited to things like chess and Go, that would be really something, and I don't hear the noise that would go with it.

I don't agree that the concept of the self is irrelevant to consciousness.

Sorry, I've been a bit vague there. I meant not relevant to the nature of consciousness. Clearly, the Cartesian Theatre normally includes a character in the play: the self.

Any reference to the self is also based on abstractions.

I'm not sure what you mean by "reference" here. As I see it, there's an awful lot of data that relates to the self that's not abstract at all. Most of it will be a subset of the sense data streaming in, essentially sensations, like pain, tiredness, etc., but also emotions, like joy and sadness, and impressions (see my thread on that). All that makes up or contributes to your sense of who you are now and whether you are the same as what you remember (and sometimes, you may develop a sense of not being the same person). Some of that stuff may be memorised, occasionally, like when there's a traumatic event.

Still, I would agree that there's also a lot of abstract data, for example perhaps a narrative of your role in a particular event or situation or a reconsideration of your relationship to somebody else following some crisis, etc.

It's also a model created by the brain. Therefore it seems to me any experiences associated with the self must also be abstractions,

I don't think the self is a model at all. I think it's in part a reference. What would be the use of a model of yourself? The self is yourself. I don't think you need a model of yourself to predict what you're going to do next. What you need is a representation which will serve as a reference, i.e. an autobiographical reference: who you were so far, so that you can detect any abnormal change and act on it. The other part of the self is precisely who you are now, so as to compare with the reference.

You probably need to develop an abstract model about yourself for some limited aspects. For example, as to your ability to perform certain specific activities. Perhaps about your social status and a strategy to improve it or some such. But that will be a rather small part of the whole persona and a much less basic part compared to the other stuff. Typically, the amount of that will depend on the personality of the subject. Sophisticated people will have a richer, more detailed, more elaborate model of themselves. Most people probably don't have much in that area.

and from my own experience consciousness always occurs with respect to some aspect of the self.

Well, suppose you're just watching birds doing their thing. Obviously, you're watching because you're interested, but you essentially forget yourself just by being captivated by what the birds are doing. So, yes, there's a connection to your self but only in that it's you who is interested, you doing the watching, you memorising the scene, etc. Whatever goes through your mind at that moment is a stream of data that is in effect a part of your self. It's who you are at that moment, and possibly this will be memorised to some extent to become a part of your autobiographical data, therefore also associated with your self.

Task-specific machine intelligence isn't as yet sufficiently generalized for consciousness to emerge. But I imagine they will someday be designed to integrate an image of self with some manner of purpose and achieve consciousness. Needless to say it will be quite different from human consciousness.

If they have consciousness, they will be able to be conscious of any self they would have but I fail to see why consciousness would be something useful to have for machines, or how consciousness could just emerge as if by magic from electronic processes. It might be possible but it doesn't seem at all plausible to me at least for now. Having the ability to develop sophisticated abstract models of things in the world and the self doesn't equate to having consciousness.
EB
 
I would be prepared to accept that but I'm not entirely sure you really mean that. Are we really talking about machines that can conceive abstract models? New abstract models? That's a very tall order in my view. I suspect you have something else entirely in mind.

An abstract model is simply a generalized model. IOW looking for patterns from lots of examples. Machines are very good at learning by doing. They can teach themselves with just some basic instruction. That's supposed to be how self-driving cars will eventually be successful. What do you mean by "new" abstract models? Previously undiscovered? Or produced by the process of trial and error perhaps? IOW machines can be creative in the same way that human beings can be creative.
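
A minimal sketch of that idea, assuming nothing fancier than a toy least-squares fit in Python (the rule and the numbers are invented for illustration): the "generalized model" is just a couple of parameters distilled from many concrete examples, which can then be applied to an example the machine never saw.

```python
# Toy illustration: "abstracting" a general rule from many concrete examples.
# We fit a straight line to noisy samples, then apply it to an unseen input.
import random

random.seed(0)

# Concrete examples: noisy observations of an underlying rule y = 3x + 2.
xs = [x / 10 for x in range(100)]
ys = [3 * x + 2 + random.gauss(0, 0.5) for x in xs]

# Least-squares fit: the "generalized model" is just two numbers
# (slope, intercept) distilled from all the examples.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Generalization: predict for an x that was never among the examples.
x_new = 25.0
print(f"model: y = {slope:.2f}x + {intercept:.2f}")
print(f"prediction for unseen x = {x_new}: {slope * x_new + intercept:.2f}")
```

Whether that sort of thing deserves the name "abstraction" is exactly what's at issue in the rest of this exchange.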

I would expect machines to find solutions within the limits of a predefined analysis of the problem. They just apply preexisting models that have already been approved by a human brain. That they may wrong-foot their human opponents, sure, but that's no evidence of an ability to conceive abstract models. Even limited to things like chess and Go, that would be really something, and I don't hear the noise that would go with it.

All of us basically start out "apply[ing] preexisting models that have already been approved by a human brain." That's why we spend our first 18 years or so going to school. But as I said, the machines are still very task-specific.

Sorry, I've been a bit vague there. I meant not relevant to the nature of consciousness. Clearly, the Cartesian Theatre normally includes a character in the play: the self.

Any reference to the self is also based on abstractions.

I'm not sure what you mean by "reference" here. As I see it, there's an awful lot of data that relates to the self that's not abstract at all. Most of it will be a subset of the sense data streaming in, essentially sensations, like pain, tiredness, etc., but also emotions, like joy and sadness, and impressions (see my thread on that). All that makes up or contributes to your sense of who you are now and whether you are the same as what you remember (and sometimes, you may develop a sense of not being the same person). Some of that stuff may be memorised, occasionally, like when there's a traumatic event.

Still, I would agree that there's also a lot of abstract data, for example perhaps a narrative of your role in a particular event or situation or a reconsideration of your relationship to somebody else following some crisis, etc.

It's also a model created by the brain. Therefore it seems to me any experiences associated with the self must also be abstractions,

I don't think the self is a model at all. I think it's in part a reference. What would be the use of a model of yourself? The self is yourself. I don't think you need a model of yourself to predict what you're going to do next. What you need is a representation which will serve as a reference, i.e. an autobiographical reference: who you were so far, so that you can detect any abnormal change and act on it. The other part of the self is precisely who you are now, so as to compare with the reference.

It seems to me that the self is a model constructed from our earliest experiences as a baby, especially with regard to our family members and those others we encounter, and which continues to evolve as we mature as individuals. And all the sensations and emotions and associations that the brain tries to make sense of need to become abstractions in order to be used in adapting to one's environment. But that includes the changing environment of the evolving self. That's why I say our experiences are based on abstractions. Experience is an activity. There are no passive experiences.

You probably need to develop an abstract model about yourself for some limited aspects. For example, as to your ability to perform certain specific activities. Perhaps about your social status and a strategy to improve it or some such. But that will be a rather small part of the whole persona and a much less basic part compared to the other stuff. Typically, the amount of that will depend on the personality of the subject. Sophisticated people will have a richer, more detailed, more elaborate model of themselves. Most people probably don't have much in that area.

I think abstraction and model creation are integral to all experience and are the primary adaptive function of the brain at all levels of experience. They are the basis for the structural associations required for knowledge and meaning, and by which our experiences are made richer.

and from my own experience consciousness always occurs with respect to some aspect of the self.

Well, suppose you're just watching birds doing their thing. Obviously, you're watching because you're interested, but you essentially forget yourself just by being captivated by what the birds are doing. So, yes, there's a connection to your self but only in that it's you who is interested, you doing the watching, you memorising the scene, etc. Whatever goes through your mind at that moment is a stream of data that is in effect a part of your self. It's who you are at that moment, and possibly this will be memorised to some extent to become a part of your autobiographical data, therefore also associated with your self.

It seems to me that to be conscious of something implies an awareness of the distinction between one thing and another. One thing only has meaning with respect to its relationship with other things. I see that as a metaphysical truth. The term "consciousness" refers to the specific case of an awareness of something in the external world with respect to the self.

Task-specific machine intelligence isn't as yet sufficiently generalized for consciousness to emerge. But I imagine they will someday be designed to integrate an image of self with some manner of purpose and achieve consciousness. Needless to say it will be quite different from human consciousness.

If they have consciousness, they will be able to be conscious of any self they would have but I fail to see why consciousness would be something useful to have for machines, or how consciousness could just emerge as if by magic from electronic processes. It might be possible but it doesn't seem at all plausible to me at least for now. Having the ability to develop sophisticated abstract models of things in the world and the self doesn't equate to having consciousness.
EB

I think the only distinction will come down to the relative complexity and level of integration. If there are vastly more intelligent beings in the universe they would no doubt look at our level of consciousness as trivial. We suppose we're at the top of the heap and therefore unique in some absolute way. That's the illusion.
 
I would be prepared to accept that but I'm not entirely sure you really mean that. Are we really talking about machines that can conceive abstract models? New abstract models? That's a very tall order in my view. I suspect you have something else entirely in mind.

An abstract model is simply a generalized model. IOW looking for patterns from lots of examples. Machines are very good at learning by doing. They can teach themselves with just some basic instruction. That's supposed to be how self-driving cars will eventually be successful. What do you mean by "new" abstract models? Previously undiscovered? Or produced by the process of trial and error perhaps? IOW machines can be creative in the same way that human beings can be creative.

So we're just using the word "abstract" in a very different way.

You seem to mean, broadly, that abstractions are average values over a sample. So, obviously, machines can already do that. I guess it's a legitimate use of the word, in that all machines would maintain some kind of representation of the world, which inevitably would stand apart from the world itself and therefore apart from any concrete object represented. However, any idiot machine can do that to some extent, and then you'd have to characterise it as intelligent. A nut can be taken as a representation of all the average screws that it can fit onto. So, any machine that can fit nuts onto screws is an intelligent being according to what seems to be your definition.
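
To make that concrete, here's a toy sketch (my own construction, with invented numbers): the nut's "representation" of the screws is nothing more than an average diameter plus a tolerance, and the fit test is trivially mechanical.

```python
# Toy illustration: a nut as an "abstraction by averaging". Its thread
# diameter stands for the average of the screws it was made to fit.
screw_diameters_mm = [5.98, 6.02, 6.01, 5.99, 6.00, 6.03, 5.97]

nut_diameter_mm = sum(screw_diameters_mm) / len(screw_diameters_mm)
tolerance_mm = 0.05  # assumed manufacturing tolerance

def fits(screw_mm: float) -> bool:
    """The 'idiot machine' test: does this concrete screw match the
    averaged representation embodied by the nut?"""
    return abs(screw_mm - nut_diameter_mm) <= tolerance_mm

print(fits(6.01))  # True: within tolerance of the average
print(fits(6.40))  # False: a concrete instance the "abstraction" rejects
```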

By abstraction, I mean something very different. I found two definitions that fit:
5. defined in terms of its formal properties: an abstract machine.
2. expressing a quality or characteristic apart from any specific object or instance: an abstract word like justice.
According to this, I take the process of abstraction to consist in some sort of formal analysis of the things around us in terms of their properties, properties conceived independently of them, independently of any specific instance.

"Formal" here just means using some kind of symbolic representation. That can be arranged pretty brutally. Colour vision, for example, is a good example of a symbolic or formal representation since there are no actual colours in the world around us. Colour vision is also a good example of an abstract representation since colours represent properties (or qualities) abstracted from concrete things but independent of any specific instances of them. So, I would take that as an example of abstraction.

However, as I see it, it's not an intelligent system. If we go a little bit into the details, you have a primary system that does the colour abstraction without doing any conceiving. That's basically the eyes, presumably with a few neurons there. Then you have a second system, using the data from the first as inputs, that conceives a three-dimensional representation of the world out there as peopled with coloured objects, i.e. things identified as having specific and persistent properties, like a glass of water, a gun, a door. All this is done somewhere inside your brain by unconscious processes, or at least processes of which you are not aware. It's all done non-voluntarily.

All you get to see, literally, is the final result, i.e. the beautifully coloured image representing the world, which you stupidly take to be the world. All our senses work like this. You get the end result of unconscious processes that are essentially hard-wired somehow into your brain. You can't actually look at a cow and see something else. There's no room for the conceiving part of the process, at least not at that level. The conception only starts from there. You can look at a cow and "see", or imagine, or better still, conceive of all the money you could earn for yourself selling milk, meat and dairy products to Europe or China.

In the vision system, the colour abstracting is hard-wired and the conceiving of the three-dimensional representation of the world is also hard-wired. You can't change it. You can imagine a different world using the intelligent part of your brain, but you can't see one using the hard-wired vision system. Senses are all very effective, and I would grant you that there's amazing wizardry in our sense of vision, but the abstracting and the conceiving are done by two different systems, and both become hard-wired once they are fully operational, in kids of a certain age. After that, no creation.
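
A deliberately crude sketch of that two-stage picture, with every name and threshold invented for illustration: a hard-wired first stage abstracts a symbol from a physical quantity, a hard-wired second stage assembles object descriptions from those symbols, and the "conscious" consumer only ever gets the final output.

```python
# Toy two-stage pipeline, loosely modelled on the description above.

def stage1_abstract_colour(wavelength_nm: float) -> str:
    """Hard-wired mapping from a physical quantity to a symbol.
    There are no colours out in the world, only wavelengths."""
    if wavelength_nm < 490:
        return "blue"
    if wavelength_nm < 570:
        return "green"
    return "red"

def stage2_conceive_object(colour: str, shape: str) -> str:
    """Builds a persistent-object description out of abstracted
    properties; also hard-wired, also non-voluntary."""
    return f"a {colour} {shape}"

# The conscious mind only ever sees the final result:
print(stage2_conceive_object(stage1_abstract_colour(650.0), "apple"))
# -> "a red apple"
```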

So, our senses are not intelligent, essentially because there's no need for intelligence in what they do. They work fine without having to use intelligence.


So, anyway, it all comes down to having different notions of what it is to be intelligent. You're clearly finely attuned to the communication strategy of the engineering and IT industry. They certainly talk a lot about their intelligent cars and such. Me, I think we all mean something very different by intelligence, something way more sophisticated and complex than what machines do. They're effective but they're not adaptive. Change the problem and they are lost. Kids need to learn, but then they can use what they've learnt intelligently, essentially because you have to when you're in an open environment. Machines don't work too well in an open environment.


And then same thing for consciousness, two different notions, but that's obvious enough that I don't need to belabour the point.

So, sure, according to your notions of abstraction, conception, intelligence and consciousness, machines can become, and maybe already are, conscious intelligent beings.

Me, I would say, though, not like me.
EB
 