Speakpigeon
Contributor
- Joined: Feb 4, 2009
- Messages: 6,317
- Location: Paris, France, EU
- Basic Beliefs: Rationality (i.e. facts + logic), Scepticism (not just about God but also everything beyond my subjective experience)
What would be the impact on humans of AIs smarter than humans?
First of all, the prospect that humans could produce something smarter than themselves seems really very, very small. Essentially, as I already posted on this forum, you need to keep in mind that the human brain is the latest outcome of 525 million years of natural selection of nervous systems, from the first neuron-like cells to an actual cortex. Think also that natural selection operates over the entire biosphere, which is really, really huge. This gives us a very neat advantage over machines. Compare how AIs are now being conceived and designed: fewer than a million engineers, a few thousand prototypes, a very slow development cycle, and all this over a period of less than a paltry 100 years. The figures are just not commensurable. Natural selection beats this small group of short-lived and ineffectual scientists, mathematicians, engineers, government officials and billionaires. The real situation is that no human being today understands how the human brain works. The best example of that is mathematical logic, which still can't duplicate what the human brain does even though mathematicians have been working on it for more than 120 years now.
Second, new machines are normally tested and have limited autonomy. A machine is something we humans use. Nobody is interested in having a machine use us.
So, assuming we do indeed successfully design an AI smarter than us, the question is how to use it. I suspect the priority will be to use AIs, initially few in number, very costly and probably still cumbersome to operate, only in strategic or high-value activities, like security, finance, technology and science, possibly even the top administration. Again assuming that everything goes well after that first period, maybe the use of AIs will spread to the rest of society, including teaching, executive functions in companies, medicine, etc.
Where would the problem be in that?
Well, sure, there will be people who don't like it one bit. Maybe this will result in protracted conflicts over a long period, why not. However, overall, human societies in the past have demonstrated that we can adapt and make the best of a bad situation, and this won't even be a bad situation. Most people will learn to relate to AIs in a functional and operational way, just as they have adapted in the past to all sorts of situations. Pupils at school will learn to respect AIs. The problem will be smoothed over within one or two generations. That's what people do. That's what they do even when the governing elite is very bad.
Although AIs would be smarter than humans, it will still be humans using AIs, not the other way around. AIs will have hard-wired rules limiting them to what is expected of them.
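To make that idea a little more concrete, here is a minimal sketch in Python of what a hard-wired limit could look like. The action names and the ALLOWED_ACTIONS table are purely hypothetical illustrations, not any real system's design:

# Purely illustrative: a fixed, human-written allow-list that gates
# whatever actions the AI proposes, so it stays within its mandate.
ALLOWED_ACTIONS = {"summarize_report", "draft_schedule", "flag_anomaly"}

def execute(proposed_action: str) -> str:
    """Carry out the action only if it is on the hard-wired allow-list."""
    if proposed_action not in ALLOWED_ACTIONS:
        return f"refused: '{proposed_action}' is outside the permitted scope"
    return f"executed: {proposed_action}"

print(execute("flag_anomaly"))       # executed: flag_anomaly
print(execute("rewrite_own_rules"))  # refused: outside the permitted scope

The point of the sketch is only that the limits sit outside the AI's own decision-making, in rules written and reviewed by humans.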
It is of course difficult to even imagine the impact of a greater intelligence on our psychology. Humans are competitive, and people who today enjoy being at the top of the pile because of their wits may find themselves simply redundant. Maybe that could be very bad for morale, but only for the small group of people who want to be the big boss, and so there will be no difference from today, since plenty of people today are frustrated at not being the big boss. For most people, there will be no substantial difference.
The real difficulty will be in assessing which functions AIs should be allowed to take over. I would expect that at best they will be kept as advisers to human executives, although this might complicate things a great deal. At least, this arrangement can be tried and tested.
Potentially, this could solve a great many of our problems. AIs may be able to improve our governance and technology, for example. There will also be mistakes and possibly a few catastrophes, but overall, there's no reason to be pessimistic.
The only real, almost certain danger is a few humans somehow using AIs against the rest of humanity. But humans doing bad things is nothing new. AIs will definitely provide another historical opportunity for madmen to enjoy wreaking havoc on the world, but it is up to us to make sure this can't happen.
Other than that, no problem.
EB