
Dilemma: Should we build machines more intelligent than human beings if we could?

Should we build machines more intelligent than human beings if we could?

  • No, we shouldn't: Votes 0 (0.0%)
  • I don't know: Votes 0 (0.0%)
  • Total voters: 9
  • Poll closed.

Speakpigeon

Contributor
Joined
Feb 4, 2009
Messages
6,317
Location
Paris, France, EU
Basic Beliefs
Rationality (i.e. facts + logic), Scepticism (not just about God but also everything beyond my subjective experience)
This is a poll. Please vote before posting comments.

It seems plausible, if not rather likely, that one day humans will be able to build machines more intelligent than themselves. This would likely have all sorts of consequences, some good, some bad, for humanity as a whole or for some, possibly many, individuals. However, assuming we could do it, either we would do it or we wouldn't. Further, once someone discovers how to do it, it becomes very difficult not to do it. Governments will want to do it, the military will want to do it, businesses will want to do it, and many individuals will be minded to do it, making it almost inevitable that we will build machines more intelligent than human beings.

So, the question is, would you be in favour of building such machines or not?

And what would be your argument for or against, if you have one?
EB
 
But in all that, you did not define what you mean by intelligent.
Is a calculator more intelligent than a human in doing complicated maths that only some people can do in their head? Or maths no one can do?
 
But in all that, you did not define what you mean by intelligent.

Why didn't you ask me what I meant by "dilemma", "machine", "human", "could"? So, why are you asking me what I mean by "intelligence"?

Intelligence
the ability to respond quickly and successfully to a new situation; use of the faculty of reason in solving problems, directing conduct, etc. effectively

Is a calculator more intelligent than a human in doing complicated maths that only some people can do in their head? Or maths no one can do?

I never heard of anyone claiming his pocket calculator was more intelligent than himself.

If you can't understand such a simple question, I guess it's just as well you don't try to answer it.
EB
 
Why didn't you ask me what I meant by "dilemma", "machine", "human", "could"? So, why are you asking me what I mean by "intelligence"?
Meh. Seems to me that 'intelligence' was the most crucial word in the question, and it has many different definitions.
And I had a bet with myself that you'd get pissy if the answers you got did not magically use the same definition you intended.
 
Go ahead and build them. Application and input determine output: garbage in, garbage out. If the information that any particular AI acquires is limited to its intended function (a stock-market trading AI, a city traffic-light organizing AI, etc.), then the thoughts and plans of any given machine are restricted to that function. So, if collusion and plotting the downfall of humans is seen as an issue, that isolation, a need-to-know basis in relation to an AI far smarter than humans, should in theory eliminate the problem, if there is a problem.

How smart are we talking about? A bit smarter than the smartest human? Multiple times? Off the scale intelligence?
 
Go ahead and build them. Application and input determine output: garbage in, garbage out. If the information that any particular AI acquires is limited to its intended function (a stock-market trading AI, a city traffic-light organizing AI, etc.), then the thoughts and plans of any given machine are restricted to that function. So, if collusion and plotting the downfall of humans is seen as an issue, that isolation, a need-to-know basis in relation to an AI far smarter than humans, should in theory eliminate the problem, if there is a problem.

Sure, I would agree that we can start using AIs without any worry that we suddenly find ourselves being used by them. An AI would still be a machine, like any computer, any car, or any bomb; that is, potentially dangerous and no more than potentially dangerous.

However, ... Well, I'll let other posters say what they think.

How smart are we talking about? A bit smarter than the smartest human? Multiple times? Off the scale intelligence?

My assumption is that of a machine with essentially the same logic as that of a human being. However, the intelligence of a human is strictly limited, as clearly demonstrated by many posters here and elsewhere. The size of our brain is fixed; the biological processes of our brain won't change any time soon; we can only communicate with each other through language, that is, we are unable to communicate our mental data directly to other people; and, perhaps most important, our brain is kept very busy with menial tasks such as keeping us alive, so the processing time allocated to thinking intelligent thoughts is extremely limited. AIs wouldn't be so limited.

So, assuming the same logic, because we would be unable to conceive of a better one than our own, machines would become more intelligent than us, in proportion to the size and technology used. How much is theoretically possible, I have no idea, but potentially it could be really huge, perhaps something like all AIs together being 100 times more intelligent than the whole of humanity, if such a figure isn't meaningless. So, I would bet on off the scale. Say, the same as we humans relative to a rabbit.
EB
 
If it's possible to do, someone will do it. That means the two choices available are to have the more intelligent machines yourself or to be in competition against those with more intelligent machines without having them yourself. Given the lack of a third option, building them yourself makes the most sense.
 
I voted that the question doesn't make sense, but I would be more charitable and say that depending on the circumstances (which are not specified in the question) there could be multiple answers. For instance, what is the motivation for building these machines in the first place?

In a future scenario of widespread automation of production and service, no machine would need to be anywhere near as intelligent--whatever that word may mean, which is another issue--as a human being. Based on the description of the scale of artificial intelligence you provided to DBT, with the analogy being between a human and a rabbit, the utility of such a feat of engineering is not immediately apparent to me.

A moral issue arises as well. The extent or manner in which complicated mental processing gives rise to subjective consciousness is not known. Nor is it understood whether the ability to suffer can be separated from having the ability to reason abstractly; i.e., whether it would even be possible in principle to design a super-intelligent machine that is immune to suffering.

Given these uncertainties, and the lack of a practical need for it, I would say that humans should not make super-intelligent machines. But I can't say for certain that doing so would only lead to bad consequences. For one thing, having the technology to replicate human intelligence electronically, and in a greatly enhanced way, would probably open the doors to other scientific leaps. Those could be beneficial for us in the end, and might only be achievable via experimenting with AI. If this is the case, then in a certain sense the development of that technology may be inevitable, and the most important task would be to make sure that when it arrives, it is not greeted by a world run by capitalism.
 
If it's possible to do, someone will do it. That means the two choices available are to have the more intelligent machines yourself or to be in competition against those with more intelligent machines without having them yourself. Given the lack of a third option, building them yourself makes the most sense.

Sure, AIs more intelligent than us would be the means to beat the competition, and we will all want them. Yet, if AIs more intelligent than us were a bad thing, I don't see why it would be impossible to convince all the governments and big companies capable of developing them that this is so, and consequently to agree on a moratorium. If they weren't a bad thing, then there would be no reason not to build them. So, the question is whether they would be bad to begin with and whether we could stop ourselves from creating them.
EB
 
I voted that the question doesn't make sense, but I would be more charitable and say that depending on the circumstances (which are not specified in the question) there could be multiple answers.

So, you are replying to a question that doesn't make sense?! :rolleyes:

For instance, what is the motivation for building these machines in the first place? In a future scenario of widespread automation of production and service, no machine would need to be anywhere near as intelligent--whatever that word may mean, which is another issue--as a human being. Based on the description of the scale of artificial intelligence you provided to DBT, with the analogy being between a human and a rabbit, the utility of such a feat of engineering is not immediately apparent to me.

Maybe we will need them badly. To solve global warming, improve the capabilities of our military, win the economic competition, make scientific discoveries, improve government, cure diseases, cure social ills like poverty and unemployment, make us live longer, and on and on and on.

A moral issue arises as well. The extent or manner in which complicated mental processing gives rise to subjective consciousness is not known. Nor is it understood whether the ability to suffer can be separated from having the ability to reason abstractly; i.e., whether it would even be possible in principle to design a super-intelligent machine that is immune to suffering.

Well, that seems like it's coming from left field. I guess you would need to substantiate your suggestion here. You might be right, but I fail to see any good reason to think you are.

Given these uncertainties, and the lack of a practical need for it, I would say that humans should not make super-intelligent machines. But I can't say for certain that doing so would only lead to bad consequences. For one thing, having the technology to replicate human intelligence electronically, and in a greatly enhanced way, would probably open the doors to other scientific leaps. Those could be beneficial for us in the end, and might only be achievable via experimenting with AI. If this is the case, then in a certain sense the development of that technology may be inevitable, and the most important task would be to make sure that when it arrives, it is not greeted by a world run by capitalism.

So now it seems you're saying AIs more intelligent than us would "probably open the doors to other scientific leaps", which sounds to me like a good motivation to build these machines...

So, should we do it or not?
EB
 
So, you are replying to a question that doesn't make sense?! :rolleyes:
Literally the first sentence of my reply was an explanation of why I chose that option, so maybe stop acting like a douchebag to everyone who replies

Maybe we will need them badly. To solve global warming, improve the capabilities of our military, win the economic competition, make scientific discoveries, improve government, cure diseases, cure social ills like poverty and unemployment, make us live longer, and on and on and on.
Specifically, would we need AIs that are to us what we are to rabbits, in terms of intelligence? I dispute that claim. The problems you listed either can be solved with our current level of technology, are not a matter of technology at all, or are bad things that we shouldn't be doing anyway.

So now it seems you're saying AIs more intelligent than us would "probably open the doors to other scientific leaps", which sounds to me like a good motivation to build these machines...

Only if enabling scientific progress is itself a good reason to do something, which I'm not convinced that it is, or at least not convinced that it couldn't be overcome by other considerations.
 
Literally the first sentence of my reply was an explanation of why I chose that option, so maybe stop acting like a douchebag to everyone who replies


Specifically, would we need AIs that are to us what we are to rabbits, in terms of intelligence? I dispute that claim. The problems you listed either can be solved with our current level of technology, are not a matter of technology at all, or are bad things that we shouldn't be doing anyway.

So now it seems you're saying AIs more intelligent than us would "probably open the doors to other scientific leaps", which sounds to me like a good motivation to build these machines...

Only if enabling scientific progress is itself a good reason to do something, which I'm not convinced that it is, or at least not convinced that it couldn't be overcome by other considerations.

We may not absolutely need it. But you don't give any clear reason that we shouldn't do it.
EB
 
Maybe another way to see the problem is to ask why we should not legislate, worldwide, against the creation of AIs more intelligent than us. There are already people warning us of the danger of more intelligent AIs, and it is conceivable that they will convince all the major powers that building such machines should be stopped. Machines of that kind probably can't be built by lone individuals or even small organisations. So if the big powers agree it shouldn't be done, it probably won't be.

Assuming this, what do you think are the good reasons that we should, against opinions to the contrary, build such machines?
EB
 
Maybe another way to see the problem is to ask why we should not legislate, worldwide, against the creation of AIs more intelligent than us. There are already people warning us of the danger of more intelligent AIs, and it is conceivable that they will convince all the major powers that building such machines should be stopped. Machines of that kind probably can't be built by lone individuals or even small organisations. So if the big powers agree it shouldn't be done, it probably won't be.

Assuming this, what do you think are the good reasons that we should, against opinions to the contrary, build such machines?
EB

I think you have a quaint and naive view of what "world powers" are willing to do if you don't see that literally every single one will immediately begin working on intelligent AI in secret the moment such legislation is passed.
 
Maybe another way to see the problem is to ask why we should not legislate, worldwide, against the creation of AIs more intelligent than us. There are already people warning us of the danger of more intelligent AIs, and it is conceivable that they will convince all the major powers that building such machines should be stopped. Machines of that kind probably can't be built by lone individuals or even small organisations. So if the big powers agree it shouldn't be done, it probably won't be.

Assuming this, what do you think are the good reasons that we should, against opinions to the contrary, build such machines?
EB

I think you have a quaint and naive view of what "world powers" are willing to do if you don't see that literally every single one will immediately begin working on intelligent AI in secret the moment such legislation is passed.

That is an illogical reply. I assumed explicitly that the big powers would be convinced it shouldn't be done.

So, stop evading the question. What do you think are the good reasons that we should, against opinions to the contrary, build such machines?
EB
 
If it's possible to do, someone will do it. That means the two choices available are to have the more intelligent machines yourself or to be in competition against those with more intelligent machines without having them yourself. Given the lack of a third option, building them yourself makes the most sense.

Sure, AIs more intelligent than us would be the means to beat the competition, and we will all want them. Yet, if AIs more intelligent than us were a bad thing, I don't see why it would be impossible to convince all the governments and big companies capable of developing them that this is so, and consequently to agree on a moratorium. If they weren't a bad thing, then there would be no reason not to build them. So, the question is whether they would be bad to begin with and whether we could stop ourselves from creating them.
EB

Whether they're good or bad is irrelevant. It's like nuclear missiles. You have one group with nukes who dominates everybody because they're the only ones with nukes or you get a MAD situation where the nuclear capabilities cancel each other out. If you get nine groups saying "Hey, these are really dangerous, let's not build any" then all you've done is created an opportunity for the tenth group.

Same with super intelligent AI. Somebody is going to do it if it's possible to do. You can be that somebody or you can be one of the people that somebody uses it against. The third option of not having any of those somebodies develop it in the first place isn't a real option.
 
Since the vast majority of humans are irrational and complete morons, I think it is imperative we build machines smarter than humans. My toaster is smarter than most of the fuckwits I work with.
 
If it's possible to do, someone will do it. That means the two choices available are to have the more intelligent machines yourself or to be in competition against those with more intelligent machines without having them yourself. Given the lack of a third option, building them yourself makes the most sense.

Sure, AIs more intelligent than us would be the means to beat the competition, and we will all want them. Yet, if AIs more intelligent than us were a bad thing, I don't see why it would be impossible to convince all the governments and big companies capable of developing them that this is so, and consequently to agree on a moratorium. If they weren't a bad thing, then there would be no reason not to build them. So, the question is whether they would be bad to begin with and whether we could stop ourselves from creating them.
EB

Whether they're good or bad is irrelevant. It's like nuclear missiles. You have one group with nukes who dominates everybody because they're the only ones with nukes or you get a MAD situation where the nuclear capabilities cancel each other out. If you get nine groups saying "Hey, these are really dangerous, let's not build any" then all you've done is created an opportunity for the tenth group.

Same with super intelligent AI. Somebody is going to do it if it's possible to do. You can be that somebody or you can be one of the people that somebody uses it against. The third option of not having any of those somebodies develop it in the first place isn't a real option.

You are assuming someone will build them. Sure, in that case we'd better all have them if we don't want to be history. But it is at least conceivable that all the major powers become convinced it would be bad and cooperate to make it impossible in actual fact. Assuming this, what would be your argument that we should or shouldn't build them?
EB
 
Since the vast majority of humans are irrational and complete morons, I think it is imperative we build machines smarter than humans. My toaster is smarter than most of the fuckwits I work with.

Then we're toast. :D
EB
 
Since the vast majority of humans are irrational and complete morons, I think it is imperative we build machines smarter than humans. My toaster is smarter than most of the fuckwits I work with.

It is actually not clear at all why you think it is "imperative". How could having machines more intelligent than any human being, machines that fuckwit morons will be able to use for their own ends, possibly be good news?
EB
 