• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Dilemma: Should we build machines more intelligent than human beings if we could?

Should we build machines more intelligent than human beings if we could?

  • No, we shouldn't

    Votes: 0 0.0%
  • I don't know

    Votes: 0 0.0%

  • Total voters
    9
  • Poll closed.
Since the vast majority of humans are irrational and complete morons, I think it is imperative we build machines smarter than humans. My toaster is smarter than most of the fuckwits I work with.

It is actually not clear at all why you think it is "imperative".

A bit of hyperbole on my part, but when you deal with people day in and day out you start to realize that most people are thickos. A lot of them sincerely believe in a deity. So replace them with AI robots or something, so I don't need to explain for the 100th time that if their computer isn't turning on, they should check that the power is plugged in.

How could having machines more intelligent than any human being, which fuckwit morons will be able to use for their own ends, possibly be good news?
EB

As I said earlier, my toaster is more intelligent than the fuckwits I work with, so we are well on our way. It's all good news as far as I can see. What's the downside exactly?
 
Would a Super Intelligent Machine/Entity cooperate with fuckwit humans engaging in what It clearly sees as being fuckwit behaviour?
 
Would a Super Intelligent Machine/Entity cooperate with fuckwit humans engaging in what It clearly sees as being fuckwit behaviour?

We can assume that the machines will still be used by humans and not the reverse.

There would be no difficulty in making intelligent machines devoid of any intention, as are our current desktop computers. They would reply to our queries and do our bidding and no more.

We could still have killer AI robots, but these would have to be designed and built specifically for this purpose by some fuckwit military. But even then, the killer robot would only try to suppress human-designated targets.

Although, we would have to make sure we don't unintentionally designate the whole species as a legitimate target. But who would do that?

Ah, yes, fuckwits.
EB
 
Whether they're good or bad is irrelevant. It's like nuclear missiles. You have one group with nukes who dominates everybody because they're the only ones with nukes or you get a MAD situation where the nuclear capabilities cancel each other out. If you get nine groups saying "Hey, these are really dangerous, let's not build any" then all you've done is created an opportunity for the tenth group.

Same with super intelligent AI. Somebody is going to do it if it's possible to do. You can be that somebody or you can be one of the people that somebody uses it against. The third option of not having any of those somebodies develop it in the first place isn't a real option.

You are assuming someone will build them. Sure, in that case we had better all have them if we don't want to be history. But it is at least conceivable that all the major powers become convinced it would be bad and cooperate to make it impossible in actual fact. Assuming this, what would be your argument that we should or shouldn't build them?
EB

Yes, I'm assuming that someone would be a bad actor and build one anyways regardless of any agreements in place restricting such development. Is your position that there wouldn't be groups within the governments of places like Russia, China or America who'd find a way to funnel money to this type of project in order to get themselves a leg up over those others who are playing by the rules? Especially since all of them would be assuming (quite correctly) that the others are doing the same thing?
 
Assuming everything goes perfectly and there are no undesirable or unpredictable consequences to doing X, are you in favor of or opposed to doing X? Ah, I love making threads. For my next thread: where is the logical fallacy in "spiders are fish, therefore I am in love with a strawberry"? Dimwits need not apply; this is philosophy, sweetie, look it up.
 
If you mean by intelligent could we build a machine 'smarter' than us, then I do not think so.
We can certainly build machines that are faster, stronger etc. than us, but they only do what we tell them to do and no more.

In 1970 the movie Colossus: The Forbin Project (https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project) was made, which dealt with that very topic.
The conclusion was not good.
 
Would a Super Intelligent Machine/Entity cooperate with fuckwit humans engaging in what It clearly sees as being fuckwit behaviour?

We can assume that the machines will still be used by humans and not the reverse.

There would be no difficulty in making intelligent machines devoid of any intention, as are our current desktop computers. They would reply to our queries and do our bidding and no more.

We could still have killer AI robots, but these would have to be designed and built specifically for this purpose by some fuckwit military. But even then, the killer robot would only try to suppress human-designated targets.

Although, we would have to make sure we don't unintentionally designate the whole species as a legitimate target. But who would do that?

Ah, yes, fuckwits.
EB

Presumably an intelligent system allows adaptation and self-modification in order to respond to changing conditions. If so, even if we make intelligent machines devoid of any intention, a machine may develop intention during the course of its own self-development and evolution as an intelligent entity; consequently, we lose control of the machine. As you said, we are not talking about machines that are only a bit smarter than us.
 
Whether they're good or bad is irrelevant. It's like nuclear missiles. You have one group with nukes who dominates everybody because they're the only ones with nukes or you get a MAD situation where the nuclear capabilities cancel each other out. If you get nine groups saying "Hey, these are really dangerous, let's not build any" then all you've done is created an opportunity for the tenth group.

Same with super intelligent AI. Somebody is going to do it if it's possible to do. You can be that somebody or you can be one of the people that somebody uses it against. The third option of not having any of those somebodies develop it in the first place isn't a real option.

You are assuming someone will build them. Sure, in that case we had better all have them if we don't want to be history. But it is at least conceivable that all the major powers become convinced it would be bad and cooperate to make it impossible in actual fact. Assuming this, what would be your argument that we should or shouldn't build them?
EB

Yes, I'm assuming that someone would be a bad actor and build one anyways regardless of any agreements in place restricting such development. Is your position that there wouldn't be groups within the governments of places like Russia, China or America who'd find a way to funnel money to this type of project in order to get themselves a leg up over those others who are playing by the rules? Especially since all of them would be assuming (quite correctly) that the others are doing the same thing?

Again, I agree. But you are again overlooking my point, that it is at least conceivable that all the major powers become convinced it would be a bad idea and cooperate to make it impossible in actual fact. Assuming this, what would be your argument that we should or shouldn't build them?
EB
 
Assuming everything goes perfectly and there are no undesirable or unpredictable consequences to doing X, are you in favor of or opposed to doing X? Ah, I love making threads. For my next thread: where is the logical fallacy in "spiders are fish, therefore I am in love with a strawberry"? Dimwits need not apply; this is philosophy, sweetie, look it up.

Yes, I guess you are arguing we should all buy a really big gun and kill everybody else just to make sure. Let's just sort out the bad guys from the only good guy.
EB
 
If you mean by intelligent could we build a machine 'smarter' than us, then I do not think so.

Why not? We could build machines more, and indeed much more, intelligent than any human. These machines would only be capable of doing what we tell them to do. For example, provide intelligent answers to our questions, or go and kill our enemy. Why not?

We can certainly build machines that are faster, stronger etc. than us, but they only do what we tell them to do and no more.

Sure, our machines are stupid. However, even if we built machines more intelligent than us, they still wouldn't do anything we wouldn't have asked.

In 1970 the movie Colossus: The Forbin Project (https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project) was made, which dealt with that very topic.
The conclusion was not good.

OK, but can we have this conversation here?
EB
 
Would a Super Intelligent Machine/Entity cooperate with fuckwit humans engaging in what It clearly sees as being fuckwit behaviour?

We can assume that the machines will still be used by humans and not the reverse.

There would be no difficulty in making intelligent machines devoid of any intention, as are our current desktop computers. They would reply to our queries and do our bidding and no more.

We could still have killer AI robots, but these would have to be designed and built specifically for this purpose by some fuckwit military. But even then, the killer robot would only try to suppress human-designated targets.

Although, we would have to make sure we don't unintentionally designate the whole species as a legitimate target. But who would do that?

Ah, yes, fuckwits.
EB

Presumably an intelligent system allows adaptation and self-modification in order to respond to changing conditions. If so, even if we make intelligent machines devoid of any intention, a machine may develop intention during the course of its own self-development and evolution as an intelligent entity; consequently, we lose control of the machine. As you said, we are not talking about machines that are only a bit smarter than us.

No, I am assuming these machines would be more intelligent but would be incapable of evolving. Intelligence doesn't give you the capacity to evolve. Intelligence allows you to evolve new intelligent ideas, not new intentions or behaviours. Let's assume simple desktop computers, just vastly more intelligent than us. At most, we may decide to use them as killer robots, but even then we could limit their range of behaviours, with no intentionality and no capacity to evolve one.

What then? Good? Bad? And why?
EB
 
Presumably an intelligent system allows adaptation and self-modification in order to respond to changing conditions. If so, even if we make intelligent machines devoid of any intention, a machine may develop intention during the course of its own self-development and evolution as an intelligent entity; consequently, we lose control of the machine. As you said, we are not talking about machines that are only a bit smarter than us.

No, I am assuming these machines would be more intelligent but would be incapable of evolving. Intelligence doesn't give you the capacity to evolve. Intelligence allows you to evolve new intelligent ideas, not new intentions or behaviours. Let's assume simple desktop computers, just vastly more intelligent than us. At most, we may decide to use them as killer robots, but even then we could limit their range of behaviours, with no intentionality and no capacity to evolve one.

What then? Good? Bad? And why?
EB


AI has been defined in different ways, depending on who you ask. I wouldn't define desktop computers, et al., as being intelligent. Not in the way we are presumably talking about: the ability to think and reason at a level orders of magnitude above human capability.

Merriam-Webster defines artificial intelligence this way:

A branch of computer science dealing with the simulation of intelligent behavior in computers.
The capability of a machine to imitate intelligent human behavior.

The Encyclopedia Britannica states, “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Intelligent beings are those that can adapt to changing circumstances.

Definitions of artificial intelligence begin to shift based upon the goals that are trying to be achieved with an AI system. Generally, people invest in AI development for one of these three objectives:

Build systems that think exactly like humans do (“strong AI”)
Just get systems to work without figuring out how human reasoning works (“weak AI”)
Use human reasoning as a model but not necessarily the end goal

Turns out that the bulk of the AI development happening today by industry leaders falls under the third objective and uses human reasoning as a guide to provide better services or create better products, rather than trying to achieve a perfect replica of the human mind.
 
AI has been defined in different ways, depending on who you ask. I wouldn't define desktop computers, et al., as being intelligent. Not in the way we are presumably talking about: the ability to think and reason at a level orders of magnitude above human capability.

Merriam-Webster defines artificial intelligence this way:

A branch of computer science dealing with the simulation of intelligent behavior in computers.
The capability of a machine to imitate intelligent human behavior.

Well, that's interesting, but the word "behaviour" is obviously surplus to requirement.

The Encyclopedia Britannica states, “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Intelligent beings are those that can adapt to changing circumstances.

The last bit is off. And Encyclopedia Britannica (not the same as EB, by the way) doesn't say, as you seem to claim, that "intelligent beings are those that can adapt to changing circumstances".

Here is the entire quote:

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. https://www.britannica.com/technology/artificial-intelligence

The page only mentions "intelligent beings" in this one sentence, so Encyclopedia Britannica talks of "tasks", not of adaptation to changing circumstances. Intelligence implies adaptation, yes, but a limited kind of adaptation. That is, it is the reasoning which is adapted to circumstances, not necessarily the machine itself or the behaviour of the machine, as suggested by Forbes. We could have machines more intelligent than us that would nonetheless be unable to adapt their behaviour to changing circumstances. That would be something surplus to intelligence. Also, intelligence is not what allows human beings to evolve, for example. So we should keep these different notions distinct.

Definitions of artificial intelligence begin to shift based upon the goals that are trying to be achieved with an AI system. Generally, people invest in AI development for one of these three objectives:

Build systems that think exactly like humans do (“strong AI”)
Just get systems to work without figuring out how human reasoning works (“weak AI”)
Use human reasoning as a model but not necessarily the end goal

Turns out that the bulk of the AI development happening today by industry leaders falls under the third objective and uses human reasoning as a guide to provide better services or create better products, rather than trying to achieve a perfect replica of the human mind.

Yes, of course, and the main reason for that is that we have been unable to model human reasoning properly so far. If someone did it, the situation would immediately change and most people would turn towards designing strong AIs.
EB
 
You can't stop them from being built if they can be built, so it's a moot point whichever stance you take.
 
I think we should build machines more intelligent than us, just to see what happens.

We can speculate about whether we're going to get a Skynet, Helios or Wintermute, but it doesn't really matter, since all of these outcomes are more interesting than not even trying.
 
I think we should build machines more intelligent than us, just to see what happens.

We can speculate about whether we're going to get a Skynet, Helios or Wintermute, but it doesn't really matter, since all of these outcomes are more interesting than not even trying.

I'm not sure I would have wished to see what the Nazis were going to do. Interesting? I would have said it is interesting to talk about it first, and possibly then decide whether we want to let them access the Reich's government.

If you think it would be interesting to see, why is it not even more interesting to discuss the possibilities first? Maybe you'd prefer different sci-fi films exploring the different possible scenarios, perhaps with some tin box in the lead role?

And I don't even know what Helios or Wintermute are! Where have I been, you know?
EB
 
You can't stop them from being built if they can be built, so it's a moot point whichever stance you take.

Oh, whoa, that is a seriously pessimistic credo.

Of course we could stop them if we want to.

For example, we could build super-intelligent robots and let them loose with the sole task of arresting anyone trying to build super-intelligent machines.
EB
 
And I don't even know what Helios or Wintermute are! Where have I been, you know?
EB

So sorry, let me explain:

Skynet is a DARPA prototype for a super-high-altitude airship (SHAA) that can communicate with mobile radio antennae via tightbeam. It is intended to be an autonomous command system (ACS) that acts as a fallback in the event that the US government is destroyed by a decapitation strike.

Helios is a merger of two earlier AI projects, Daedalus and Icarus, which in turn evolved from the NSA's Echelon IV network. It's associated with conspiracy theories (Illuminati, Roswell etc.), but according to the white paper it's just used to analyse smartphone metadata and predict terrorist attacks.

Wintermute is rumoured to be an AI under development by Pornhub designed to manage the marketplace for their new cryptocurrency, Clitcoin.
 