
What AI Experts Think About the Existential Risk of AI

NobleSavage

A 2014 survey conducted by Vincent Müller and Nick Bostrom of 170 of the leading experts in the field found that a full 18 percent believe that if a machine superintelligence did emerge, it would unleash an “existential catastrophe” on humanity.

A further 13 percent said that advanced AI would be a net negative for humans, and only a slight majority said it would be a net positive.

Conflicted as AI researchers are on the ultimate value of machine intelligence, most agree AI that’s human-level and above is all but inevitable: The median respondent put the chance of creating human-level AI by 2050 at 50 percent, and by 2075 at 90 percent. Most expect superintelligence—intelligence that surpasses that of humans in every respect—a mere 30 years later.

http://www.theepochtimes.com/n3/1366189-what-ai-experts-think-about-the-existential-risk-of-ai/
 
Obviously nobody knows.

But hey, let's rush to a potential cliff as fast as possible.
 
The problem is not complexity. Modern supercomputers are already complex and fast enough to support decent AI; the problem is the algorithms.
 
The problem is not complexity. Modern supercomputers are already complex and fast enough to support decent AI; the problem is the algorithms.
And I think the algorithms are going to be the real problem (please, don't feel the need to praise me for that insight ;)). Creating an algorithm for the gray areas seems all but impossible. How do you create an algorithm for instinct?

I think AI will hit a huge speed bump well before human annihilation. AI will likely be used to monitor or control some important process. Something odd will happen, it'll be misinterpreted, and crap will ensue.
 
Instincts are actually pretty dumb; you can implement them in TTL logic :)
The problem with current AI systems is that they are pretty hopeless when it comes to actually thinking. They are still pretty much dumb tools that require a significant amount of effort to teach anything new. The human brain can teach itself; an AI requires a bunch of PhDs running around it to teach it even the simplest things, which your dog can do without much effort.
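To illustrate the point about instincts (a toy of my own, not anything from a real nervous system): a hard-wired instinct is just a fixed stimulus-to-response lookup, the kind of mapping a handful of TTL gates, or a dictionary, can implement:

```python
# A hard-wired "instinct": a fixed stimulus -> response table.
# No learning, no deliberation; just a lookup, like combinational logic.
# Stimulus and response names are invented for illustration.
REFLEXES = {
    "shadow_overhead": "freeze",
    "bright_light": "blink",
    "hot_surface": "withdraw",
}

def react(stimulus: str) -> str:
    # Unknown stimuli get no response, exactly like an unwired input.
    return REFLEXES.get(stimulus, "ignore")

print(react("hot_surface"))  # -> withdraw
```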


I don't believe AI will ever be in a position to launch terminator-style crap. I mean, you really don't need AI to monitor a nuclear plant or fly a plane.
The immediate danger, I think, is in human nature, which requires some meaningful occupation; for most people that has been mindless jobs. If robots take that away, then clearly we will have a problem. The solution, in my opinion, is human genetic engineering: making people who can occupy themselves.
 
They are still pretty much dumb tools that require a significant amount of effort to teach anything new. The human brain can teach itself; an AI requires a bunch of PhDs running around it to teach it even the simplest things, which your dog can do without much effort.
Not necessarily PhDs, but a bunch of people who want certain results from the AI, who work together to mold and improve it. There are only a couple of things that need to be perfected before AI can run on its own (see the sketch after this list):

 Gene expression programming
 Genetic algorithms
 Evolutionary algorithms
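For the curious, here is a minimal sketch of the genetic-algorithm idea mentioned above: selection, crossover, and mutation over bit strings. The fitness function (count the 1s) is a made-up stand-in for whatever you actually want to optimize:

```python
import random

def fitness(bits):            # toy objective: maximize the number of 1s
    return sum(bits)

def mutate(bits, p=0.01):     # flip each bit with small probability
    return [b ^ (random.random() < p) for b in bits]

def crossover(a, b):          # single-point crossover of two parents
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, length=32, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # approaches 32 (all ones) as the population converges
```

The same loop works for any genome encoding; only fitness, mutate, and crossover need to change.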

Tie them into existing analytics, combine that with the ability to recognize patterns in online entertainment, and we'll end up with AIs that "think" that Valhalla is not a legend, that we do resurrect like Jesus, etc.... Then look at the next logical step, in which our consciousness is integrated with the AI.

Does the AI believe that it does not feel without an integrated human consciousness, and so seek souls to integrate with?

At some point, is the physical framework for consciousness provided to the AI by some researcher storing something online, so that the AI can create consciousness without reliance on its creator's consciousness or input? Does it believe that its creator is more or less intelligent and aware of these facts than it is?
 
IF AI experts knew what it is they don't know, and thus knew what it would take to reach human intelligence, then the problem would already be solved and we'd be there. They have no clue how much there is that they don't know yet, so their time estimates are meaningless. The problems yet to be solved are unsolved precisely because they are far harder than any solved so far. So the most rational guess is anywhere between a century and never, with never being perfectly plausible. Keep in mind that despite astronomical advances in neuroscience, we still don't really understand how or where the brain stores a single memory of a simple stimulus.

If we do mimic human intellect, it is likely to be by creating an alternative method of achieving the same ends, rather than by first knowing enough to build a system that thinks the way people do, using the same methods people use (IOW, there is more than one way to skin a cat).
 
I think we know where memory is stored in neurons.
The problem is that we can't blindly simulate the brain, because there is no supercomputer big enough to do it.
And even if there were, we would still need to be sure we were simulating it correctly; otherwise we would be simulating an idiot's brain.
But blind simulation is the wrong approach anyway; I prefer creating AI from first principles.
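A rough back-of-envelope supports the "no big enough supercomputer" point; every number below is a commonly cited ballpark, not a measurement:

```python
# Ballpark figures only; every number here is a rough assumption.
neurons = 8.6e10           # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4  # ~10,000 synapses each
update_rate_hz = 100       # update each synapse ~100 times per second
flops_per_update = 10      # a few arithmetic ops per synaptic event

total = neurons * synapses_per_neuron * update_rate_hz * flops_per_update
print(f"{total:.1e} FLOP/s needed")  # ~8.6e+17, close to an exaflop
```

That lands near an exaflop, while the fastest machines today peak in the tens of petaflops, so a naive whole-brain simulation is short by an order of magnitude or two, even before asking whether the model is right.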
 
As long as nobody develops an algorithmic framework in which an evolutionary AI is attached to information sources around the world, from the top to the bottom, that mines data and predicts stock prices.....

It's not like they'd stop once they got to the top; they'd have to keep evolving their AI if they wanted to stay on top, as other groups would build competing genetic-algorithm-based AIs within their own networks to gain some control over resources.

So you basically have a naturally forced evolution of AIs. Money pours in because of interest in developing AIs, which give the ability to gain more money.
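Concretely, such a loop might look like this toy: a synthetic price series, a made-up 3-lag linear "predictor", and mutation-only evolution. It is nothing like a real trading system; it just shows how "fitness = prediction error" drives the evolution:

```python
import random

# Synthetic "price" series: a gentle trend plus noise (made up).
prices = [100 + 0.5 * t + random.gauss(0, 1) for t in range(200)]

def sq_error(w):
    # One-step-ahead prediction from the last 3 prices with weights w.
    return sum((sum(w[i] * prices[t - 1 - i] for i in range(3)) - prices[t]) ** 2
               for t in range(3, len(prices)))

def evolve(pop_size=40, generations=60):
    genomes = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        genomes.sort(key=sq_error)               # lower error = fitter
        survivors = genomes[: pop_size // 2]
        # Refill the population with mutated copies of survivors.
        genomes = survivors + [
            [w + random.gauss(0, 0.05) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return min(genomes, key=sq_error)

print(evolve())  # tends toward ~[1, 0, 0]: "tomorrow looks like today"
```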

Ohh, and I didn't mention Darpa. :cheeky:
 
The problem is not complexity. Modern supercomputers are already complex and fast enough to support decent AI; the problem is the algorithms.

Yet it is still not understood how a brain does it, nor can supercomputers match the brain of a mouse at perceiving its environment and negotiating its way through obstacles and challenges, let alone match the human brain:

[Image: white matter connections obtained with MRI tractography]
 
I think we are both saying the same thing.
 
A 2014 survey conducted by Vincent Müller and Nick Bostrom of 170 of the leading experts in the field found that a full 18 percent believe that if a machine superintelligence did emerge, it would unleash an “existential catastrophe” on humanity.

A further 13 percent said that advanced AI would be a net negative for humans, and only a slight majority said it would be a net positive.

Conflicted as AI researchers are on the ultimate value of machine intelligence, most agree AI that’s human-level and above is all but inevitable: The median respondent put the chance of creating human-level AI by 2050 at 50 percent, and by 2075 at 90 percent. Most expect superintelligence—intelligence that surpasses that of humans in every respect—a mere 30 years later.

http://www.theepochtimes.com/n3/1366189-what-ai-experts-think-about-the-existential-risk-of-ai/
Do we know the IQ of the 18 percent and the 13 percent and of the "slight majority"? Do we know the average IQ of AI researchers?

The more intelligent they all are, the more likely they are to design a machine more intelligent than me, and that freaks me out.

The less intelligent they are, the more likely they won't realise when it's already too late.

Well, I shouldn't be around by then anyway.

I think the main risk will be evil people designing and controlling really intelligent AI machines for their own nefarious ends.

Wait, isn't this what's already going on?
EB
 

Wow. 18% and 13% being human and expressing a negative opinion. Where's the evidence? Show me an algorithm. Produce an experiment. Don't spout opinions about opinions. It's not rational and it's definitely not scientific.

Evil people? In a science forum? RU kidding me?
 

Wow. 18% and 13% being human and expressing a negative opinion. Where's the evidence? Show me an algorithm. Produce an experiment. Don't spout opinions about opinions.
Why not?

It's not rational and it's definitely not scientific.
You're right, it's definitely not scientific, but I wasn't pretending to be, so your comment is just a drag.

You're also completely wrong on rationality. Granted, you're more assertive about "scientific". But you seem to misunderstand so many words that any conversation with you is like trying to wade through high water. Maybe you just let yourself get carried away with the flow of rhetoric. And why didn't you address the initial post instead? Why mine? Is this personal with you?

Evil people? In a science forum? RU kidding me?
"RU kidding me"? In a science forum? Show me the algorithm.

Ok, we can stop here, Ok? Nothing good will ever come out of this.

Good? Are you kidding me!!!
EB
 