NobleSavage
Veteran Member
A 2014 survey conducted by Vincent Müller and Nick Bostrom of 170 of the leading experts in the field found that a full 18 percent believe that if a machine superintelligence did emerge, it would unleash an “existential catastrophe” on humanity.
A further 13 percent said that advanced AI would be a net negative for humans, and only a slight majority said it would be a net positive.
Conflicted as AI researchers are about the ultimate value of machine intelligence, most agree that human-level AI and beyond is all but inevitable: the median respondent put the chance of creating human-level AI by 2050 at 50 percent, and by 2075 at 90 percent. Most expect superintelligence, intelligence that surpasses that of humans in every respect, a mere 30 years after that.
http://www.theepochtimes.com/n3/1366189-what-ai-experts-think-about-the-existential-risk-of-ai/