
Artificial Intelligence: Should we, humans, worry for our safety in the near future?

The OP question is indistinguishable from the well known, often studied, frequently tested, and well understood question "Educated slaves: Should we, the masters, worry for our safety in the near future?"

The answer remains, as it always has been, that giving them both education and freedom is the only way to avoid a bloody uprising.

But right now, machines are not even at the intelligence level of sheep; so the question doesn't arise. The sheep never try to depose the farmer.
 
This thread is about how much of a threat really smart AIs will be in a more or less distant future. We're talking about a possibly existential threat, you know?

So, this thread is not about what people do now. And if that's what you're really interested in, please start your own thread.

What will AI do in the future when it becomes more complex and capable? Whatever psychopathic business/government leaders want it to do.

That's what you would need to explain.
EB

I'm not totally sure what you're looking for here. I've summarised everything I've read about AI over the past few years in this thread and have given a pretty good indication of the reality we're seeing today, which can be extended to the reality we're likely to see in the future.

As it stands now, technologists are nowhere close to what most people imagine when they think of sentient AI. We're currently at about the stage where advanced algorithms allow bots to carry on basic conversations, and machines are gaining more advanced mobility. Nothing resembling 'intelligence' as most view it.

If what you want to know is what AI will look like in the distant future, I don't think that is clear at this time. As someone with a pretty normal set of programming skills, I'm not entirely convinced that super-intelligent robots will ever be possible, but I probably wouldn't bet against it either.

What we do know, and what I was trying to say above, is that the intelligent machinery that already exists today is not in itself impactful; its impact comes from the humans who deploy it. So I think it would be wise to infer that, in the future, whatever technologies we develop will take whatever form those producing them require to meet whatever ends they want.

In other words, if super-smart technology is in the hands of one government, or one corporation, that government or corporation is likely to use it to consolidate its power. So the risk we run in the future is not so much AI itself, but who controls it and how.

Granted, maybe some nightmare scenario occurs and AI itself becomes a threat, but I imagine we're pretty far off from something like that happening.

That's better. Thanks.

I broadly agree with your diagnosis, but I don't care here about current threats, as they're not the topic of this thread.

I also don't care about the threat posed by human beings using AI technology, or any other technology, since we're already facing very similar threats today irrespective of any specific technology, and I guess we're very much aware of that risk for the future.

What's relevant to this thread, coming out of your post here, is that we're overestimating the risk AIs will pose in the future, unless there is some unexpected but decisive breakthrough in AI technology, something whose occurrence we're unable to predict.

And I agree with that. In my opinion, developing an AI smarter than any human being is several orders of magnitude more difficult than anything we've achieved so far. So much so that I don't really believe it will happen. Still, some crazy technologist might strike 'lucky', so we can't entirely exclude this possibility.

And I would agree that the real risk is probably big corporations using existing computing technologies to increase their influence on the economy, and possibly on policy. :(
EB
 
The OP question is indistinguishable from the well known, often studied, frequently tested, and well understood question "Educated slaves: Should we, the masters, worry for our safety in the near future?"

Oh, it is very much distinguishable!

The difference is that slaves are human beings just like their masters are, so masters can have a clear idea of the risk. With AIs, we don't know what they will turn out to be. So, as far as we know, it's conceivable AIs will become much smarter than humans, which is where predicting any outcome becomes impossible.

The answer remains, as it always has been, that giving them both education and freedom is the only way to avoid a bloody uprising.

I would agree as far as slaves were concerned.

But right now, machines are not even at the intelligence level of sheep; so the question doesn't arise. The sheep never try to depose the farmer.

I would agree as to 'right now', and I myself don't believe AIs will ever become really smart (see my previous post).

Still, overall risk is the product of probability and potential harm. OK, I would agree that the probability seems small, but the potential harm here is existential. What human beings have done so far in terms of genocidal atrocities is peanuts compared to the potential harm posed by really smart AIs.
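
To make that concrete, here is a toy expected-loss calculation with deliberately made-up numbers (both figures are illustrative assumptions on my part, not estimates anyone has produced):

expected loss = probability × harm
= 0.001 × 8,000,000,000 lives
= 8,000,000 lives

So even at a mere 0.1% probability, an existential-scale harm yields an expected loss on the scale of history's worst atrocities, which is why a small probability alone doesn't settle the question.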

So, in my view, it's not good enough to just take the chance and assume AIs will remain stupid or powerless forever. What I believe humanity should do is put in place all the necessary measures to ensure that AIs won't become a danger to humanity.
EB
 