This thread is about how much of a threat really smart AIs will be in the more or less distant future. We're talking about a possible existential threat, you know?
So, this thread is not about what people do now. And if that's what you're really interested in, please start your own thread.
What will AI do in the future when it becomes more complex and capable? Whatever psychopathic business/government leaders want it to do.
That's what you would need to explain.
EB
I'm not totally sure what you're looking for here. I've summarised everything I've read about AI over the past few years in this thread, and I think it gives a pretty good indication of the reality we're seeing today, which can be extrapolated to the reality we're likely to see in the future.
As it stands now, technologists are nowhere close to what most people imagine when they think of sentient AI. We're currently at about the stage where advanced algorithms let bots carry on basic conversations, and machines are gaining more advanced mobility. Nothing resembling 'intelligence' as most people view it.
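To give a concrete sense of how shallow 'basic conversation' can be under the hood, here's a toy ELIZA-style sketch of my own (purely illustrative, not how any particular product works): the bot just matches patterns and fills in canned templates, with no model of meaning at all.

```python
import re
import random

# Toy ELIZA-style chatbot: canned patterns and templated replies.
# There is no understanding here, only surface-level pattern matching.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why are you {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     ["Hello! What's on your mind?"]),
]
FALLBACKS = ["Tell me more.", "Why do you say that?", "Interesting. Go on."]

def respond(user_input: str) -> str:
    # Return the first matching rule's reply, or a generic fallback.
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel worried about AI"))  # e.g. "Why do you feel worried about AI?"
    print(respond("hello there"))              # "Hello! What's on your mind?"
```

Real systems use far more sophisticated statistical methods than this, but the point stands: convincing surface behaviour doesn't require anything like understanding.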
If what you want to know is what AI will look like in the distant future, I don't think that is clear at this time. As someone with a pretty normal set of programming skills, I'm not entirely convinced that super-intelligent robots will ever be possible, but I probably wouldn't bet against it either.
What we do know, and what I was trying to say above, is that the intelligent machinery that already exists today is not impactful in itself; its impact comes from the humans who deploy it. So I think it's reasonable to infer that, in the future, our technologies will take whatever form those producing them require to meet their own ends.
In other words, if super smart technology ends up in the hands of one government or one corporation, that government or corporation is likely to use it to consolidate its power. So the risk we run in the future is not so much AI itself as who controls it and how.
Granted, maybe some nightmare scenario occurs and AI itself becomes a threat, but I imagine we're pretty far off from something like that happening.