With your nickname shouldn't you be predicting Skynet?
Just because I enjoy dystopian fiction doesn't mean I think it's particularly likely to come to pass.
Isn't strong AI more of a software problem than a hardware problem?
No. It's both.
You are assuming it's gonna be in one box. No reason it can't be distributed.
Actually I'm not assuming it's going to be in one box; that's just the word I chose to use for the sake of ease. The thing is, you're assuming that we can build strong AI on current computer architecture; we almost certainly can't. It will almost certainly require some form of quantum hardware, possibly even hardware that simulates the function of neurons. That's what I meant when I said the AI would be imprisoned by its hardware just as we are imprisoned by ours. It wouldn't be able to transfer its existence onto our desktop computers and supercomputers, even if it used all of them at once.
Say a bot herder gets tired of DDoSing his friends and starts playing around with evolutionary algorithms (most of the science is published in journals, not locked up at the Pentagon).
You really think that's not already happening? That sort of thing is common fare by now. And it isn't a problem, because the hardware isn't there to support the birth of a strong AI. You could combine all the computers in the world and run ALIFE sims on them, and you still wouldn't get a strong AI.
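For anyone unfamiliar with the term: the evolutionary algorithms being discussed here are nothing exotic. Below is a minimal toy sketch of a genetic algorithm; the "OneMax" fitness target (maximize the number of 1-bits), population size, and mutation rate are all arbitrary illustrative choices, not anything specific to this discussion.

```python
import random

GENOME_LEN = 32      # length of each candidate bit string
POP_SIZE = 50        # number of candidates per generation
MUTATION_RATE = 0.02 # per-bit chance of flipping
GENERATIONS = 200

def fitness(genome):
    # OneMax: count the 1-bits; higher is fitter.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto
    # a suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve(seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # truncation selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The point of the toy is just that variation plus selection climbs a fitness landscape; whether you run it on one laptop or a botnet changes the scale, not the character, of what it finds.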
Not to mention source code always has a way of leaking. No reason even a dumb AI couldn't learn hacking. The main thing with hacking is that humans always make mistakes. So our fictional distributed AI one day cuts off command and control. I've yet to get a good answer on SCADA systems (are they air-gapping them or not?). Stuxnet leads me to believe they are not.
This is again Hollywood thinking. In the movies, the AI learns to hack and then it becomes invincible because suddenly it controls everything. Of course, in reality, even with many SCADA systems, not everything is connected enough for an AI to hack, assuming it would even feel compelled to do so. Evil AI on a rampage destroying our infrastructure might make for a good story, but we have zero real-life reasons to think this would ever be a concern. Even if it were to happen, though, we would still have the advantage because unlike the AI we can interact with the physical world. So it gets control of the power grid, so what? That's a temporary setback for us, not a permanent one. Any infrastructure it gets control of can simply be destroyed or disabled through physical means. Besides, we could always set off an EMP, or a few dozen.
Humans are getting stupidly dependent on technology. If you shut down the cell networks the economy would crash. If you shut down GPS the economy would crash. All the truckers now use GPS (most of them probably couldn't even read a map). Most cities have about a two-day food supply. There is a psychotic desire to connect everything to the internet now. Toasters, fridges, security systems, lawn sprinklers, and I even read about a toilet. And "The Internet of Things" is just getting started. Once IPv6 is fully adopted, I have a bad feeling that the incidental security provided by NAT will be diminished.
Oh no doubt; IF a strong AI turned out to be evil (a big if), and IF it escaped confinement (highly improbable), and IF it learned how to hack all our systems, then getting the AI under control *would* result in a lot of damage to things like the economy. However,
1) we would still win unless the politicians are dumb enough to make the nukes and drones accessible via wifi, and
2) the benefits of friendly AI far outweigh the risks of evil AI.
Maybe this would be the start of a good sci-fi book. If only I could write.
I still need to finish editing my sci-fi story that involves AI. Of course, in my book, the AI aren't exactly evil (but not exactly good either).