lpetrich
Contributor
Artificial intelligence researcher Rodney Brooks last year wrote "The Seven Deadly Sins of AI Predictions" for MIT Technology Review. His seven sins:
1. Overestimating and underestimating. Brooks quotes Roy Amara's law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
2. Imagining magic. Brooks cites Arthur C. Clarke's three laws:
- When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
3. Performance versus competence. Present-day robot and AI skills are narrow: some AI software may learn to recognize frisbees in pictures, but that learning will tell it little else about those objects.
4. Suitcase words. This is what Marvin Minsky called words that bundle a variety of meanings, like "learning" with all its different kinds. Machine learning is still very limited by human standards, so calling it learning may be misleading.
5. Exponentials. Moore's law of computer chips is the classic example: over much of the last half-century, chips' clock speeds rose roughly exponentially and their feature sizes shrank roughly exponentially. But both trends have slowed over the last decade and may be leveling off, as has happened with other maturing technologies. So one should not expect automatic, indefinite exponential progress.
6. Hollywood scenarios. Brooks writes: "The plot for many Hollywood science fiction movies is that the world is just as it is today, except for one new twist." He mentions how "The Bicentennial Man" has someone read a newspaper. Not a tablet computer's display, not a podcast, not a direct-to-brain interface: a physical, paper, dead-tree newspaper. One can find such naivete in lots of other science fiction, both visual and printed.
"It turns out that many AI researchers and AI pundits, especially those pessimists who indulge in predictions about AI getting out of control and killing people, are similarly imagination-challenged." He then goes on to argue that super AI will have lots of intermediate predecessors, and that humanity will adapt to their presence as they are developed and put into use.
7. Speed of deployment. Software can be deployed very fast, because once written it is cheap and easy to copy and distribute. Hardware is far more expensive to build, and old hardware persists much longer than old software. It is therefore likely to take a long time before most cars are self-driving, because many people will keep driving older cars.
A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.
Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.
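The deployment point lends itself to a back-of-the-envelope sketch. The numbers below are my assumptions, not Brooks's: suppose cars last about 15 years on average, old cars retire uniformly, and from year zero every new car sold were self-driving.

```python
# Hypothetical fleet-turnover sketch (assumptions are mine, not Brooks's):
# ~15-year average vehicle lifetime, uniform retirement of old cars,
# and 100% of NEW sales self-driving starting at year 0.
def self_driving_share(years, fleet_lifetime=15.0):
    """Fraction of cars on the road that are self-driving
    after `years` of all-self-driving new sales."""
    return min(years / fleet_lifetime, 1.0)

# Even under this wildly optimistic assumption, most cars on the
# road remain human-driven for the first seven or eight years.
for y in (5, 10, 15):
    print(f"year {y}: {self_driving_share(y):.0%} self-driving")
```

Under these assumptions the self-driving share only crosses 50% around year 7.5, which is the spirit of Brooks's point: hardware in the field turns over slowly no matter how fast the software improves.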
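The exponentials point (sin 5) is also easy to make concrete. A minimal sketch of the idealized curve, assuming the commonly stated doubling period of two years (a convention, not a measurement):

```python
# Illustrative only: the idealized exponential behind Moore's law,
# assuming a doubling every two years (a common statement of the
# law, not measured data).
def growth_factor(years, doubling_period=2.0):
    """Total growth factor after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

# Steady doubling compounds dramatically: 32x in a decade,
# 1024x in two decades. Clock speeds stopped tracking any such
# curve in the mid-2000s, which is exactly Brooks's warning:
# technological exponentials eventually flatten.
print(growth_factor(10))   # 32.0
print(growth_factor(20))   # 1024.0
```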