
Artificial intelligence: more difficult than one might expect?

lpetrich

Artificial intelligence researcher Rodney Brooks wrote last year: The Seven Deadly Sins of AI Predictions - MIT Technology Review

1. Overestimating and underestimating, quoting Roy Amara's law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."

2. Imagining magic, noting Arthur C. Clarke's three laws:
  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.
RB is rather skeptical about the second sentence in the first law, and I share his skepticism. He then got into the magic issue, asking what present-day technology would have looked like to someone a few centuries back, someone like Isaac Newton. Magicality, he concludes, makes for lack of falsifiability. "If it is far enough away from the technology we have and understand today, then we do not know its limitations. And if it becomes indistinguishable from magic, anything one says about it is no longer falsifiable."

3. Performance versus competence, about the narrowness of present-day robot and AI skills. For instance, some AI software may learn to recognize frisbees in pictures, but that learning will not tell it much about those objects.

4. Suitcase words is what Marvin Minsky called words that carry a variety of meanings, sometimes variations on a few overall meanings. Words like "learning", which covers many different kinds of learning. Machine learning is still very limited by human standards, so calling it learning may be misleading.

5. Exponentials, like Moore's law of computer-chip progress. Over much of the last half-century, computer chips' clock speeds have increased roughly exponentially and their feature sizes have decreased roughly exponentially. However, this increase and decrease have slowed over the last decade, and they may be leveling off. Similar things have happened as other technologies matured, so one should not expect automatic super progress. (A small numeric sketch of exponential versus leveling-off growth appears after this list of points.)

6. Hollywood scenarios, stating that "The plot for many Hollywood science fiction movies is that the world is just as it is today, except for one new twist." He then mentions how "The Bicentennial Man" has someone read a newspaper. He did not read a tablet computer's display, he did not listen to a podcast, and he did not use a direct-to-brain interface. He read a physical, paper, dead-tree newspaper. One can find such naivete in plenty of other science fiction, both visual and printed.

"It turns out that many AI researchers and AI pundits, especially those pessimists who indulge in predictions about AI getting out of control and killing people, are similarly imagination-challenged." He then goes on to argue that super AI will have lots of intermediate predecessors, and that humanity will adapt to their presence as they are developed and put into use.

7. Speed of deployment. Software can be deployed very quickly, because it's often cheap and easy to copy and distribute once it is written. But hardware is much more expensive to build, and old hardware may persist much longer than old software. Thus it is likely to take a long time before most cars are self-driving, because many people will continue to drive older models of car.

A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.

Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.
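
To put a number on the "exponentials eventually level off" point (point 5 above), here is a minimal sketch, with made-up illustrative figures rather than real chip data, comparing a pure exponential with a logistic curve that starts out looking exponential but saturates at a physical ceiling:

```python
# Toy comparison of exponential growth vs. logistic (saturating) growth.
# The numbers are illustrative only, not actual transistor or clock-speed data.
import math

DOUBLING_PERIOD = 2.0        # "Moore's law"-style doubling every 2 years (assumed)
CEILING = 1000.0             # physical limit the logistic curve levels off at (assumed)
START = 1.0

def exponential(t):
    """Pure exponential: doubles every DOUBLING_PERIOD years, forever."""
    return START * 2 ** (t / DOUBLING_PERIOD)

def logistic(t):
    """Logistic: grows almost exponentially at first, then flattens near CEILING."""
    k = math.log(2) / DOUBLING_PERIOD          # same initial growth rate
    return CEILING / (1 + (CEILING / START - 1) * math.exp(-k * t))

for year in (0, 10, 20, 30, 40):
    print(f"year {year:2d}: exponential {exponential(year):12.0f}   logistic {logistic(year):8.0f}")
# Early on the two curves are nearly identical; after a few decades the
# exponential keeps exploding while the logistic one has leveled off.
```

For the first decade or two the two curves are hard to tell apart, which is exactly why extrapolating an exponential is so tempting.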
 
The 8th deadly sin would be forgetting that it's called "artificial" for a good reason.
Yes, AI in the year 2068 might be able to fool someone born in the 20th century, but it won't fool the humans living in 2068 who, by that time, will (still) be working on ways to make AI even less "artificial".

"Any sufficiently advanced technology is indistinguishable from magic."
Skeptics would do well to bear this in mind when pre-judging the Bible's supernatural claims.
 
The 8th deadly sin would be forgetting that it's called "artificial" for a good reason.
Yes, AI in the year 2068 might be able to fool someone born in the 20th century, but it won't fool the humans living in 2068 who, by that time, will (still) be working on ways to make AI even less "artificial".

"Any sufficiently advanced technology is indistinguishable from magic."
Skeptics would do well to bear this in mind when pre-judging the Bible's supernatural claims.

FFS. It's not "Any magical claim is attributable to advanced technology". :rolleyes:

What is it with theists and getting things backwards??

To the OP, I doubt that artificial intelligence comparable to human intelligence will ever be commonplace, simply because there is no call for it - we can make new human-type intelligences very easily, and frankly most of them are not all that useful.

AI in the real world I would expect to surpass humans in certain ways, while being far inferior to humans in others - we tend to build specialist systems, and humans are necessarily generalists. It's difficult to see a demand for fully generalist AIs - you might want an artificial cardiac surgeon which can learn not only from studying hearts, and not only from studying biological systems, but also from apparently unrelated fields - even outside biology. But you probably won't need it to be able to hold its own in a pub conversation about the best football teams or rock bands.
 
I've read some interesting analysis about AI and self-driving cars lately. The gist is that the technology so far gets us about 95% of the way there, but without that final 5% toward 100% autonomy the technology is useless, or at least far less useful than we wanted it to be.

So yeah, I think some of the hype about AI these days is overblown. So far we've picked the low-hanging fruit, but some of the more complex stuff is elusive, at least in the short term.
 
Artificial intelligence has various connotations.

To me it is checking a large number of rules and extrapolations quickly.

A computer chess engine, to me, is AI. High-end systems can adapt to a human player's strategies over time. It is a rule-based system, and there are limits to rule-based systems.
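
As a toy illustration of "checking a large number of rules and extrapolations quickly": the brute-force core of a game engine is game-tree search. A chess engine is far too big to sketch here, so here is the same minimax idea on the much simpler game of Nim (my own example, not taken from any real engine):

```python
# Minimal game-tree search (minimax), the brute-force core that chess engines
# refine with evaluation functions, pruning, and opening/endgame knowledge.
# Toy game: Nim. One pile of stones; each turn a player removes 1-3 stones;
# whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (score, move): score is +1 if the player to move can force a win,
    -1 otherwise; move is how many stones to take."""
    if stones == 0:
        return -1, None                    # no move left: the previous player won
    best = (-1, 1)
    for take in (1, 2, 3):
        if take <= stones:
            opponent_score, _ = best_move(stones - take)
            my_score = -opponent_score     # good for the opponent is bad for me
            if my_score > best[0]:
                best = (my_score, take)
    return best

for pile in range(1, 9):
    score, move = best_move(pile)
    print(f"pile {pile}: {'win' if score > 0 else 'lose'} (best: take {move})")
# Piles that are multiples of 4 are losses for the player to move,
# which matches the known theory of this Nim variant.
```

Real chess engines add evaluation functions, pruning, and opening and endgame knowledge on top of this, but the skeleton is the same exhaustive look-ahead.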

The original idea in the 70s-80s was reducing acquired human expertise in a field to a set of rules.

Artificial Consciousness implies creative thinking analogous to a human's. An adaptive neural net capable of learning from experience would seem to be the next step.

Back in the 80s, given the AI hype, I expected that engineering would have become a dead occupation by now. It happened to an extent: being an electrical engineer today, with all the CAD tools with embedded AI, does not require the kind of knowledge and experience it once did.

Using a GUI I can set up complex thermal, mechanical, and math problems without a lot of expertise. Mathcad, with its whiteboard-like GUI and symbolic math palette, was transformational in engineering back in the 80s. Then came Matlab and others.
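
For what it's worth, much of what those tools automate behind the GUI is setting up and solving systems of equations. A minimal sketch of the kind of problem they hide the bookkeeping for, done by hand with numpy (my own toy example, not Mathcad or Matlab code): steady-state temperature along a rod with fixed end temperatures.

```python
# Toy 1D steady-state heat conduction: d2T/dx2 = 0 on a rod with fixed ends.
# Discretized with finite differences, it becomes a small linear system A @ T = b.
import numpy as np

N = 11                           # number of grid points along the rod (assumed)
T_left, T_right = 100.0, 20.0    # boundary temperatures in deg C (assumed)

A = np.zeros((N, N))
b = np.zeros(N)

A[0, 0] = 1.0; b[0] = T_left         # fixed temperature at the left end
A[-1, -1] = 1.0; b[-1] = T_right     # fixed temperature at the right end
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0   # T[i-1] - 2*T[i] + T[i+1] = 0

T = np.linalg.solve(A, b)
print(np.round(T, 1))
# With no internal heat source the answer is a straight line from 100 to 20,
# which is an easy sanity check on the setup.
```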
 
The difficulty of artificial intelligence has led to a boom-and-bust cycle in its funding and support:  AI winters and what might be called AI summers. These articles discuss what went on in them:
Chronology:
  • Summer 1: 1956 - 1974
  • Winter 1: 1974 - 1980
  • Summer 2: 1980 - 1987
  • Winter 2: 1987 - 1993
  • Summer 3: 1993 - present

I've come across several opinions:

The first two AI summers featured mainly top-down approaches: finding explicit inference rules. That seemed like a plausible approach, but it had its limits. It takes a *lot* of work to construct the rules for a good expert system, for instance. The present one has lots of bottom-up, statistical work, and that does seem to get results. However, it requires large datasets to train on, and a lot of CPU cycles to do so -- neither of which was very feasible until about a decade ago.

This approach may eventually reach its limits, but as it does so, we may have an idea of how to proceed further.
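
A minimal sketch of that contrast, on a made-up toy task and using only numpy (an illustration, not anyone's production code): the top-down way is an expert writing the rule by hand; the bottom-up way is fitting the same kind of rule from labelled data.

```python
# Top-down vs. bottom-up on a toy task: decide whether a point (x, y) is "positive".
import numpy as np

rng = np.random.default_rng(0)

# --- Top-down: an expert writes the rule by hand -------------------------
def expert_rule(x, y):
    # Hand-crafted threshold; only as good as the expert's guess at the pattern.
    return x + y > 1.0

# --- Bottom-up: learn a linear rule from labelled examples ---------------
X = rng.uniform(0, 1, size=(500, 2))
labels = (1.2 * X[:, 0] + 0.8 * X[:, 1] > 1.0).astype(float)   # the "true" pattern

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):                      # logistic-regression-style gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - labels)) / len(X)
    b -= lr * (p - labels).mean()

learned_acc = ((X @ w + b > 0).astype(float) == labels).mean()
expert_acc = (expert_rule(X[:, 0], X[:, 1]).astype(float) == labels).mean()
print(f"hand-written rule accuracy: {expert_acc:.2f}")
print(f"learned rule accuracy:      {learned_acc:.2f}")
# The learned rule needed data and compute but no expert; the hand-written
# rule needed an expert who happened to guess the pattern roughly right.
```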
 
I've read some interesting analysis about AI and self-driving cars lately. The gist is that the technology so far gets us about 95% of the way there, but without that final 5% toward 100% autonomy the technology is useless, or at least far less useful than we wanted it to be.

So yeah, I think some of the hype about AI these days is overblown. So far we've picked the low-hanging fruit, but some of the more complex stuff is elusive, at least in the short term.
I think the self-driving companies are competing hard for investment dollars. This explains the mind-bogglingly wasteful approach they take. They all feel the need to put these shiny prototypes on the road and count how many miles they managed to get, when it's quite obvious that it is a complete waste of money. Well, it helps them get investors, but from the point of view of actually getting there it's a waste. A road-worthy self-driving car AI needs to be able to recognize objects, all objects (people, animals, toys, cardboard boxes, refrigerators, plastic bottles, grass, broken bricks, everything).
Once it does that, integrating it into a car is a simple mechanical task. Right now, these systems are only good as an emergency backup in case the driver is distracted by a smartphone.
And these fucking lidars, what's the deal with those? Everyone knows that they are nothing but a useless distraction.
 
Self-driving cars are a good example of the problem.

You start defining a few rules, then add contingencies across multiple rules, and add more rules to try to account for all possibilities.
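
A toy sketch of why that blows up (my own illustration, nothing to do with any real driving software): if behaviour has to be specified for every combination of conditions, each new category of object or condition multiplies the number of cases the hand-written rules must cover.

```python
# Toy illustration of rule explosion: if behaviour must be specified for every
# combination of conditions, the number of hand-written cases grows multiplicatively.
from itertools import product

objects = ["pedestrian", "cyclist", "car", "cardboard box", "plastic bag", "dog"]
weather = ["clear", "rain", "fog", "snow"]
lighting = ["day", "dusk", "night"]
object_motion = ["stationary", "moving toward", "moving away", "erratic"]

combinations = list(product(objects, weather, lighting, object_motion))
print(len(combinations), "distinct situations to cover")     # 6*4*3*4 = 288

# Add just one more object type and one more weather condition:
print((len(objects) + 1) * (len(weather) + 1) * len(lighting) * len(object_motion))
# 7*5*3*4 = 420 -- each new category multiplies the cases, it does not merely add.
```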

A seemingly simple task done by our brains becomes a difficult task to implement in software by rules. In manufacturing, I found it is often impossible to formally describe a complex task such that anyone without skills or experience can do it. In the end it requires the ability of the brain to create a working synthesis to accomplish the task.

It requires human creativity without trying to define what that is.
 
The 8th deadly sin would be forgetting that it's called "artificial" for a good reason.
Yes, AI in the year 2068 might be able to fool someone born in the 20th century, but it won't fool the humans living in 2068 who, by that time, will (still) be working on ways to make AI even less "artificial".

"Any sufficiently advanced technology is indistinguishable from magic."
Skeptics would do well to bear this in mind when pre-judging the Bible's supernatural claims.

It is called the Turing Test. If, in interacting with a system, you cannot tell the difference between a human and the system, then the system is for all practical purposes human.

It has been used in experiments.

Computers and software are far better than humans when it comes to data mining large databases to seek out terrorist links. Computers have long since surpassed humans in tasks that once required years of human experience.
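
Roughly what that kind of automated link analysis looks like at its simplest (entirely made-up data, and a deliberately simple algorithm, plain breadth-first search over a contact graph):

```python
# Toy link analysis: given a contact graph and a flagged account, find everyone
# reachable within two hops. Real systems are far more elaborate, but the point
# stands: a machine walks millions of such links tirelessly and instantly.
from collections import deque

contacts = {                      # made-up adjacency list: who has contacted whom
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "E"],
    "D": ["B"],
    "E": ["C", "F"],
    "F": ["E"],
}

def within_hops(graph, start, max_hops):
    """Breadth-first search: return {node: hop distance} for nodes within max_hops."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen[neighbour] = seen[node] + 1
                queue.append(neighbour)
    return seen

print(within_hops(contacts, "A", 2))   # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 2}
```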
 
You don't seem to want to acknowledge my point about the problem of the moving target.
Turing Tests have to get better and better over time. Why?
 
You don't seem to want to acknowledge my point about the problem of the moving target.
Turing Tests have to get better and better over time. Why?

Why ask why? How do I know you are not an AI experiment?
 
The 8th deadly sin would be forgetting that it's called "artificial" for a good reason.
Yes, AI in the year 2068 might be able to fool someone born in the 20th century, but it won't fool the humans living in 2068 who, by that time, will (still) be working on ways to make AI even less "artificial".

"Any sufficiently advanced technology is indistinguishable from magic."
Skeptics would do well to bear this in mind when pre-judging the Bible's supernatural claims.
Actually, I would be skeptical about that claim. First, what does "magic" or "supernatural" even mean?

I haven't seen a single coherent stipulative definition (of either term) that does the philosophical work intended by those who posit it. If you have such definitions, please let me know.

Now, if it's not a stipulative definition, maybe it's ostensive, so it's a matter of pointing at examples (real or hypothetical) and saying 'that's magic', 'that's not magic', 'that's supernatural', etc. But if we go by that, I have no good reason at all to even suspect that any advanced technology would match the events we see in fiction or religion that are usually called 'magic' or 'supernatural'. But then again, that allows me to distinguish them. Yes, sure, Star Trek stuff is not called 'magic', but 'technology'. However, I have no good reason to suspect that it is nomologically possible for technology to, say, teleport people like that, or have artificial gravity like that, or warp drive, etc.

So, at least as far as I can tell, actual technology will be different from usual claims of magic or supernatural involvement. Now you might think that perhaps it matches some other magical or supernatural stuff. But then, I would require more information about what you mean by 'supernatural' or 'magic'.
 
The 8th deadly sin would be forgetting that it's called "artificial" for a good reason.
Yes, AI in the year 2068 might be able to fool someone born in the 20th century, but it won't fool the humans living in 2068 who, by that time, will (still) be working on ways to make AI even less "artificial".

Indeed, they would need to build self-replicating and self-repairing properties into these AIs. The perfect solution to wear and tear would be, erm... biology.

Full circle... back to (far-superior) biological basics. :D

"Any sufficiently advanced technology is indistinguishable from magic."
Skeptics would do well to bear this in mind when pre-judging the bibles supernatural claims.

Non-religious ponderers have thought of ideas and suggestions such as gods = aliens, Jesus = spaceman, etc.

Q, the immortal in Star Trek, could be mistaken for Zeus, Loki, or Satan by a mere mortal.
 
If god created humans and we are made in god's image then we must be god's AI project.
 
Learner said:
Q, the immortal in Star Trek, could be mistaken for Zeus, Loki, or Satan by a mere mortal.
Q can die, but that aside, there is no good reason to think that in the actual world there is anything like Q. Now you might say it's metaphysically possible, but then, the question of whether such hypothetical beings in non-actual possible worlds (whether Satan or Q) are 'supernatural' seems to raise problems for the coherence of the word 'supernatural'. For example, if someone in the Star Trek universe said that Q is supernatural, and Q denied it, then one would reasonably wonder what it is that they're even debating about.
 
The 8th deadly sin would be forgetting that it's called "artificial" for a good reason.
Yes, AI in the year 2068 might be able to fool someone born in the 20th century, but it won't fool the humans living in 2068 who, by that time, will (still) be working on ways to make AI even less "artificial".

"Any sufficiently advanced technology is indistinguishable from magic."
Skeptics would do well to bear this in mind when pre-judging the Bible's supernatural claims.

Completely ignoring the difference that AI is supposed to take decades to develop (and obeys physical laws), and requires resources, whereas supernatural events could supposedly occur at any moment, and do not require any resources.
 
Heehee, we skeptics do keep that pearl of wisdom in mind.

It is directed at theists.

If Jesus appears on your lawn, is it the son of a deity, or is it a hologram or a Star Trek transporter? Or is it simply a hallucination? How would you know?
 
A lesser-known area of intense research is AS, Artificial Stupidity.
 