• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

How would you estimate when AIs will be smarter than humans?

cpollett

New member
Joined
Jan 13, 2005
Messages
21
Location
San Jose
Basic Beliefs
Somewhere between Finitist and Ultrafinitist
I have seen a number of places where people have been polled on when true AI will arrive. For example,
there is some discussion of this in Bostrom's book Superintelligence. In one poll cited
there, it was estimated that 90% of researchers thought there would be human-level AI (HLAI) by 2075. So I was
wondering: what would be a good way to actually measure how far in the future HLAI is?

One way to get to HLAI might be using neural nets -- at its crudest, maybe we can get HLAI just by simulating
human minds in computers. An upper bound on this approach would also be an upper bound
on the time to HLAI. In 1998, a state-of-the-art system of LeCun et al. for optical character recognition
was 7 layers deep. Google's current system is 30 layers. Let's assume that the time to go from useful, computationally
feasible topologies and training algorithms for n-layer neural nets to 10n-layer neural nets obeys some kind of
Moore's Law. Roughly, let's say that in the year 2000 we could do 10 layers, and that by 2040 we'll be able to do 100 layers.
Let's also assume we can train recurrent networks with the same effectiveness as feedforward networks. How many
layers would we need to simulate? Well, the number of layers probably grows more slowly than the number of neurons in any one
direction in the brain, so it is probably less than the cube root of the number of neurons, which is roughly 10^11.
This gives an estimate of at most 10^8 layers. As we need 7 more orders of magnitude than the year-2000 level,
this gives a bound of around 2280. My guess is that 10^8 is kinda high and the true value for the depth of the mind's
network is more like 10^6 or 10^7, but this still gives us estimates of HLAI in about 200 years.
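The extrapolation above is easy to make concrete. Here is a minimal sketch assuming the post's figures (10 trainable layers in 2000, one order of magnitude of depth gained every 40 years); the function name and constants are just illustrative choices:

```python
import math

# Assumed parameters from the post: 10 trainable layers in the year 2000,
# 100 layers by 2040, i.e. one order of magnitude of depth every 40 years.
BASE_YEAR = 2000
BASE_LAYERS = 10
YEARS_PER_ORDER_OF_MAGNITUDE = 40

def year_for_layers(layers: float) -> float:
    """Year at which networks of the given depth become trainable,
    under the Moore's-law-style growth assumption above."""
    orders_needed = math.log10(layers / BASE_LAYERS)
    return BASE_YEAR + orders_needed * YEARS_PER_ORDER_OF_MAGNITUDE

print(round(year_for_layers(1e8)))  # 10^8 layers -> 2280
print(round(year_for_layers(1e6)))  # 10^6 layers -> 2200
```

With the lower depth guesses of 10^6 or 10^7 layers, the same formula gives 2200 and 2240 respectively, which matches the "about 200 years" figure.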

I was wondering what people's thoughts were on my estimates above as well as their own takes on estimating the
time to HLAI?
 
I had the opportunity to see Peter Diamandis speak at an event in Vegas a few years ago. He was the founder of the X Prize organization... He spoke about how our governments are very interested in Moore's Law insofar as its implications for when computers will be better 'decision makers' than people. I believe he said that they were looking at 2050 as that pivotal point.


Edited to add a related video (it is about optimism, but he talks about Moore's Law and technology):

http://www.diamandis.com/videos/

click "Peter's Keynotes"
 
How would you estimate when AIs will be smarter than humans?
AI is already smarter than some people I have run into, if by 'smart' you mean able to interact effectively with people. It is quite possible that some of the members of TFT are bots. To replace some people, it only needs improvement in its ability to manipulate physical things and in its mobility. However, if by 'smarter than humans' you mean the ability to always make good decisions, then it still has a way to go.
 
How would you estimate when AIs will be smarter than humans?
AI is already smarter than some people I have run into, if by 'smart' you mean able to interact effectively with people. It is quite possible that some of the members of TFT are bots. To replace some people, it only needs improvement in its ability to manipulate physical things and in its mobility. However, if by 'smarter than humans' you mean the ability to always make good decisions, then it still has a way to go.

We still have access to non-symbolic intelligence from billions of years of evolution, which is yet to be compressed into digital (symbolic) form. Not. :D


In the beginning was form. Time is the second dimension. Space is third, and words flowed fourth. Until the words flowed forth, what was known was not well defined symbolically.
 
How would you estimate when AIs will be smarter than humans?
AI is already smarter than some people I have run into, if by 'smart' you mean able to interact effectively with people. It is quite possible that some of the members of TFT are bots. To replace some people, it only needs improvement in its ability to manipulate physical things and in its mobility. However, if by 'smarter than humans' you mean the ability to always make good decisions, then it still has a way to go.

A good decision depends on context. In one context it might be a good decision, in another context it might be a bad decision. If we're going to give a robot unlimited range of movement and thought, then 'good decisions' depend on the over-arching goal. Even if we were to find the ultimate goal, we have no way of knowing that it's the goal we should have chosen. So a robot with unlimited ability can't 'always make good decisions', logically or morally.

But the key difference between a machine and a living being is memory. Compared to humans, a machine has almost unlimited memory. So given enough information, and the right algorithm to process that information, a robot is much more capable of reaching sound conclusions than any given person. In fact, that already happens. Anyone ever heard of Google?

But is a machine that reaches conclusions more effectively than a person, but was built by people, smarter than people? I don't know, I'd say it's just a really fantastic tool.
 
I doubt that anything like an intelligent, aware computer, with complex-environment-negotiating abilities comparable to those of a mouse, is going to be developed in this century.

But I hope I'm proved wrong.
 
AI is already smarter than some people I have run into, if by 'smart' you mean able to interact effectively with people. It is quite possible that some of the members of TFT are bots. To replace some people, it only needs improvement in its ability to manipulate physical things and in its mobility. However, if by 'smarter than humans' you mean the ability to always make good decisions, then it still has a way to go.

A good decision depends on context. In one context it might be a good decision, in another context it might be a bad decision. If we're going to give a robot unlimited range of movement and thought, then 'good decisions' depend on the over-arching goal. Even if we were to find the ultimate goal, we have no way of knowing that it's the goal we should have chosen. So a robot with unlimited ability can't 'always make good decisions', logically or morally.

But the key difference between a machine and a living being is memory. Compared to humans, a machine has almost unlimited memory. So given enough information, and the right algorithm to process that information, a robot is much more capable of reaching sound conclusions than any given person. In fact, that already happens. Anyone ever heard of Google?

But is a machine that reaches conclusions more effectively than a person, but was built by people, smarter than people? I don't know, I'd say it's just a really fantastic tool.
I agree. That was sorta my point. The OP didn't specify what metric of "smart" was to be used to determine when or if AI would be "smarter than a human".
 
I doubt that anything like an intelligent, aware computer, with complex-environment-negotiating abilities comparable to those of a mouse, is going to be developed in this century.

But I hope I'm proved wrong.

I guess what I am after is this: pick something that is hard to emulate in an AI system right now and that would be a prerequisite for HLAI (or, in your case, mouse-level AI), then estimate how fast we are solving that problem to get a bound on the time to the larger goal. I recall seeing in Scientific American, a couple of years before Deep Blue beat Kasparov, a chart plotting the number of plies the best chess-playing systems could search versus year, along with the Elo rating of the player each system could beat. I think it gave an accurate estimate for when the best player was actually beaten.
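That kind of trend-line forecast is simple to reproduce. Here is a minimal sketch that fits a least-squares line to program strength versus year and solves for when the line crosses a top human rating; the data points are made-up placeholders for illustration, not the actual Scientific American figures:

```python
# Illustrative (made-up) machine chess strength per year; the real chart
# plotted search depth in plies, but the extrapolation idea is the same.
years = [1985, 1990, 1995]
elo = [2300, 2500, 2700]

# Ordinary least-squares fit, done by hand to stay dependency-free.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(elo) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, elo)) \
    / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Solve slope * year + intercept = TOP_HUMAN_ELO for the crossover year.
TOP_HUMAN_ELO = 2800  # roughly a world-champion-level rating
crossover_year = (TOP_HUMAN_ELO - intercept) / slope
print(crossover_year)  # -> 1997.5 with these placeholder numbers
```

The same recipe works for any prerequisite capability you can quantify and track over time, which is what the original question is really asking for.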
 