
Google buys two more UK artificial intelligence startups

The chance of either alife or self-modifying viruses leading to strong AI is effectively zero, however.

Agreed. However, I'm not convinced that simulating the human brain is the way to go. To me it seems best to just keep working away at the myriad AI algorithms.

Why not both?

Whole brain emulation is in fact the only method we can be reasonably certain will work, relying on far fewer 'maybes' than any other approach. IF strong AI is simply a matter of computation and simulation, then whole brain emulation is most likely to achieve positive results: we already know brains are capable of general intelligence. By comparison, we have no such certainty about other methods.


How do we determine when we have strong AI?

There would be absolutely no doubt whether an AI is of the strong/AGI variety. Self-awareness is only a small part of what defines strong AI; at the very least, a strong AI must be capable of performing the *full* range of cognitive actions a human is capable of, under the same conditions as humans. You can't fake that as a whole.
 
Agreed. However, I'm not convinced that simulating the human brain is the way to go. To me it seems best to just keep working away at the myriad AI algorithms.

Why not both?

Fine with me.

Whole brain emulation is in fact the only method we can be reasonably certain will work, relying on far fewer 'maybes' than any other approach. IF strong AI is simply a matter of computation and simulation, then whole brain emulation is most likely to achieve positive results: we already know brains are capable of general intelligence. By comparison, we have no such certainty about other methods.

What are we going to learn from simulating the brain? Yeah, the brain is amazing in many ways we don't yet understand - but it also sucks. Why not just take the things people are good at (visual processing, reasoning) and work on the algos bottom up?


How do we determine when we have strong AI?

There would be absolutely no doubt whether an AI is of the strong/AGI variety. Self-awareness is only a small part of what defines strong AI; at the very least, a strong AI must be capable of performing the *full* range of cognitive actions a human is capable of, under the same conditions as humans. You can't fake that as a whole.

The full range of cognitive functions is being carried out across thousands of separate projects. We just need an RFC defining a protocol for communication between them.
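
Just to make that concrete: something like the envelope below is the kind of thing such an RFC might standardize. This is a purely hypothetical sketch; every field name is invented for illustration, and no such standard exists.

```python
import json
import uuid

# Hypothetical message envelope for passing work between independent
# cognitive modules (e.g. a vision service handing results to a reasoner).
# All names here are invented for illustration; no such RFC exists.
def make_message(sender, capability, payload, reply_to=None):
    return {
        "id": str(uuid.uuid4()),   # unique message id
        "sender": sender,          # which module produced this
        "capability": capability,  # e.g. "vision.object-detection"
        "payload": payload,        # capability-specific content
        "reply_to": reply_to,      # id of the message being answered
        "version": "0.1",          # protocol version for compatibility
    }

# A vision module reporting what it saw, for a reasoning module to consume:
msg = make_message("vision-01", "vision.object-detection",
                   {"objects": ["cup", "table"], "confidence": [0.93, 0.99]})
print(json.dumps(msg, indent=2))
```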
 
What are we going to learn from simulating the brain?

Are you kidding? The correct question is: "What COULDN'T we learn from simulating the brain?"


Yeah, the brain is amazing in many ways we don't yet understand - but it also sucks. Why not just take the things people are good at (visual processing, reasoning) and work on the algos bottom up?

Because that's an absurd oversimplification of what's actually involved. Plus, it's kind of like saying: "Sure, we have this totally awesome supergenius artist whose work we could use for an ad campaign, and it'd take him a few months to come up with what we want... but why don't we just have Bob in accounting pick up an Etch A Sketch, pay for a decade of art school, and hope he turns out just as awesome as that supergenius?"

How are you even going to take the things people are good at and work on algorithms from the ground up when you don't even understand *why* people are good at those things (which is one of the countless things you could learn from a full brain simulation)?

The full range of cognitive functions is being carried out across thousands of separate projects.

No, they're not. Right now, the level at which these things are being done by non-humans doesn't come even close to the level at which humans do them. It isn't enough to just muck about with the same things we do; an AGI/strong AI has to be *just as good* as or *better* at them than we are. Right now, computers are better than us at very few things, and way behind us in most everything else. The gap is closing for some of those things, but not all.
 
Ok, let me know when we simulate the brain and learn how to transfer an electro-chemical process to silicon.

In the meantime, I'll keep posting all the progress we are making from the bottom up.
 
Ok, let me know when we simulate the brain and learn how to transfer an electro-chemical process to silicon.

In the meantime, I'll keep posting all the progress we are making from the bottom up.

What progress is that? We're not particularly close to creating strong AI through those means; at least, we have no reason to think we are. We still have no idea how to achieve it. What we have are mere low-level simulations of specific behavior under specific circumstances; some of it quite impressive, sure, but nothing like what we're talking about. On the other hand, we *are* currently building a full human brain simulation; there are fewer problems to overcome in doing so, since it's mostly just a matter of getting the computing power and the mapping down, whereas with other routes to AI we need to figure out a hell of a lot more before we can even really be concerned with how much computing power we're going to need.

Obviously we need to (and *will*) do both; but my money is on the human brain simulation to produce tangible results first.
 
I've never understood the people who think AI is an existential threat to us; they seem to be mistaking Hollywood fiction for reality. If we ever manage to create a true artificial super intelligence, one of two things will happen.

Either:

It will be the greatest thing to ever happen to us as the vast intellect we've created solves all our problems.

Or:

We pull the plug on it.
Guns, zeppelins, farming (ammonia and chemical weapons), airplanes, fission, fusion... you name the technology, it was manipulated for military purposes.
 
I've never understood the people who think AI is an existential threat to us; they seem to be mistaking Hollywood fiction for reality. If we ever manage to create a true artificial super intelligence, one of two things will happen.

Either:

It will be the greatest thing to ever happen to us as the vast intellect we've created solves all our problems.

Or:

We pull the plug on it.
Guns, zeppelins, farming (ammonia and chemical weapons), airplanes, fission, fusion... you name the technology, it was manipulated for military purposes.
That's easy: a suicide Islamic extremist android-robot bomber.
Seriously, it's hard to combine human-like robots and Islam in its present state and not be scared.
 
Guns, zeppelins, farming (ammonia and chemical weapons), airplanes, fission, fusion... you name the technology, it was manipulated for military purposes.

And? It doesn't change the fundamental nature of the two potential results. No military is going to commission a true artificial super intelligence, because no human military force could possibly hope to ever control such an entity. The best they could hope for is the emergence of an amoral AGI that happens to take their side; and no military is going to commit serious resources to a project that would only be beneficial to them on a one-in-a-million chance. That's not to say militaries wouldn't be interested in advanced AI to use as weapons, but such AI systems most certainly wouldn't constitute strong AI. And the chances of such AI evolving into strong AI on its own are also effectively zero, because no military force would willingly run the risk of allowing such an AI to rewrite its own code enough for that to be a possibility.

This means that the military is highly unlikely to be the source of strong AI. And if it's not the source, then we really don't have to worry about the military trying to manipulate or control it. That'd be like being worried about a hamster somehow managing to control a human being. And if the military somehow did actually create a strong AI, they're far more likely to institute safeguard protocols than the corporate sector is, so if it goes out of control then there'd almost certainly be a quick and easy way to disable it.

No, the military creating/manipulating AI and having it become a problem for us is just your typical Hollywood fare, not a serious concern.
 
Ok, let me know when we simulate the brain and learn how to transfer an electro-chemical process to silicon.

In the meantime, I'll keep posting all the progress we are making from the bottom up.

What progress is that? We're not particularly close to creating strong AI through those means; at least, we have no reason to think we are. We still have no idea how to achieve it. What we have are mere low-level simulations of specific behavior under specific circumstances; some of it quite impressive, sure, but nothing like what we're talking about. On the other hand, we *are* currently building a full human brain simulation; there are fewer problems to overcome in doing so, since it's mostly just a matter of getting the computing power and the mapping down, whereas with other routes to AI we need to figure out a hell of a lot more before we can even really be concerned with how much computing power we're going to need.

Progress: I can talk to my phone and it works amazingly well. The Obama campaign used sentiment analysis to help craft their message on the internet. How do you think Google filters porn so well? The stuff the Watson platform is doing is really cool. And there's Emily Howell, a bot that writes music.
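
For a sense of how simple the core of some of this bottom-up stuff can be, here's a toy lexicon-based sentiment scorer. Real systems (including whatever the Obama campaign used) are trained statistical models and far more sophisticated; the word lists here are invented for illustration.

```python
# Toy lexicon-based sentiment scorer. Real sentiment analysis uses trained
# models; these tiny word lists are invented purely for illustration.
POSITIVE = {"great", "good", "love", "excellent", "win"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "lose"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great phone"))   # positive
print(sentiment("terrible awful experience")) # negative
```
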
Yeah, none of this is "strong AI", but I kinda think "strong AI" as a definition is stupid. An AI need not be human, just better. I think Jared Diamond needs to update his book to "Guns, Germs, Steel, and Algorithms."

What progress have we made simulating the brain? (Not trying to be argumentative; I just don't know.) I know we simulated a rat brain some time ago. Did that produce any results? It seems simulating the brain will give us more information about how to help humans, e.g. with drugs and mental illness.

Obviously we need to (and *will*) do both; but my money is on the human brain simulation to produce tangible results first.

Agreed.

On the topic of good and evil AI, this study is interesting (a little dated and a little crude, yet interesting nonetheless): http://www.dailygalaxy.com/my_weblog/2009/05/a-robot-hitler.html
 
Yeah, none of this is "strong AI", but I kinda think "strong AI" as a definition is stupid. An AI need not be human, just better. I think Jared Diamond needs to update his book to "Guns, Germs, Steel, and Algorithms."

Strong AI doesn't actually refer to an AI that's human (or better); it refers to an AI that's capable of doing everything (or more) a human can in cognitive terms. It doesn't matter if it doesn't act as a human does, what matters is that it isn't just a dumb program that runs checklists and routines, but that it is an actual *mind*; capable of the kind of "general" intelligence humans have. Anything less than that is just an AI.


What progress have we made simulating the brain? (Not trying to be argumentative; I just don't know.) I know we simulated a rat brain some time ago. Did that produce any results? It seems simulating the brain will give us more information about how to help humans, e.g. with drugs and mental illness.


Both the US and the EU are (separately) attempting to construct a human brain simulation (the BRAIN Initiative and the Human Brain Project, respectively). The rat brain you mentioned is the Blue Brain Project, a Swiss/EU precursor to the HBP; its researchers predict they could create a cellular-level simulation of the human brain by 2023 (as opposed to their current effort of increasing the fidelity of the rat-brain simulation to the molecular level). You may also have been referring to an IBM project which claims to have done the same, and which claims to have simulated a cat brain.

The Swiss/EU rat brain simulation has yielded some interesting results, apparently:

"Four years ago (2005), a team of researchers at the École Polytechnique Fédérale de Lausanne in Switzerland switched on Blue Brain, a computer designed to mimic a functioning slice of a rat's brain. At first, the virtual neurons fired only when prodded by a simulated electrical current. But recently, that has changed.

Apparently, the simulated neurons have begun spontaneously coordinating, and organizing themselves into a more complex pattern that resembles a wave. According to the scientists, this is the beginning of the self-organizing neurological patterns that eventually, in more complex mammal brains, become personality."


That's a pretty big deal in itself.
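
For anyone wondering what "virtual neurons fired when prodded by a simulated electrical current" actually looks like in code, here's a minimal leaky integrate-and-fire sketch. It's nowhere near Blue Brain's molecular-level detail; it only illustrates the basic idea of a simulated neuron that stays quiet at rest and fires while an injected current drives it.

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Blue Brain models vastly
# more biophysical detail; this only shows a simulated neuron that is
# silent at rest and fires while an external current is injected.
dt, tau = 0.001, 0.02                            # timestep, membrane time constant (s)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # membrane voltages (mV)

v = v_rest
spike_times = []
for step in range(1000):
    current = 20.0 if 200 <= step < 800 else 0.0  # injected drive (mV)
    v += (-(v - v_rest) + current) * (dt / tau)   # leak toward rest + drive
    if v >= v_thresh:       # threshold crossed: register a spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes, first at t={spike_times[0]:.3f}s")
```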

There are also other neuron simulations that don't necessarily aim to reproduce a full human brain, but which show some very impressive results. There's Spaun, for instance, a brain simulation that can do simple math almost as well as a human can. This is not a program adding and subtracting numbers; it's a simulation of neurons that themselves do the math.

"Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.

This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems. "


"A pure computer simulation, Spaun simulates the physiology of each of its neurons, from spikes of electricity that flow through them to neurotransmitters that cross between them. The computing cells are divided into groups, corresponding to specific parts of the brain that process images, control movements and store short-term memories. These regions are wired together in a realistic way, and even respond to inputs that mimic the action of neurotransmitters.

As Spaun sees a stream of numbers, it extracts visual features so that it can recognize the digits. It can then perform at least eight different tasks, from simple ones like copying an image, to more complex ones similar to those found on IQ tests, such as finding the next number in a series. When finished, it writes out its answer with a physically modelled arm.

Spaun is almost as accurate at such simple tasks as the average human, and reproduces many quirks of human behaviour, such as the tendency to remember items at the start and end of a list better than those in the middle. “We weren’t surprised that it could do tasks,” says Eliasmith, “but we were often surprised that subtle features like the time it took or the errors it made were the same as for humans.”
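
Worth being clear about what's impressive here: the *task* is trivial for an ordinary program. A hypothetical sketch like the one below completes 1 2 3; 5 6 7; 3 4 ? in a few lines by assuming each group is an arithmetic progression. Spaun's achievement is doing the equivalent with millions of simulated neurons, not with explicit code like this.

```python
# The sequence-completion task itself is trivial for explicit code.
# This sketch assumes each group of digits is an arithmetic progression;
# the point is the contrast with Spaun doing it via simulated neurons.
def complete(groups):
    *full, partial = groups                        # last group is incomplete
    steps = {g[1] - g[0] for g in full if len(g) >= 2}
    step = steps.pop() if len(steps) == 1 else 1   # common step, default 1
    return partial[-1] + step

print(complete([[1, 2, 3], [5, 6, 7], [3, 4]]))  # -> 5
```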
 
It's a bit ironic that people who claim to pursue human brain simulation start with AI doing stupid IQ tests.
Truth is, when humans do IQ tests they usually do them like computers, consciously going through a set of checklists.
 
"Four years ago (2005), a team of researchers at the École Polytechnique Fédérale de Lausanne in Switzerland switched on Blue Brain, a computer designed to mimic a functioning slice of a rat's brain. At first, the virtual neurons fired only when prodded by a simulated electrical current. But recently, that has changed.

Apparently, the simulated neurons have begun spontaneously coordinating, and organizing themselves into a more complex pattern that resembles a wave. According to the scientists, this is the beginning of the self-organizing neurological patterns that eventually, in more complex mammal brains, become personality."

That's a pretty big deal in itself.

Sounds like they built an analog system on top of a digital platform?
 
Strong AI doesn't actually refer to an AI that's human (or better); it refers to an AI that's capable of doing everything (or more) a human can in cognitive terms. It doesn't matter if it doesn't act as a human does, what matters is that it isn't just a dumb program that runs checklists and routines, but that it is an actual *mind*; capable of the kind of "general" intelligence humans have. Anything less than that is just an AI.
And here you contradict yourself: requiring that a strong AI be some sort of mind is exactly what was being questioned.
 
Emily Howell, a bot that writes music.

Yeah, none of this is "strong AI", but I kinda think "strong AI" as a definition is stupid. An AI need not be human, just better. I think Jared Diamond needs to update his book to "Guns, Germs, Steel, and Algorithms."


But Emily is a perfect example of "human AI", because making sounds is no problem at all: making sounds that humans like is hard. And making music (sounds organized to appeal to humans) is very hard. But that is more a matter of tuning in on what people like than of handling the world around us.
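
To illustrate the point: *generating* note sequences is trivial. Here's a toy first-order Markov melody generator (the transition table is invented for illustration, and this has nothing to do with how Emily Howell actually works; David Cope's system is vastly more elaborate). The hard part, which none of this addresses, is making the output something people actually want to hear.

```python
import random

# Toy first-order Markov melody generator. Producing note sequences is
# trivial; producing ones people enjoy is the hard part. This transition
# table is invented for illustration and is unrelated to Emily Howell.
transitions = {
    "C": ["D", "E", "G"],
    "D": ["E", "C"],
    "E": ["F", "G", "C"],
    "F": ["E", "G"],
    "G": ["A", "C", "E"],
    "A": ["G", "F"],
}

def melody(start="C", length=16):
    notes = [start]
    for _ in range(length - 1):
        notes.append(random.choice(transitions[notes[-1]]))
    return " ".join(notes)

print(melody())  # e.g. "C E G A F E C D ..."
```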
 