
Google buys two more UK artificial intelligence startups

NobleSavage

Google has expanded its artificial intelligence research team, acquiring two Oxford University spin-off companies specialising in machine learning and computer vision.

Dark Blue Labs and Vision Factory and their seven key researchers will be added to Google’s DeepMind artificial intelligence research company – another British artificial intelligence startup which the search giant acquired in January.

“We are thrilled to welcome these extremely talented machine learning researchers to the Google DeepMind team and are excited about the potential impact of the advances their research will bring,” wrote Demis Hassabis, co-founder of DeepMind.

http://www.theguardian.com/technolo...achine-learning-dark-blue-labs-vision-factory

In other news, Elon Musk: ‘With artificial intelligence we are summoning the demon.’

Video

http://www.washingtonpost.com/blogs...cial-intelligence-we-are-summoning-the-demon/
 
I've never understood the people who think AI is an existential threat to us; they seem to be mistaking Hollywood fiction for reality. If we ever manage to create a true artificial super intelligence, one of two things will happen.

Either:

It will be the greatest thing to ever happen to us as the vast intellect we've created solves all our problems.

Or:

We pull the plug on it.
 
We pull the plug on it.

How we gonna do that?

Are you really asking me how you unplug a computer? Because that's really as simple as it is. In the movies, the AI often 'escapes' out onto the internet somehow, and then we can't stop it. But that's going to be impossible for a real AI to do. Creating a true super intelligence is going to require some very specialized hardware; such an intelligence would not be able to exist on anything other than that specialized hardware. It might *utilize* the internet (and only if the researchers make the mistake of giving it internet access before having figured out its intentions and motivations), but it wouldn't be able to 'escape' through it; it would be as much a prisoner of whatever hardware houses it as we are prisoners of our bodies. That being the case, it's just a simple matter of pushing the off button. As humans, I'm sure we'd have an advantage or two over a defenseless box.

The development of AI itself is not the problem; it's what people do with it once it's been created. Any danger from an evil AI comes purely from whatever access we give it. Keep it locked away in an off-grid system where it does research for us, and there's never going to be a problem (just don't believe the evil AI when it tells you it has researched a new and superior version of chocolate-strawberry milk and wants you to have the first taste, and you'll be fine). Give it access to the traffic lights, and it might either eliminate traffic jams or, if it's evil, cause a bunch of accidents until we figure it out and take its access away. Obviously, if you're dumb enough to give a general super intelligence access to weapons, then you probably deserve to be wiped out; but any such decision would most certainly be made by politicians rather than the scientists working to figure out AI.
 
I hope Google actually does something with these companies rather than just buying up potential competition.
 

With your nickname shouldn't you be predicting Skynet? :) Isn't strong AI more of a software problem than a hardware problem? You are assuming it's gonna be in one box. No reason it can't be distributed. Say a bot herder gets tired of DDoSing his friends and starts playing around with evolutionary algorithms (most of the science is published in journals, not locked up at the Pentagon). Not to mention source code always has a way of leaking. No reason even a dumb AI couldn't learn hacking. The main thing with hacking is that humans always make mistakes. So our fictional distributed AI one day cuts off command and control. I've yet to get a good answer on SCADA systems (are they air-gapping them or not?). Stuxnet leads me to believe they are not.
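(For illustration only: the kind of toy evolutionary loop I have in mind is nothing exotic. A throwaway Python sketch, evolving a bit string toward an arbitrary target, just to show the basic select-and-mutate cycle; real "evolved" malware would obviously be far messier than this.)

# Toy evolutionary algorithm: evolve a bit string toward an arbitrary target.
# Purely illustrative; nothing here resembles real evolved software.
import random

TARGET = [1] * 32                      # arbitrary goal: all ones
POP_SIZE, MUTATION_RATE = 50, 0.02

def fitness(genome):
    # count how many bits match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # flip each bit with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print("solved at generation", generation)
        break
    # keep the fittest half, refill the rest with mutated copies of survivors
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]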

Humans are getting really stupidly dependent on technology. If you shut down the cell networks, the economy would crash. If you shut down GPS, the economy would crash. All the truckers now use GPS (most of them probably couldn't even read a map). Most cities have about a two-day food supply. There is a psychotic desire to connect everything to the internet now: toasters, fridges, security systems, lawn sprinklers, and I even read about a toilet. And "The Internet of Things" is just getting started. Once IPv6 is fully adopted, I have a bad feeling that the security provided by NAT will be diminished.

Maybe this would be the start of a good sci-fi book. If only I could write.
 
I just looked up IBM's Watson out of curiosity. It has 2,880 POWER7 processor cores and 16 terabytes of RAM. That could all be distributed; there would just be lag between the interconnects. As far as software goes, it uses IBM's proprietary DeepQA, Apache UIMA (open source), Apache Hadoop (open source), and Linux for the OS.
 
We pull the plug on it.
Rather impractical. A more likely solution is to try to keep it under control with various hardware and software mechanisms. Isaac Asimov recognized the desirability of doing so long ago, and that's why he came up with his Three Laws of Robotics for his science-fictional robots. Or more generally, Three Laws of AI Systems.

He had gotten annoyed with all the stories of robots destroying their creators with the clear implication that we are not meant to build machines like those. He recognized that we build safety features and safety mechanisms into many of our tools, so why not also AI systems?
 
I think the main threat is to the job market.
I see manufacturing going completely humanless pretty soon, then doctors going out of work. There will be no truck drivers in 10 years.
Teachers are going to be replaced by AI for the most part. And most people are not capable of occupying themselves with anything without work.
 
With your nickname shouldn't you be predicting Skynet? :)

Just because I enjoy dystopian fiction doesn't mean I think it's particularly likely to come to pass.


Isn't strong AI more of a software problem than a hardware problem?

No. It's both.

You are assuming it's gonna be in one box. No reason it can't be distributed.

Actually, I'm not assuming it's going to be in one box; that's just the word I chose to use for the sake of ease. The thing is, you're assuming that we can build strong AI on current computer architecture; we almost certainly can't. It will almost certainly require some form of quantum hardware, possibly even hardware that simulates the function of neurons. That's what I meant when I said the AI would be imprisoned by its hardware just as we are imprisoned by ours. It wouldn't be able to transfer its existence onto our desktop computers and supercomputers, even if it used all of them at once.



Say a bot herder gets tired of DDoSing his friends and starts playing around with evolutionary algorithms (most of the science is published in journals, not locked up at the Pentagon).

You really think that's not already happening? That sort of thing is common fare by now. And it isn't a problem, because the hardware isn't there to support the birth of a strong AI. You could combine all the computers in the world and run ALIFE sims on them, and you still wouldn't get a strong AI.



Not to mention source code always has a way of leaking. No reason even a dumb AI couldn't learn hacking. The main thing with hacking is that humans always make mistakes. So our fictional distributed AI one day cuts off command and control. I've yet to get a good answer on SCADA systems (are they air-gapping them or not?). Stuxnet leads me to believe they are not.

This is again Hollywood thinking. In the movies, the AI learns to hack and then it becomes invincible because suddenly it controls everything. Of course, in reality, even with many SCADA systems, not everything is connected enough for an AI to hack; assuming it would even feel compelled to do so. Evil AI on a rampage destroying our infrastructure might make for a good story, but we have zero real-life reasons to think this would ever be a concern. Even if it were to happen, though, we would still have the advantage, because unlike the AI we can interact with the physical world. So it gets control of the power grid; so what? That's a temporary setback for us, not a permanent one. Any infrastructure it gets control of can simply be destroyed or disabled through physical means. Besides, we could always set off an EMP, or a few dozen.

Humans are getting really stupidly dependent on technology. If you shut down the cell networks, the economy would crash. If you shut down GPS, the economy would crash. All the truckers now use GPS (most of them probably couldn't even read a map). Most cities have about a two-day food supply. There is a psychotic desire to connect everything to the internet now: toasters, fridges, security systems, lawn sprinklers, and I even read about a toilet. And "The Internet of Things" is just getting started. Once IPv6 is fully adopted, I have a bad feeling that the security provided by NAT will be diminished.

Oh no doubt; IF a strong AI turned out to be evil (a big if), and IF it escaped confinement (highly improbable), and IF it learned how to hack all our systems, then getting the AI under control *would* result in a lot of damage to things like the economy. However,

1) we would still win unless the politicians are dumb enough to make the nukes and drones accessible via wifi, and

2) the benefits of friendly AI far outweigh the risks of evil AI.


Maybe this would be the start of a good sci-fi book. If only I could write.

I still need to finish editing my sci-fi story that involves AI. Of course, in my book, the AIs aren't exactly evil (but not exactly good either).
 
We pull the plug on it.
Rather impractical. A more likely solution is to try to keep it under control with various hardware and software mechanisms.

Impractical only if you could actually compel an evil super-intelligence to do your bidding safely. Personally, I see it as much more practical to pull the plug and start developing a new, friendlier AI.


Isaac Asimov recognized the desirability of doing so long ago, and that's why he came up with his Three Laws of Robotics for his science-fictional robots. Or more generally, Three Laws of AI Systems.

He had gotten annoyed with all the stories of robots destroying their creators with the clear implication that we are not meant to build machines like those. He recognized that we build safety features and safety mechanisms into many of our tools, so why not also AI systems?

He also wrote about how those three laws could be subverted, so relying purely on those would not be smart. But yes, of course we should build safety mechanisms into our AI systems; and the lack of such safety features in almost all stories of 'AI goes wild, kills everyone' suggests that sci-fi authors either don't think things through very well (surely true of some of them), or realize that a story where the AI goes rogue isn't quite as interesting when the researcher just has to press a button that liquefies all the chips, or worse, types "format C:".

Personally, I hope for AI similar to the Minds from Iain M. Banks's Culture series: hyper-intelligent AIs that are designed/grown to be benevolent. Of course, in those books, humans are sort of like pets doted on by the Minds; but if it means leading a life like the humans in those books do, then I certainly wouldn't mind.

For those who aren't watching it already, Person of Interest is very relevant too. Particularly one of the recent episodes where Harold (who created the first AI) is shown killing off all the AIs he built that tried to kill him, until he came up with one that didn't.
 
Actually, I'm not assuming it's going to be in one box; that's just the word I chose to use for the sake of ease. The thing is, you're assuming that we can build strong AI on current computer architecture; we almost certainly can't. It will almost certainly require some form of quantum hardware, possibly even hardware that simulates the function of neurons. That's what I meant when I said the AI would be imprisoned by its hardware just as we are imprisoned by ours. It wouldn't be able to transfer its existence onto our desktop computers and supercomputers, even if it used all of them at once.
Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis. I'm not all that impressed with quantum computing; it seems very limited to specific applications, like factoring large numbers.
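For instance, a two-qubit circuit can be simulated on any laptop just by multiplying small matrices; a numpy sketch, purely illustrative (the state vector is what grows exponentially with qubit count, which is where classical simulation gets expensive):

# Classical state-vector simulation of a tiny quantum circuit (a Bell state).
# Illustrates that quantum algorithms are classically simulable, just at
# exponential cost in the number of qubits.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on the first qubit
state = CNOT @ state                           # entangle the two qubits
print(np.abs(state) ** 2)                      # [0.5, 0, 0, 0.5] -- a Bell state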
Containing nukes is much easier because they truly need serious specialized hardware to manufacture.


You really think that's not already happening? That sort of thing is common fare by now. And it isn't a problem, because the hardware isn't there to support the birth of a strong AI. You could combine all the computers in the world and run ALIFE sims on them, and you still wouldn't get a strong AI.
All I'm really aware of are polymorphic viruses. Anything else I should look up?

1) we would still win unless the politicians are dumb enough to make the nukes and drones accessible via wifi, and

Nothing at this point would surprise me. Stuxnet would never have worked if the Iranians hadn't used the default Siemens password.

2) the benefits of friendly AI far outweigh the risks of evil AI.
Agreed.
 
We pull the plug on it.
Rather impractical. A more likely solution is to try to keep it under control with various hardware and software mechanisms. Isaac Asimov recognized the desirability of doing so long ago, and that's why he came up with his Three Laws of Robotics for his science-fictional robots. Or more generally, Three Laws of AI Systems.

He had gotten annoyed with all the stories of robots destroying their creators with the clear implication that we are not meant to build machines like those. He recognized that we build safety features and safety mechanisms into many of our tools, so why not also AI systems?

So basically make strong AI but don't give it free will?
 

Maybe this would be the start of a good sci-fi book. If only I could write.

Already written: "One Second After" - though I would not necessarily call it "good sci-fi". It did get me in the feelers though because the hero's daughter was Type 1 diabetic. Running out of insulin remains my one and only paranoid fear. /derail
 
I think the main threat is to the job market.
I see manufacturing going completely humanless pretty soon, then doctors going out of work. There will be no truck drivers in 10 years.
Teachers are going to be replaced by AI for the most part. And most people are not capable of occupying themselves with anything without work.

I think people would be able to occupy themselves quite well - the question will be whether there is some sort of reasonable guaranteed income to offset those job losses. I doubt it, and that is where the problem will be.
 
I think the main threat is to the job market.
I see manufacturing going completely humanless pretty soon, then doctors going out of work. There will be no truck drivers in 10 years.
Teachers are going to be replaced by AI for the most part. And most people are not capable of occupying themselves with anything without work.

I think people would be able to occupy themselves quite well
By watching TV and playing video games?
- the question will be whether there is some sort of reasonable guaranteed income to offset those job losses. I doubt it, and that is where the problem will be.
Well, people already have that, it's called social security.
 
I think the main threat is to the job market.
I see manufacturing going completely humanless pretty soon, then doctors going out of work. There will be no truck drivers in 10 years.
Teachers are going to be replaced by AI for the most part. And most people are not capable of occupying themselves with anything without work.

I think people would be able to occupy themselves quite well - the question will be whether there is some sort of reasonable guaranteed income to offset those job losses. I doubt it, and that is where the problem will be.

I've got a theory: there is a good chance that the corporate sector and all its influence will get behind this universal income idea. It's not gonna take long for CFOs to figure out that they need people to buy their shit!
 
Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm;

Thereby assuming that would be sufficient to yield strong AI; we don't know that that is the case. It may be true, but it may also be true that the kind of sapience we're talking about can only exist via certain hardware. The only real test to determine this is to just try and do it. We're already working on trying to build a full and accurate simulation of the human brain. If we get the computing power to build a detailed enough simulation of a human brain, and do so in real time, then it'll be very interesting to see what happens when we turn it on. Until then, however, we cannot draw any conclusions.
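(At the very smallest scale, "simulating a neuron" in software looks something like the toy leaky integrate-and-fire model below; the hard part is scaling that, plus realistic connectivity, to tens of billions of neurons in real time, which is exactly the hardware question. A sketch only:)

# Minimal leaky integrate-and-fire neuron, the simplest building block used
# in large-scale brain simulations. Illustrative only.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Return the list of time steps at which the neuron spikes."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        # membrane potential leaks toward rest and integrates the input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossed: emit a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

print(simulate_lif([20.0] * 200))   # constant drive -> regular spiking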

quantum computation does not violate the Church–Turing thesis. I'm not all that impressed with quantum computing; it seems very limited to specific applications, like factoring large numbers.

You're missing the point. The point is that sapience of the strong-AI (or human) variety might not be reducible to the kind of 'simple' calculations that could be performed by a sufficiently fast computer of our current architecture.

Containing nukes is much easier because they truly need serious specialized hardware to manufacture.

But you're assuming that AIs don't. There is no evidence to suggest that a strong AI could be created using anything other than specialized hardware. Is it possible that in the future average desktop computers will be powerful enough to allow an AI to be created using them? Maybe. We have no reason to exclude that possibility. But we also have no reason to conclude that it's going to be possible.


All I'm really aware of are polymorphic viruses. Anything else I should look up?

There are plenty of a-life simulations available for download, and in addition to polymorphic viruses, there are also metamorphic ones. The chance of either alife or self-modifying viruses leading to strong AI is effectively zero, however. The modifications still happen within predictable parameters. A virus that rewrites parts of itself to avoid detection is certainly interesting, but it doesn't actually add information; or if it does, the additions are limited enough that you're never going to reach a point where the expanded code somehow falls into a configuration that leads to sapience. There's no mechanism, currently, by which our computer networks could inadvertently give rise to strong AI.
 
The chance of either alife or self-modifying viruses leading to strong AI is effectively zero, however.

Agreed. However, I'm not convinced that simulating the human brain is the way to go. To me it seems best to just keep working away at the myriad AI algorithms.

This is an example of one thing that Watson can do:



Watson is just learning from Wikipedia, mostly text. Wait till we throw big data at it. No reason the next step wouldn't be to add sensors, like a huge network of CCTV cameras. The "Internet of Things" could become its senses -- far outstripping a human's five senses.

Watson already has an open source ecosystem. You can get the APIs and build your own service https://developer.ibm.com/watson/

No reason the hardware needs to be "Watson". I can see gobs of source code ending up on GitHub. There is an obvious business ecosystem waiting to bloom where thousands of individuals/companies create kick-ass niche-specific algos and then sell the API hosted on EC2 or some other cloud service. That's when things get interesting: hundreds of thousands of people around the globe experimenting with APIs and improving them. Think of all the idiotic things people voluntarily opt into when downloading an app for their phone. This distributed network could have the sound, video, GPS, etc., of every cell phone in the world (or the 95% of idiots who click "yes").
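Something like this is all one of those niche hosted APIs would need to be. Here's a hedged sketch: a hypothetical spam-scoring endpoint built with Flask, with made-up names and a deliberately dumb scoring rule (this is not Watson's actual API, just an illustration of selling an algo as a web service):

# Hypothetical "niche algorithm sold as an API": a toy spam scorer behind a
# tiny Flask web service. Endpoint name and scoring logic are invented for
# illustration; this has nothing to do with IBM's real Watson APIs.
from flask import Flask, request, jsonify

app = Flask(__name__)
SPAMMY_WORDS = {"free", "winner", "viagra", "prince"}

@app.route("/v1/spam-score", methods=["POST"])
def spam_score():
    data = request.get_json(force=True) or {}
    words = data.get("text", "").lower().split()
    # fraction of words that look spammy; a real service would do far more
    score = sum(w in SPAMMY_WORDS for w in words) / max(len(words), 1)
    return jsonify({"spam_score": score})

if __name__ == "__main__":
    app.run(port=8080)   # in practice this would sit behind a cloud host

Anyone with the URL could then POST a bit of JSON at it from their app and get a score back; multiply that by thousands of such services and you get the ecosystem I mean.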

How do we determine when we have strong AI? A lot of people tend to associate it with self-awareness. I don't see any magic here. Have the program read its log files in real time as a feedback loop and add memory -- boom, self-awareness.
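(Mechanically, that loop is trivial to set up; here's a toy Python sketch of a program that appends to and re-reads its own log each step, purely to show the mechanism, not a claim that this by itself produces self-awareness:)

# Toy "read your own log as a feedback loop": a mechanical illustration only.
import time

LOG = "agent.log"
memory = []                                  # the "added memory"

def act(step, memory):
    # placeholder behaviour: just report how much of its own history it has seen
    return f"step {step}: I have {len(memory)} memories of my own behaviour"

with open(LOG, "a+") as log:
    for step in range(10):
        log.seek(0)
        memory = log.readlines()             # re-read everything it has logged
        log.write(act(step, memory) + "\n")  # append this step's observation
        log.flush()
        time.sleep(0.1)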

And the malevolent will do it on their botnets.
 
Yes, dumb simulation of a brain is the wrong approach. Trying to devise thinking algorithms from scratch is much better.
Having said that, it is my own experience that in a lot of cases you end up with a pretty good simulation of a brain anyway.
But in this case you at least understand why and how you ended up there, instead of blindly copying a working sample.
Also, as far as hardware goes, I think we are pretty much there; it's the software which sucks.
I don't know much about Watson, but that small fail at the trivia game tells me it is only slightly better than a complicated search engine, without the slightest "theory of mind" capability. It still requires manual tuning on each task, and all it really does is search through a database without the slightest bit of "thinking".

Now, about hardware again: people say the brain is orders of magnitude more powerful than today's supercomputers, but we don't know how efficient it is. The brain is a result of evolution, and evolution is limited by the very principle it's based on.
High-level thinking is a relatively recent "invention" of nature, and it is likely very inefficient because of that.
So we don't know that it is impossible to create an efficient algorithm which bypasses all that evolutionary crap and lets a human-comparable AI run on a PC-level computer.
 