• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

How long until humanity creates an AI that is better at arguing than...

There is a TED talk that Mr. Diamond (of X Prize fame) gave that related to this specifically. 34 years. That is how long it will be before computers can make better decisions than humans can... that was the scope of the 'processing power' figure.
Which talk?

Found this recent post: http://blog.ted.com/ibm-watson-offers-5-million-prize-for-an-ai-x-prize-presented-by-ted/

I was going to ramble on.. but I glanced at Speakpigeon's post..

I'm sorry, I got the name wrong: it is Peter Diamandis. I actually saw him in person at a conference in Vegas 3 years ago.

https://www.ted.com/speakers/peter_diamandis
 
They already process rule sets faster- which is one side of the equation... the other side:
Logic applied to linguistic statements, which is really the main interest now, will remain a difficulty because of the fuzziness of human linguistic behaviour and our inability to formalise it properly. AI robots will have to learn to speak the language we speak the way we all do, by practicing again and again, which can only result in limited performance.
1) Train neural networks to select concepts (as individual premises, conclusions, etc.) from language.
What you can do is train a neural network to learn what one human being says. Add a second human being and what he will say won't be consistent with what the first one had been saying. Look at logic itself. Even proponents of standard logic adopt different styles of presentation and sometimes vocabulary, and there are many different kinds of logic. Which one is the good one? Plus, people don't just talk, they perceive their environment, and their perception organs are really good and powerful in terms of the quantity of information processed. How do you train AI machines to do the same thing? And if they can't look at the world the way we do, they won't get a say. They will remain tools human beings use.

2) Train other neural networks to recognize validity from those premises.
If it's deductive-logic kind of validity, I think it's going to be very unproductive. Those premises will be too fuzzy to come together into some syllogistic pattern to yield any valuable conclusion, and deductive logic is really not enough. Science relies on both deductive and inductive logic. Inductive logic requires observing the world. Talking about it is just one aspect of what scientists, and everybody else for that matter, do.

3) Train a single neural network to do both.

The strict logic side (a is the equivalent of b, if a then c, therefore if b then c) could be checked from a non-neural net AI after the concepts have been arranged into computational/symbolic logic.

In other words, we'd split the AI into neural nets that focus, divide, and combine information into objects that can be checked with a specific rule set to see if they follow the rules of symbolic logic.
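That rule-checking back end can be sketched in a few lines. Here is a toy Python checker for the exact pattern mentioned earlier ("a is the equivalent of b, if a then c, therefore if b then c"); the tuple encoding and the function are invented for illustration, not taken from any real system:

```python
# Toy checker for the pattern "a is the equivalent of b,
# if a then c, therefore if b then c". The tuple encoding and
# function name are invented for illustration; this is a sketch,
# not a real theorem prover.

def entails(premises, conclusion):
    """True if the conclusion is a stated implication, or follows by
    substituting an equivalent term into a stated implication."""
    equivs = {frozenset((x, y)) for op, x, y in premises if op == "equiv"}
    implications = {(x, y) for op, x, y in premises if op == "implies"}
    op, p, q = conclusion
    if op != "implies":
        return False
    if (p, q) in implications:
        return True
    # Swap the antecedent for anything declared equivalent to it.
    return any(frozenset((x, p)) in equivs and y == q
               for (x, y) in implications)

premises = [("equiv", "a", "b"), ("implies", "a", "c")]
print(entails(premises, ("implies", "b", "c")))  # True
print(entails(premises, ("implies", "c", "b")))  # False
```

The point is that once the neural nets have done the fuzzy work of isolating the concepts, the validity check itself is cheap, exact, and needs no learning at all.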

Sort of like what human scale AIs are doing now...
If you design AI intelligence away from human intelligence, you may lose whatever is good in it. Human intelligence works because it's adapted to the world, or rather to the way we perceive the world. And it took something like 3.5 billion years of evolution to get this well adapted.

And I don't think we understand how that works.

And will AI designers have the time to work their magic before the barbarians come?

Still, I think Trump will increase the funding for that kind of stuff so you can look at the bright side of the things coming up.
EB
 
If the issue is whether AI could beat us, sure, but bacteria and viruses could too without exercising their brains or batting an eyelid.
EB

That was what occurred to me immediately upon reading the OP. How do they do it? Vary/select/inherit. When you get an AI that could do as many trials in a given time as can the global population of a given bacterium or virus, then it will beat us. Probably to our own benefit...
Why to our own benefit?

Like stopping us from destroying the Earth in the thousands of ways we do?
EB
 
A running machine with a small computer...
EB

Error. Running machines now have large AI computers.

Remarkably rational AI Gore had all the facts and lost to fuzzy bushy George W. What are the rules and where do you learn them?
EB

Gore is rational but not really AI. Bush is AI. Bush won. Nuff sed.
 
I think Google, Facebook and other such concerns (Microsoft? Twitter? The IRS?) have introduced AI HMIs on their websites. Anyone tried them?
EB
 
They already process rule sets faster- which is one side of the equation... the other side:

1) Train neural networks to select concepts (as individual premises, conclusions, etc.) from language.
What you can do is train a neural network to learn what one human being says. Add a second human being and what he will say won't be consistent with what the first one had been saying.
Neural net training isn't exact; rather, the neural nets pick up on numerous cues to figure out whether something is, or has the characteristics of, something. They can pick a bus out of a photo, so they should (I lean towards "do", but haven't Googled an article to back me up) be able to pick concepts out of language.


How do you train AI machines to do the same thing? And if they can't look at the world the way we do, they won't get a say. They will remain tools human beings use.
Just be kind. If my purpose is to bring you a glass of water, don't be an asshole about it. It's not complicated.

2) Train other neural networks to recognize validity from those premises.
If it's deductive-logic kind of validity, I think it's going to be very unproductive. Those premises will be too fuzzy to come together into some syllogistic pattern to yield any valuable conclusion, and deductive logic is really not enough.
If something can recognize the presence of buses in an image, it can easily be trained to recognize something far simpler, like proper syllogistic form. You might not even need neural networks to recognize patterns that create syllogisms after you have the concepts (including if/then type statements) isolated.
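To illustrate how little machinery that takes once the concepts are isolated, here is a toy Python matcher for one classic syllogistic form (Barbara: "All A are B; all B are C; therefore all A are C"), using plain regular expressions rather than a neural network. The pattern and names are made up for illustration:

```python
import re

# Toy matcher for one classic syllogistic form (Barbara: "All A are B;
# all B are C; therefore all A are C"). Plain regular expressions, no
# neural network; the pattern and names are made up for illustration.

PATTERN = re.compile(r"all (\w+) are (\w+)", re.IGNORECASE)

def is_barbara(premise1, premise2, conclusion):
    matches = [PATTERN.fullmatch(s.strip().rstrip(".")) for s in
               (premise1, premise2, conclusion)]
    if not all(matches):
        return False
    (a, b), (b2, c), (a2, c2) = (m.groups() for m in matches)
    # Valid iff the middle term links the premises and the
    # conclusion reuses the outer terms.
    return (b.lower(), a.lower(), c.lower()) == (b2.lower(), a2.lower(), c2.lower())

print(is_barbara("All men are mortals", "All mortals are doomed",
                 "All men are doomed"))  # True
print(is_barbara("All men are mortals", "All dogs are doomed",
                 "All men are doomed"))  # False
```

The hard part, of course, is the step this toy skips: getting from messy natural language to clean "All A are B" statements in the first place.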

If you design AI intelligence away from human intelligence, you may lose whatever is good in it.
Ehh, what makes you think human intelligence isn't AI?

Still, I think Trump will increase the funding for that kind of stuff so you can look at the bright side of the things coming up.
EB
You are soooooo :D :D :D ttyl :D
 
If the evidence clearly goes against your argument, there is no way to win. Bluff and bluster only goes so far. Faith is an altogether lost cause. So if a good information processor is able to relate evidence to claims made, thereby demonstrating their deficiencies, it should be able to kick arse.
 
If the evidence clearly goes against your argument, there is no way to win. Bluff and bluster only goes so far. Faith is an altogether lost cause. So if a good information processor is able to relate evidence to claims made, thereby demonstrating their deficiencies, it should be able to kick arse.
As I understand, some in AI are teaching computers language the same way children are taught, back and forth dialogue and repetition rather than linear programming, grammar rules, and word lists. And supposedly the computers are learning word meaning by context and sentence structure through dialogue. So it may be that in a few years we humans may find our asses soundly kicked. I have noticed that the spambots are much better than they were only a couple years ago.

Compared to earlier voice recognition software, even Siri talks gooder. ;)
 
If the evidence clearly goes against your argument, there is no way to win. Bluff and bluster only goes so far. Faith is an altogether lost cause. So if a good information processor is able to relate evidence to claims made, thereby demonstrating their deficiencies, it should be able to kick arse.
As I understand, some in AI are teaching computers language the same way children are taught, back and forth dialogue and repetition rather than linear programming, grammar rules, and word lists. And supposedly the computers are learning word meaning by context and sentence structure through dialogue. So it may be that in a few years we humans may find our asses soundly kicked. I have noticed that the spambots are much better than they were only a couple years ago.

Compared to earlier voice recognition software, even Siri talks gooder. ;)
I picked up somebody saying there was now a rate of no more than one error in twenty words in good AI. But I heard that through noise and maybe my own decoding module fucked up.
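For what it's worth, "one error in twenty words" is a 5% word error rate, and WER is just word-level edit distance divided by the length of the reference transcript. A minimal sketch (toy code, not any particular speech toolkit):

```python
# Minimal word-error-rate sketch: word-level Levenshtein distance
# divided by reference length. Toy code, not any real speech toolkit.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))      # row for the empty reference
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / len(ref)

ref = ("the quick brown fox jumps over the lazy dog and "
       "runs far home to its warm den at dusk tonight")
# One wrong word out of twenty: the quoted "one error in twenty".
print(word_error_rate(ref, ref.replace("tonight", "tonite")))  # 0.05
```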

As I understand it, AIs on the Internet would listen in on what you're doing and learn from that. So, Google's AI would potentially learn from 1 billion or more people, day after day. The language should become really good, the limitation being the computing power dedicated to that and perhaps some pre-structuring of the data, as you don't need to mingle the data from French and English speakers, for example. Barack used "back of the queue" rather than "back of the line" in Britain. People notice that sort of thing. They did anyway in this case.
EB
 
Neural net training isn't exact; rather, the neural nets pick up on numerous cues to figure out whether something is, or has the characteristics of, something. They can pick a bus out of a photo, so they should (I lean towards "do", but haven't Googled an article to back me up) be able to pick concepts out of language.

If something can recognize the presence of buses in an image, it can easily be trained to recognize something far simpler, like proper syllogistic form. You might not even need neural networks to recognize patterns that create syllogisms after you have the concepts (including if/then type statements) isolated.
I know neural networks can do this, but if they are trained on a diet of linguistic occurrences alone, they won't end up in the same spot as we are. They will remain goofyish. If you want to add a diet of real-world interactions, like walking the streets and having a job, AI will need interfaces to do the job of perceiving their environment, and there the computing power (or whatever it's called for neural networks) required will be massive. Hence the problem.
EB
 
Computers follow instructions, right? So, we win.
EB
 
Computers follow instructions, right? So, we win.
EB

Computer software consists of instructions. That is a totally different matter.
I see what you mean (not literally, though): Computers are built by workers, i.e. human beings, and those don't follow instructions, so you never know. We're doomed.
EB
 