How long until humanity creates an AI that is better at arguing than...

34 years. Really, that is the answer.

This is based on Moore's Law... assuming Moore's Law continues to operate as it has for the past three decades, whereby computing power increases at a predictable rate every year, by 2050 computers will have processing power equivalent to that of the human brain.
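For the curious, here is a rough back-of-the-envelope version of that extrapolation in Python. The brain estimate, the 2016 baseline, and the two-year doubling period are my own assumptions, picked only so the numbers land near the 2050 figure; they are not from any source:

[CODE]
import math

# All three constants are assumptions chosen for illustration:
brain_ops = 1e18      # high-end estimate of brain-equivalent ops/sec
current_ops = 1e13    # rough estimate for 2016-era hardware
doubling_years = 2.0  # assumed Moore's Law doubling period

doublings = math.log2(brain_ops / current_ops)  # ~16.6 doublings needed
years = doublings * doubling_years
print(f"~{years:.0f} years, i.e. parity around {2016 + round(years)}")
# ~33 years, i.e. parity around 2049
[/CODE]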
 
So, six months until one gets the processing power of a Trump supporter, then?
 
Sorry for the boring interjection but...

The processing power of a human brain isn't required for analysis of whether premises lead to conclusions, or the generation of valid conclusions from premises.

Actually, I tend to wonder whether computers (if they had decent branch-path search algorithms) would generate interestingly unique valid conclusions that humans would not, because they would not be confined to the thought patterns that humans are. beep.
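On the first point, the check really is purely mechanical. A minimal brute-force sketch in Python (the propositions and helper names are mine, just for illustration):

[CODE]
from itertools import product

def valid(premises, conclusion, variables):
    """True if the conclusion holds in every truth assignment
    that satisfies all the premises (propositional validity)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found
    return True

# Modus ponens: p, p -> q  therefore  q  (valid)
mp = [lambda e: e["p"], lambda e: (not e["p"]) or e["q"]]
print(valid(mp, lambda e: e["q"], ["p", "q"]))  # True

# Affirming the consequent: q, p -> q  therefore  p  (invalid)
ac = [lambda e: e["q"], lambda e: (not e["p"]) or e["q"]]
print(valid(ac, lambda e: e["p"], ["p", "q"]))  # False
[/CODE]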
 
If you just want a machine that does logic more exactly than humans, we have had that for at least 50 years already.

If you want a machine that thinks about interesting logical problems, I think those exist now.

If you want a machine that you could have interesting discussions with, wait no more than 10 years.
 
So, six months until one gets the processing power of a Trump supporter, then?

Even less until one gets the processing power of Trump himself perhaps?
 
If I could argue with an AI I'd be doing that instead of with people on the internet.
 
There is a TED talk that Mr. Diamond (of X Prize fame) gave that relates to this specifically. 34 years. That is how long it will be before computers can make better decisions than humans can... that was the scope of the 'processing power' figure.

This is a target that has been tracked for many years by government organizations with a very strong interest in knowing when the politician will no longer be a viable instrument of government rule... in other words, when the robot apocalypse is due.
 
AI puppets could eventually process things faster, but the logic would be the same. I'm not talking about logic as it is made out by zombie logicians, but logic as we do it without even batting an eyelid. The difficulty will remain the same for them: how to get the basic facts right. Logic is not enough; you need to get your premises right too, and that part is far more difficult than basic logic. AI will have to replay human evolution for itself, so maybe being merely fast won't be quite enough. I guess you'd need some heavy machinery, somewhat like the Internet for example, but on a much more complex scale, to do the trick. Basically, AI has to learn the world.

Logic applied to linguistic statements, which is really the main interest now, will remain a difficulty because of the fuzziness of human linguistic behaviour and our inability to formalise it properly. AI robots will have to learn to speak the language we speak the way we all do, by practising again and again, which can only result in a limited performance.

Formal disciplines, scientific ones for example, could perhaps be mastered more efficiently. But think of government... Is there a known science of government? Is there a rule book? Can you learn to govern well just by looking at historical and current practice?

If the issue is whether AI could beat us, sure, but bacteria and viruses could too without exercising their brains or batting an eyelid.
EB
 
The Trump thing is a good example of the situation. Standard politicians find it increasingly difficult to argue anything because they assume rationality is necessary, and politics is not entirely rational. People know more now than they did a century ago about the unfairness of society (as opposed to just government). We all tend to spot the bullshit more easily than perhaps our fathers did. Then Trump comes along, drops the requirement for a rational public discourse and, hey presto, gets elected President of the United States of America. All he has to do is speak not the Truth but small snippets of truth that couldn't possibly add up to a coherent policy, but that voters feel grateful someone articulates at all.

PS. Relax, I'm sure his supporters won't make a majority.

PPS. Although maybe it won't be necessary. There is a known precedent.

So, how do you explain to an AI robot the changing rules of this game?
EB
 
Introduce a suite of random variables and processes into its fuzzy logic. IOW be more human.
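Taken literally, that suggestion might look something like this sketch; the membership values, the noise range, and the 'mood' framing are all invented for illustration:

[CODE]
import random

def fuzzy_and(a, b):
    """Standard min t-norm over fuzzy truth degrees in [0, 1]."""
    return min(a, b)

def human_ish(degree, mood_noise=0.2):
    """Perturb a fuzzy truth degree with a random 'mood' term,
    clamped back into [0, 1]."""
    return min(1.0, max(0.0, degree + random.uniform(-mood_noise, mood_noise)))

honest = 0.5       # fuzzy degree: "the candidate is honest"
competent = 0.6    # fuzzy degree: "the candidate is competent"
support = fuzzy_and(human_ish(honest), human_ish(competent))
print(f"support today: {support:.2f}")  # a different answer tomorrow
[/CODE]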
 
The hard part of automating reasoning isn't correctly generating a logical conclusion; it's finding the relevant conclusion needle in the haystack of trillions of perfectly valid but useless implications.
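A toy illustration of that haystack: a single inference rule (disjunction introduction, p therefore p-or-q for any q whatsoever) already multiplies a few facts into an arbitrarily large pile of valid but useless conclusions. The fact strings here are placeholders:

[CODE]
facts = ["socrates_is_a_man", "grass_is_green", "2_plus_2_is_4"]
fillers = [f"arbitrary_claim_{i}" for i in range(20)]

# Disjunction introduction: from any known fact f, "f or x" is
# valid for absolutely any proposition x.
conclusions = [f"({f}) or ({x})" for f in facts for x in fillers]
print(len(conclusions))  # 60 valid conclusions from one rule, one pass

# Apply the rule to its own outputs and the pile multiplies again;
# none of it brings us any closer to a *relevant* conclusion.
conclusions += [f"({c}) or ({x})" for c in conclusions for x in fillers]
print(len(conclusions))  # 1260
[/CODE]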
 
[YOUTUBE]https://www.youtube.com/watch?v=WFR3lOm_xhE[/YOUTUBE]

It's already happening. Watson can do that hard yakka linguistically, determining what is relevant when given plain-English questions, and then finding relevant responses based on a massive knowledge base.

I find it interesting, not so much when Watson is right, but the way that when it is wrong, it is usually wrong in a way no human would ever be; and its second- and third-best guesses are so frequently wildly 'silly' to human ways of thought.
 
Introduce a suite of random variables and processes into its fuzzy logic. IOW be more human.
Remarkably rational AI Gore had all the facts and lost to fuzzy bushy George W. What are the rules and where do you learn them?
EB
 
It's always in very specific set-ups which are sort of one-dimensional. A running machine with a small computer which would easily beat me in a straight-line race would lose badly in any other race. Change the rules of the game ever so slightly and the human wins. These are still "machines". They do what they are told to do, and they need humans to tell them what to do.
EB
 
Robot Ken doesn't work.
EB
 
AI puppets could eventually process things faster, but the logic would be the same.
They already process rule sets faster, which is one side of the equation... the other side:
Logic applied to linguistic statements, which is really the main interest now, will remain a difficulty because of the fuzziness of human linguistic behaviour and our inability to formalise it properly. AI robots will have to learn to speak the language we speak the way we all do, by practising again and again, which can only result in a limited performance.
1) Train neural networks to select concepts (as individual premises, conclusions, etc.) from language.
2) Train other neural networks to recognize validity from those premises.
3) Train a single neural network to do both.

The strict logic side (a is equivalent to b; if a then c; therefore if b then c) could be checked by a non-neural-net AI after the concepts have been arranged into computational/symbolic logic.

In other words, we'd split the AI into neural nets that focus, divide, and combine information into objects that can be checked with a specific rule set to see if they follow the rules of symbolic logic.
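That non-neural checking stage might look something like this once the nets have emitted symbols; the tuple representation and function names are invented for the sketch:

[CODE]
# Toy symbolic checker for the step described above:
# from (a EQUIV b) and (a IMPLIES c), derive (b IMPLIES c).
def close_under_equivalence(implications, equivalences):
    """Grow the implication set by substituting equivalent terms."""
    subs = equivalences | {(q, p) for (p, q) in equivalences}
    derived = set(implications)
    changed = True
    while changed:
        changed = False
        for (p, q) in subs:
            for (x, y) in list(derived):
                new = (q if x == p else x, q if y == p else y)
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

implications = {("a", "c")}   # a IMPLIES c
equivalences = {("a", "b")}   # a EQUIV b
derived = close_under_equivalence(implications, equivalences)
print(("b", "c") in derived)  # True: b IMPLIES c follows
[/CODE]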

Sort of like what human-scale AIs are doing now...
 
If the issue is whether AI could beat us, sure, but bacteria and viruses could too without exercising their brains or batting an eyelid.
EB

That was what occurred to me immediately upon reading the OP. How do they do it? Vary/select/inherit. When you get an AI that could do as many trials in a given time as can the global population of a given bacterium or virus, then it will beat us. Probably to our own benefit...
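That vary/select/inherit loop, in its most stripped-down form; the genome encoding, the parameters, and the one-max fitness function are placeholders, not a serious model of anything:

[CODE]
import random

def mutate(genome, rate=0.05):
    """Vary: flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(fitness, genome_len=16, pop_size=50, generations=100):
    """Minimal vary/select/inherit loop over bit-string genomes."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)  # select the fittest
        parents = pop[: pop_size // 2]       # inherit from survivors
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=fitness)

# Placeholder fitness: count of 1-bits (the "one-max" toy problem).
best = evolve(fitness=sum)
print(best, sum(best))
[/CODE]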
 