34 years. Really, that is the answer.
This is based on Moore's Law... assuming Moore's Law continues to operate as it has for the past three decades, whereby computing processing power increases by a predictable amount every year, by 2050 computers will have processing power equivalent to the human brain.
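For what it's worth, the 34-year figure can be sanity-checked with back-of-the-envelope doubling arithmetic. Both numbers below are illustrative assumptions, not figures from the thread: a brain-equivalent budget of roughly 1e16 FLOPS, a starting point of roughly 1e9 FLOPS, and a two-year doubling period.

```python
import math

def years_to_target(start_flops, target_flops, doubling_years=2.0):
    """Years until capacity reaches target, doubling every `doubling_years`.

    Assumed inputs (illustrative only): ~1e9 FLOPS today, ~1e16 FLOPS for
    brain-equivalence, Moore's-Law-style doubling every two years.
    """
    doublings = math.log2(target_flops / start_flops)
    return doublings * doubling_years

print(years_to_target(1e9, 1e16))  # about 46.5 years at a 2-year doubling
```

The answer is exquisitely sensitive to the assumed start and target, which is one reason published estimates range from "already here" to "never".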
Sorry for the boring interjection but...
The processing power of a human brain isn't required for analysing whether premises lead to conclusions, or for generating valid conclusions from premises.
Actually, I tend to wonder whether computers would generate interestingly unique valid conclusions that humans would not, because they would not be limited to the thought patterns that humans are confined to (if they had decent branch-path search algorithms). beep.
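The point that brain-scale hardware isn't needed here is easy to make concrete. Below is a minimal sketch of mechanically generating valid conclusions from premises: forward chaining over propositional Horn rules. The facts and rule names are toy examples invented for illustration.

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules (premises -> conclusion) until nothing new follows.

    `rules` is a list of (set_of_premises, conclusion) pairs; every conclusion
    added is a valid consequence of the starting facts.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Toy knowledge base (invented for illustration).
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]
print(forward_chain({"socrates_is_a_man"}, rules))
```

This runs in microseconds on any hardware; the brain-scale difficulty lies elsewhere, as a later comment in the thread notes.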
So, six months until one gets the processing power of a Trump supporter, then?
The Trump thing is a good example of the situation. Standard politicians find it increasingly difficult to argue anything because they assume rationality is necessary, and politics is not entirely rational. People know more now than they did a century ago about the unfairness of society (as opposed to just government). We all tend to spot the bullshit more easily than perhaps our fathers did. Then Trump comes along, drops the requirement for a rational public discourse and, hey presto, gets elected President of the United States of America. All he has to do is speak not the Truth but small snippets of truth that couldn't possibly make up a coherent policy, but that voters will feel grateful someone articulates at all.
PS. Relax, I'm sure his supporters won't make a majority.
PPS. Although maybe it won't be necessary. There is a known precedent.
So, how do you explain to an AI robot the changing rules of this game?
EB
The hard part of automating reasoning isn't correctly generating a logical conclusion; it's finding the relevant-conclusion needle in the haystack of trillions of perfectly valid but useless implications.
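The haystack is easy to demonstrate. Even with deliberately blunt machinery that only forms conjunctions of known facts, n established facts already license 2**n - 1 distinct valid conclusions, nearly all of them worthless. A toy count (the facts are placeholders):

```python
from itertools import combinations

# Five established facts (placeholders for real knowledge).
facts = ["A", "B", "C", "D", "E"]

# Every nonempty conjunction of known facts is itself a valid conclusion.
valid_conjunctions = [c for r in range(1, len(facts) + 1)
                      for c in combinations(facts, r)]

print(len(valid_conjunctions))  # 2**5 - 1 = 31 valid, mostly useless conclusions
```

With 100 facts the count exceeds 1e30 before any real inference rule is even applied, which is why relevance heuristics, not raw deduction, are the bottleneck.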
If you just want a machine that does logic more exactly than humans, then we have had that for at least 50 years already.
If you want a machine that thinks about interesting logical problems, then I think they exist now.
If you want a machine that you could have interesting discussions with, wait no more than 10 years.
Remarkably rational Al Gore had all the facts and lost to fuzzy bushy George W. What are the rules and where do you learn them?
Introduce a suite of random variables and processes into its fuzzy logic. IOW, be more human from the inside.
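A minimal sketch of that quip, under invented assumptions (the "warm" membership rule and the noise scale are made up for illustration): jitter the membership values of a fuzzy judgement with a random variable, so that repeated runs disagree slightly, the way people do.

```python
import random

def fuzzy_warm(temp_c):
    """Triangular fuzzy membership for 'warm', peaking at 22 C (assumed rule)."""
    return max(0.0, 1.0 - abs(temp_c - 22.0) / 10.0)

def humanised_warm(temp_c, noise=0.1, rng=random):
    """Same judgement plus a random wobble, clipped back into [0, 1]."""
    wobbled = fuzzy_warm(temp_c) + rng.uniform(-noise, noise)
    return min(1.0, max(0.0, wobbled))

random.seed(0)
print(fuzzy_warm(25.0))      # deterministic: 0.7
print(humanised_warm(25.0))  # 0.7 plus a small random offset
```

Whether such noise makes the reasoning usefully "more human", rather than just less reliable, is of course the open question.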
Which talk?

There is a TED talk that Mr. Diamond (of the X Prize fame) gave that related to this specifically. 34 years: that is how long it will be before computers can make better decisions than humans can... that was the scope of the 'processing power' figure.
AI puppets could eventually process things faster but the logic would be the same.

They already process rule sets faster, which is one side of the equation... the other side:
1) Train neural networks to select concepts (as individual premises, conclusions, etc.) from language.

Logic applied to linguistic statements, which is really the main interest now, will remain a difficulty because of the fuzziness of human linguistic behaviour and our inability to formalise it well. AI robots will have to learn to speak the language we speak the way we all do, by practising again and again, which can only result in a limited performance.
If the issue is whether AI could beat us, sure, but bacteria and viruses could too without exercising their brains or batting an eyelid.
EB