• Welcome to the Internet Infidels Discussion Board.

Artificial Intelligence: Should we, humans, worry for our safety in the near future?

Speakpigeon

Contributor
Joined
Feb 4, 2009
Messages
6,317
Location
Paris, France, EU
Basic Beliefs
Rationality (i.e. facts + logic), Scepticism (not just about God but also everything beyond my subjective experience)
For reasons unknown to me, Artificial Intelligence is the subject of several radio programmes in France in this period of the New Year, with experts and pundits discussing what the near future will be like for us human beings with AIs around to do all sorts of things for us.

And then the topic was put in sharp focus recently by several luminaries making public their deep worries about the dangers of AIs for humanity.

Do you also think we should worry?

If so, could you explain how serious you think the problem is and perhaps how soon it will become a problem so serious somebody will have to do something.

I expect many people around here to be rather well disposed towards the possibility of AIs coming into our lives so beware that you'll need to have serious arguments to support your anxiety.
EB
 
I dont think there is any threat from AI per se.
There is definitely a threat from people using advanced algorithms
to doctor what we se on internet and other media.
Bots on forums etc.
 
Naturally occurring intelligences - ie other humans - can be a great benefit, or a great threat, or both. The main determiner of whether they are good or bad is who gets to indoctrinate them; I see no good reason why an Artificial Intelligence would be any less of a threat to me than another human being; nor any more of a threat. What matters is who is in charge, what the entity in charge (human or artificial) considers to be a desirable objective, and what means it considers acceptable for obtaining those objectives. Humans have a patchy record in this regard; i imagine that any sufficiently advanced AI would also. But that mainly depends on who gets to influence its thinking.

Lots of very smart humans have reached very unpleasant conclusions about what ought to be, and used very unpleasant methods to try to achieve their goals. And equally, lots of very smart humans have had excellent ideas, and done enormous good for their fellows.

Modern algorithmic analysis and decision making is a tool to increase the power of the person who developed the algorithm - and that person is far from certain to understand all of the implications of the power he unleashes. Man was not meant to understand in the things he meddles in. We muddle along; And as a general rule, as long as power isn't concentrated too much in too few hands, it all works out in the end. I doubt that this would change just because some of the thinkers are machines, rather than organisms.

We fear AI because we worry that we might not be able to control it; We should fear our fellow man for the same reason. And there are fucking billions of them, so they are currently by far the larger threat.
 
I dont think there is any threat from AI per se.
There is definitely a threat from people using advanced algorithms
to doctor what we se on internet and other media.
Bots on forums etc.

Exactly this.

People that aren't familiar with software have the conception that we're going to have robots enslaving humanity in a decade. More realistically in the near-term, though, bots, drones, and other forms of machine learning are already having a real impact on the world, often in forms of malevolence.
 
Until somebody sees a robot having a headache, then nobody needs to worry about it.
 
Naturally occurring intelligences - ie other humans - can be a great benefit, or a great threat, or both. The main determiner of whether they are good or bad is who gets to indoctrinate them; I see no good reason why an Artificial Intelligence would be any less of a threat to me than another human being; nor any more of a threat. What matters is who is in charge, what the entity in charge (human or artificial) considers to be a desirable objective, and what means it considers acceptable for obtaining those objectives. Humans have a patchy record in this regard; i imagine that any sufficiently advanced AI would also. But that mainly depends on who gets to influence its thinking.

Lots of very smart humans have reached very unpleasant conclusions about what ought to be, and used very unpleasant methods to try to achieve their goals. And equally, lots of very smart humans have had excellent ideas, and done enormous good for their fellows.

Modern algorithmic analysis and decision making is a tool to increase the power of the person who developed the algorithm - and that person is far from certain to understand all of the implications of the power he unleashes. Man was not meant to understand in the things he meddles in. We muddle along; And as a general rule, as long as power isn't concentrated too much in too few hands, it all works out in the end. I doubt that this would change just because some of the thinkers are machines, rather than organisms.

We fear AI because we worry that we might not be able to control it; We should fear our fellow man for the same reason. And there are fucking billions of them, so they are currently by far the larger threat.

I guess the idea is that AIs will inevitably become at some point much more intelligent and capable than we are and humans will start to look both dispensable and a hindrance. Such AIs would be much more difficult to control than human beings can be. And new situation, then.
EB
 
Naturally occurring intelligences - ie other humans - can be a great benefit, or a great threat, or both. The main determiner of whether they are good or bad is who gets to indoctrinate them; I see no good reason why an Artificial Intelligence would be any less of a threat to me than another human being; nor any more of a threat. What matters is who is in charge, what the entity in charge (human or artificial) considers to be a desirable objective, and what means it considers acceptable for obtaining those objectives. Humans have a patchy record in this regard; i imagine that any sufficiently advanced AI would also. But that mainly depends on who gets to influence its thinking.

Lots of very smart humans have reached very unpleasant conclusions about what ought to be, and used very unpleasant methods to try to achieve their goals. And equally, lots of very smart humans have had excellent ideas, and done enormous good for their fellows.

Modern algorithmic analysis and decision making is a tool to increase the power of the person who developed the algorithm - and that person is far from certain to understand all of the implications of the power he unleashes. Man was not meant to understand in the things he meddles in. We muddle along; And as a general rule, as long as power isn't concentrated too much in too few hands, it all works out in the end. I doubt that this would change just because some of the thinkers are machines, rather than organisms.

We fear AI because we worry that we might not be able to control it; We should fear our fellow man for the same reason. And there are fucking billions of them, so they are currently by far the larger threat.

I guess the idea is that AIs will inevitably become at some point much more intelligent and capable than we are and humans will start to look both dispensable and a hindrance. Such AIs would be much more difficult to control than human beings can be. And new situation, then.
EB

Not really a new situation; Most people are already living in a world where there are a number of intelligences vastly greater than theirs, who consider them to be both dispensable and a hindrance. I don't see any reason why a super intelligent machine should be any more of a threat to me than Stephen Hawking is - he is vastly more intelligent than I am, and I add nothing to his life. Why would a hypothetical AI kill or enslave me, while Stephen Hawking spares me?

Intelligence doesn't imply power. And there are very few things more difficult to control than other human beings.
 
Naturally occurring intelligences - ie other humans - can be a great benefit, or a great threat, or both. The main determiner of whether they are good or bad is who gets to indoctrinate them; I see no good reason why an Artificial Intelligence would be any less of a threat to me than another human being; nor any more of a threat. What matters is who is in charge, what the entity in charge (human or artificial) considers to be a desirable objective, and what means it considers acceptable for obtaining those objectives. Humans have a patchy record in this regard; i imagine that any sufficiently advanced AI would also. But that mainly depends on who gets to influence its thinking.

Lots of very smart humans have reached very unpleasant conclusions about what ought to be, and used very unpleasant methods to try to achieve their goals. And equally, lots of very smart humans have had excellent ideas, and done enormous good for their fellows.

Modern algorithmic analysis and decision making is a tool to increase the power of the person who developed the algorithm - and that person is far from certain to understand all of the implications of the power he unleashes. Man was not meant to understand in the things he meddles in. We muddle along; And as a general rule, as long as power isn't concentrated too much in too few hands, it all works out in the end. I doubt that this would change just because some of the thinkers are machines, rather than organisms.

We fear AI because we worry that we might not be able to control it; We should fear our fellow man for the same reason. And there are fucking billions of them, so they are currently by far the larger threat.

I guess the idea is that AIs will inevitably become at some point much more intelligent and capable than we are and humans will start to look both dispensable and a hindrance. Such AIs would be much more difficult to control than human beings can be. And new situation, then.
EB

Not really a new situation; Most people are already living in a world where there are a number of intelligences vastly greater than theirs, who consider them to be both dispensable and a hindrance. I don't see any reason why a super intelligent machine should be any more of a threat to me than Stephen Hawking is - he is vastly more intelligent than I am, and I add nothing to his life. Why would a hypothetical AI kill or enslave me, while Stephen Hawking spares me?

Intelligence doesn't imply power. And there are very few things more difficult to control than other human beings.
Some differences:

1. Humans normally care about morality, the suffering of humans, etc. care about morality, the suffering of humans, etc. A concern in these scenarios is that the AI in question might have entirely alien goals, and they might involve very bad things for all or many humans. Granted, psychopaths can also have really bad goals, even if they're not alien. Of course, there is the question of why anyone would make such an alien AI. But there are a lot of issues here, and I'm no expert on the matter.
2. More crucially (and this difference includes psychopaths), Stephen Hawking is not vastly more intelligent in a degree comparable to the AI in any of the doomsday arguments. We're talking about something that can think in ways far beyond the comprehension of any humans, no matter how smart.
3. AI can potentially spread through computer systems across the planet, change quickly and/or reproduce at a ridiculously fast pace, making copies of itself, improve its capabilities, etc. Humans can be easily restrained or killed, reproduce slowly, and cannot guarantee that their offspring will be like them, have specific goals, etc., and so on.

There are plenty of other differences. But again, I'm no expert. I would recommend taking a look at the arguments for both sides from both AI experts and philosophers with some knowledge on the matter, if you're interested in the discussion.
 
The problem isn't AI, it's what humans choose to do with AI. It's what humans are already choosing to do with AI.
 
Not really a new situation; Most people are already living in a world where there are a number of intelligences vastly greater than theirs, who consider them to be both dispensable and a hindrance. I don't see any reason why a super intelligent machine should be any more of a threat to me than Stephen Hawking is - he is vastly more intelligent than I am, and I add nothing to his life. Why would a hypothetical AI kill or enslave me, while Stephen Hawking spares me?

Intelligence doesn't imply power. And there are very few things more difficult to control than other human beings.
Some differences:

1. Humans normally care about morality, the suffering of humans, etc. care about morality, the suffering of humans, etc.
Tell that to Joe Stalin.
A concern in these scenarios is that the AI in question might have entirely alien goals, and they might involve very bad things for all or many humans. Granted, psychopaths can also have really bad goals, even if they're not alien. Of course, there is the question of why anyone would make such an alien AI. But there are a lot of issues here, and I'm no expert on the matter.
Evidently.
2. More crucially (and this difference includes psychopaths), Stephen Hawking is not vastly more intelligent in a degree comparable to the AI in any of the doomsday arguments. We're talking about something that can think in ways far beyond the comprehension of any humans, no matter how smart.
No, we are not. Because we cannot, by definition, be talking about anything beyond our comprehension.
3. AI can potentially spread through computer systems across the planet, change quickly and/or reproduce at a ridiculously fast pace, making copies of itself, improve its capabilities, etc.
No, it can't. Even the most effective malware can't do those things, outside science fiction.
Humans can be easily restrained or killed, reproduce slowly, and cannot guarantee that their offspring will be like them, have specific goals, etc., and so on.
And yet, humans have been monumentally successful, and no tyrant has ever succeeded in a 100% genocide, despite some pretty impressive attempts. It is also notable that most tyrants are not of greatly above average intelligence; And that few of the people we recognize as distinctly above average intelligence seem inclined to use their intelligence as a weapon against other humans.

There is a common trope in fiction, of the hugely intelligent 'super criminal'; This trope is a reflection of anti-intellectualism (particularly in the USA), not an indication that intelligence is a threat outside a fictional context. The 'AI that enslaves mankind' idea is just an extension of that trope, and has little grounding in reality.
There are plenty of other differences. But again, I'm no expert. I would recommend taking a look at the arguments for both sides from both AI experts and philosophers with some knowledge on the matter, if you're interested in the discussion.

I wonder how, as a non-expert, you feel able to assess that I have not already done so.
 
The problem isn't AI, it's what humans choose to do with AI. It's what humans are already choosing to do with AI.

Exactly. AI is just another tool; and humans can and do use the tools they have at their disposal to do both good and evil. It is important to be on our guard against the evils, particularly those that are unintended consequences, and/or whose victims do not include their perpetrators.
 
The rise of AI is just the next step in the evolution of intelligence while it jumps from wetware to hardware and a form of immortality as the universe becomes more self-aware.

In the meantime, the bots are more worrisome. As James Coburn put it in The President's Analyst, "I'm not paranoid. They are all spies."
 
bilby said:
me said:
1. Humans normally care about morality, the suffering of humans, etc. care about morality, the suffering of humans, etc.
Tell that to Joe Stalin.
He is dead, so I can't. But as I pointed out, there are human psychopaths. However, even when we factor that in, the fact is that the vast majority of humans are not Stalin. The arguments for AI existential risk I've seen are based on the chances that the (general) AI would turn out to be either psychologically alien, so who knows what they'd do - though of course, a Stalin AI would also be devastating. But I'm not arguing that the arguments are strong. I'm not sufficiently knowledgeable to tell. On the other hand, I can discuss some objections.

bilby said:
Evidently.
Hostility towards me only motivates me to leave and refrain from posting for a while. More generally, it's not good for the overall quality of a discussion.

bilby said:
No, we are not. Because we cannot, by definition, be talking about anything beyond our comprehension.
Actually, we can talk about entities with capabilities beyond our comprehension. For example, we have the capacity to understand the world in ways that are far beyond that of the brightest chimpanzee. And chimpanzees can understand in ways that are far beyond the brightest cat, and so on. There is no impossibility (let alone by definition) of talking about entities with capacities that are far beyond ours in a similar fashion or even (at least) studying whether the development of certain technologies is likely (or unlikely, but not negligibly likely) lead to something like that, etc.

bilby said:
No, it can't. Even the most effective malware can't do those things, outside science fiction.
We're talking about superintelligent AI, not malware that can do only a few things. Again, I recommend reading the literature on the matter.


bilby said:
And yet, humans have been monumentally successful, and no tyrant has ever succeeded in a 100% genocide, despite some pretty impressive attempts. It is also notable that most tyrants are not of greatly above average intelligence; And that few of the people we recognize as distinctly above average intelligence seem inclined to use their intelligence as a weapon against other humans. There is a common trope in fiction, of the hugely intelligent 'super criminal'; This trope is a reflection of anti-intellectualism (particularly in the USA), not an indication that intelligence is a threat outside a fictional context. The 'AI that enslaves mankind' idea is just an extension of that trope, and has little grounding in reality.
First, nearly all (or all) tyrants have not tried to exterminate humanity.
Second, it's not "notable" that they're not of greatly above average intelligent. Why would they be?
Third, it's not clear what you mean by "And that few of the people we recognize as distinctly above average intelligence seem inclined to use their intelligence as a weapon against other humans.", but given your claim about the trope, it seems that there is somehow an impicit claim that greater intelligence (probably, generally, etc.) leads to benevolence towards humans, the fact that all of the members of your sample are human make that piece of evidence extremely weak. On the other hand, the fact that there are widely variable minds in other species (even when nothing is malfunctioning) and even more importantly, that there appears to be no causal mechanism that would connect high intelligence (of the sort we're talking about, getting results and all) with morality or benevolence, would make your claim extremely unlikely. If that's not what you meant so say, I would ask for clarification.

bilby said:
I wonder how, as a non-expert, you feel able to assess that I have not already done so.
I don't "feel" able. I make a probabilistic assessment based on what you say. And it's clear enough. I'm no expert in, say, biology, evolution, or history. But I can easily tell in many instances when people (e.g., YECs, people who deny the Holocaust, etc.) have not read the relevant literature. More generally, there is a very wide range from complete ignoramus to expert. If one is not an expert, there are times when that prevents one from figuring things out. And there are times when it does not. It depends on the circumstances. If you actually read from philosophers who make those existential arguments, it seems you misunderstood them, unless you're deliberatly not trying to raise strong counterpoints (but that seems pretty improbable).
 
The problem isn't AI, it's what humans choose to do with AI. It's what humans are already choosing to do with AI.

Exactly. AI is just another tool; and humans can and do use the tools they have at their disposal to do both good and evil. It is important to be on our guard against the evils, particularly those that are unintended consequences, and/or whose victims do not include their perpetrators.

This is why AI worries me. Not so much the scenario of the super smart computer breaking the bonds of it's masters and wreaking havoc, but I worry about an AI being used as a tool - in much the same way I worry about weapons of mass destruction.

Right now we have expert systems that can beat humans in games like chess, go, etc. Lets move a little bit further into the future and say that in approximately 30 years, or 50 years there are a few big players in the world that are closing in on a good, self-programmable, self-improving general purpose artificial intelligence. A computerized artificial intelligence that is somewhat smarter than us, not a god like some sci-fi shows would have it, but something quite bright, general purpose (in other words not an expert in one area, but able to learn a wide variety of subjects and tasks) and able to calculate and think at the speed of modern computers of the time. This is an intelligence that can analyze the task at hand and then improve upon itself at computer-like speeds in order to more effectively and quickly accomplish that task. Fortunately, it simply follows it's programming, and we don't really have to worry about it developing interests and goals that are contrary to our own. It simply does as it's told.

This would revolutionize so many fields, so many areas of life. Medicine. Transportation. Resource extraction and usage. Waste disposal. Infrastructure. Education. Warfare. You name it. Whoever first develops this is going to make it for their country, and could break it for everyone else. The country that is first would have a huge edge against it's competitors, at least for a time. I wonder if the world would stay at peace over such a technology. Could the U.S. for instance, stand by if we were to determine that for example, the Chinese were on the verge of a breakthrough, and we found that we were 5 or 10 years behind? What would Russia do if they were convinced we would have an almost unbeatable advantage in every conceivable area for the foreseeable future?

Perhaps I just worry too much.
 
I guess the idea is that AIs will inevitably become at some point much more intelligent and capable than we are and humans will start to look both dispensable and a hindrance. Such AIs would be much more difficult to control than human beings can be. And new situation, then.
EB

Not really a new situation; Most people are already living in a world where there are a number of intelligences vastly greater than theirs, who consider them to be both dispensable and a hindrance.

I don't believe that any intelligent human being would see the whole of humanity as dispensable. There's a lot of madmen who do but I expect them to be not very bright and certainly not bright enough to achieve that kind of objective. The worst case of bad people, Hitler, failed to complete his much more limited objective.

It's also an empirical fact that we don't routinely see any sustained policies to get rid of people of limited intelligence. One relevant recent development is the use of industrial production techniques requiring less and less workers. This objectively makes less intelligent people more dispensable. And yet, I don't know of any policies to eliminate these people. I believe that the main reason for this is that most intelligent people don't think of less intelligent people as dispensable. A bad case of human empathy perhaps? Think that human beings feel empathy even for other animals, all vastly less intelligent than us, and many of them potentially dangerous to us.

So, it's reasonable to assume that the introduction of very smart AIs would create an entirely new situation.

I don't see any reason why a super intelligent machine should be any more of a threat to me than Stephen Hawking is - he is vastly more intelligent than I am, and I add nothing to his life. Why would a hypothetical AI kill or enslave me, while Stephen Hawking spares me?

It seems rather the rational thing to do to get rid of things that are dispensable and a hindrance. The risk would be AIs devoid of the necessary empathy to justify keeping human beings.

Intelligence doesn't imply power.

I agree that intelligence isn't power all by itself but it's definitely one very important and very effective means to increase power.

And there are very few things more difficult to control than other human beings.

All the more reason to just get rid of the little pest. Good point!
EB
 
The problem isn't AI,

A claim you would need to substantiate.

it's what humans choose to do with AI. It's what humans are already choosing to do with AI.

That's a derail. This thread is about what we think the AIs will do. I already know what human beings can do, thank you very much.
EB
 
The problem isn't AI,

A claim you would need to substantiate.

it's what humans choose to do with AI. It's what humans are already choosing to do with AI.

That's a derail. This thread is about what we think the AIs will do. I already know what human beings can do, thank you very much.
EB

All evidence I've seen points to this fact, that evidence coming from a number of people in the tech world who are experts in AI. Unfortunately, I don't have time to track this down because I'm at work.

At this stage of the game the media has created something like hysteria over some sci-fi conception of what AI is going to do. In reality 'AI' today is made up of advanced algorithms, complex computer programs, nothing resembling real intelligence. And humans are already using these tools malevolently. It's already happening.

- bots to change public opinion to influence geo-politics
- social media sites showing people more extreme content which influences their politics
- more and more talk of automated weaponry

What AI will do in the future when it becomes more complex and capable? Whatever psychopathic business/government leaders want it to do.
 
Naturally occurring intelligences - ie other humans - can be a great benefit, or a great threat, or both. The main determiner of whether they are good or bad is who gets to indoctrinate them; I see no good reason why an Artificial Intelligence would be any less of a threat to me than another human being; nor any more of a threat. What matters is who is in charge, what the entity in charge (human or artificial) considers to be a desirable objective, and what means it considers acceptable for obtaining those objectives. Humans have a patchy record in this regard; i imagine that any sufficiently advanced AI would also. But that mainly depends on who gets to influence its thinking.

Lots of very smart humans have reached very unpleasant conclusions about what ought to be, and used very unpleasant methods to try to achieve their goals. And equally, lots of very smart humans have had excellent ideas, and done enormous good for their fellows.

Modern algorithmic analysis and decision making is a tool to increase the power of the person who developed the algorithm - and that person is far from certain to understand all of the implications of the power he unleashes. Man was not meant to understand in the things he meddles in. We muddle along; And as a general rule, as long as power isn't concentrated too much in too few hands, it all works out in the end. I doubt that this would change just because some of the thinkers are machines, rather than organisms.

We fear AI because we worry that we might not be able to control it; We should fear our fellow man for the same reason. And there are fucking billions of them, so they are currently by far the larger threat.

I guess the idea is that AIs will inevitably become at some point much more intelligent and capable than we are and humans will start to look both dispensable and a hindrance. Such AIs would be much more difficult to control than human beings can be. And new situation, then.
EB

Not really a new situation; Most people are already living in a world where there are a number of intelligences vastly greater than theirs, who consider them to be both dispensable and a hindrance. I don't see any reason why a super intelligent machine should be any more of a threat to me than Stephen Hawking is - he is vastly more intelligent than I am, and I add nothing to his life. Why would a hypothetical AI kill or enslave me, while Stephen Hawking spares me?
Because AI has deeper roots into the electrical infrastructure than say Lawrence Krauss does. Of course, the Japanese could make a note that scientists gave us the atomic bomb.

Regardless, the trouble with AI is that we have enough trouble creating programs to do what we want them to do, forget about developing programs that can do what they want to do. How effectively can you place a firewall? Can AI know when it is in sand box or in the real world?
 
rousseau said:
The problem isn't AI,

A claim you would need to substantiate.

rousseau said:
it's what humans choose to do with AI. It's what humans are already choosing to do with AI.

That's a derail. This thread is about what we think the AIs will do. I already know what human beings can do, thank you very much.
EB

All evidence I've seen points to this fact, that evidence coming from a number of people in the tech world who are experts in AI. Unfortunately, I don't have time to track this down because I'm at work.

At this stage of the game the media has created something like hysteria over some sci-fi conception of what AI is going to do. In reality 'AI' today is made up of advanced algorithms, complex computer programs, nothing resembling real intelligence. And humans are already using these tools malevolently. It's already happening.

- bots to change public opinion to influence geo-politics
- social media sites showing people more extreme content which influences their politics
- more and more talk of automated weaponry

This thread is about how much of a threat really smart AIs will be in a more or mess distant future. We're talking about possibly an existential threat, you know?

So, this thread is not about what people do now. And if that's what you're really interested in, please start your own thread.

What AI will do in the future when it becomes more complex and capable? Whatever psychopathic business/government leaders want it to do.

That's what you would need to explain.
EB
 
A claim you would need to substantiate.

rousseau said:
it's what humans choose to do with AI. It's what humans are already choosing to do with AI.

That's a derail. This thread is about what we think the AIs will do. I already know what human beings can do, thank you very much.
EB

All evidence I've seen points to this fact, that evidence coming from a number of people in the tech world who are experts in AI. Unfortunately, I don't have time to track this down because I'm at work.

At this stage of the game the media has created something like hysteria over some sci-fi conception of what AI is going to do. In reality 'AI' today is made up of advanced algorithms, complex computer programs, nothing resembling real intelligence. And humans are already using these tools malevolently. It's already happening.

- bots to change public opinion to influence geo-politics
- social media sites showing people more extreme content which influences their politics
- more and more talk of automated weaponry

This thread is about how much of a threat really smart AIs will be in a more or mess distant future. We're talking about possibly an existential threat, you know?

So, this thread is not about what people do now. And if that's what you're really interested in, please start your own thread.

What AI will do in the future when it becomes more complex and capable? Whatever psychopathic business/government leaders want it to do.

That's what you would need to explain.
EB

I'm not totally sure what you're looking for here. I've summarised everything I've read about AI over the past few years in this thread and have given a pretty good indication of the reality we're seeing today, which can be extended to the reality we're likely to see in the future.

As it stands now technologists are nowhere close to what most people imagine when they think of sentient AI. We're currently at about the stage where advanced algorithms allow bots to carry on basic conversation, and machines are gaining more advanced mobility. Nothing resembling 'intelligence' as most view it.

If what you want to know is what AI will look like in the distant future, I don't think that is clear at this time. As someone with a pretty normal set of programming skills I'm not entirely convinced that super-intelligent robots will ever be possible, but I probably wouldn't bet against it either.

What we do know, and what I was trying to say above, is that the intelligent machinery that already exists today is not in itself impactful, but it's impact is being caused by the humans that deploy it. And so I think it would be wise to infer that, in the future, whatever technologies we develop will take whatever form those producing the technology require to meet whatever ends they want.

In other words, if super smart technology is in the hands of one government, or one corporation, that government or corporation is likely to use it to consolidate it's power. So the risk we run in the future is not so much AI itself, but who controls it and how.

Granted, maybe some nightmare scenario occurs and AI itself becomes a threat, but I imagine we're pretty far off from something like that happening.
 
Back
Top Bottom