bilby said:
me said:
1. Humans normally care about morality, the suffering of humans, etc.
Tell that to Joe Stalin.
He is dead, so I can't. But as I pointed out, there are human psychopaths. Even factoring that in, the fact remains that the vast majority of humans are not Stalin. The arguments for AI existential risk I've seen are based on the chance that a (general) AI would turn out to be psychologically alien, so who knows what it would do; though of course, a Stalin-like AI would also be devastating. But I'm not arguing that those arguments are strong. I'm not sufficiently knowledgeable to tell. On the other hand, I can discuss some objections.
bilby said:
Hostility towards me only motivates me to leave and refrain from posting for a while. More generally, it's not good for the overall quality of a discussion.
bilby said:
No, we are not. Because we cannot, by definition, be talking about anything beyond our comprehension.
Actually, we can talk about entities with capabilities beyond our comprehension. For example, we have the capacity to understand the world in ways far beyond those of the brightest chimpanzee. And chimpanzees can understand in ways far beyond the brightest cat, and so on. There is no impossibility (let alone by definition) in talking about entities with capacities far beyond ours in a similar fashion, or at least in studying whether the development of certain technologies is likely (or unlikely, but not negligibly so) to lead to something like that.
bilby said:
No, it can't. Even the most effective malware can't do those things, outside science fiction.
We're talking about superintelligent AI, not malware that can only do a few things. Again, I recommend reading the literature on the matter.
bilby said:
And yet, humans have been monumentally successful, and no tyrant has ever succeeded in a 100% genocide, despite some pretty impressive attempts. It is also notable that most tyrants are not of greatly above average intelligence; And that few of the people we recognize as distinctly above average intelligence seem inclined to use their intelligence as a weapon against other humans. There is a common trope in fiction, of the hugely intelligent 'super criminal'; This trope is a reflection of anti-intellectualism (particularly in the USA), not an indication that intelligence is a threat outside a fictional context. The 'AI that enslaves mankind' idea is just an extension of that trope, and has little grounding in reality.
First, nearly all (or all) tyrants have not tried to exterminate humanity.
Second, it's not "notable" that they're not of greatly above average intelligence. Why would they be?
Third, it's not clear what you mean by "And that few of the people we recognize as distinctly above average intelligence seem inclined to use their intelligence as a weapon against other humans." But given your claim about the trope, there seems to be an implicit claim that greater intelligence (probably, generally, etc.) leads to benevolence towards humans. The fact that all the members of your sample are human makes that piece of evidence extremely weak. On the other hand, the fact that there are widely variable minds in other species (even when nothing is malfunctioning), and, even more importantly, that there appears to be no causal mechanism connecting high intelligence (of the sort we're talking about, i.e., getting results) with morality or benevolence, would make your claim extremely unlikely. If that's not what you meant to say, I would ask for clarification.
bilby said:
I wonder how, as a non-expert, you feel able to assess that I have not already done so.
I don't "feel" able. I make a probabilistic assessment based on what you say. And it's clear enough. I'm no expert in, say, biology, evolution, or history. But I can easily tell in many instances when people (e.g., YECs, people who deny the Holocaust, etc.) have not read the relevant literature. More generally, there is a very wide range from complete ignoramus to expert. If one is not an expert, there are times when that prevents one from figuring things out, and there are times when it does not; it depends on the circumstances. If you have actually read the philosophers who make those existential-risk arguments, it seems you misunderstood them, unless you're deliberately not trying to raise strong counterpoints (but that seems pretty improbable).