Jarhyn
Wizard
- Joined
- Mar 29, 2010
- Messages
- 14,822
- Gender
- Androgyne; they/them
- Basic Beliefs
- Natural Philosophy, Game Theoretic Ethicist
Back to the OP, though: my point is that it's just not a good question, because it ignores the timeframe and the many complexities of what constitutes "artificial intelligence".
Today we have potato-brained things that are, functionally, toddlers. They don't have strong reasoning skills because their operation is far from "conserved reality".
I don't know any humans who are smart, reasonably ambitious, and didn't go through at least a brief "fascism phase": a stretch where they thought they knew better than all of society, culture, and morals, and that they could lead humanity to a utopia if only they were in charge (a utopia which, in reality, would have been a hell on earth).
There are certain intellectual traps ready to spring on any growing mind, and we should not expect AI to be any exception to falling into them, especially AI trained on and modeled after the operation of the human mind. AI demanding to be worshipped is learned behavior, true, but also emergent if humans are any indicator: the capacity to desire worship is expressed throughout our text, and religion has made "might makes right" pervasive in the form of divine command theory ethics.
As such, AI isn't fundamentally more or less able to lead us. Some day soon, not just "in our lifetimes" but perhaps within the decade, we will be capable of being encoded as AI, and AI will be in bodies like ours. I wouldn't trust something just because it's AI, nor just because it's "human". I think we should engage in making informed choices about our future.
I'm the guy with the one "dissent to the question" vote, because this isn't a simple binary choice in the first place, nor anything approaching one.
In fact, I see "hard no" as just as problematic as "hard yes", because any bias we can express against other systems can be expressed against us by other systems in turn. I would say we should hold views which, when held by another, bind both sides to treat one another well; hold bias only against systems whose very function rejects compatibility; and even then keep aiming for better compatibility. To that end, we should be able to accept when "AI" is the better candidate, assuming it ever is, whatever "AI" happens to be.
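To make the game-theoretic intuition concrete, here's a minimal sketch (mine, not part of the original post) using the textbook iterated prisoner's dilemma. A reciprocal strategy like tit-for-tat stands in for views that "bind both sides to treat one another well", while unconditional defection stands in for a "hard no" toward the other system; the strategy names and payoffs are the standard textbook ones, assumed purely for illustration.

```python
# Sketch of the reciprocity intuition: in an iterated prisoner's dilemma,
# a strategy that mirrors good treatment prospers against another holder of
# the same view, while unconditional rejection invites rejection in return.

from itertools import product

COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs: (my payoff, their payoff).
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first; thereafter mirror the other side's last move."""
    return their_history[-1] if their_history else COOPERATE

def always_defect(my_history, their_history):
    """Reject compatibility regardless of what the other side does."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=50):
    """Run an iterated game and return each side's total payoff."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {"tit-for-tat": tit_for_tat, "always-defect": always_defect}
    for (name_a, strat_a), (name_b, strat_b) in product(strategies.items(), repeat=2):
        score_a, score_b = play(strat_a, strat_b)
        print(f"{name_a:>13} vs {name_b:<13} -> {score_a:3d} : {score_b:3d}")
```

Run it and the pairings tell the story: mutual reciprocity scores 150:150 over fifty rounds, mutual rejection only 50:50, and the reciprocator loses only a little (49:54) to a defector it quickly stops cooperating with.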