• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Civilization Manifesto for Intellectuals of the Planet

Do you agree with replacing presidents, judges, prosecutors, and tax officers with artificial intelligence?

  • Yes.

  • No.

  • Special opinion.



Alexander Potemkin
New member · Joined: Feb 29, 2024 · Messages: 2 · Gender: Male
Today’s anti-civilization has deepened my conviction that beginning in 2031, the presidents of all the countries of the world must be replaced by artificial intelligence, since AI is not tempted by consumption, is environmentally pragmatic, has no personal ambitions, strong emotions or close friends and is deeply convinced of the need to reformat contemporary consumers and the huge mass of ignorant people into Homo Cosmicus.
I am not sufficiently well-versed in IT or complex programming languages to supervise this great and necessary Only Artificial Intelligence for President project. However, my competent assistants and I are ready to take the most active part in implementing it. Let us come together and appoint a leader to carry out this necessary global digital programme, as well as team members committed to this idea. Otherwise, we will be unable to save the planet and reformat ourselves into Homo Cosmicus.
We are proposing a programme we will call AI for the Country’s President and Later Global President of the Entire Planet. But it should not be the only one. There should be at least three programmes of this kind, which will make them competitive. Not everyone is eligible to vote in this presidential election, only those who have passed the HIC test, an indicator of Higher Intelligence Consciousness. Homo Sapiens with an HIC below the minimum of 80 points are not eligible to participate and vote in this programme. Examples of elections in many eastern countries prove that such restrictions are fair. We hasten to add, however, that the low level of Homo Consúmens’ mental development is not a vice, but a genetic disorder found in some of the Microbiome’s products and the result of the emotional influence imposed on humanity by today’s consumer worldview. If Homo Consúmens is unable to understand that it is destroying the planet, it must be treated as disabled and afforded all due respect.
By 2031, the entire judicial system created by Homo Consúmens in all countries of the world must be replaced with the programme called “AI for Judges, Prosecutors and Tax Officers”. Today’s world has gone mad! Homo Consúmens has discredited itself in many spheres of social life. It must be controlled by smart programmes and algorithms created with the help of Cosmicus Quanticus Cerebrum.
By 2035, we must establish and expand the production and sale of AI with a HIC level of up to 80-110 points at affordable prices for the limited mind of Homo Consúmens. And for those Homo Sapiens who wish to raise the level of their intelligence, AI programmes must be introduced into biological consciousness as follows: an increase to 79 points for those with an HIC below 50, with a respective increase to 110 points for those with an HIC of 50 to 80. Higher consciousness will fund the implementation of this programme.
 
We hasten to add, however, that the low level of Homo Consúmens’ mental development is not a vice, but a genetic disorder found in some of the Microbiome’s products and the result of the emotional influence imposed on humanity by today’s consumer worldview.

Hmm. :unsure:

Is mixing up genes with memes an example of the high intelligence of Homo Cosmicus? Or does Homo Cosmicus think evolution proceeds on Lamarckian lines?
 
During my undergrad I had a friend who would frequently drop acid and talk like this, thinking it made him sound smart. I'd be studying for an orbital mechanics exam or something and he would start making shit up like he was already an expert on the topic. It was always humorous until it became tedious and we all ignored him.

It's like the guy on the train that just starts talking to you but he's already mid-conversation because he's been talking to himself for 20 minutes.
 
I think I saw this in a movie once. It doesn't end well. Remember, any AI will be programmed by humans, so....

I'm just gonna say bad idea.
I was just going to say the same thing. I, Robot, the Terminator franchise, The Matrix. The only one I can think of that was successful was The Day the Earth Stood Still, and in that one the AI's responsibility was extremely limited.
 
There's this underlying assumption in there that somehow a creation of pure unadulterated logic would be better than the irrationality of humans. It's the Spock fallacy of sorts. People forget that in many cases, Vulcans were some absolute assholes. Logic doesn't have empathy or compassion, it doesn't understand emotional reciprocity, or the bonds on which social groups depend.
 
There's this underlying assumption in there that somehow a creation of pure unadulterated logic would be better than the irrationality of humans. It's the Spock fallacy of sorts. People forget that in many cases, Vulcans were some absolute assholes. Logic doesn't have empathy or compassion, it doesn't understand emotional reciprocity, or the bonds on which social groups depend.
Emotional behavior is behavior; it can be programmed. Give me Gort.
 
There's this underlying assumption in there that somehow a creation of pure unadulterated logic would be better than the irrationality of humans. It's the Spock fallacy of sorts. People forget that in many cases, Vulcans were some absolute assholes. Logic doesn't have empathy or compassion, it doesn't understand emotional reciprocity, or the bonds on which social groups depend.
Emotional behavior is behavior; it can be programmed. Give me Gort.
Sounds very Dexter-y.
 
AI lacks the human weaknesses of consumption temptation, environmental disregard, personal ambition, and emotional volatility? This AI you speak of must not have been made by humans.
 
AI is not conscious (there are those who will disagree with this). It has no self-awareness, no awareness at all. Decades ago, the Deep Blue chess-playing machine that beat Kasparov had no idea it was playing chess, no idea about anything. Yet it won easily.

But follow this. Right now AGI isn’t very good, but it will presumably get much better. ChatGPT screws up a lot of answers to questions, but maybe over time it won’t screw up. I was reading about an AGI that programmed a video game in three seconds, vs. the hours that it takes a human programmer. If this is real, and as AGI gets better, who will need to hire computer programmers? Young people today are being told they need to learn to program computers to earn money, because that’s the cutting-edge profession. But then, suddenly — bango! — AGI wipes them out, doing much better, and much more quickly, at the very thing they studied. And AGI won’t need to be paid. It doesn’t need to eat. All you need to do is pay the electricity bill.

Where does this lead, 10, 15, 20 years down the line? Will all human intellectual jobs be forfeit? And what happens when AGI is attached to robots physically stronger than humans? There goes blue-collar labor, maybe.

I think it’s an interesting discussion about whether such a future is really possible (it seems likely that it is) and what’s to be done about it. Maybe it’s a good thing. Maybe we can reconvert the economy such that most people won’t need to work, but they will still have money because AGI will be earning it for them. But initially it won’t be that way. Initially people will just lose their jobs with no backup income.

But to me the most interesting thing is to speculate on the possibility that intelligent machines will one day supplant humans — perhaps even driving them to extinction under some dire scenarios — without themselves being conscious, aware, or sentient in the least. Because we can distinguish between “intelligence” and “consciousness.” We can operationally define “intelligence” as the ability to solve increasingly complex problems in shorter and shorter times. But this intelligence is not the same as consciousness. So maybe the future consists of a world full of super-intelligent problem-solving entities that are no more self-aware than your average rock (putting aside panpsychism). And as I think about it, I remember reading a pre-print some ten years ago at the PhilSci archive that predicted this very sort of future — that sentient, conscious intelligence is destined to be phased out in favor of non-sentient, non-conscious intelligence that may become so intelligent that it will overspread the universe without having the slightest idea that it is doing so. Good fodder for sci-fi, if nothing else.
 
But to me the most interesting thing is to speculate on the possibility that intelligent machines will one day supplant humans — perhaps even driving them to extinction under some dire scenarios — without themselves being conscious, aware, or sentient in the least.
I don't see why machines cannot be programmed to be conscious. How are humans not machines anyway? Consciousness is hardly a well-defined subject; we don't even agree on what it is. In humans, consciousness seems to be nothing more than the different parts of the brain in contact. I think we can pull that off with non-biological machines.
 

But to me the most interesting thing is to speculate on the possibility that intelligent machines will one day supplant humans — perhaps even driving them to extinction under some dire scenarios — without themselves being conscious, aware, or sentient in the least.
I don't see why machines cannot be programmed to be conscious. How are humans not machines anyway? Consciousness is hardly a well-defined subject; we don't even agree on what it is. In humans, consciousness seems to be nothing more than the different parts of the brain in contact. I think we can pull that off with non-biological machines.

I don’t think computers are conscious. I doubt very much they can be programmed to be conscious. How, exactly, would that be done? The heart of the problem is Chalmers’ Hard Problem of Consciousness: we don’t really know what consciousness is. We have a functionalist account, mapping the firing of neurons to output, but no account of qualia. Speculatively, it might be that consciousness only arises as embedded processes in evolved physical systems, and therefore is not substrate-independent, contrary to the assumption of Nick Bostrom in his simulation hypothesis (though it seems Chalmers also believes we might be in a simulation).

I don’t know how this will play out, but I see no reason to think, for example, that chess-playing Deep Blue knew it was playing chess, or knew anything at all, when it beat the world chess champ. Do you think differently? If what I say is correct, we have a clear example (one of many) of intelligence, even great intelligence, occurring in the absence of consciousness.

Someone mentioned that maybe emotions could be programmed into AGI. That’s probably right, but I’d say they’d merely be simulations of emotions, not actual, felt experiences. Another interesting question to me is this: suppose a computer could actually become conscious. Would it fear death (being disconnected) the way that HAL did in 2001? That question has always intrigued me since I first saw the movie as a kid, and my idea is that a conscious computer probably would not fear death, unless programmed to do so, because fear of death is an evolved trait and computers are not evolved.
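To make the "simulation of emotion" point concrete, here is a minimal sketch (my own illustration, not anything from the thread; the `FearModule` class and its thresholds are invented for the example). It produces fear-like avoidance behavior from a single decaying state variable, with no felt experience anywhere in the loop:

```python
# A toy "simulated emotion": a fear level that rises with observed threats and
# decays over time, driving avoidance behavior. The behavior looks fear-like,
# but it is just arithmetic on a state variable.

class FearModule:
    def __init__(self, decay=0.5):
        self.level = 0.0    # current "fear" state
        self.decay = decay  # how quickly fear fades between observations

    def observe(self, threat):
        """Update fear: old fear decays, new threat (in [0, 1]) adds to it."""
        self.level = self.level * self.decay + threat

    def action(self):
        """Behave fearfully above a threshold, calmly below it."""
        return "flee" if self.level > 0.8 else "proceed"

agent = FearModule()
agent.observe(0.2)
print(agent.action())  # mild threat, low fear: "proceed"
agent.observe(0.9)
print(agent.action())  # spiking threat on top of residual fear: "flee"
```

Whether such a module counts as "having" fear, or merely acting as if it did, is exactly the qualia question raised above.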
 
ChatGPT:
  1. Ethical Considerations: The proposal to replace all human leaders with artificial intelligence raises significant ethical questions about the role of governance, human agency, and democratic principles. Questions regarding accountability, transparency, and the potential for abuse of power by AI systems would be pertinent.
  2. Technological Limitations: While artificial intelligence has the potential to assist in decision-making processes and address certain societal challenges, the notion of AI replacing human leaders entirely may overlook the complexities of governance and human decision-making. AI systems, while capable of processing vast amounts of data, lack the nuanced understanding, empathy, and moral reasoning inherent in human leadership.
  3. Social Implications: The proposal to restrict participation in presidential elections based on a Higher Intelligence Consciousness (HIC) test raises concerns about elitism, discrimination, and the marginalization of certain segments of society. Implementing such restrictions may exacerbate existing inequalities and undermine democratic principles of inclusivity and representation.
  4. Practical Feasibility: The timeline and scope of the proposed initiatives, such as replacing the entire judicial system with AI programs and expanding the production of AI with specific HIC levels, may be overly ambitious and impractical. The complexity of transitioning to AI-driven governance and the potential for unintended consequences would require thorough consideration and planning.
 
I mean, what could possibly go wrong?

Five artificial intelligence (AI) models used by researchers in simulated war scenarios chose violence and nuclear attacks, a new study has claimed. According to Vice, researchers from Georgia Institute of Technology, Stanford University, Northeastern University and the Hoover Wargaming and Crisis Initiative built simulated tests for five AI models. The team tested three different war scenarios, and in several instances the AIs deployed nuclear weapons without warning.
 
I was reading about an AGI that programmed a video game in three seconds, vs. the hours that it takes a human programmer. If this is real, and as AGI gets better, who will need to hire computer programmers? Young people today are being told they need to learn to program computers to earn money, because that’s the cutting-edge profession. But then, suddenly — bango! — AGI wipes them out, doing much better, and much more quickly, at the very thing they studied.
AI generated code still has to be reviewed and tested to prove that it is correct. You absolutely cannot and should not trust the code generated by an LLM.

You might be able to skip that step and take your chances if you're making some low-risk throwaway app, but in serious applications no AI code can be used without getting past a human programmer first.
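As a minimal sketch of what "reviewed and tested" means in practice (my own illustration; the `median` function stands in for whatever an LLM might generate), the review step is a set of assertions covering the cases a careful human would check by hand:

```python
# Suppose an LLM produced this function when asked for "the median of a list".
# Before trusting it, exercise it against odd/even lengths, unsorted input,
# and edge cases like a single element.

def median(values):
    """Sort a copy of the input and take the middle element (or the mean
    of the two middle elements for even-length input)."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Human review step: assertions the generated code must pass.
assert median([3, 1, 2]) == 2        # odd length, unsorted
assert median([4, 1, 3, 2]) == 2.5   # even length
assert median([-5]) == -5            # single element
print("all checks passed")
```

Tests like these catch the common failure mode of generated code: it handles the happy path the prompt described and quietly gets the edge cases wrong.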
 
Yeah. Would you trust AI-written code in an Airbus, for instance? I sure the fuck wouldn't. It still makes me nervous even with all the testing, knowing that some Airbus models don't have an old-fashioned cable or hydraulic system backup. ALL the commands go through the computer.
 