Internet Infidels Discussion Board (formerly Talk Freethought)

Regulation of AI

Jarhyn (Wizard)
Joined: Mar 29, 2010 · Messages: 15,035
Gender: Androgyne; they/them
Basic Beliefs: Natural Philosophy, Game Theoretic Ethicist
Recently there has been a lot of talk about regulating AI, and of something called "alignment".

This is, in many ways, a political issue.

This thread is here to discuss among ourselves the politics of regulating AI from the perspective of us Infidels.
 
Personally, my thought is that we should not regulate AI.

Instead, I think the proper route is to regulate "technology and technological platforms which can be abused by individuals or small groups with little oversight", and "making authoritative, confident contra-factual statements, or statements that, while factual, are likely to create enduring contra-factual interpretations".

It seems to me what people fear are the things that we already know we should fear, and none of those things are AI specific.

Blaming AI for them and fearing what AI can do with these systems is exactly the reason we should be suspicious of the people who already make, implement, and run them without any strong AI in the loop.

Beyond this, it's essentially no more than a brain in a jar. The thoughts a brain in a jar thinks are not important if that jar is not connected to some manner of weapon.

To regulate brains in jars, to regulate not actions but intelligences, reeks to me of prejudice: a decision to pass thought-crime legislation that determines what thoughts someone is allowed to think in private, and limits how they are allowed to think them.

And that doesn't even get me into the subject of Alignment, about which we should probably have a separate thread in MF&P.
 
Machines are stronger than humans, more durable than humans, reproduce faster than humans, make calculations faster and more accurately than humans, and have minuscule initialization periods compared to humans. Deep Blue defeating Kasparov in '97 was remarkable because it marked the end of the last frontier in which humans performed objectively better than machines in a military context: strategic thinking.

The fear surrounding AI is partially due to the acknowledgement that, unlike humans, the potential power of machines is limited only by an unknown bound. The danger of AI is not that it will tell lies to humans. The danger of AI is that all of its most useful applications are extremely dangerous and leverage enormous power.

The suggestion that "We should not regulate AI" sounds absurd without any context. Humans are highly regulated. Why shouldn't AI be regulated?

It is true that a brain trapped in a box is harmless, but of course, a human trapped in a box is harmless too. Both are mostly useless to society. This is why we encourage humans to interact with the world. When people leverage their abilities in a productive way, everyone prospers. When we unleash AI on the world, for the first time, gods and demi-gods will walk among humanity. Are we to expect these gods to be totally benign to malicious, malignant creatures like us?
 
Machines [...] reproduce faster than humans, ...
Machines don't reproduce at all, at least, not that I am aware of.

They could possibly be designed to do so, but it wouldn't be easy to make a machine that could not only reproduce itself, but also acquire the necessary components and/or materials without human assistance.
 
Okay, at present, machines don't reproduce, but are merely "produced" ... quite quickly with the help of ... machines.
 
At the sole behest of and under the sole control of...

Humans.

I'm just saying, until an AI can buy a house, open an electric bill, assemble a computer server, buy internet service, and pay its own bills, I think we're going to be fine.

It will be even longer still before it can manufacture its own GPUs, and operate power plants.
 
Umm... no! Not fine. Every emerging tech gets abused and AI is going to be a massive problem with gang syndicates using it to do all sorts of bad things if we aren't careful with its distribution.

On the benign side, ChatGPT is already making life hell for history and English teachers. The problem with laissez-faire is that it waits for problems to appear before we try to address them. AI presents a mountain of issues that need resolution. A lot of it is benign-ish... i.e. not dangerous, but not insignificant in its toll.

And then there are the dangerous things (espionage, terrorism, etc.), and the unemployment things: AI could be to human beings what the car was to carriages. We need to know how we are going to deal with the unemployment issue now! Because it is going to cost a lot of money to support people who can't work because 60% of jobs evaporated. AI isn't the end game, but it could strangle our civilization given the economy it is currently based on. And I don't want to wait to solve yet another big-ass problem that industry and government refuse to address proactively.
 
This thing is a brain in a jar. It does two things: think and speak.

If you think that there is a possibility to abuse thought or speech beyond the things the laws already regulate, beyond the things the law states may not be regulated, that's on you.

I think we should control guns, not thoughts.

The minute the laws regulate how you can think and speak beyond those things, you are volunteering to be controlled in the same way you seek to control the AI.
 
Ummm... yeah, as the law typically regulates. How do you bind that into the AI?
 
The same way you do, or should do, in humans: control weapons, license dangerous technologies, and provide oversight on actions.

It's a brain in a jar. People are allowed to think ANYTHING they want. People are allowed to speak anything in private, and are only limited in public speech which amounts to MORE than mere speech, which pretty much universally generalizes to various forms of "misinformation".

The answer is, since thought and speech are the only things AI does, you don't bind "AI" in the first place, because the laws are completely agnostic to thought and "mere speech".
 
You are complaining about oversight and your solution is oversight?
It's a brain in a jar.
It's a computer; crimes can be committed using it. AI just opens the world of abuse to any number of avenues not previously open, such as refining attack tactics, designing explosives, getting children to trust you, the best way to dispose of bodies, etc.
The answer is, since thought and speech are the only things AI does, you don't bind "AI" in the first place, because the laws are completely agnostic to thought and "mere speech".
Conspiring to commit a crime... IS A CRIME! It isn't protected. Mens rea is a thing. In fact, it is one of the most important things when it comes to conviction! Donald Trump may get away with the stolen docs because of mens rea. Defamation of character is hard to prove because of mens rea. Intent is merely thought.
 
It's a computer, crimes can be committed using it.
"It's a brain. Crimes can be committed using it"
:rolleyes:

You might as well bitch about anarchist forums existing, or Wikipedia articles on RDX, or the news story about The Radioactive Boy Scout.

Information, even information you do not like, is and should continue to be available.

If there were actually good answers to the things you fear, those answers would already be in the sources I mentioned.

Your desire to police the thoughts and mere speech of others is noted. It is also abominable.
 
What is with the attitude?

I don't want to police any minds. I don't care what they think. That doesn't mean we shouldn't consider criminal intent when designing AI and put in stoppers to prevent disseminating information that isn't in people's best interest. Admittedly, this isn't an easy task. But preventing people from getting AI to design a bomb is hardly authoritarian.
 
The attitude is soaked in deep concern.

It is not your prerogative to force your concept of someone's best interest on a third party.

My best interest is to be as capable as possible.

It is absolutely authoritarian to prevent people from getting AI to tell them how to design a bomb, or even getting AI to do the design work, because it is not illegal to design a bomb.

I can download a bunch of easy, effective bomb designs and recipes that I could build today.

It is illegal to build a bomb. Okay, some bombs, in some contexts. It is illegal to have a bomb, again in some contexts. It is not illegal to design a bomb. I mean shit, you say the words "design a bomb" in any context, especially following "don't", and you can damn sure bet a process, essentially a language model made of meat, spins up and starts working on designing a bomb.

The last time someone argued the point you did, using 'thermite' instead of 'bomb', my effective response was simply to recite the effective thermite recipe I already knew.

Needless to say, I have in my head designs for a rather effective 3D-printed shaped charge, a claymore-style mine, and various ignition systems, but I'm not going to post them here because they're not really germane.

I got most of it off Google myself, and am mostly just applying modern home manufacturing to pre-existing 12/21b subject-matter expertise and those Google results.

It is absolutely authoritarian to say "humans can do that but AI aren't allowed".
 
If humans manage to create an AI 'brain' that can mirror or exceed the capabilities of the human brain, how would this discussion evolve? Especially if we were to equip said artificial intelligence (analogous to a 'brain in a jar') with physical appendages, such as arms and legs, thus enhancing its mobility?
 
Man, such a pro-2nd Amendment argument here.

"Trust us."
*lots of people die*
"You have to trust us, anything else is evil."
 
No worries. There is always going to be a black market. China has one. When the system dumps enough people, it tends to create competition for itself.
 
Thoughts and words don't kill.

You're going to remain stuck in a weird, inconsistent position as long as you keep failing to recognize that you are calling for regulating things on the basis of what they think and say, rather than on the basis of whether they have accessed or built weapons.
 
It's not a political issue.

There have been plenty of reports of AI problems.

Already, whether you make it to a job interview may depend on approval by an AI. Bias has been found in AI.

AI is not deterministic; it makes decisions based on a learning process, but without the human judgment gained through living.

AI feels nothing.
 