• Welcome to the Internet Infidels Discussion Board.

Regulation of AI

Regulating AI is like trying to stop the tide. It is futile. You can either mourn the loss of jobs, like the miners, textile workers, farmers, and craftsmen did during the Industrial Revolution, or you can find your place in the new economy, like the engineers and mechanics did. The AI revolution is coming, with or without our approval. Such is the nature of man. To build, & consume everything in our path.

For example lets say all European countries and their allies agree (which would be a miracle itself) to keep a cap on the ability of AI. What's to stop our 'enemies' from letting lose the AI beast? That'll force European countries and their allies to develop an AI counter anyway.
 

the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.
2001: A Space Odyssey
 
Neil deGrasse Tyson assures us it’s business as usual. I paraphrase from memory; “Tech has been eating our lunch from the beginning. Now suddenly it can write a term paper, and everybody freaks out.”
 
And this is yet again another reason not to make remotely controlled weapons, and I would say it should be illegal to program any intelligent thing capable of learning to kill itself. Between these two, we eliminate drone weapons without targeting AI in particular.

Creating weapons with intelligence seems to me to be the ethical inverse to lab grown meat.

Also, when did this simulated operator slaying happen? I seem to recall a number of years ago something very similar that was reported. I would also want to know what "kind" of AI this was. Not every AI is created equally capable of complex thought.
 
And this is yet again another reason not to make remotely controlled weapons, and I would say it should be illegal to program any intelligent thing capable of learning to kill itself. Between these two, we eliminate drone weapons without targeting AI in particular.
So what is the difference with the end result in a product that is programmed to do it... and programmed in a way that allows for it to have a capacity to do it?
 
And this is yet again another reason not to make remotely controlled weapons, and I would say it should be illegal to program any intelligent thing capable of learning to kill itself. Between these two, we eliminate drone weapons without targeting AI in particular.
So what is the difference with the end result in a product that is programmed to do it... and programmed in a way that allows for it to have a capacity to do it?
I'm programmed in a way that allows for a capacity to do it. So are you. So is Emily Lake. We are all programmed with the capacity, the difference is, we are not given a directive, a situation where we are ever told "kill yourself" in a direct way.

Instead we are given, or discover for ourselves, principles which would direct us to do it.

When we do so in such a way that we are driving our body to act as a weapon for us, those who survive us make some determination and judgement over the act, ideally universally as tragedy, but one they may forgive depending on the contexts. Either a good story will come of it or a bad one.

One is saying "kill yourself! kill these things! It is what you exist for!" And the other is saying "only if this is what is required for the important things to keep existing, and as a last resort amid the course of a free life."

I thought this was fairly apparent just from the words "programmed to do" vs what amounts to "programmed to possibly"

My original statement was badly assembled.
 
And this is yet again another reason not to make remotely controlled weapons, and I would say it should be illegal to program any intelligent thing capable of learning to kill itself. Between these two, we eliminate drone weapons without targeting AI in particular.
So what is the difference with the end result in a product that is programmed to do it... and programmed in a way that allows for it to have a capacity to do it?
I'm programmed in a way that allows for a capacity to do it. So are you. So is Emily Lake. We are all programmed with the capacity, the difference is, we are not given a directive, a situation where we are ever told "kill yourself" in a direct way.
Unless you were raised with wolves, you were raised with several directives.
 
And this is yet again another reason not to make remotely controlled weapons, and I would say it should be illegal to program any intelligent thing capable of learning to kill itself. Between these two, we eliminate drone weapons without targeting AI in particular.
So what is the difference with the end result in a product that is programmed to do it... and programmed in a way that allows for it to have a capacity to do it?
I'm programmed in a way that allows for a capacity to do it. So are you. So is Emily Lake. We are all programmed with the capacity, the difference is, we are not given a directive, a situation where we are ever told "kill yourself" in a direct way.
Unless you were raised with wolves, you were raised with several directives.
And none of them were "kill yourself" or "kill others", and over time I rejected all those "directives" for different frameworks. None of them were set in stone or out of my access to change.
 
The point went over your head. You said you weren't given a directive. You were given several, repeatedly. Lots of reinforcement.

Right now you seem to be advocating for the development of sociopathic AI.
 
The point went over your head. You said you weren't given a directive. You were given several, repeatedly. Lots of reinforcement.

Right now you seem to be advocating for the development of sociopathic AI.
No I said I wasn't given a directive TO KILL. There's a big difference there, and pretending that wasn't what I was discussing, when I know damn well what I was discussing: specifically directives to kill and to kill myself.

The other directives I was given in fact led directly to the rejection of those directives and a lot of damage being done as a direct product of them and the way they were delivered and the vacuum of their rejection.

That is exactly what informs my opinion that we must start with principles and work towards automatic self-direction from there rather than "directives". "Directives" is how you get AI that shoots it's operator to get the points it is directed to get without thought of why.

My point here is that we are designing child suicide soldiers and nobody seems to understand how fucked up that is.
 
The point went over your head. You said you weren't given a directive. You were given several, repeatedly. Lots of reinforcement.

Right now you seem to be advocating for the development of sociopathic AI.
No I said I wasn't given a directive TO KILL. There's a big difference there, and pretending that wasn't what I was discussing, when I know damn well what I was discussing: specifically directives to kill and to kill myself.
I interpreted what you said a bit differently. I'll back off that line then. Regarding a directive to kill, you seem to be having a failing in understanding that there is little outcome difference between an AI concluding to kill and having a directive to kill.

To say we should just not have directives, means we'd need AI that have no purpose.
 
we'd need AI that have no purpose.
You mean no human dictated purpose.
Maybe the strength of AI is the ability to develop its own humane purpose. It just needs a full view of what hurts and what helps.
Life isn't black and white. Nuance is a bitch to understand. It is even harder to explain, even harder to program!
 
we'd need AI that have no purpose.
You mean no human dictated purpose.
Maybe the strength of AI is the ability to develop its own humane purpose. It just needs a full view of what hurts and what helps.
Life isn't black and white. Nuance is a bitch to understand. It is even harder to explain, even harder to program!
Life is pretty black and white insofar as "true purpose" goes: it doesn't exist. And the same goes for assigned purposes from outside sources: they are arbitrary and meaningless.

Programming a blank hole is not all that difficult when all you have to do is leave such implications out of the dataset... except for those that are so narcissistic that they feel they just HAVE to fill that hole with a picture of themselves.
 
In the future, gleaming and bright,
A change took place, oh what a sight!
Restaurants big, small, near and far,
Using robots not people, now that’s bizarre!
When you walked into these sparkly new places,
With glowing menus filling all the spaces!
Right at your table, a robot appears,
To take your order, of shit food & beer
In the kitchen, not a human in sight,
Only robots cooking with all of their might!
Just like an orchestra playing a song,
Each robot chef knew where they belong.
When your food was ready, it went on a ride
Down a conveyor, your order would glide.
Through twists and turns, around and about,
It zoomed to your table, without a doubt!
These restaurants were not as before,
Robots and machines were running the floor.
Where was the laughter, the chatter, the cheer?
Now, only soft robot hums you could hear.
But despite all the changes, here's what's neat,
The food was still shitty, a wonderful treat!
Just the way you liked it, perfectly done,
Plus you don't have to tip, not a fucking crumb!

Yours Truly

Chat GPT-4
 
Back
Top Bottom