
Google's Driverless Cars Legally Approved

It means the National Highway Traffic Safety Administration has approved them for use on US public roadways.

But that's not true - neither the article nor the letter indicates that. This is basically a clarification of specific items relating to the status of the 'driver' in a driverless car.

A simple search for the term 'exemption from these provisions' shows that there are still areas where Google would need to petition for exemptions, and it clearly states that Google cannot be certified for the sections regarding manual control of turn signals and headlights.

So far, the cars tested on public roads have mostly been conventional vehicles with all the required equipment, and the tests required that a human operator be on board. I would assume those cars are now approved by the NHTSA. I'm sure they would still have to pass state regulations.

I agree that the latest version of Google's cars, which lacks the proper equipment, would still need rule adjustments.
 
Ugh... So, yes, you have an ethical obligation to risk your life for others in any situation in which you would want others to risk their lives for you.

It's not a difficult problem, just one that selfish assholes throw tantrums over. Yes, your car might be programmed to kill you in certain situations. But other people's cars are programmed to NOT kill you on the other side of it. Wanting the benefit of being less likely to be offed by someone else but shirking the responsibility to die for them too makes you a goddamn filthy hypocrite.

We all have an ethical obligation to accept that and live with it. People need to stop being selfish pussies and grow up.
 
Ugh... So, yes, you have an ethical obligation to risk your life for others in any situation in which you would want others to risk their lives for you.

It's not a difficult problem, just one that selfish assholes throw tantrums over. Yes, your car might be programmed to kill you in certain situations. But other people's cars are programmed to NOT kill you on the other side of it. Wanting the benefit of being less likely to be offed by someone else but shirking the responsibility to die for them too makes you a goddamn filthy hypocrite.

We all have an ethical obligation to accept that and live with it. People need to stop being selfish pussies and grow up.

But they talk about that exact point in the article. People are selfish assholes, so if they know that a car is programmed to kill them in certain situations instead of protecting them, that could lead to fewer people buying self-driving cars and more people continuing to drive manually, which leads to far more deaths. By programming in this ethical obligation to maximize the number of lives saved, you could easily be contributing to far more deaths.
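To put some completely made-up numbers on that trade-off (none of these figures come from the article; they're only meant to show the shape of the argument):

# Toy illustration of the adoption-rate argument. Every number here is invented.
HUMAN_RATE = 10.0   # hypothetical deaths per billion miles with human drivers
AV_RATE = 2.0       # hypothetical deaths per billion miles with self-driving cars

def expected_deaths(av_share, total_miles_billions=100):
    """Expected deaths when av_share of all miles are driven autonomously."""
    return total_miles_billions * (av_share * AV_RATE + (1 - av_share) * HUMAN_RATE)

# 'Protect the passenger' programming -> more buyers -> bigger autonomous share.
print(expected_deaths(av_share=0.6))   # 520.0
# 'Sacrifice the passenger when it saves more lives' -> fewer buyers -> smaller share.
print(expected_deaths(av_share=0.2))   # 840.0

Under those invented numbers, the 'selfish' programming comes out way ahead overall, which is exactly the article's point.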
 
Nothing will ever replace the horse. These iron horses are doomed to failure!
 
Ugh... So, yes, you have an ethical obligation to risk your life for others in any situation in which you would want others to risk their lives for you.

It's not a difficult problem, just one that selfish assholes throw tantrums over. Yes, your car might be programmed to kill you in certain situations. But other people's cars are programmed to NOT kill you on the other side of it. Wanting the benefit of being less likely to be offed by someone else but shirking the responsibility to die for them too makes you a goddamn filthy hypocrite.

We all have an ethical obligation to accept that and live with it. People need to stop being selfish pussies and grow up.

But they talk about that exact point in the article. People are selfish assholes, so if they know that a car is programmed to kill them in certain situations instead of protecting them, that could lead to fewer people buying self-driving cars and more people continuing to drive manually, which leads to far more deaths. By programming in this ethical obligation to maximize the number of lives saved, you could easily be contributing to far more deaths.
Yes, that's why the solution is to save the passenger at all costs.
 
I hope Google has auto liability insurance.

aa

There is an interesting dilemma with the automation software:

https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

The question boils down to: Do you get in a car that might be programmed to kill you?

The question that was posed is actually bullshit. Driverless cars will not have AI that approaches the human capacity to think about life and death, or morality in general. The car will simply be able to solve physics problems very quickly. Even if it could think about morality, there is no guarantee that anyone will live or die in a specific situation. If it is able to differentiate between pedestrians and inanimate objects, it will be programmed to avoid the pedestrians, and trust that the car's other safety features, such as restraints and airbags, are sufficient to protect the driver.

There will likely be other safety features involved. The car will not be able to exceed the speed limit, so it would never be driving at a high rate of speed on a roadway where there is a significant chance of pedestrians being present. It likely won't have to swerve at all in the situation provided, but just notice the pedestrians, and apply the brakes well before a human driver would be able to react.
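For what it's worth, here's a rough sketch (in Python, with made-up names; nothing here comes from Google's actual software) of the kind of priority rule I'm describing: avoid anything classified as a pedestrian first, and otherwise minimize impact speed and let the restraints and airbags do their job.

# Hypothetical sketch of a priority-ordered avoidance rule -- not a real system.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str    # "pedestrian", "vehicle", "barrier", ...
    lane: int    # which lane or path it occupies

@dataclass
class Maneuver:
    name: str
    target_lane: int
    predicted_impact_speed: float   # m/s if a collision still happens

def choose_maneuver(obstacles, maneuvers):
    """Avoid anything tagged as a pedestrian first; otherwise minimize impact speed."""
    def hits_pedestrian(m):
        return any(o.kind == "pedestrian" and o.lane == m.target_lane for o in obstacles)

    safe = [m for m in maneuvers if not hits_pedestrian(m)]
    pool = safe or maneuvers   # if nothing avoids every pedestrian, fall back to all options
    return min(pool, key=lambda m: m.predicted_impact_speed)

# Example: a stalled truck ahead in our lane, pedestrians on the shoulder.
obstacles = [Obstacle("vehicle", lane=1), Obstacle("pedestrian", lane=2)]
options = [Maneuver("brake hard in lane", 1, 8.0), Maneuver("swerve to shoulder", 2, 0.0)]
print(choose_maneuver(obstacles, options).name)   # -> "brake hard in lane"

There's no 'whose life is worth more' calculation anywhere in that: the pedestrian branch is simply checked first, and the occupant is left to the passive safety systems.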
 
There is an interesting dilemma with the automation software:

https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

The question boils down to: Do you get in a car that might be programmed to kill you?

The question that was posed is actually bullshit. Driverless cars will not have AI that approaches the human capacity to think about life and death, or morality in general. The car will simply be able to solve physics problems very quickly. Even if it could think about morality, there is no guarantee that anyone will live or die in a specific situation. If it is able to differentiate between pedestrians and inanimate objects, it will be programmed to avoid the pedestrians, and trust that the car's other safety features, such as restraints and airbags, are sufficient to protect the driver.

There will likely be other safety features involved. The car will not be able to exceed the speed limit, so it would never be driving at a high rate of speed on a roadway where there is a significant chance of pedestrians being present. It likely won't have to swerve at all in the situation provided, but just notice the pedestrians, and apply the brakes well before a human driver would be able to react.
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or to swerve into pedestrians walking nearby.
 
The question that was posed is actually bullshit. Driverless cars will not have AI that approaches the human capacity to think about life and death, or morality in general. The car will simply be able to solve physics problems very quickly. Even if it could think about morality, there is no guarantee that anyone will live or die in a specific situation. If it is able to differentiate between pedestrians and inanimate objects, it will be programmed to avoid the pedestrians, and trust that the car's other safety features, such as restraints and airbags, are sufficient to protect the driver.

There will likely be other safety features involved. The car will not be able to exceed the speed limit, so it would never be driving at a high rate of speed on a roadway where there is a significant chance of pedestrians being present. It likely won't have to swerve at all in the situation provided, but just notice the pedestrians, and apply the brakes well before a human driver would be able to react.
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or to swerve into pedestrians walking nearby.
The AI may be able to run through a hundred different options before you have even registered the crashing semi and gotten your foot to the brake. By that time, the car already has a decent idea of the semi's possible paths and whether it needs to evade laterally or just brake, and it has begun the necessary evasion a quarter second before you would start braking, while you are still instinctively guessing at the best place to maneuver and hoping you are right. Fractions of a second can make a significant difference there. The computer may also be able to brake in the most efficient manner, reducing speed while keeping as much maneuvering room for the car as possible.

And of course, that crashing semi... with pedestrians in the area sounds like a very far-fetched scenario in the first place.

The AI may also be better at stopping when animals or children decide running into the road is a good idea, depending on the proximity of the animal/child to the car.
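To give a feel for how much those fractions of a second buy you, here's a quick back-of-the-envelope stopping-distance comparison (the reaction times and the deceleration figure are illustrative assumptions, not measured values):

# Back-of-the-envelope stopping-distance comparison; all inputs are illustrative.
def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Distance covered during the reaction time plus braking at constant deceleration."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 50 / 3.6                                      # 50 km/h expressed in m/s
human = stopping_distance(speed, reaction_s=1.5)      # assumed human reaction time
computer = stopping_distance(speed, reaction_s=0.1)   # assumed computer reaction time
print(f"human:    {human:.1f} m")     # roughly 34.6 m
print(f"computer: {computer:.1f} m")  # roughly 15.2 m

Same car, same brakes, and the computer still stops in less than half the distance just by reacting sooner.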
 
The question that was posed is actually bullshit. Driverless cars will not have AI that approaches the human capacity to think about life and death, or morality in general. The car will simply be able to solve physics problems very quickly. Even if it could think about morality, there is no guarantee that anyone will live or die in a specific situation. If it is able to differentiate between pedestrians and inanimate objects, it will be programmed to avoid the pedestrians, and trust that the car's other safety features, such as restraints and airbags, are sufficient to protect the driver.

There will likely be other safety features involved. The car will not be able to exceed the speed limit, so it would never be driving at a high rate of speed on a roadway where there is a significant chance of pedestrians being present. It likely won't have to swerve at all in the situation provided, but just notice the pedestrians, and apply the brakes well before a human driver would be able to react.
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or to swerve into pedestrians walking nearby.

Not really. As I noted, if it has the capacity to determine the difference between pedestrians and inanimate objects (or animate vehicles), it is going to be programmed to avoid the pedestrians and trust that the other safety features are sufficient to protect the driver. If it does not have that capability, the question is moot.

Yes, it's going to suck for you, in your driverless car, if a semi comes barreling down a neighborhood street at 75 mph straight for you, and you are surrounded by pedestrians, but that is why you probably won't buy a driverless car, and will instead plow into the pedestrians yourself.
 
I hope Google has auto liability insurance.

aa

I'd suspect that this would probably be rolled into homeowner's or renter's insurance rather than Google's auto policy.

We can roll any physical damages and medical payments for no-fault accidents into an HO policy. But I'm more questioning legal and tortious liability. If a Google car causes damage through either deliberate or faulty programming, why would your insurance pay anything? You are a passenger in a car driven by someone else.

aa
 
I'd suspect that this would probably be rolled into homeowner's or renter's insurance rather than Google's auto policy.

We can roll any physical damages and medical payments for no-fault accidents into an HO policy. But I'm more questioning legal and tortious liability. If a Google car causes damage through either deliberate or faulty programming, why would your insurance pay anything? You are a passenger in a car driven by someone else.

aa
Why? For the same reason you are liable for fraudulent charges made on a credit card that a private company gave to someone else in your name.
 
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or to swerve into pedestrians walking nearby.
The AI may be able to run through a hundred different options before you have even registered the crashing semi and gotten your foot to the brake. By that time, the car already has a decent idea of the semi's possible paths and whether it needs to evade laterally or just brake, and it has begun the necessary evasion a quarter second before you would start braking, while you are still instinctively guessing at the best place to maneuver and hoping you are right. Fractions of a second can make a significant difference there. The computer may also be able to brake in the most efficient manner, reducing speed while keeping as much maneuvering room for the car as possible.

And of course, that crashing semi... with pedestrians in the area sounds like a very far-fetched scenario in the first place.
I did not say it was close-fetched. But it still has to be programmed to choose something.
It does not have to be a pedestrian; it could be some other driver who got out of his broken-down car.
The AI may also be better at stopping when animals or children decide running into the road is a good idea, depending on the proximity of the animal/child to the car.
Point is, it has to make a decision about who is going to die.
 
We can roll any physical damages and medical payments for no-fault accidents into an HO policy. But I'm more questioning legal and tortious liability. If a Google car causes damage through either deliberate or faulty programming, why would your insurance pay anything? You are a passenger in a car driven by someone else.

aa
Why? For the same reason you are liable for fraudulent charges made on a credit card that a private company gave to someone else in your name.

What? I don't think you should be liable for that. If BofA gives someone else a credit card with my name on it, I say 'that's not me and I didn't make those charges' and BofA corrects it. (Is that what you're talking about? It seems I'm slow today).

aa
 
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or to swerve into pedestrians walking nearby.

Not really. As I noted, if it has the capacity to determine the difference between pedestrians and inanimate objects (or animate vehicles), it is going to be programmed to avoid the pedestrians
Not only does it have to have such capacity, it already has it.
Of course it will be programmed to avoid pedestrians; the question is what to do when avoiding them means likely death for the passenger.
 
Why? For the same reason you are liable for fraudulent charges made on a credit card that a private company gave to someone else in your name.

What? I don't think you should be liable for that. If BofA gives someone else a credit card with my name on it, I say 'that's not me and I didn't make those charges' and BofA corrects it. (Is that what you're talking about? It seems I'm slow today).

aa
I'm referring to identity theft, which can destroy your life. They now sell services to help prevent identity theft. I.e., you have to spend money to help prevent third parties from letting another third party steal your identity.
 
The AI may be able to run through a hundred different options before you have even registered the crashing semi and gotten your foot to the brake. By that time, the car already has a decent idea of the semi's possible paths and whether it needs to evade laterally or just brake, and it has begun the necessary evasion a quarter second before you would start braking, while you are still instinctively guessing at the best place to maneuver and hoping you are right. Fractions of a second can make a significant difference there. The computer may also be able to brake in the most efficient manner, reducing speed while keeping as much maneuvering room for the car as possible.

And of course, that crashing semi... with pedestrians in the area sounds like a very far-fetched scenario in the first place.
I did not say it was close-fetched. But it still has to be programmed to choose something.
It does not have to be a pedestrian; it could be some other driver who got out of his broken-down car.
The AI may also be better at stopping when animals or children decide running into the road is a good idea, depending on the proximity of the animal/child to the car.
Point is, it has to make a decision about who is going to die.
What is this, the Kobayashi Maru? The car, while analyzing the situation, can also start blaring the horn and flashing its lights to give pedestrians the most heads-up possible to move their butts.
 
Not really. As I noted, if it has the capacity to determine the difference between pedestrians and inanimate objects (or animate vehicles), it is going to be programmed to avoid the pedestrians
Not only does it have to have such capacity, it already has it.

I'm not so sure about that, but for the sake of argument, I will take it as given.

Of course it will be programmed to avoid pedestrians; the question is what to do when avoiding them means likely death for the passenger.

The car will have no programming that determines how likely death is; in fact, in most practical driving applications this is impossible to determine. It will be programmed to avoid pedestrians, and the programmers will have to trust that the other safety features of the car are sufficient to protect the driver in most situations.
 
The question that was posed is actually bullshit. Driverless cars will not have AI that approaches the human capacity to think about life and death, or morality in general. The car will simply be able to solve physics problems very quickly. Even if it could think about morality, there is no guarantee that anyone will live or die in a specific situation. If it is able to differentiate between pedestrians and inanimate objects, it will be programmed to avoid the pedestrians, and trust that the car's other safety features, such as restraints and airbags, are sufficient to protect the driver.

There will likely be other safety features involved. The car will not be able to exceed the speed limit, so it would never be driving at a high rate of speed on a roadway where there is a significant chance of pedestrians being present. It likely won't have to swerve at all in the situation provided, but just notice the pedestrians, and apply the brakes well before a human driver would be able to react.
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or to swerve into pedestrians walking nearby.

Or very quickly shift into reverse and drive in the other direction.
 
What? I don't think you should be liable for that. If BofA gives someone else a credit card with my name on it, I say 'that's not me and I didn't make those charges' and BofA corrects it. (Is that what you're talking about? It seems I'm slow today).

aa
I'm referring to identity theft, which can destroy your life. They now sell services to help prevent identity theft. I.e., you have to spend money to help prevent third parties from letting another third party steal your identity.

Ok, that does suck. But I don't think you are held liable for purchases if your identity is stolen, are you? I've also lost the parallel with liability for driverless cars.

For the record, it does appear that Google has accepted legal liability for any accidents caused by its cars. http://dailycaller.com/2015/10/12/car-companies-intend-to-accept-full-liability-for-self-driving-car-accidents/

I would imagine their intent to accept liability was one of the key components in getting their cars 'legally approved' as drivers.

aa
 