Google's Driverless Cars Legally Approved

I'm referring to identity theft, which can destroy your life. They now sell services to help prevent identity theft. I.e., you have to spend money to prevent third parties from letting another third party steal your identity.

Ok, that does suck. But I don't think you're held liable for purchases if your identity is stolen, are you? I've also lost the parallel with liability for driverless cars.
The person isn't responsible for the harm, yet is held liable.

For the record, it does appear that Google has accepted legal liability for any accidents caused by its cars. http://dailycaller.com/2015/10/12/c...ull-liability-for-self-driving-car-accidents/
Cool. Just sign this waiver please before purchasing our car. ;)
I would imagine their intent to accept liability was one of the key components in getting their cars 'legally approved' as drivers.
One major thing is that the owner has to handle the vehicle's maintenance and upkeep. Liability probably has limits. And folks, let's remember Net Neutrality here. We don't want to tell the car to take us to a nice locally owned pizzeria and end up being driven to Pizza Hut, the official pizza sponsor of Google Cars!
 
I'd suspect that this would probably be rolled into homeowner's or renter's insurance rather than Google's auto policy.

We can roll any physical damage and medical payments for no-fault accidents into an HO policy. But I'm more questioning legal and tortious liability. If a Google car causes damage through either deliberate or faulty programming, why would your insurance pay anything? You are a passenger in a car driven by someone else.


I would assume the car's owner would still have to carry car insurance as they do now. It might even be cheaper.
 
Regardless of AI capacity, in certain situations it would have to choose whether to crash into an oncoming semi that has lost control or swerve into pedestrians walking nearby.

Or very quickly put it in reverse and drive off in the other direction.
Or the programming may make the car total itself so the owner has to keep buying a new car. A kid darts in front of the car: finds the nearest tree and crashes. A cat is walking near a house: finds the nearest tree and crashes. Cloudy: finds the nearest tree and crashes.
 
We can roll any physical damage and medical payments for no-fault accidents into an HO policy. But I'm more questioning legal and tortious liability. If a Google car causes damage through either deliberate or faulty programming, why would your insurance pay anything? You are a passenger in a car driven by someone else.


I would assume the car's owner would still have to carry car insurance as they do now. It might even be cheaper.

Yes. It varies by policy, but my liability coverages are easily 50% of my total premium (you can probably verify your own on your carrier's website). It's quite a selling point: 'buy a Google car and cut your auto insurance premium in half'.

 
Let me specify my concern with robot delivery. Obviously anything can be hijacked, but an automated delivery vehicle would be much easier. It would be carrying valuable merchandise, so organized thieves have a greater motive to target it. You wouldn't need sophisticated hacking. Just something to block its path and a handheld blowtorch to melt through the tires, then pry open the cargo. As long as a robot vehicle travels with public safety as a constraint, impeding it would be trivially easy.

Amazon drones wouldn't have this problem. But I don't see a future for robots replacing trucking without Orwellian surveillance and very quick police response times.

Why can't they do that with current delivery vehicles that have a driver? It's not like the average UPS driver is armed. Such an organized attack could easily subdue a single delivery person.
 
I would assume the car's owner would still have to carry car insurance as they do now. It might even be cheaper.

Yes. It varies by policy, but my liability coverages are easily 50% of my total premium (you can probably verify your own on your carrier's website). It's quite a selling point: 'buy a Google car and cut your auto insurance premium in half'.


I think you'd still have to carry liability. But my guess is that a driverless car will be considered far more likely to follow the rules of the road and be safer than many human drivers out there.
 
I think you'd still have to carry liability. But my guess is that a driverless car will be considered far more likely to follow the rules of the road and be safer than many human drivers out there.

Slavish adherence to the rules of the road can make the Google car more dangerous, especially when most cars around it are driven by people.
 
I think you'd still have to carry liability. But my guess is that a driverless car will be considered far more likely to follow the rules of the road and be safer than many human drivers out there.

Slavish adherence to the rules of the road can make the Google car more dangerous, especially when most cars around it are driven by people.

That's not the fault of the Google car.
 
Nothing will ever replace the horse. These iron horses are doomed to failure!
Well, horses are self-driving, and look at what happened to them. :p

I think self-driving is nice and useful as an option, but I would not want to buy a car without a manual override. Especially if it's programmed to kill me.

That's not the fault of the Google car.
It is, since it does not take into account the complexities of the real world. If being in a Google car makes you more likely to be in an accident (compared to a competent human driver) because, for example, it creeps along at 55 when everybody else is doing 70-75, then there is a problem with the Google car.
 
Why can't they do that with current delivery vehicles that have a driver? It's not like the average UPS driver is armed. Such an organized attack could easily subdue a single delivery person.
I know organized crime has hijacked 18-wheelers for their cargo. My point is that it's easier and far less risky to commit a crime against property than against a human victim.
 
That's not the fault of the Google car.
It is, since it does not take into account the complexities of the real world. If being in a Google car makes you more likely to be in an accident (compared to a competent human driver) because, for example, it creeps along at 55 when everybody else is doing 70-75, then there is a problem with the Google car.

Someone doing 75 in a 55 zone gets into an accident, and it's the fault of the person doing 55? I don't think the cops investigating would see it that way.

Personally, when I drive on the expressway, I set the cruise to 70 (the MI limit). Yes, I get passed by most cars on the road. I let all the other fools fight their way through traffic. And I have seen a lot of fighting. Fuck that shit. I don't need that added anxiety.

When I got my GPS, it made me realize that the time I saved by speeding usually amounted to less than 3-4 minutes over a couple-hour drive. Not only that, but speeding significantly reduces fuel efficiency. It's not worth it.
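The arithmetic bears that out, if you grant my assumption that darting through traffic only raises your effective average speed by a few mph over just setting the cruise. A quick sketch with made-up round numbers:

```python
# Back-of-the-envelope check of the GPS observation above.
# Key assumption (mine): weaving through traffic raises your
# *effective average* speed by only a few mph, even if your
# peak speed is much higher.

def trip_minutes(distance_miles: float, avg_speed_mph: float) -> float:
    """Trip time in minutes at a given average speed."""
    return distance_miles / avg_speed_mph * 60

DISTANCE = 140  # roughly a two-hour drive at 70 mph

for effective_avg in (72, 75):
    saved = trip_minutes(DISTANCE, 70) - trip_minutes(DISTANCE, effective_avg)
    print(f"70 vs {effective_avg} mph average: saves {saved:.1f} min over {DISTANCE} mi")

# 70 vs 72 mph average: saves 3.3 min over 140 mi
# 70 vs 75 mph average: saves 8.0 min over 140 mi
```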
 
Or very quickly put it in reverse and drive off in the other direction.
Or the programming may make the car total itself so the owner has to keep buying a new car. A kid darts in front of the car: finds the nearest tree and crashes. A cat is walking near a house: finds the nearest tree and crashes. Cloudy: finds the nearest tree and crashes.
So basically the same business model as Microsoft Windows. One day you want to get to work in the morning, and you notice your car was updated overnight and there is no more Start button...
 
Not only does it have to have such capacity, it already has it.

I'm not so sure about that, but for the sake of argument, I will take it as given.

Of course it will be programmed to avoid pedestrians; the question is what to do when avoiding them means likely death for the passenger.

The car will have no programming that determines how likely death is; in fact, in most practical driving applications this is impossible to determine. It will be programmed to avoid pedestrians, and the programmers will have to trust that the other safety features of the car are sufficient to protect the driver in most situations.
But that's not how humans drive. Humans try to avoid the biggest thing on the road, like a semi that has lost control.
So, do you see the problem now?
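To make the problem concrete: weighing a runaway semi against pedestrians means the software needs numeric risk estimates plus an explicit weight trading the passenger off against bystanders. A toy sketch (every maneuver, risk number, and weight here is invented) of what that trade-off would even look like:

```python
# Toy illustration of the dilemma above -- NOT how any real vehicle
# is programmed. All maneuvers, risk estimates, and weights are invented.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_risk: float  # assumed 0..1 chance of striking a pedestrian
    occupant_risk: float    # assumed 0..1 chance of serious harm to the passenger

def score(m: Maneuver, occupant_weight: float) -> float:
    """Lower is better. occupant_weight encodes the ethical choice:
    <1.0 sacrifices the passenger first, >1.0 protects the passenger first."""
    return m.pedestrian_risk + occupant_weight * m.occupant_risk

options = [
    Maneuver("brake hard in lane (meet the semi)", pedestrian_risk=0.0, occupant_risk=0.8),
    Maneuver("swerve toward the sidewalk", pedestrian_risk=0.7, occupant_risk=0.1),
]

for weight in (0.5, 2.0):
    best = min(options, key=lambda m: score(m, weight))
    print(f"occupant_weight={weight}: choose '{best.name}'")
```

Whatever value gets picked for occupant_weight decides who gets hit, and the risk estimates feeding it are exactly what the previous poster says a car can't reliably compute.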
 
A radar- and GPS-equipped car, possibly with telemetric communications to other similarly equipped vehicles in the area, has VASTLY more information about what is about to happen than a human driver.

The human gets into the situation where a 'who to kill' decision is needed, because unforeseen hazards come into view with too little time to take avoiding action.

A driverless car can 'see' through darkness, fog, rain and snow; it knows what vehicles or pedestrians are around the corners, concealed from line of sight by buildings or trees; it knows how fast those vehicles and pedestrians are moving, and in what directions. It is vastly less likely than any human driver to get into a situation that is hard (or impossible) to get out of without someone getting hurt.
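Just to make 'telemetric communications' concrete, here is a minimal sketch of the kind of beacon such cars could broadcast (the field names are my own invention; real V2V standards such as SAE J2735 define their own message formats):

```python
# Hypothetical vehicle-to-vehicle beacon, loosely in the spirit of the
# "basic safety messages" that real V2V standards define. Every field
# name here is an assumption, not a published format.

import json
import time

def make_beacon(vehicle_id: str, lat: float, lon: float,
                heading_deg: float, speed_mps: float) -> str:
    """Serialize one position/velocity broadcast as JSON."""
    return json.dumps({
        "id": vehicle_id,
        "t": time.time(),        # timestamp of the fix
        "lat": lat,              # GPS latitude
        "lon": lon,              # GPS longitude
        "heading": heading_deg,  # direction of travel, degrees
        "speed": speed_mps,      # metres per second
    })

# A car still hidden behind a blind corner announces itself several
# times per second; receivers "see" it long before line of sight exists.
print(make_beacon("veh-42", 42.3314, -83.0458, 270.0, 24.6))
```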

We tolerate human drivers that cause absolute CARNAGE on the roads; why the FUCK would we not tolerate self driving vehicles that cause one millionth of the damage, just because they are not guaranteed to be 100% safe?

I mean, we are not talking about vaccinating children, nuclear power plants, or genetically modified food here; There isn't a huge, 'moronic irrationalist' lobby, opposed to one driverless car death every thousand years, to the point where they insist we have to stick with a death every few minutes, that must be appeased by our spineless governments.

Yet.
 
Yes, a driverless car is probably better than a human in fog, but it's still much worse on a nice sunny day. A human driver can look at a pedestrian or another driver and tell what he or she is going to do next; computers can't do that well yet.
 
A driverless car can 'see' through darkness, fog, rain and snow; it knows what vehicles or pedestrians are around the corners, concealed from line of sight by buildings or trees; it knows how fast those vehicles and pedestrians are moving, and in what directions.
It can? I thought this was still a problem.
 
Yes, a driverless car is probably better than a human in fog, but it's still much worse on a nice sunny day. A human driver can look at a pedestrian or another driver and tell what he or she is going to do next; computers can't do that well yet.

Describe a scenario where this matters.
 
It can? I thought this was still a problem.
In a manner of speaking. Radar can "see" a car in front of you even in dense fog or snow.
But we've all heard stories of a car's AI reacting to steam or fog.


Yes, a driverless car is probably better than a human in fog, but it's still much worse on a nice sunny day. A human driver can look at a pedestrian or another driver and tell what he or she is going to do next; computers can't do that well yet.

Describe a scenario where this matters.
Drunk pedestrian from Sweden.
 