Self-driving robot cars are racially biased

So about the time when it's finally programmed to recognize regular types of pedestrians, what's it going to make of Halloween Night?
 
LOL. It doesn't need to be intelligent, nor to recognise who it's avoiding. Just that a given object fits the profile of 'things to avoid'. A human driver would likely swerve to avoid a mannequin in the road - why would we need a driverless vehicle to be any more discriminating than that?

We don't need cars that can write epic poetry or pick out a bank robber from a police line up or philosophize on the meaning and value of human life. We just need cars that crash less often than ones driven by humans.

That's a MUCH lower bar than most of us want to admit. Humans are utterly shit at driving.
 
In many places seat belts are required by law, and helmets for bikers, so should there be similar requirements for some kind of highly reflective belt, stripe, design, whatever, in pedestrians' clothing? If not as a legal requirement, then at least as a factor in liability in lawsuits - driver responsibility 0% or very low if no reflectors (regardless of race). Seems logical, but I've not heard of such - is there anywhere such an approach is used?

In Denver, it is illegal to text on your phone while crossing the street.
Nothing about clothing.

If you hit a pedestrian in a crosswalk, it does not matter what they are wearing.
If you hit a pedestrian anywhere other than a crosswalk, then what they were wearing, what they were doing, how they were doing it... all matters in terms of liability. The driver has the obligation to see and avoid any pedestrian anywhere, but if dressed in black, at night, jaywalking... not likely to have much liability on the driver.
 
Driverless cars are going to wreak havoc on the personal injury law industry.

Accidents will happen less often, so there will be fewer claims and less need for the lawyers/paralegals and insurance rates will likely drop (or profits will soar). But some cases that do happen will be absolutely nuts as the lawyers fight over who is at fault when a car decides to kill its occupant or swerves to hit pedestrians, or when a hacker takes control and murders thousands all at once. Is the occupant of the vehicle at fault for not having the latest or a particular software or security patch? Is the manufacturer at fault? The programmers?
 
Self-driving cars do not identify pedestrians with optical sensors alone, but humans use optical-only sensors called "eyes". Autonomous vehicles do not have any software that allows them to do anything special with humans, as opposed to other kinds of hazards. Bilby was quite right to point out that human drivers will swerve to avoid a mannequin, but the difference is that a self-driving car will not mistake the mannequin for a human. It will simply treat it as an obstacle. So the concern in the OP about the skin complexion of pedestrians is nonsense. The concern is based on the incorrect impression that this new technology models the world like humans model it. However, humans do not use radar or lidar sensors to detect objects.
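
To make that concrete, here is a minimal sketch (entirely my own illustration, with made-up names and numbers) of how an avoidance decision can be purely geometric, so the object's appearance never enters into it:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float         # range to the object, from lidar/radar returns
    closing_speed_mps: float  # positive if the gap is shrinking
    label: str                # "pedestrian", "mannequin", "unknown", ...

def must_brake(obj: TrackedObject, max_decel_mps2: float = 6.0) -> bool:
    """Brake if we cannot stop within the remaining gap, whatever the object is."""
    if obj.closing_speed_mps <= 0:
        return False
    stopping_distance = obj.closing_speed_mps ** 2 / (2 * max_decel_mps2)
    return stopping_distance >= obj.distance_m

# The label is never consulted: a pedestrian of any complexion, a mannequin and
# a fallen branch all trigger the same response if the geometry says "collision".
print(must_brake(TrackedObject(distance_m=12.0, closing_speed_mps=13.9, label="unknown")))
```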

Therein lies the problem with the "Buck Rogers" mentality that we are on the verge of large scale deployment of fully autonomous vehicles. What this new technology should bring us is augmented human operation of vehicles. That is where its real value lies.

The airline industry has achieved enormous success in reducing the number of catastrophic accidents, and much of that has to do with the advanced automation technology that has enhanced the ability of human pilots to avoid disasters. However, pilots are much more highly trained than ordinary drivers, especially in techniques for unusual conditions that full automation would not be able to handle. In such cases, they have what is euphemistically called a Non-Normal Checklist to help them diagnose and take corrective action.

We do not know what caused the latest disaster in Ethiopia--a six-month-old Boeing 737 MAX 8 that crashed and killed everyone on board. However, there was a similar Lion Air catastrophe recently in which the cause was identified as a new anti-stall flight control system. The aircraft system took corrective action for a nonexistent problem and angled the nose sharply downward. The pilots struggled against the control column to force the nose up, but they lost that battle. They could have succeeded if they had simply switched off the anti-stall automation. In the heat of the moment, they may not have had the presence of mind to do it. It turns out that the maintenance crews may also have made an error or two in servicing the aircraft system. So this was not just an automation error. It was a cascade of machine and human malfunctions.

Self-driving cars are nowhere near as sophisticated as aircraft technology, but the general public is far more exposed to automobile technology than aircraft technology. And drivers are not as well-trained as airline pilots. They will, however, encounter the same difficulties that pilots do as automobiles become more sophisticated. That is, they will tend to rely on the technology, misunderstand what it is doing, and not know what to do when things go awry.

Bilby correctly points out that we do not need perfect automation, just automation that makes cars safer than they are now. I agree, but I don't think that will happen with self-driving vehicles. And the cost of trying to get us to that point statistically may well exceed the gains. I'm for augmented human-operated automobiles, not fully autonomous ones. The technology is not good enough to risk the rush to deployment.
 
Driverless cars are going to wreak havoc on the personal injury law industry.

Accidents will happen less often, so there will be fewer claims and less need for the lawyers/paralegals and insurance rates will likely drop (or profits will soar). But some cases that do happen will be absolutely nuts as the lawyers fight over who is at fault when a car decides to kill its occupant or swerves to hit pedestrians, or when a hacker takes control and murders thousands all at once. Is the occupant of the vehicle at fault for not having the latest or a particular software or security patch? Is the manufacturer at fault? The programmers?

The law is an ass, so you are probably right, though there's no need for any of that - it's not as though the law hasn't already encountered occasional deaths due to automated equipment.

People have been killed and injured by automatic machinery since such machinery has existed.
 
Self-driving cars do not identify pedestrians with optical sensors alone, but humans use optical-only sensors called "eyes". Autonomous vehicles do not have any software that allows them to do anything special with humans, as opposed to other kinds of hazards. Bilby was quite right to point out that human drivers will swerve to avoid a mannequin, but the difference is that a self-driving car will not mistake the mannequin for a human.
That's not a difference. A human will almost certainly mistake a mannequin for a human too - the difference being that they will change their story after the event once more information has been processed.
It will simply treat it as an obstacle. So the concern in the OP about the skin complexion of pedestrians is nonsense. The concern is based on the incorrect impression that this new technology models the world like humans model it. However, humans do not use radar or lidar sensors to detect objects.

Therein lies the problem with the "Buck Rogers" mentality that we are on the verge of large scale deployment of fully autonomous vehicles. What this new technology should bring us is augmented human operation of vehicles. That is where its real value lies.
Nonsense. Adding humans to the loop makes them less safe, and is unnecessary and stupid.
The airline industry has achieved enormous success in reducing the number of catastrophic accidents, and much of that has to do with the advanced automation technology that has enhanced the ability of human pilots to avoid disasters. However, pilots are much more highly trained than ordinary drivers, especially in techniques for unusual conditions that full automation would not be able to handle. In such cases, they have what is euphemistically called a Non-Normal Checklist to help them diagnose and take corrective action.
And pilot errors are now the leading cause of commercial aviation accidents. We would be safer without them - despite their being far more highly trained than car drivers - and they persist due to a false perception that they enhance safety.
We do not know what caused the latest disaster in Ethiopia--a six-month-old Boeing 737 MAX 8 that crashed and killed everyone on board. However, there was a similar Lion Air catastrophe recently in which the cause was identified as a new anti-stall flight control system. The aircraft system took corrective action for a nonexistent problem and angled the nose sharply downward. The pilots struggled against the control column to force the nose up, but they lost that battle. They could have succeeded if they had simply switched off the anti-stall automation. In the heat of the moment, they may not have had the presence of mind to do it. It turns out that the maintenance crews may also have made an error or two in servicing the aircraft system. So this was not just an automation error. It was a cascade of machine and human malfunctions.
As are most such incidents. We see here that adding highly trained humans to the mix did not prevent or mitigate the accident. Yet we persist in having them there at great expense, when their only value is to provide a scapegoat for engineering failures that they cannot defend us against.
Self-driving cars are nowhere near as sophisticated as aircraft technology, but the general public is far more exposed to automobile technology than aircraft technology. And drivers are not as well-trained as airline pilots. They will, however, encounter the same difficulties that pilots do as automobiles become more sophisticated. That is, they will tend to rely on the technology, misunderstand what it is doing, and not know what to do when things go awry.
Which is why it's pointless to give them any power to act. They are less likely to make things better than they are to make things worse.
Bilby correctly points out that we do not need perfect automation, just automation that makes cars safer than they are now. I agree, but I don't think that will happen with self-driving vehicles. And the cost of trying to get us to that point statistically may well exceed the gains. I'm for augmented human-operated automobiles, not fully autonomous ones. The technology is not good enough to risk the rush to deployment.

I am sure that we are already well past that point; And I am convinced that you have a much higher opinion of human abilities than is deserved. The bar is far lower than you seem to think, not because automation is incredibly good, but because humans are incredibly bad. Car crashes are routine and unremarkable events - largely because airbags, seat belts, crumple zones etc. have made even high-speed collisions survivable, while low-speed impacts rarely cause injuries at all.

We have engineered our vehicles around the assumption that they will frequently collide with each other, or with pedestrians, or with stationary obstacles such as trees and street furniture. Car crashes are a normal part of our lives to the point where we don't consider them newsworthy. Airline crashes, by comparison, are vanishingly rare, and a crash on another continent will make headlines in the local news.

Automated vehicles could easily make car crashes newsworthy - indeed, it's already world news if an automated car does crash. That skews our risk perception (lots of people think flying is dangerous for this reason); But it's a good thing - it means both that crashes are rare, and that each one leads to a detailed investigation and the adoption of measures to prevent any recurrence.
 
Well, the key point here is that humans work best in environments that are predictable relative to human interactions. Robots do not have anything approaching human understanding of how to interact with other vehicles. They are not as good as people at recognizing unsafe behavior in drivers and pedestrians.

I think your understanding of AI is a few decades out of date. It's a very rapidly evolving field, and things that were science fiction twenty years ago are commonplace today.

I wasn't aware that you had expertise in robotics and were able to assess my competence in that subject.

I owe you an apology; Your gross error here is not, I now realize, due to a lack of understanding of the state of the art in automation; But instead is due to a massive overestimation of the abilities of humans, particularly with regards to simple routine activities like driving an automobile - or perhaps to a massive overestimation of the difficulty of driving a vehicle. Or perhaps a bit of both.

Just because modern 'AI' is shit at mimicking humans and at doing some tasks at which humans excel, does NOT imply that it is not already FAR superior to humans at driving cars.

What a human can do while sending a text message, smoking a cigarette, and disciplining the unruly toddler in the seat behind them, a specialist system that does not get distracted can do FAR better.
 
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
And of course planes are irrelevant here; they don't have to deal with humans and trash on the road. Well, planes have to deal with pilots, and it seems they regularly fail at that - the last one to do so was the Indonesian B737.
 
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.

Sure. And if you can find a way to prevent drivers from being distracted, good luck to you.

But there are ~37,000 dead people, ~2,200,000 injuries, and ~5,400,000 crashes every year just in the USA (source), which say you cannot stop drivers from being distracted (or tired, or drunk, or drugged, or angry, or emotionally unstable, or ...)

So frankly it doesn't matter - replacing ALL drivers with "AI" will save lives, reduce injuries, and prevent billions of dollars of property damage, every year. Even if "AI" is only as good as the average, and still kills at the (implausible) rate of ten people per billion vehicle miles traveled - (by the way, that's roughly a tenth of the fatality rate we saw from human-driven vehicles as recently as the late 1940s).
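
A quick back-of-envelope check of those numbers (the ~3.2 trillion annual vehicle-miles figure is my own assumption, not from the source above):

```python
# Rough sanity check of the fatality rate implied by the figures quoted above.
deaths_per_year = 37_000
vmt_per_year = 3.2e12            # ~3.2 trillion vehicle miles driven in the USA per year (assumption)
rate_per_billion_miles = deaths_per_year / (vmt_per_year / 1e9)
print(f"{rate_per_billion_miles:.1f} deaths per billion vehicle miles")   # ~11.6
```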

Nobody cares about the good drivers (who are a FAR smaller subset of the total than most people like to think, and who are themselves only good drivers most of the time). They are not killing anyone (except in extremely unusual circumstances), so replacing them with an "AI" that also doesn't kill anyone (except in extremely unusual circumstances) will not affect the accident statistics much at all.

The issue is to get the BAD drivers off the road - and they are the majority, as anyone who has watched other people driving can attest.
 
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
And of course planes are irrelevant here; they don't have to deal with humans and trash on the road. Well, planes have to deal with pilots, and it seems they regularly fail at that - the last one to do so was the Indonesian B737.

No, the last one is Ethiopian Airlines, but that is exactly why AI can be more dangerous than people realize. Because there is an unrealistic understanding in the public of what AI is capable of, people tend to rely on it without realizing that these machines are not really intelligent in a human sense. They are dreadfully underequipped to deal with emergency situations that require human analysis. In the case of automobiles, there are just too many possible ways for traffic conditions to go wrong in a way that human programmers are unaware of. Human brains are evolved to deal with chaotic interactions of the sort that occur on the road. Machines that simulate intelligent behavior are much more limited.

So it turns out that pilots have been experiencing problems with the anti-stall sensor software technology since Airbus introduced it. The Boeing Max aircraft are just the latest version of that problem, but even Airbus pilots have experienced incidents in which they had to fight to lift the nose because the autopilot program misinterpreted sensor readings. That is not in a fully autonomous machine, but in a very sophisticated machine with human pilots. Ordinary drivers will be even more careless about trusting automation on the road, because they are not as well-trained or prepared as pilots. Drivers don't spend hours in driving simulators that train them to react to hazardous conditions. Pilots do.
 
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
And of course planes are irrelevant here; they don't have to deal with humans and trash on the road. Well, planes have to deal with pilots, and it seems they regularly fail at that - the last one to do so was the Indonesian B737.

No, the last one is Ethiopian Airlines, but that is exactly why AI can be more dangerous than people realize. Because there is an unrealistic understanding in the public of what AI is capable of, people tend to rely on it without realizing that these machines are not really intelligent in a human sense. They are dreadfully underequipped to deal with emergency situations that require human analysis. In the case of automobiles, there are just too many possible ways for traffic conditions to go wrong in a way that human programmers are unaware of. Human brains are evolved to deal with chaotic interactions of the sort that occur on the road. Machines that simulate intelligent behavior are much more limited.

So it turns out that pilots have been experiencing problems with the anti-stall sensor software technology since Airbus introduced it. The Boeing Max aircraft are just the latest version of that problem, but even Airbus pilots have experienced incidents in which they had to fight to lift the nose because the autopilot program misinterpreted sensor readings. That is not in a fully autonomous machine, but in a very sophisticated machine with human pilots. Ordinary drivers will be even more careless about trusting automation on the road, because they are not as well-trained or prepared as pilots. Drivers don't spend hours in driving simulators that train them to react to hazardous conditions. Pilots do.
My post was written the day after the Ethiopian crash, when it was not yet clear that MCAS was to blame.
And stop pretending that MCAS was not a new, untested and utterly faulty system which should never have been allowed on the plane. The FAA is under investigation.
 
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
And of course planes are irrelevant here; they don't have to deal with humans and trash on the road. Well, planes have to deal with pilots, and it seems they regularly fail at that - the last one to do so was the Indonesian B737.

No, the last one is Ethiopian Airlines, but that is exactly why AI can be more dangerous than people realize. Because there is an unrealistic understanding in the public of what AI is capable of, people tend to rely on it without realizing that these machines are not really intelligent in a human sense. They are dreadfully underequipped to deal with emergency situations that require human analysis. In the case of automobiles, there are just too many possible ways for traffic conditions to go wrong in a way that human programmers are unaware of. Human brains are evolved to deal with chaotic interactions of the sort that occur on the road. Machines that simulate intelligent behavior are much more limited.

So it turns out that pilots have been experiencing problems with the anti-stall sensor software technology since Airbus introduced it. The Boeing Max aircraft are just the latest version of that problem, but even Airbus pilots have experienced incidents in which they had to fight to lift the nose because the autopilot program misinterpreted sensor readings. That is not in a fully autonomous machine, but in a very sophisticated machine with human pilots. Ordinary drivers will be even more careless about trusting automation on the road, because they are not as well-trained or prepared as pilots. Drivers don't spend hours in driving simulators that train them to react to hazardous conditions. Pilots do.

Unlike aircraft pilots, a car driver (human or artificial) can always react to a situation they cannot handle by just stopping.

This is almost never going to result in any kind of 'at fault' collision. Particularly if the other vehicles involved are smart enough to do the same.

Perhaps the only exception to this would be a railway level crossing. But these are very few and far between, and very closely regulated and clearly marked. It takes a human level of lack of concentration to enter such a crossing such that exit is not assured.

Bearing in mind that the goal is only to be better than human drivers, I am confident that the strategy of stopping when confused would render automated vehicles safer than human piloted ones - indeed, it is a strategy that could be adopted to great success amongst humans, if only we could reprogram them to be less cocky.
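
Something like this toy policy is all that "stopping when confused" needs to mean (the threshold and names here are made up purely for illustration, not any vendor's actual logic):

```python
def choose_maneuver(perception_confidence: float, speed_mps: float) -> str:
    """Degrade gracefully instead of pressing on when the world model is uncertain."""
    if perception_confidence >= 0.9:   # threshold is illustrative only
        return "continue"              # normal driving
    if speed_mps > 2.0:
        return "slow_and_pull_over"    # controlled deceleration to a safe stop
    return "hold_position"             # already (nearly) stopped: stay put

print(choose_maneuver(0.55, speed_mps=15.0))   # -> "slow_and_pull_over"
```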
 
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
And of course planes are irrelevant here; they don't have to deal with humans and trash on the road. Well, planes have to deal with pilots, and it seems they regularly fail at that - the last one to do so was the Indonesian B737.

No, the last one is Ethiopian Airlines, but that is exactly why AI can be more dangerous than people realize. Because there is an unrealistic understanding in the public of what AI is capable of, people tend to rely on it without realizing that these machines are not really intelligent in a human sense. They are dreadfully underequipped to deal with emergency situations that require human analysis. In the case of automobiles, there are just too many possible ways for traffic conditions to go wrong in a way that human programmers are unaware of. Human brains are evolved to deal with chaotic interactions of the sort that occur on the road. Machines that simulate intelligent behavior are much more limited.

So it turns out that pilots have been experiencing problems with the anti-stall sensor software technology since Airbus introduced it. The Boeing Max aircraft are just the latest version of that problem, but even Airbus pilots have experienced incidents in which they had to fight to lift the nose because the autopilot program misinterpreted sensor readings. That is not in a fully autonomous machine, but in a very sophisticated machine with human pilots. Ordinary drivers will be even more careless about trusting automation on the road, because they are not as well-trained or prepared as pilots. Drivers don't spend hours in driving simulators that train them to react to hazardous conditions. Pilots do.

Unlike aircraft pilots, a car driver (human or artificial) can always react to a situation they cannot handle by just stopping.

This is almost never going to result in any kind of 'at fault' collision. Particularly if the other vehicles involved are smart enough to do the same.

"At fault" is irrelevant. Slamming the breaks everytime their is perceptual uncertainty within an AI system will result in many more collisions and deaths. Insurance rules about fault have minimal relevance to what does or even should rationally happen in many real world driving situation. In high congestion situations, vehicles are almost never far enough apart to allow for safe stopping speeds and there is no chance of that changing. Thus, sudden stops due to false alarms will always cause more accidents and deaths. Thus, AI cars that are calibrated to be overly cautious and just stop under uncertainty will cause more accidents and injuries, including to their own passengers who will not be expecting a stop b/c their superior intelligence to any AI car will tell them there is no danger.
 
Unlike aircraft pilots, a car driver (human or artificial) can always react to a situation they cannot handle by just stopping.

This is almost never going to result in any kind of 'at fault' collision. Particularly if the other vehicles involved are smart enough to do the same.

"At fault" is irrelevant. Slamming the breaks everytime their is perceptual uncertainty within an AI system will result in many more collisions and deaths. Insurance rules about fault have minimal relevance to what does or even should rationally happen in many real world driving situation. In high congestion situations, vehicles are almost never far enough apart to allow for safe stopping speeds and there is no chance of that changing. Thus, sudden stops due to false alarms will always cause more accidents and deaths. Thus, AI cars that are calibrated to be overly cautious and just stop under uncertainty will cause more accidents and injuries, including to their own passengers who will not be expecting a stop b/c their superior intelligence to any AI car will tell them there is no danger.

Sudden stops in congested traffic are already common - and a common cause of crashes.

The safe stopping distance for an autonomous vehicle is FAR smaller than for a human piloted vehicle, so we should expect that such crashes will become less common as autonomous vehicles become a greater proportion of all traffic.

And confusion leading to a stop is not something that would occur as a matter of routine - these events would be rare even for current autonomous vehicles, just as situations where the autopilot of a modern jetliner disengages and hands back control to human pilots (without the pilots commanding it) are rare.
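
A rough illustration of how much the reaction delay alone costs (my own back-of-envelope numbers, not measured figures for any particular vehicle):

```python
def stopping_distance_m(speed_mps: float, reaction_s: float, decel_mps2: float = 7.0) -> float:
    # distance covered during the reaction delay + braking distance (v^2 / 2a)
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

v = 27.8  # ~100 km/h
print(f"human (~1.5 s reaction):     {stopping_distance_m(v, 1.5):.0f} m")   # ~97 m
print(f"automated (~0.1 s reaction): {stopping_distance_m(v, 0.1):.0f} m")   # ~58 m
```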
 
Unlike aircraft pilots, a car driver (human or artificial) can always react to a situation they cannot handle by just stopping.

This is almost never going to result in any kind of 'at fault' collision. Particularly if the other vehicles involved are smart enough to do the same.

"At fault" is irrelevant. Slamming the breaks everytime their is perceptual uncertainty within an AI system will result in many more collisions and deaths. Insurance rules about fault have minimal relevance to what does or even should rationally happen in many real world driving situation. In high congestion situations, vehicles are almost never far enough apart to allow for safe stopping speeds and there is no chance of that changing. Thus, sudden stops due to false alarms will always cause more accidents and deaths. Thus, AI cars that are calibrated to be overly cautious and just stop under uncertainty will cause more accidents and injuries, including to their own passengers who will not be expecting a stop b/c their superior intelligence to any AI car will tell them there is no danger.

Sudden stops in congested traffic are already common - and a common cause of crashes.

The safe stopping distance for an autonomous vehicle is FAR smaller than for a human piloted vehicle, so we should expect that such crashes will become less common as autonomous vehicles become a greater proportion of all traffic.

You have argued that AI cars can avoid crashes simply by stopping anytime there is uncertainty. That would massively increase the number of false alarm stops in the middle of intersections, lane changes, and everywhere else on the road where such sudden stops lead to accidents. The world is an infinitely complex place filled with unpredictable uncertainty and infinite combinations of patterns in the visual field that AI will not be programmed to anticipate. If 100% of vehicles were AI, then such a low threshold for false alarm stops would not be a problem, except for all the whiplash caused to the passengers. But in any world with both AI and human driven cars, this low threshold of stopping for AI cars will cause more accidents than if there were no AI cars. Cars stopping and slowing down when there is no cause for it is a massive contributor to accidents.

And your emphasis on "no fault" accidents is telling. AI cars will be programmed to reduce liability to the manufacturer, not to minimize the probability of people getting hurt. IOW, they will be programmed to avoid being "at fault", not to avoid accidents, which are often contradictory goals. Often the response that will most reduce one being "at fault" will increase the odds of an accident. For example, there are millions of instances per year where a car ahead suddenly stops and the car behind must stop, but if it stops too quickly then the odds of getting hit by a third car behind it increase. The algorithm to reduce "at fault" would be to stop as quickly as possible. But the algorithm that human drivers use is to avoid any type of collision, so they brake in a way to balance the odds of hitting the guy in front (and being at fault) with avoiding getting hit from behind, which would not be their fault but still cause harm to passengers of both cars.
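
To illustrate that balancing act, here's a toy sketch (entirely my own construction, with made-up numbers - not anyone's actual algorithm): brake hard enough to stop within the gap ahead, but no harder, so the car behind gets as much time as possible to react.

```python
def choose_decel(speed_mps: float, gap_ahead_m: float,
                 min_decel: float = 3.0, max_decel: float = 8.0) -> float:
    """Pick the gentlest deceleration that still stops within the available gap."""
    needed = speed_mps ** 2 / (2 * gap_ahead_m)   # deceleration just sufficient to stop in time
    return min(max(needed, min_decel), max_decel)

print(choose_decel(speed_mps=20.0, gap_ahead_m=40.0))   # -> 5.0 (m/s^2)
```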
 
Sudden stops in congested traffic are already common - and a common cause of crashes.

The safe stopping distance for an autonomous vehicle is FAR smaller than for a human piloted vehicle, so we should expect that such crashes will become less common as autonomous vehicles become a greater proportion of all traffic.

You have argued that AI cars can avoid crashes simply by stopping anytime there is uncertainty.
Yes.
That would massively increase the number of false alarm stops in the middle of intersections, lane changes, and everywhere else on the road where such sudden stops lead to accidents.
So you say. But I disagree. Do you have a good reason to imagine that such high levels of uncertainty would be common?
The world is an infinitely complex place filled with unpredictable uncertainty and infinite combinations of patterns in the visual field that AI will not be programmed to anticipate.
Indeed. But roads are not. And autonomous vehicles are not dependent on visual input alone. Yet another way that they can easily be better than humans.
If 100% of vehicles were AI, then such a low threshold for false alarm stops would not be a problem, except for all the whiplash caused to the passengers.
If we assume that these stops can only imply instant and full application of the brakes. Which is an unwarranted assumption. Bear in mind that this suggestion is one that I introduced as a counterpoint to the question of what aircraft autopilots can do. A car can stop just about anywhere, while a plane cannot land except at a suitable airfield.

So an autopilot must hand control to a pilot, if confused (eg due to a sensor failure). An autonomous car (or indeed one driven by a human) can simply pull over.

Nothing about that argument says that a car must slam on the brakes while traveling at speed - that's your strawman.
But in any world with both AI and human driven cars, this low threshold of stopping for AI cars will cause more accidents than if there were no AI cars.
I disagree.
Cars stopping and slowing down when there is no cause for it is a massive contributor to accidents.
And would occur less often if those cars were not being driven by humans. Humans are AWFUL at driving.
And your emphasis on "no fault" accidents is telling. AI cars will be programmed to reduce liability to the manufacturer, not to minimize the probability of people getting hurt.
Human drivers minimise neither.
IOW, they will be programmed to avoid being "at fault", not to avoid accidents, which are often contradictory goals.
But far more often, are complementary goals.
Often the response that will most reduce one being "at fault" will increase the odds of an accident. For example, there are millions of instances per year where a car ahead suddenly stops and the car behind must stop, but if it stops too quickly then the odds of getting hit by a third car behind it increase. The algorithm to reduce "at fault" would be to stop as quickly as possible. But the algorithm that human drivers use is to avoid any type of collision, so they brake in a way to balance the odds of hitting the guy in front (and being at fault) with avoiding getting hit from behind, which would not be their fault but still cause harm to passengers of both cars.
Humans are FAR more likely to get into such a situation, and not likely to actually handle it any better than autonomous vehicles once in it.

You, like almost everyone, seem to have a faith in the abilities of human drivers that is completely at odds with observed reality.

Humans are shit drivers. Even the best human drivers are often shit; The average human is usually shit; And there are plenty of humans who are almost always shit.

Autonomous cars - even today's early models - are far better. Perfection is not needed.
 