In many places seat belts are required by law, as are helmets for bikers, so should there be a similar requirement for some kind of highly reflective belt, stripe, or design in pedestrian clothing? If not as a legal requirement, then at least as a factor in liability lawsuits: driver responsibility zero or very low if the pedestrian wore no reflectors (regardless of race). Seems logical, but I've not heard of such a rule - is this approach used anywhere?
Driverless cars are going to wreak havoc on the personal injury law industry.
Accidents will happen less often, so there will be fewer claims, less need for lawyers and paralegals, and insurance rates will likely drop (or profits will soar). But some of the cases that do happen will be absolutely nuts, as the lawyers fight over who is at fault when a car decides to kill its occupant or swerves to hit pedestrians, or when a hacker takes control and murders thousands all at once. Is the occupant of the vehicle at fault for not having the latest software or a particular security patch? Is the manufacturer at fault? The programmers?
That's not a difference. A human will almost certainly mistake a mannequin for a human too - the difference being that they will change their story after the event, once more information has been processed.

Self-driving cars do not identify pedestrians with optical sensors alone, but humans use optical-only sensors called "eyes". Autonomous vehicles do not have any software that allows them to do anything special with humans, as opposed to other kinds of hazards. Bilby was quite right to point out that human drivers will swerve to avoid a mannequin, but the difference is that a self-driving car will not mistake the mannequin for a human.
Nonsense. Adding humans to the loop makes them less safe, and is unnecessary and stupid.

It will simply treat it as an obstacle. So the concern in the OP about the skin complexion of pedestrians is nonsense. The concern is based on the incorrect impression that this new technology models the world the way humans model it. However, humans do not use radar or lidar sensors to detect objects.
Therein lies the problem with the "Buck Rogers" mentality that we are on the verge of large scale deployment of fully autonomous vehicles. What this new technology should bring us is augmented human operation of vehicles. That is where its real value lies.
And pilot errors are now the leading cause of commercial aviation accidents. We would be safer without them - despite their being far more highly trained than car drivers - and they persist due to a false perception that they enhance safety.

The airline industry has achieved enormous success in reducing the number of catastrophic accidents, and much of that has to do with the advanced automation technology that has enhanced the ability of human pilots to avoid disasters. However, pilots are much more highly trained than ordinary drivers, especially in techniques for unusual conditions that full automation would not be able to handle. In such cases, they have what is euphemistically called a Non-Normal Checklist to help them diagnose and take corrective action.
As are most such incidents. We see here that adding highly trained humans to the mix did not prevent or mitigate the accident. Yet we persist in having them there at great expense, when their only value is to provide a scapegoat for engineering failures that they cannot defend us against.

We do not know what caused the latest disaster in Ethiopia - a six-month-old Boeing 737 MAX 8 that crashed and killed all aboard. However, there was a similar Lion Air catastrophe recently in which the cause was identified as a new anti-stall flight control system. The system took corrective action for a nonexistent problem and angled the nose sharply downward. The pilots struggled against the control column to force the nose up, but they lost that battle. They could have succeeded if they had simply switched off the anti-stall automation; in the heat of the moment, they may not have had the presence of mind to do it. It also turns out that the maintenance crews may have made an error or two in servicing the aircraft system. So this was not just an automation error. It was a cascade of machine and human malfunctions.
Which is why it's pointless to give them any power to act. They are less likely to make things better than they are to make things worse.

Self-driving cars are nowhere near as sophisticated as aircraft technology, but the general public is far more exposed to automobile technology than aircraft technology. And drivers are not as well-trained as airline pilots. They will, however, encounter the same difficulties that pilots do as automobiles become more sophisticated. That is, they will tend to rely on the technology, misunderstand what it is doing, and not know what to do when things go awry.
Bilby correctly points out that we do not need perfect automation, just automation that makes cars safer than they are now. I agree, but I don't think that self-driving vehicles will get us there, and the cost of trying may well exceed the statistical gains. I'm for augmented human-operated automobiles, not fully autonomous ones. The technology is not good enough to risk the rush to deployment.
Well, the key point here is that humans work best in environments that are predictable relative to human interactions. Robots do not have anything approaching human understanding of how to interact with other vehicles. They are not as good as people at recognizing unsafe behavior in drivers and pedestrians.
I think your understanding of AI is a few decades out of date. It's a very rapidly evolving field, and things that were science fiction twenty years ago are commonplace today.
I wasn't aware that you had expertise in robotics and were able to assess my competence in that subject.
Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
And of course planes are irrelevant here; they don't have to deal with humans and trash on the road. Well, planes do have to deal with pilots, and it seems they regularly fail at that - the latest to do so being the Indonesian B737.
My post was written the day after the Ethiopian crash, when it was not yet clear that MCAS was to blame.

Well, AI is better than a distracted human driver, but it's far behind drivers who are not distracted.
No, the last one is Ethiopian Airlines - but that is exactly why AI can be more dangerous than people realize. Because the public has an unrealistic understanding of what AI is capable of, people tend to rely on it without realizing that these machines are not really intelligent in a human sense. They are dreadfully underequipped to deal with emergency situations that require human analysis. In the case of automobiles, there are just too many ways for traffic conditions to go wrong that human programmers are unaware of. Human brains evolved to deal with chaotic interactions of the sort that occur on the road. Machines that simulate intelligent behavior are much more limited.
So it turns out that pilots have been experiencing problems with the anti-stall sensor software technology since Airbus introduced it. The Boeing Max aircraft are just the latest version of that problem, but even Airbus pilots have experienced incidents in which they had to fight to lift the nose because the autopilot program misinterpreted sensor readings. That is not in a fully autonomous machine, but in a very sophisticated machine with human pilots. Ordinary drivers will be even more careless about trusting automation on the road, because they are not as well-trained or prepared as pilots. Drivers don't spend hours in driving simulators that train them to react to hazardous conditions. Pilots do.
Unlike aircraft pilots, a car driver (human or artificial) can always react to a situation they cannot handle by just stopping.
This is almost never going to result in any kind of 'at fault' collision. Particularly if the other vehicles involved are smart enough to do the same.
"At fault" is irrelevant. Slamming the brakes every time there is perceptual uncertainty within an AI system will result in many more collisions and deaths. Insurance rules about fault have minimal relevance to what does, or even should rationally, happen in many real-world driving situations. In high-congestion situations, vehicles are almost never far enough apart to allow for safe stopping distances, and there is no chance of that changing. Thus, sudden stops due to false alarms will always cause more accidents and deaths, and AI cars that are calibrated to be overly cautious and just stop under uncertainty will cause more accidents and injuries - including to their own passengers, who will not be expecting the stop because their intelligence, superior to any AI car's, tells them there is no danger.
Sudden stops in congested traffic are already common - and a common cause of crashes.
The safe stopping distance for an autonomous vehicle is FAR smaller than for a human piloted vehicle, so we should expect that such crashes will become less common as autonomous vehicles become a greater proportion of all traffic.
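The stopping-distance claim can be illustrated with a back-of-the-envelope calculation. The reaction times and deceleration below are illustrative assumptions, not measured figures (roughly 1.5 s is a commonly cited human perception-reaction time; an automated system reacting in 0.2 s is a hypothetical):

```python
# Sketch: why reaction time dominates stopping distance at city speeds.
# All numbers are illustrative assumptions, not measured values.

def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Distance covered during the reaction time, plus braking distance."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 13.9  # ~50 km/h, in m/s
human = stopping_distance(speed, reaction_s=1.5)    # assumed human reaction
machine = stopping_distance(speed, reaction_s=0.2)  # assumed machine reaction

print(f"human:   {human:.1f} m")
print(f"machine: {machine:.1f} m")
```

Under these assumptions the human needs roughly twice the distance, entirely because of the extra distance covered before the brakes are even applied; the braking phase itself is identical.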
You have argued that AI cars can avoid crashes simply by stopping anytime there is uncertainty.
So you say. But I disagree. Do you have a good reason to imagine that such high levels of uncertainty would be common?

That would massively increase the number of false-alarm stops in the middle of intersections, lane changes, and everywhere else on the road where such sudden stops lead to accidents.
Indeed. But roads are not. And autonomous vehicles are not dependent on visual input alone - yet another way that they can easily be better than humans.

The world is an infinitely complex place filled with unpredictable uncertainty and infinite combinations of patterns in the visual field that AI will not be programmed to anticipate.
Only if we assume that these stops can mean nothing but instant and full application of the brakes - which is an unwarranted assumption. Bear in mind that this suggestion is one I introduced as a counterpoint to the question of what aircraft autopilots can do. A car can stop just about anywhere, while a plane cannot land except at a suitable airfield.

If 100% of vehicles were AI, then such a low threshold for false-alarm stops would not be a problem, except for all the whiplash caused to the passengers.
I disagree.

But in any world with both AI and human-driven cars, this low threshold of stopping for AI cars will cause more accidents than if there were no AI cars.
And would occur less often if those cars were not being driven by humans. Humans are AWFUL at driving.

Cars stopping and slowing down when there is no cause for it is a massive contributor to accidents.
Human drivers minimise neither.

And your emphasis on "no fault" accidents is telling. AI cars will be programmed to reduce liability to the manufacturer, not to minimize the probability of people getting hurt.
But far more often, they are complementary goals.

IOW, they will be programmed to avoid being "at fault", not to avoid accidents - and those are often contradictory goals.
Humans are FAR more likely to get into such a situation, and not likely to handle it any better than autonomous vehicles once in it.

Often the response that most reduces one's chance of being "at fault" will increase the odds of an accident. For example, there are millions of instances per year where the car ahead suddenly stops and the car behind must stop, but if it stops too quickly, the odds of being hit by a third car behind it increase. The algorithm to minimize being "at fault" would be to stop as quickly as possible. But the algorithm human drivers use is to avoid any type of collision, so they brake in a way that balances the odds of hitting the car in front (and being at fault) against the odds of being hit from behind, which would not be their fault but would still harm the passengers of both cars.
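The fault-vs-harm distinction in that example can be sketched as a toy cost model. Everything here - the risk curves, the numbers, the function names - is invented purely for illustration of the argument, not taken from any real vehicle's control software:

```python
# Toy model of the braking trade-off: braking harder lowers the chance
# of hitting the car ahead but raises the chance of being rear-ended.
# Both risk curves below are invented placeholders.

def front_collision_risk(decel):
    """Risk of hitting the car ahead falls as we brake harder."""
    return max(0.0, 1.0 - decel / 8.0)

def rear_collision_risk(decel):
    """Risk of being rear-ended rises (steeply) as we brake harder."""
    return min(1.0, (decel / 8.0) ** 2)

def fault_minimising_decel(candidates):
    """'Avoid being at fault': only the front collision counts."""
    return min(candidates, key=front_collision_risk)

def harm_minimising_decel(candidates):
    """'Avoid any collision': weigh both risks together."""
    return min(candidates,
               key=lambda d: front_collision_risk(d) + rear_collision_risk(d))

decels = [2.0, 4.0, 6.0, 8.0]  # candidate braking rates in m/s^2
print(fault_minimising_decel(decels))  # picks the hardest braking: 8.0
print(harm_minimising_decel(decels))   # picks a moderate rate: 4.0
```

Under these made-up curves, the fault-minimising policy always brakes as hard as possible, while the total-harm-minimising policy chooses a moderate deceleration - which is exactly the divergence the post describes.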