
Self driving robot cars are racially biased

They didn't use systems taken from actual self driving cars b/c the manufacturers refuse to allow independent tests. No self driving car should be allowed anywhere near public roads until they do and until there has been extensive highly replicated independent research that the manufacturers have no influence over.
I don't know why we would insist on such high standards for AI driven cars. The cars driven by people go through much less rigor.

Human systems have gone through hundreds of thousands of years of constant testing with poor "designs" being "scrapped". Nothing in AI is within lightyears of coming close to having the perceptual, interpretive, and inferential power of humans.
 
They didn't use systems taken from actual self driving cars b/c the manufacturers refuse to allow independent tests. No self driving car should be allowed anywhere near public roads until they do and until there has been extensive highly replicated independent research that the manufacturers have no influence over.
I don't know why we would insist on such high standards for AI driven cars. The cars driven by people go through much less rigor.

Human systems have gone through hundreds of thousands of years of constant testing with poor "designs" being "scrapped". Nothing in AI is within lightyears of coming close to having the perceptual, interpretive, and inferential power of humans.
This.
And, not only do humans have generation-tested reflexes for avoiding impacts, sometimes you have to say fuck it. Because sometimes stopping or swerving to avoid the bag of trash, the mattress, or the squirrel, in traffic, would be more dangerous than hitting it.
But who is going to look forward to stating, in court, that the car was programmed to choose to hit the obstacle?
 
They didn't use systems taken from actual self driving cars b/c the manufacturers refuse to allow independent tests. No self driving car should be allowed anywhere near public roads until they do and until there has been extensive highly replicated independent research that the manufacturers have no influence over.
I don't know why we would insist on such high standards for AI driven cars. The cars driven by people go through much less rigor.

Human systems have gone through hundreds of thousands of years of constant testing with poor "designs" being "scrapped". Nothing in AI is within lightyears of coming close to having the perceptual, interpretive, and inferential power of humans.
I disagree about a lot of the 'scrapping' that happens these days, especially if we consider drivers as the design. There are shitty drivers all over the roads (literally in both senses). Our current system doesn't make 'scrapping' them (taking away their actual ability to own/drive a vehicle) viable or realistic. This is especially true because most of our cities lack the infrastructure to live car-free (Oh, how I loved living in Germany!).

As a bicyclist and motorcyclist who mostly drives only when the weather doesn't allow me to safely be on 2 wheels, I see some of the worst of it on a daily basis. Hell, a good start to improving the human drivers would be to get rid of cars with automatic transmissions. :D AI is very close to being as good as "I" now, IMO.
 
Aren't self-driving cars equipped with some form of radar to navigate with? I drive a truck for a living, and they have anti-collision radar that is rather sensitive.

The "science" in the article was a little but by proxy and doesn't directly support the headline. They didn't use actual self driving cars and actual pedestrians.

It would seem a bit silly to me to design a car that didn't avoid every moving object it could detect.

So you reckon a car that slams on the brakes on a freeway to avoid a leaf or a plastic bag that is blowing across the carriageway is a better design than one that recognises these things as safe to ignore, and ignores them?

This seems to be a real problem for a lot of your arguments - you fail to account for the fact that reality is far more messy than it seems to you, and that there may be circumstances, issues, or situations that cause your simple solutions to break down, and lead to (often wildly) suboptimal results.

It's a very messy world. Ignoring the mess is not a good option.
 
Human systems have gone through hundreds of thousands of years of constant testing with poor "designs" being "scrapped". Nothing in AI is within lightyears of coming close to having the perceptual, interpretive, and inferential power of humans.
This.
And, not only do humans have generation-tested reflexes for avoiding impacts, sometimes you have to say fuck it. Because sometimes stopping or swerving to avoid the bag of trash, the mattress, or the squirrel, in traffic, would be more dangerous than hitting it.
But who is going to look forward to stating, in court, that the car was programmed to choose to hit the obstacle?

Humans are demonstrably very bad at driving. We don't need AI to be perfect; we just need it to be better than humans (a very low bar) for its adoption to save lives.

Unfortunately, many people have the bizarre opinion that new systems need to be perfect before it's acceptable to replace the old system. So we have arguments that AI driven cars cannot be allowed if they might kill pedestrians - despite the massive pedestrian death toll from NOT adopting AI driven cars.

The same argument is used against other technologies that demonstrably save lives - it's the 'but Chernobyl' argument. People died at Chernobyl, therefore we must not adopt the safest power generation option in history. Because it's not perfect. Never mind that all the other options (including not generating power at all) kill at least an order of magnitude (and in the case of coal power, five or six orders of magnitude) more people.

AI driven cars are not perfect; But they are already FAR superior to cars driven by humans, from a safety perspective. Picking over and fixing their remaining flaws should be an important focus for the people who design them; But it should not be allowed as an excuse for their prohibition, or for delaying their adoption.
 
Exactly right. Detecting pedestrians is of no use to an automated vehicle, because they need to avoid hitting any obstacle, human, animal, or just a trash can sitting in the path of the vehicle. If a dog is in a crosswalk, you still don't want your car to run over it. This idea that an automated vehicle has to distinguish humans from other obstacles is just plain nutty.

Completely wrong.
It needs to detect that there is an obstacle at all, and it cannot detect that something is an obstacle unless it has been trained to do so.
AI systems have no capacity to distinguish a person from a shadow, light reflecting off a window, or a plastic bag blowing across the street unless it is trained to make those distinctions based on fine grained perceptual information.
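
To make that concrete, here is a minimal sketch of the kind of supervised training being described (PyTorch; the class list, the data, and the whole setup are illustrative only - a real perception stack fuses camera, lidar, and radar and trains on millions of labeled frames). The point stands: the model can only separate a person from a shadow if it has been shown labeled examples of both.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative label set - these distinctions only exist for the model
# because a human put them in the training data.
CLASSES = ["pedestrian", "shadow", "window_reflection", "plastic_bag"]

# Generic pretrained backbone with a new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step: images (N, 3, 224, 224), labels (N,) class indices."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```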

Er, yeah. That is exactly my point. You don't want the vehicle to run over dogs and trash cans in the road. It is worse if it runs over a person, but there is no particular reason for an autonomous vehicle to distinguish between a pedestrian and a trash can. It has to stop for both. As for your concept of what AI systems are capable of detecting, you are kind of half right, although they can certainly be programmed to make some of those distinctions. My point was that modern AI techniques for object recognition are not good enough, and I take it that you agree. Further, autonomous vehicles don't have the capability of making sophisticated judgments about the intentions of other drivers.

Have you ever worked with robots? I have. In fact, I once had an opportunity to ride in one of Stanford's prototypes back during the DARPA challenges. Lots of interesting discussions and debates over the feasibility of autonomous cars. That's why I am so skeptical of all the rush to deploy them. The technology is not good enough to make them safe. Besides that, who gets sued when one gets in an accident and harms someone or destroys property? The owner? The manufacturer? The software developer?

Any AI car that you designed would slam on the brakes every two seconds, which would be only slightly less dangerous than one that failed to stop for real objects in the road.

That's a hoot. Any autonomous vehicle that a single individual designed would be unlikely to run. They are very complex systems that require large teams of engineers to design. You need expertise from several different fields to manufacture one. What is really good about all the research going into these vehicles is that a lot of it is very useful in augmenting the safety of human-driven vehicles. Full autonomy is too risky, as we've already discovered from news stories about accidents they have been involved in.
 
In many places seat belts are required by law, and helmets for bikers, so should there be a similar requirement for some kind of highly reflective belt, stripe, design, whatever in clothing as a pedestrian? If not as a legal requirement, then at least as a factor in liability in lawsuits - driver responsibility 0% or very low if no reflectors (regardless of race). Seems logical, but I've not heard of such - is there anywhere such an approach is used?

Comparative liability states at least partially do this--if you're found more than 50% at fault for your injuries you can't collect.
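
As a rough sketch of how that bar works (a toy calculation; the threshold and details vary by state):

```python
def recoverable_damages(total_damages: float, plaintiff_fault: float) -> float:
    """Modified comparative negligence with a 50% bar.

    plaintiff_fault is the injured pedestrian's share of fault (0.0-1.0).
    More than 50% at fault means no recovery; otherwise the award is
    reduced in proportion to their own fault.
    """
    if plaintiff_fault > 0.5:
        return 0.0
    return total_damages * (1.0 - plaintiff_fault)

# A pedestrian found 40% at fault (dark clothing at night, no reflectors):
# recoverable_damages(100_000, 0.40) -> 60000.0
```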

However, this doesn't prevent collecting via innocent victims. (Dressed in black and carrying a baby: comparative liability doesn't stop a claim made in the name of the baby.) This is one of the things that needs to be fixed in our current liability system.

There's also the problem that juries are all too prone to seeing a badly hurt victim and a company with deep pockets and awarding large sums without good reason.

Also, these morons are normally jaywalking which makes them at fault anyway.
 
Exactly right. Detecting pedestrians is of no use to an automated vehicle, because they need to avoid hitting any obstacle, human, animal, or just a trash can sitting in the path of the vehicle. If a dog is in a crosswalk, you still don't want your car to run over it. This idea that an automated vehicle has to distinguish humans from other obstacles is just plain nutty.

Disagree. If you have plenty of distance you stop no matter what it is. However, if you don't (normally because the object just moved into your path) you have to make a decision as to what to do--and in some cases the correct answer is to hit the object. To make that decision you need to know whether it's a person or not.

They even teach this in driver's ed--don't evade non-human popups in residential areas. Better to hit the object than evade and sometimes hit the kid chasing the object that was obscured until too late.
 
AI systems have no capacity to distinguish a person from a shadow, light reflecting off a window, or a plastic bag blowing across the street unless it is trained to make those distinctions based on fine grained perceptual information.

Any AI car that you designed would slam on the brakes every two seconds, which would be only slightly less dangerous than one that failed to stop for real objects in the road.

Yup. My car has a lane-alert system that beeps when it thinks I'm crossing a road line. So long as I'm going fast enough it's quite reliable at detecting when I actually cross a line (it's prone to beeping at me where a turn lane starts, because I clip the spot where the asphalt widens but the edge stripe doesn't follow it for a bit). But about once per 1,000 miles it has beeped at me despite my being solidly in my lane--I've never been completely sure why, but the primary culprit seems to be shadows from nearby power wires.

I also was talking to a woman whose car has a system that doesn't just beep, but puts you back in your lane. It's not easily confused, but she quickly set it to beep-only because it didn't like her crossing the midline to give a bicyclist extra space. We both hike; at this time of year one of the most common areas is served by a 2-lane, 50 mph road with pretty wide bike areas to the side that see pretty heavy use on the weekends. It's just common courtesy to move over as much as you feel safe doing when passing them, because the wind blast is going to be pretty strong. Oops, that often means crossing the yellow.
 
Exactly right. Detecting pedestrians is of no use to an automated vehicle, because they need to avoid hitting any obstacle, human, animal, or just a trash can sitting in the path of the vehicle. If a dog is in a crosswalk, you still don't want your car to run over it. This idea that an automated vehicle has to distinguish humans from other obstacles is just plain nutty.

Disagree. If you have plenty of distance you stop no matter what it is. However, if you don't (normally because the object just moved into your path) you have to make a decision as to what to do--and in some cases the correct answer is to hit the object. To make that decision you need to know whether it's a person or not.

They even teach this in driver's ed--don't evade non-human popups in residential areas. Better to hit the object than evade and sometimes hit the kid chasing the object that was obscured until too late.

You are right, but think about what you are saying here. What they teach in driver's ed courses, besides basic rules and operations, is strategy. You need to know when to hit the brakes, when to swerve, and when to drive over an object. If you knew anything about AI, you would realize that that kind of knowledge is not something that you could program into a machine. AI does not create sentient or intelligent programs. It simulates intelligent behavior, but we can't scale up modern programming techniques to that level of operation. I'm not trying to make a point here about whether it is desirable for drivers to be able to distinguish objects. Object recognition itself is something that intelligent animals, especially humans, know how to do. But WE don't know how brains work well enough to program machines to do the same thing. Machines' situational awareness is too impoverished and too limited. People tend to overestimate the capabilities of modern AI.
 
Humans are demonstrably very bad at driving. We don't need AI to be perfect; we just need it to be better than humans (a very low bar) for its adoption to save lives.
Better than the average, the worst, or the best human? There is a huge difference between "worst" and "best" here.
Unfortunately, many people have the bizarre opinion that new systems need to be perfect before it's acceptable to replace the old system. So we have arguments that AI driven cars cannot be allowed if they might kill pedestrians - despite the massive pedestrian death toll from NOT adopting AI driven cars.
What if we got an AI that on average kills fewer humans, but the ones it kills are all children crossing the road in full accordance with the rules?
 
Humans are demonstrably very bad at driving. We don't need AI to be perfect; we just need it to be better than humans (a very low bar) for its adoption to save lives.
Better than the average, the worst, or the best human? There is a huge difference between "worst" and "best" here.
Better than average.
Unfortunately, many people have the bizarre opinion that new systems need to be perfect before it's acceptable to replace the old system. So we have arguments that AI driven cars cannot be allowed if they might kill pedestrians - despite the massive pedestrian death toll from NOT adopting AI driven cars.
What if we got an AI that on average kills fewer humans, but the ones it kills are all children crossing the road in full accordance with the rules?
Do you have such an AI?

Do you think it's plausible that we might have one?

You can play the silly 'what if' game forever, and in any situation.

Systems need to cope with any plausible scenario; Not any imaginable scenario. What if an AI driven car invented a perpetual motion machine and used it to blow up the world????
 
I do think that both bilby and barbos make good points. We do not need our robots to be perfect, just reasonably good at safe operation. However, that still leaves open the question of what criteria one uses to judge "reasonably good". My problem with bilby's general point is that robots work best in relatively predictable environments. When you mix them in with the general public, the conditions are much more chaotic, so safety criteria become much more difficult to figure out. We are currently struggling to deal with unanticipated dangers posed by human-operated drones. Imagine the magnitude of problems that can arise if large numbers of fully autonomous cars, buses, and trucks begin to take to the roads.
 
I do think that both bilby and barbos make good points. We do not need our robots to be perfect, just reasonably good at safe operation. However, that still leaves open the question of what criteria one uses to judge "reasonably good". My problem with bilby's general point is that robots work best in relatively predictable environments. When you mix them in with the general public, the conditions are much more chaotic, so safety criteria become much more difficult to figure out. We are currently struggling to deal with unanticipated dangers posed by human-operated drones. Imagine the magnitude of problems that can arise if large numbers of fully autonomous cars, buses, and trucks begin to take to the roads.

Humans also work best in relatively predictable environments; That's why freeways see fewer crashes than suburban streets. It's also why we have lane markings, road signs, and rules about what is and is not acceptable - even on an empty road, drivers are expected to stay on the prescribed side, for example.

I think your understanding of AI is a few decades out of date. It's a very rapidly evolving field, and things that were science fiction twenty years ago are commonplace today.

When it comes to a limited and rule-bound activity like driving a car, the human ability to be distracted, and to fool itself into imagining that it is aware of far more than it actually is, is a liability. A robotic vehicle has significant advantages over a human; It can take in more information from more sources, it never gets tired or emotional, it doesn't fill in gaps in its awareness with guesses about what ought to be there, it doesn't suffer attentional blindness, and it can communicate its intentions to other road users and negotiate a mutually beneficial set of actions in a way that humans simply cannot.
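
To illustrate the "communicate and negotiate" point with a toy sketch (no real vehicle-to-vehicle message standard is implied; every name here is invented for illustration):

```python
from dataclasses import dataclass, replace

@dataclass
class Intent:
    vehicle_id: str
    manoeuvre: str     # e.g. "merge_left", "hard_brake"
    start_ms: int      # planned start time, in ms from now
    speed_mps: float

def yield_if_conflicting(ours: Intent, theirs: Intent) -> Intent:
    """Toy negotiation rule: if two vehicles broadcast the same manoeuvre
    at nearly the same moment, the slower one delays by two seconds."""
    same_plan = ours.manoeuvre == theirs.manoeuvre
    near_same_time = abs(ours.start_ms - theirs.start_ms) < 500
    if same_plan and near_same_time and ours.speed_mps <= theirs.speed_mps:
        return replace(ours, start_ms=theirs.start_ms + 2000)
    return ours
```

Humans can only approximate this kind of coordination with turn signals and eye contact.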

In surveys, over 90% of human drivers rated their own ability as 'above average'. AIs don't kid themselves that they are good enough to break the rules and get away with it. AIs don't try to impress their friends, or the girls, by pulling stupid stunts. AIs don't think that they will probably be fine, because they only had a few beers.
 
Exactly right. Detecting pedestrians is of no use to an automated vehicle, because they need to avoid hitting any obstacle, human, animal, or just a trash can sitting in the path of the vehicle. If a dog is in a crosswalk, you still don't want your car to run over it. This idea that an automated vehicle has to distinguish humans from other obstacles is just plain nutty.

Disagree. If you have plenty of distance you stop no matter what it is. However, if you don't (normally because the object just moved into your path) you have to make a decision as to what to do--and in some cases the correct answer is to hit the object. To make that decision you need to know whether it's a person or not.

They even teach this in driver's ed--don't evade non-human popups in residential areas. Better to hit the object than evade and sometimes hit the kid chasing the object that was obscured until too late.

You are right, but think about what you are saying here. What they teach in driver's ed courses, besides basic rules and operations, is strategy. You need to know when to hit the brakes, when to swerve, and when to drive over an object. If you knew anything about AI, you would realize that that kind of knowledge is not something that you could program into a machine. AI does not create sentient or intelligent programs. It simulates intelligent behavior, but we can't scale up modern programming techniques to that level of operation. I'm not trying to make a point here about whether it is desirable for drivers to be able to distinguish objects. Object recognition itself is something that intelligent animals, especially humans, know how to do. But WE don't know how brains work well enough to program machines to do the same thing. Machines' situational awareness is too impoverished and too limited. People tend to overestimate the capabilities of modern AI.

Disagree. While we can't truly teach an AI to think we can give it a ranked list of the value of things so it can pick the least harmful collision.
 
You are right, but think about what you are saying here. What they teach in drivers ed courses, besides basic rules and operations, is strategy. You need to know when to hit the brakes, when to swerve, and when to drive over an object. If you knew anything about AI, you would realize that that kind of knowledge is not something that you could program into a machine. AI does not create sentient or intelligent programs. It simulates intelligent behavior, but we can't scale up modern programming techniques to that level of operation. I'm not trying to make a point here about whether it is desirable for drivers to be able to distinguish objects. Object recognition itself is something that intelligent animals, especially humans know how to do. But WE don't know how brains work well enough to program machines to do the same thing. Their situational awareness is too impoverished and too limited. People tend to overestimate the capabilities of modern AI.

Disagree. While we can't truly teach an AI to think we can give it a ranked list of the value of things so it can pick the least harmful collision.

And crucially, unlike a human driver, it doesn't have to come up with this list 'on the fly', nor does it get paralysed by indecision. An AI choosing whether to run down a human or a dog won't stop to think about how cute the dog is, or how much it looks like its own pet, or that the human is of the wrong race, or wearing a t-shirt supporting a controversial political view...

The responsibility for the decision remains in human hands - but not in the hands of one stressed-out human in an unexpected and time-constrained situation. And if the committee that sets the rules and priorities finds that it leads to unexpected bad outcomes, then that can be fixed by a simple software update. Human drivers are much harder to influence to change their bad behaviours.
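
A minimal sketch of what such committee-set priorities could look like in code (the labels and costs are invented purely for illustration):

```python
# Hypothetical harm ranking - in the scheme described above it is set by
# a human committee and revised via software update, not learned on the fly.
HARM_COST = {
    "human": 1_000_000,
    "dog": 5_000,
    "trash_can": 50,
    "plastic_bag": 1,
}

def least_harmful(options):
    """options: (object_label, probability_of_impact) for each manoeuvre
    still available; pick the one with the lowest expected harm."""
    return min(options, key=lambda opt: HARM_COST[opt[0]] * opt[1])

# Swerving risks the dog at 90%; braking certainly clips the trash can.
# least_harmful([("dog", 0.9), ("trash_can", 1.0)]) -> ("trash_can", 1.0)
```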
 
Better than average.
That may not be good enough for children crossing a street.
Unfortunately, many people have the bizarre opinion that new systems need to be perfect before it's acceptable to replace the old system. So we have arguments that AI driven cars cannot be allowed if they might kill pedestrians - despite the massive pedestrian death toll from NOT adopting AI driven cars.
What if we got an AI that on average kills fewer humans, but the ones it kills are all children crossing the road in full accordance with the rules?
Do you have such an AI?

Do you think it's plausible that we might have one?

You can play the silly 'what if' game forever, and in any situation.


Systems need to cope with any plausible scenario; Not any imaginable scenario. What if an AI driven car invented a perpetual motion machine and used it to blow up the world????
Current systems are not close to dealing with the plausible. That poor woman killed by the Uber AI had no chance.
 
That may not be good enough for children crossing a street.
So what? You would rather that even more kids got run down - but that you can console the parents with the fact that a human was driving?
Unfortunately, many people have the bizarre opinion that new systems need to be perfect before it's acceptable to replace the old system. So we have arguments that AI driven cars cannot be allowed if they might kill pedestrians - despite the massive pedestrian death toll from NOT adopting AI driven cars.
What if we got an AI that on average kills fewer humans, but the ones it kills are all children crossing the road in full accordance with the rules?
Do you have such an AI?

Do you think it's plausible that we might have one?

You can play the silly 'what if' game forever, and in any situation.


Systems need to cope with any plausible scenario; Not any imaginable scenario. What if an AI driven car invented a perpetual motion machine and used it to blow up the world????
Current systems are not close to dealing with the plausible. That poor woman killed by the Uber AI had no chance.

Nor do the thousands killed annually by human drivers.

No system is perfectly safe. Falling short of perfection is not a valid reason to reject a new system that is safer than the status quo.

Having a dozen people killed by human drivers is not preferable to allowing AI drivers that kill a twelfth as many people. (And realistically it's going to be more like a hundredth or a thousandth.)
 
Nobody demands perfection here. It's just that current systems are obviously worse than even the worst drivers. Companies are trying to get ahead in the marketplace by endangering the public.
 

Humans also work best in relatively predictable environments; That's why freeways see fewer crashes than suburban streets. It's also why we have lane markings, road signs, and rules about what is and is not acceptable - even on an empty road, drivers are expected to stay on the prescribed side, for example.

Well, the key point here is that humans work best in environments that are predictable relative to human interactions. Robots do not have anything approaching human understanding of how to interact with other vehicles. They are not as good as people at recognizing unsafe behavior in drivers and pedestrians.

I think your understanding of AI is a few decades out of date. It's a very rapidly evolving field, and things that were science fiction twenty years ago are commonplace today.

I wasn't aware that you had expertise in robotics and were able to assess my competence in that subject. What is your experience with AI? Until 2012, the last year of DARPA's Grand Challenge, I was involved in a number of robotics projects with several universities and companies. I'm not aware of any significant advances recently that would invalidate anything I've said here, but I'd be happy to hear your thoughts on why you think I'm decades out of date. I can tell you that command and control voice interfaces have not advanced much beyond directed dialogues, assuming that you know what those are. Absolutely critical to significant progress is the need to improve machine learning strategies, but none of the current approaches scale up to the kind of experience-based reasoning that is needed for human-like reasoning. Real world traffic conditions are too chaotic for current technology, but there are many situations (e.g. in manufacturing) where they lend themselves well to less chaotic environments.

When it comes to a limited and rule-bound activity like driving a car, the human ability to be distracted, and to fool itself into imagining that it is aware of far more than it actually is, is a liability. A robotic vehicle has significant advantages over a human; It can take in more information from more sources, it never gets tired or emotional, it doesn't fill in gaps in its awareness with guesses about what ought to be there, it doesn't suffer attentional blindness, and it can communicate its intentions to other road users and negotiate a mutually beneficial set of actions in a way that humans simply cannot.

Have you ever actually interacted with autonomous vehicles? Just for starters, you should know that machines break down over time. So, even if you have a very good system, there are lots of things that can go wrong with it. In working with robots, I've experienced several situations where nobody, even the programmers, understood why the robots were not behaving as expected. Unfortunately, robots lack self-awareness, so it is really difficult to get them to explain why they are doing what they are doing. Programming self-awareness into machines is a very hot topic in the field these days.

One NASA project involving a Mars rover that I once witnessed had a situation where the robot simply froze. It took an hour to figure out that one of the optical sensors had failed, which made it impossible for the robot to carry out a command. It had no way to inform its human controllers that that was what the problem was. In aircraft incidents, many, if not the majority, of problems are caused by the pilot not understanding what the automated systems are doing.
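
The generic fix for that particular failure - some form of sensor health monitoring - is conceptually simple but easy to omit. A minimal sketch of the watchdog pattern (not how that rover actually worked; halt_safely and report are hypothetical placeholders):

```python
import time

class SensorWatchdog:
    """Track when each sensor last produced data, so a stalled robot can
    report which input failed instead of silently freezing."""

    def __init__(self, timeout_s: float = 1.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # sensor name -> timestamp of last reading

    def heartbeat(self, sensor: str) -> None:
        self.last_seen[sensor] = time.monotonic()

    def stale(self) -> list:
        now = time.monotonic()
        return [s for s, t in self.last_seen.items() if now - t > self.timeout_s]

# In the control loop:
#   if (dead := watchdog.stale()):
#       halt_safely()
#       report(f"halted: no data from {dead}")
```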

In surveys, over 90% of human drivers rated their own ability as 'above average'. AIs don't kid themselves that they are good enough to break the rules and get away with it. AIs don't try to impress their friends, or the girls, by pulling stupid stunts. AIs don't think that they will probably be fine, because they only had a few beers.

Yup. They aren't human. That is also where their weaknesses lie. They don't reason like human beings and do not handle operation under uncertain conditions as well. Hence, mixing them in with people, who tend to anthropomorphize them, can lead to some serious consequences. Where autonomous vehicles operate around humans, the consequences can be quite disastrous.


Disagree. While we can't truly teach an AI to think we can give it a ranked list of the value of things so it can pick the least harmful collision.

Of course. And those ranked lists depend on the ability of the programmer to imagine what could possibly go wrong. All they have to do is be more knowledgeable about possible collision scenarios than they are about writing sophisticated AI programs. What could possibly go wrong with that? A truly intelligent machine ought to be able to reassess and change its priorities over time. That is, the "ranked list" cannot remain fixed and rigid. We can design programs that do that to a limited extent, but they are really very tricky things to work with.

...While we can't truly teach an AI to think we can give it a ranked list of the value of things so it can pick the least harmful collision.

And crucially, unlike a human driver, it doesn't have to come up with this list 'on the fly', nor does it get paralysed by indecision. An AI choosing whether to run down a human or a dog won't stop to think about how cute the dog is, or how much it looks like its own pet, or that the human is of the wrong race, or wearing a t-shirt supporting a controversial political view...

You see this as an advantage that machines have over humans, but it really isn't. Part of what it means to be intelligent is to have the flexibility to readjust priorities over time--to become better at anticipating and solving problems. Uncertainty is not a bug in human intelligence. It is a feature.

The responsibility for the decision remains in human hands - but not in the hands of one stressed-out human in an unexpected and time-constrained situation. And if the committee that sets the rules and priorities finds that it leads to unexpected bad outcomes, then that can be fixed by a simple software update. Human drivers are much harder to influence to change their bad behaviours.

Have you ever experienced a situation where your computer screen froze or the computer crashed? You've seen a blue screen crash, haven't you? If you have any experience in programming, you know that very complex programs can get "confused", albeit not emotionally upset in the human sense. That reminds me. My Toyota dealer issued a recall for my Prius not long ago. I have to take it in to have the software upgraded. Apparently, my car can suddenly lose power at any time, although such failures are very rare.
 