
Trouble for Self-Driving Cars?

lpetrich

Contributor
Joined
Jul 27, 2000
Messages
25,311
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
7 Arguments Against the Autonomous-Vehicle Utopia - The Atlantic
Bear Case 1: They Won’t Work Until Cars Are as Smart as Humans
Bear Case 2: They Won’t Work, Because They’ll Get Hacked
Bear Case 3: They Won’t Work as a Transportation Service
Bear Case 4: They Won’t Work, Because You Can’t Prove They’re Safe
Bear Case 5: They’ll Work, But Not Anytime Soon
Bear Case 6: Self-Driving Cars Will Mostly Mean Computer-Assisted Drivers
Bear Case 7: Self-Driving Cars Will Work, But Make Traffic and Emissions Worse
The first one is a case of Moravec's paradox -- it's much easier to get computers to do high-level reasoning than it is to get them to do real-world sensorimotor tasks, even low-level ones like recognizing objects, manipulating them, or walking. Driving clearly falls into the latter category, with a very complicated environment that one must navigate. One has to know what's bad to collide with and what's not, so one doesn't get stymied by snow, for instance. One also has to recognize traffic lights and traffic signs and construction workers waving signal flags.
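
Here is a minimal sketch of the decision half of that problem, assuming a toy classifier output -- the labels, threshold, and function name are all invented for illustration, not any real vendor's stack:

Code:
# Toy planner policy: is a detected object safe to drive over?
# The labels and the 0.9 confidence threshold are made-up illustrations.
HARMLESS = {"plastic_bag", "leaf", "snow_patch"}   # safe to drive over
HAZARD = {"pedestrian", "cyclist", "debris"}       # must be avoided

def plan_for_detection(label: str, confidence: float) -> str:
    if confidence < 0.9:
        # Moravec's paradox in practice: when recognition is unreliable,
        # the only safe default is the conservative action.
        return "brake"
    return "continue" if label in HARMLESS else "brake"

print(plan_for_detection("plastic_bag", 0.95))  # continue
print(plan_for_detection("plastic_bag", 0.55))  # brake: too uncertain

The policy itself is trivial; everything hard lives in producing reliable labels and confidences, which is exactly the sensorimotor part Moravec's paradox is about.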

The most successful self-driving vehicles have run in much simpler environments. Ships and airplanes have had autopilot systems for decades, and spacecraft are almost universally controlled by their onboard computers -- computers which often relay instructions from the spacecraft operators' computers. The most successful automated land vehicles to date are subway and rapid-transit trains, because they have completely isolated rights-of-way.

The second one is underappreciated, I think. It also causes problems for a proposed application: platooning, essentially computerized mass tailgating.
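
Platooning depends on trusting vehicle-to-vehicle control messages, so the obvious first mitigation is to authenticate every message. A minimal sketch follows, using a shared-key HMAC for simplicity -- real V2V security designs such as IEEE 1609.2 use certificates rather than a shared secret:

Code:
import hmac, hashlib

KEY = b"platoon-shared-secret"   # illustrative only; real systems use PKI

def sign(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def accept(msg: bytes, tag: bytes) -> bool:
    # Reject any command whose authentication tag does not verify.
    return hmac.compare_digest(sign(msg), tag)

cmd = b"lead:decelerate:3.0"
tag = sign(cmd)
print(accept(cmd, tag))                      # True: genuine command
print(accept(b"lead:brake_hard:9.9", tag))   # False: spoofed command rejected

Authentication stops an outsider from spoofing commands, but not a compromised lead vehicle -- which is why computerized mass tailgating concentrates the risk.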

The third one I don't consider very strong.

The fourth one I think is very serious, even though one ought to ask what level of safety one should reasonably expect. A self-driving car will have to drive successfully through conditions that human drivers handle all the time, so the first problem implies a big problem here also.

The fifth one I consider a likely possibility, if self-driving cars are ever to drive with the proficiency of good human drivers. A LOT of testing and tweaking will be necessary.

The sixth one I consider the most likely near-future possibility. It may even turn into a halfway sort of self-driving where the car requests assistance in difficult situations. So car drivers will become backseat drivers for their cars.
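
That halfway mode can be pictured as a tiny supervisory state machine; the states and confidence thresholds below are assumptions for illustration, not any shipping system's design:

Code:
def next_mode(mode: str, planner_confidence: float) -> str:
    # Escalate to the human when the planner loses confidence; resume
    # autonomy only once the situation is clearly resolved (hysteresis).
    if mode == "autonomous" and planner_confidence < 0.5:
        return "request_assistance"
    if mode == "request_assistance" and planner_confidence >= 0.8:
        return "autonomous"
    return mode

mode = "autonomous"
for conf in (0.9, 0.4, 0.6, 0.85):
    mode = next_mode(mode, conf)
    print(conf, "->", mode)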

The seventh one does seem at least a little possible. But self-driving cars also mean self-driving buses and trains, and self-driving buses could operate much like taxis. Those who dislike the prospect of sharing a vehicle with drunkards and bums may like a subscription service that matches them with similar fellow passengers.


There is a further problem, one that the article did not mention. What happens to manual driving? In Isaac Asimov's short story "Sally", it was outlawed as needlessly dangerous, an action that was denounced as everything from fascism to communism.
 
With enough testing and analysis, self-driving cars can be made. But the companies doing the testing need the financial payback to have the incentive to continue the testing. Thus the short-term profit model tugs against the long-term safety model.

Of course, the original cars weren't designed to be perfectly safe before coming to market. But the original cars were different enough from other modes of transport that consumers wanted them despite their poor safety record. The rewards outweighed the risks.

But a self-driving car isn't much different from a human-driven car, so the reward/risk ratio is lower.
 
My experience says the potential combinations of threats and vehicle actions cannot be predetermined. There will always be a risk. Commercial aviation continues to demonstrate the idea that risk can never be eliminated entirely.

It is not about the degree of testing; it is about the complexity of determining the combinations of conditions and the required autonomous actions that will avoid crashes and injury to pedestrians.
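
Back-of-the-envelope arithmetic makes the point. The dimension counts below are invented, but the multiplication is the argument:

Code:
from math import prod

# Invented counts for a few independent condition dimensions.
dimensions = {
    "weather": 6,              # clear, rain, snow, fog, glare, night
    "road_type": 5,
    "traffic_density": 4,
    "pedestrian_behavior": 8,
    "vehicle_fault": 5,
    "other_driver_action": 10,
}
print(prod(dimensions.values()))   # 48000 combinations from six dimensions

Each added dimension multiplies the total, so exhaustive pre-determination is hopeless; testing can only sample the space, never cover it.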
 
I too think #2 is underappreciated. It can be solved, of course, but it is currently underappreciated.
I wrote about #1 before and will repeat it here. The computer has to be as good as or better than a human in every situation for it to be considered good enough to take full control. In my view this requirement includes human-level ability at things like recognizing all objects, not just "something on the road": styrofoam cups, dead cats, a man in a mask, literally everything. I remember a report of an AI driver being confused by fog/vapor, and that poor woman, as I understand it, was run over because the system misrecognized her as a plastic bag floating in the air; she was carrying plastic bags.
 
The skill of driving begins when we are kids: sensing distance, hand-eye coordination. Catching a baseball is not rule-based; it is our neural net learning the skill.

Rule-based computers cannot mimic the human capacity to instantaneously adapt to a new situation. You are driving and a car steers into your lane, coming at you. How do you decide whether to brake, turn right, or turn left? Maybe a kid is on the sidewalk if you turn right. Do you accelerate and try to steer around the oncoming car?
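
A rule-based planner would reduce that split-second judgment to something like the cost table below. This is a toy sketch, and the fixed costs are arbitrary guesses -- which is precisely the weakness: a human re-weighs them instantly as the scene changes.

Code:
def choose_maneuver(kid_on_right_sidewalk: bool, gap_on_left: bool) -> str:
    # Hand-tuned worst-case-harm costs; a fixed table cannot adapt
    # to circumstances its author never anticipated.
    costs = {
        "brake": 5.0,
        "swerve_right": 100.0 if kid_on_right_sidewalk else 2.0,
        "swerve_left": 3.0 if gap_on_left else 50.0,
    }
    return min(costs, key=costs.get)

print(choose_maneuver(kid_on_right_sidewalk=True, gap_on_left=True))   # swerve_left
print(choose_maneuver(kid_on_right_sidewalk=True, gap_on_left=False))  # brake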
 
I feel like this is an "all or nothing" proposal.
Either all cars are computer-controlled, or none are. Mingling computer brains with human brains is a recipe for disaster... and exploits.
 
The second one was always problematic to me. You could technically have self-driving cars outfitted with weapons, programmed to go around kidnapping people (no one ever thinks about that, either).
 
And then there is hacking. Cars have been hacked through wireless links.

There is the possibility of a bad actor crashing the electronics through intentional EM interference, the same way it is done in the military.

GPS has no backup. LORAN was shut down in the US in 2010.
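
When GPS drops out, the onboard fallback is dead reckoning from odometry and heading; the trouble is that the error grows without bound until an absolute fix returns, which is the force of the no-backup complaint. A minimal sketch of the idea:

Code:
from math import cos, sin, radians

# Dead reckoning from the last known fix, integrating
# (speed m/s, heading deg, dt s) samples.
x, y = 0.0, 0.0
for speed, heading, dt in [(15.0, 90.0, 1.0), (15.0, 85.0, 1.0)]:
    x += speed * cos(radians(heading)) * dt
    y += speed * sin(radians(heading)) * dt
print(round(x, 2), round(y, 2))   # position estimate, drifting every step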
 