Saturday, March 24, 2018

Some brief thoughts on the recent Uber collision death: I can't shake a misgiving that self-driving cars are basically just fire-and-forget weapons in an era where too many humans already kill humans with cars as it is.

Living in Seattle, where tech is one of the gods of the region, I was once asked if I would ever consider using a self-driving car.

Nope.

Why not?

There are two parts to this.  The first is that to get from point A to point B you have to trust someone to get you there, either because you drive yourself and trust the craft of the vehicle you use, or because you trust whoever is driving you and the vehicle they use.

The second part ... and this is the part I'm going to focus on a bit more ... is that I am generally of the impression that the kind of technology developed over the last fifty years to create self-directing vehicles that, once set in motion toward their destinations, go the distance has had one reliable application.

Fire-and-forget weapons.

I have no problem with an AIM-120 being a fire-and-forget air-to-air missile.  I expect a self-directing vehicle to literally hit its target destination, because that's what it's built to do.

The idea of adapting such technology to civilian transit scares me, but not because I think it's horrifying that military technology should somehow stain the civilian sphere.  We have all sorts of glorious advances in medical practice, surgery, and so on thanks to military field innovations.

No, the problem is that I already fear the human drivers, who are no less likely to kill someone by accident than a driverless car is.  But you can sue the human driver of a car for something.

So when I read about the recent death caused by a self-driving car, my initial feeling was a depressed sense of the inevitability of it.  Of course someone was going to get killed; it was only a matter of how and under exactly what circumstances.

 
On Sunday night, one of Uber’s self-driving cars struck and killed a woman in Tempe, Arizona.
 
Elaine Herzberg, a 49-year-old woman, was walking her bicycle across a road when a Volvo SUV, outfitted with Uber’s radar technology and in fully autonomous mode, collided with her. The car was traveling at 38 miles per hour in a 35-mile-per-hour zone, and it did not attempt to brake before striking her, according to Tempe police.

It is the first time that a self-driving car, operating in fully autonomous mode, has killed a pedestrian. Sylvia Moir, the police chief of Tempe, announced on Tuesday that Uber was likely not at fault for the collision. But after her department released footage of the collision on Wednesday, transportation experts said it showed a “catastrophic failure” of Uber’s technology.

The two stories did not perform equally in the press. By the middle of the week, the Uber news had drifted off the front pages of The New York Times, The Washington Post, and CNN. It often sat near the middle or bottom of the page on Techmeme, a website that aggregates technology news from dozens of outlets. The Cambridge Analytica story, meanwhile, consistently clanged around above the fold of every outlet. I found myself asking: Why?

Perhaps it’s because people still mostly believe the hype around self-driving cars. This isn’t surprising: I still mostly believe the hype. Statistically speaking, cars of all types are super-ubiquitous, high-speed murder machines. Automobiles kill about 102 Americans every day, according to government data. “Accidents,” a category which includes car crashes, are the fourth leading cause of death in the United States, according to the CDC.

Nor are nondrivers exempt from the carnage. Nearly 6,000 pedestrians were killed by a car in the United States in 2016. Hundreds of cyclists die every year as well.

So maybe the relative lack of coverage of the Uber crash represents a healthy perspective. It suggests, perhaps, that journalists and the public understand the difference between anecdote and data. Sixteen Americans die every day while walking near a street. Most of us never learn their names. What makes Elaine Herzberg different?
 

Almost any day I commute to and from work in the Puget Sound area, I think about how I could very possibly die that day because of how people are.  You or I are far, far more likely to die at the hands of an inattentive driver than at the hands of someone wielding a gun.  Among those wielding guns, I'm more worried that cops will be trigger-happy than that some random stranger might show up.  It's not that I can't imagine someone being murdered by a person with a gun.  The year iMonk died was also the year someone I knew from my college years was murdered by a stalker who had been stalking her for years.

All the same, I am always more afraid that I'm going to get maimed or killed by some inattentive driver than that I might be killed by someone with a gun.

A self-driving vehicle for civilian use doesn't appeal to me.  At least with the fire-and-forget weapon, the probable death of whatever is on the receiving end is intended.  Maybe I'm being old-fashioned and closed-minded about the limited applicability of self-driving vehicles, but I think civilian society would be better off leaving this set of technical advances made within weapons tech firmly on the side of weapons tech.  If there comes a day when I should, God forbid, get killed by a vehicle, I at least want a second's moment of seeing whoever it is that hits me, and not some harbinger of a Stunticon attack.

Or maybe just put the Decepticon logo on every self-driving car and people will know to look for it.

5 comments:

Cal of Chelcice said...

This event made me think of a couple of things. One was that Tempe's police chief initially, without any reason, exempted Uber's tech. It reminds me of all those sci-fi movies where criminal justice/law enforcement and tech corporations have become welded to one another (Robocop?).

Another thing was how the criminal justice system has this almost ritual purification about it. We all know those TV show clichés about how seeking revenge doesn't bring someone back, etc. Well, for many, there is relief in being able to point a finger at the law-breaker, know your attacker, and have a sense that you've hurled him into the abyss. But is this procedure about justice or about life, or is it about some need to focus the guilt and expunge your conscience about what happened? With automaton cars, there is no one guilty. It's not exactly the CEO's fault the tech failed. Maybe the worst that happens is a couple of fines, a couple of sackings, and a precipitous fall in the stock. There's no "sense" of justice, but that avoids answering what exactly we're looking for in the first place.

Wenatchee the Hatchet said...

To reference the Torah, you can't do "eye for an eye" restitution or justice when there's an automaton involved. Taking the principle as a constraint on retaliation rather than as enforced punitive revenge, we're still stuck with the question of what the nature of the loss is that can be addressed. Somebody died. If a judge were to somehow rule the woman was to blame, that would be infuriating for people who believe otherwise, but it's another possibility to consider if nobody on the Uber side is "at fault". Knowing how people drive, I'm not sure I feel any "more" safe knowing that thousands of people die each year because human-driven cars end up in similar situations.

But a self-driving car killed someone. Do we treat that car, so to speak, like an ox that gores someone per the Mosaic case law? If so then, well, if the ox was known to gore people then ... .

The conundrum doesn't seem to me to be that there's NO ONE guilty; it's that the technology and its application have so DIFFUSED responsibility across so many levels and stages that it's difficult to say anyone on the corporate side may be legally responsible for the death in a meaningful way. If Uber's tech is exempted, then crossing the road at night with your bike means "you" are taking your own life into your hands and no one else can be blamed for your death, least of all a robot. That might be what turns out to be weird about the situation. But this is proverbially hot off the presses as news as these things go. Maybe other stuff has been coming to light in the last five hours that I don't know about.

Eric Love said...

Uber's driving software already did not have the safest record compared to Waymo's. Not sure how it compared to human drivers. Also noteworthy is that in this case there was a human safety driver in the car as well.

I'm a software guy so my instinct is to defend the software...

In a case like this, unlike the typical factory accident, theme park ride accident or everyday vehicle accident, there should be a huge collection of data recorded by the car: all its inputs, outputs and many things in between, for the benefit of both the developers and any legal authorities.

The developers will presumably go over it thoroughly. If it was using regular code to determine what road users were around it, they could scrutinise exactly where it went wrong. If those determinations instead use new-fangled machine learning, well, that could be a bit of a black box.

If courts have access to video records of any accident, again that's an improvement over a normal car accident, where that isn't the case. (Consider police shootings where officers deny wrongdoing until video records emerge.)

I think the best legal arrangement is one whereby the manufacturer and/or software house agrees to pay penalties in cases where their error results in death/injury/damage. There won't be an individual doing jail time, but wouldn't you rather have a cash payout than see a bad driver imprisoned? Better yet, you want not to have your friend killed in a crash at all - and on current trends, more self-driving cars will mean fewer of your friends killed in crashes.

I have never worked on software that if it failed would result in anything worse than inconvenience. I am not the most careful programmer and I wouldn't want to work on anything so critical.

Eric Love said...

Interestingly, on submitting that last comment, in lieu of a captcha, I was presented with pictures of two driving scenes where I had to identify where the vehicles were. Surely they're not using this to train driving software to pick out cars on the road!

Wenatchee the Hatchet said...

There's a line that chief engineer Sakaki gets in the first Patlabor movie where he says machines aren't necessarily good or bad; they do what people build them to do, more or less well.

Payment of penalties, at the moment, seems like what would be most likely.

One of my younger acquaintances said the age of Uber is counter-intuitive in a way--there's a joke that when you're a kid grown-ups tell you never to get in a car with a stranger, and then with taxis or Uber that's exactly what grown-ups do these days.