I wonder if self-driving cars are going to have a "nuclear safety effect", where the early models are actually dangerous and then by the time safe ones are developed, it's too late -- popular perception is already established.


Specifically, I'm wondering if a few celebrities getting killed by Teslas might be enough to spur some legislation against self-driving cars and quash the whole thing.


@varx it’s software. has anyone ever succeeded in making software that’s safe, unhackable, bug free, and competently maintained over multiple decades and across many different engineering teams, management, and feature driven marketing pushes?

@varx meanwhile there’s a proven safe “self driving” car technology that’s been left rotting on the vine by capitalism for years: trains.

@zens Well, sure. :-) But here's a question: Do you suppose people might give it a free pass *because* it's software and they're used to it sucking?

@varx people seem to be willing to do that only as long as their lives don’t directly and obviously depend on the crappy software perfectly piloting multiple tons of steel.

Anton Yelchin and the 737 MAX seem to have been tolerated examples of this only because people didn’t know, or it wasn’t obvious

@varx i bet plenty of people get killed by terrible software in industrial settings all the time too, we just mostly don’t hear about it since they’re not celebrities. but it certainly doesn’t get tolerated, because corporations don’t like paying worker’s comp. if that weren’t a thing, i am willing to bet software quality would hit the floor in no time

@zens One thing that still baffles me is why Tesla hasn't gotten sued and/or regulated into the ground after outright lying about their cars' autonomousness and getting people killed.

@zens So that's the big question, right? How much will people think about it as they drive the car, or see other cars around them? Will they perceive the risk as personal, or something that only happens to other people?

@varx it depends on their background. my naive non scientific experience says that people without any tech or programming background tend towards having an unreasonable level of trust in the correctness of software. They explain away software problems and mistakes as their own stupidity or ineptness with tech, and this gets reinforced by many IT departments.
That said, i think people fundamentally perceive software differently when it’s directly controlling a machine that can kill people.

@varx you know, even if they’d be fine with software making a decision about killing people that gets unquestioningly carried out by some other non automated process
