Driverless cars: When coding decisions determine life or death

Andrew Leavitt:

Are you familiar with “the Trolley Problem”? It’s an ethical thought experiment built around a moral dilemma.

An observer must decide whether to switch a trolley onto a different track when either choice will harm innocent bystanders. If he intervenes, more lives are saved [practical], but he has taken an active role in deciding who lives or dies [immoral]. If he leaves the switch alone, he remains passive, but more lives are certainly lost. There is no unequivocal “right choice,” especially as the scenario becomes more complex.

Interesting post that touches on Asimov’s Three Laws of Robotics and MIT’s Moral Machine web experiment.