Ethical Behavior and the Self-Driving Car

[This piece originally ran in the New England Journal of Higher Education.]

By Jennifer Ware
Editor, MindEdge Learning

Machines have changed our lives in many ways. But the technological tools we use on a day-to-day basis are still largely dependent on our direction. I can set the alarm on my phone to remind me to pick up my dry cleaning tomorrow, but as of now, I don’t have a robot that will keep track of my dry cleaning schedule and decide, on its own, when to run the errand for me.

As technology evolves, we can expect that robots will become increasingly independent in their operations. And with their independence will come concerns about their decision-making. When robots are making decisions for themselves, we can expect that they’ll eventually have to make decisions that have moral ramifications–the sort of decisions that, if a person had made them, we would consider blameworthy or praiseworthy.

Perhaps the most talked-about scenario illustrating this type of moral decision-making involves self-driving cars and the “Trolley Problem.” The Trolley Problem, introduced by Philippa Foot in 1967, is a thought experiment intended to clarify the kinds of things that factor into our moral evaluations. Here’s the gist:

The Trolley Problem

Imagine you’re driving a trolley, and ahead you see three people standing on the tracks. It’s too late to stop, and these people don’t see you coming and won’t have time to move. If you hit them, the impact will certainly kill them. But you do have the chance to save their lives! You can divert the trolley onto another track, but there’s one person on that path who will be killed if you choose to avoid the other three. What should you do?

Intuitions about what is right to do in this case tend to bring to light different moral considerations. For instance: Is doing something that causes harm (diverting the trolley) worse than doing nothing and allowing harm to happen (staying the course)? Folks who think you should divert the trolley, killing one person but saving three, tend to care more about minimizing bad consequences. By contrast, folks who say you shouldn’t divert the trolley tend to argue that you, as the trolley driver, shouldn’t get to decide who lives and dies.

The reality is that people usually don’t have time to deliberate when confronted with these kinds of decisions. But automated vehicles don’t panic, and they’ll do what we’ve told them to do. We get to decide, before the car ever faces such a situation, how it ought to respond. We can, for example, program the car to swerve onto the sidewalk if three people are standing in the crosswalk who would otherwise be hit–even if someone on the sidewalk is killed as a result.
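To make that idea concrete, here is a deliberately oversimplified sketch, in Python, of what such a pre-programmed rule might look like. Everything in it–the Scenario class, the harm counts, the choose_path function–is invented for illustration; real autonomous-vehicle software does not receive a neat tally of how many people stand on each path.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        """A toy description of an unavoidable-collision situation."""
        people_on_current_path: int   # e.g., pedestrians in the crosswalk
        people_on_swerve_path: int    # e.g., a bystander on the sidewalk

    def choose_path(scenario: Scenario) -> str:
        """A crude consequentialist rule: take whichever path harms fewer people.

        Ties default to staying the course, so the car never swerves unless
        doing so strictly reduces the number of people harmed.
        """
        if scenario.people_on_swerve_path < scenario.people_on_current_path:
            return "swerve"
        return "stay"

    # Three people in the crosswalk, one person on the sidewalk:
    # the rule says to swerve, sacrificing one to spare three.
    print(choose_path(Scenario(people_on_current_path=3, people_on_swerve_path=1)))

Even this toy rule smuggles in a contested moral judgment: that minimizing the number of people harmed is the only thing that matters.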

This, it seems, is an advantage of automation. If we can articulate idealized moral rules and program them into a robot, then maybe we’ll all be better off. The machine, after all, will be more consistent than most people could ever hope to be.

But articulating a set of shared moral guidelines is not so easy. While there’s good reason to think most people are consequentialists–responding to these situations by seeking to minimize pain and suffering–feelings about what should happen in Trolley cases are not unanimous. And additional factors can change or complicate people’s responses: What if the person who must be sacrificed is the driver? Or what if a child is involved? Making decisions about how to weigh people’s lives should make any ethically minded person feel uncomfortable. And programming those values into a machine that can act on them may itself be unethical, according to some moral theories.

Given the wide range of considerations that everyday people take into account when reaching moral judgments, how can a machine be programmed to act in ways that the average person would always see as moral? In cases where moral intuitions diverge, what would it mean to program a robot to be ethical? Which ethical code should it follow?

Finally, using the Trolley Problem to think about artificial intelligence assumes that the robots in question will recognize all the right factors in critical situations. After all, asking what an automated car should do in a Trolley Problem-like scenario is only meaningful if the automated car actually “sees” the pedestrians in the crosswalk. But at this early stage in the evolution of AI, these machines don’t always behave as expected. New technologies are being integrated into our lives before we can be sure that they’re foolproof, and that fact raises important moral questions about responsibility and risk.

As we push forward and discover all that we can do with technology, we must also include in our conversations questions about what we should do. Although those questions are undoubtedly complicated, they deserve careful consideration—because the stakes are so high.

For a look at MindEdge’s Ethics Learning Resource, click here.



They’re not trying to rack up big surpluses, but nonprofit organizations are still businesses—and that means they’ve got to pay strict attention to their budgets. Perhaps most important is the cash budget, says Corrine Hasbany, an accounting and finance instructor who has served as a corporate controller and a nonprofit treasurer. Why is that? Because even for nonprofits, “cash is king.”

For a complete listing of MindEdge’s nonprofit management courses, click here.



MindEdge’s quote of the week comes from Doris Lessing, British novelist, poet and playwright.

“This is what learning is. You suddenly understand something you’ve understood all your life, but in a new way.” –Doris Lessing


Copyright © 2018 MindEdge, Inc.
