
Automated Systems Aren’t Human-Exclusive


 

Today, most of us carry handheld devices that are orders of magnitude faster and more advanced than the computers that took us to the moon half a century ago. And every day, these devices only get better at anticipating and fulfilling our needs.

 

As such, and because of our all-too-human propensity toward error, it should come as no surprise that we’re increasingly delegating safety-related tasks to machines that have outpaced us in speed, reliability, and attention to detail. That is why automated systems are gaining steam across industries. Yet human touch and human intuition still carry an ethical weight in decision making that machines cannot supply for us.

 

A recent article from MIT’s Technology Review notes that we’re eventually going to be forced to program machines, even ones tasked with personal safety, to make ethically complex, life-or-death decisions for us:

 

“…many car manufacturers are beginning to think about cars that take the driving out of your hands altogether…These cars will be safer, cleaner, and more fuel-efficient than their manual counterparts. And yet they can never be perfectly safe. And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”

 

Naturally, the theoretical value of self-driving cars, which are projected to prevent deaths caused by driver fatigue, alcohol consumption, and inattention, represents an enormous potential improvement in overall human safety and quality of life. And yet, beneath the promise of these gains lingers the shadow of complex questions like those raised above.

 

Of course, we’re likely to see more debate over problems like these as automated and artificial intelligence systems permeate an increasing number of industries. Consider another example, this time from a construction site:

 

An automated wrecking ball has been programmed to take down a building. A sensor alerts the onboard computer that the wrecking ball’s cable is about to snap, and the computer doesn’t have time to lower the ball safely. All it can do, in the time allotted, is select where the wrecking ball should fall: 1) onto the crowd it’s currently above, or 2) onto a single construction worker who is standing a few degrees to the north.

 

If the computer redirects the ball, it is effectively choosing to kill, a decision in opposition to its original goal of preventing human injury. At the same time, that choice produces a net gain in the protection of human life. How should the machine be programmed to address a situation like this?
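
To make the dilemma concrete, here is a minimal, purely illustrative sketch (in Python) of one policy a programmer might encode: pick the option with the fewest expected casualties. The scenario, names, and numbers here are hypothetical, and real safety controllers are vastly more complicated; the point is simply that whichever rule we write, we are baking a value judgment into the machine ahead of time.

from dataclasses import dataclass

@dataclass
class Option:
    """A possible resting point for the falling ball (hypothetical scenario)."""
    name: str
    expected_casualties: float  # estimated number of people harmed at this spot

def choose_drop_point(options: list[Option]) -> Option:
    """One possible policy: minimize expected casualties.

    Encoding this rule is itself an ethical choice; it trades the few
    for the many, which is exactly the dilemma described above.
    """
    return min(options, key=lambda opt: opt.expected_casualties)

if __name__ == "__main__":
    options = [
        Option("stay above the crowd", expected_casualties=12.0),
        Option("swing toward the lone worker", expected_casualties=1.0),
    ]
    print(f"Policy selects: {choose_drop_point(options).name}")

A different rule, say, never actively redirect harm, would choose the other option; neither answer falls out of the engineering alone.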

 

While critics might (quite fairly) suggest that these are highly unlikely scenarios, the truth is that less extreme moral dilemmas exist in everything we program—from stoplights to air traffic control towers. For better or worse, we’ll never completely eradicate the need for human intervention in safety decisions.

 

Instead, we’ll simply be making these decisions in advance, as programmers develop the technologies we come to rely on every day.

 

This is the first in our series on automated systems, their advantages, and their human disconnects. What are your thoughts on automation in our society?
