Teaching Artificial Intelligence Morality

BY KEITH TUCKER

Special to The Enterprise

There is considerable effort underway to develop artificial intelligence, better known as AI. So, what's the difference between a regular computer with an operating system and AI? Well, AI is still just a computer, but it has something called machine-learning software. It keeps track of successes and failures and makes adjustments based on those outcomes. We would say it learns, and that's the difference.
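To make that concrete, here is a toy sketch in Python (a programming language) of that keep-score-and-adjust idea. Every name and number in it is made up for illustration; real machine learning is far more elaborate, but the bones are the same: try something, score the outcome, lean toward what worked.

    import random

    # A toy "learning" loop: the program tallies successes and
    # failures for two choices and gradually favors whichever
    # one works better. All names and numbers are invented.
    scores = {"choice_a": 0, "choice_b": 0}

    def pick():
        # Mostly pick the choice with the better track record,
        # but try the other one now and then.
        if random.random() < 0.1:
            return random.choice(list(scores))
        return max(scores, key=scores.get)

    for trial in range(1000):
        choice = pick()
        # Pretend choice_a succeeds 70% of the time, choice_b 40%.
        succeeded = random.random() < (0.7 if choice == "choice_a" else 0.4)
        scores[choice] += 1 if succeeded else -1  # adjust on the outcome

    print(scores)  # choice_a ends up way ahead

Run it and choice_a finishes with the far higher score. Nobody told the program choice_a was better; it kept up with the outcomes and adjusted. That is all "learning" means here.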

So how does morality factor into how these machines learn? Let's say a self-driving car is taking you to town and a child runs out into the road. Now this car must decide what to do. Let's say there is a car coming toward you and grandma is on the sidewalk to the right.

The car must decide on one of three outcomes: turn into the opposing car's lane, which is the least acceptable choice; hit the child in front; or swerve onto the sidewalk and strike grandma. Do you choose an outcome based on the child having more years to live than grandma?

Or is the outcome based on liability? If the car leaves the roadway, its liability goes up, because the child is in the wrong by being in the road.

Someone must install in the AI program a set of moral decisions based on someone's moral and legal values. So far, these decisions are based on parameters we decide are relevant. However, when the day comes that the AI makes its own decisions based on its own values, we are all in trouble. How much trouble depends on how much responsibility we have handed over to these machines.
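To see what "installing" morals looks like, here is another toy Python sketch. Again, every name and number is invented for illustration; the point is that the values are typed in by a person, not discovered by the machine.

    # Whoever types in these numbers is making the moral call, not the car.
    # Every value below is invented for illustration.
    HARM_SCORES = {
        "turn_into_oncoming_lane": 100,  # the least acceptable choice
        "hit_child_in_road": 90,
        "swerve_and_hit_grandma": 80,
    }

    def decide(harm_scores):
        # The "decision" is just the option a human scored lowest.
        return min(harm_scores, key=harm_scores.get)

    print(decide(HARM_SCORES))  # prints: swerve_and_hit_grandma

Change that 90 to a 70 and the car hits the child instead. The machine isn't weighing years of life or liability; a person did, before the car ever left the factory.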

There has already been an engineer who thought his AI was self-aware. I doubt it; he just became overly invested in his work. It's like how everybody thinks their kid is the next Mozart, when in reality they're just another kid banging on the piano.

Editor’s note: Keith Tucker is a Greenfield resident and owner of The Marble Shop. He may be contacted by email at keithtucker06@gmail.com.