Can AI robots become moral agents?

How do we safeguard human agency when intelligent, moral robots determine every aspect of how we live our lives? 

Nothing less than Kant's Categorical Imperative will do: let's challenge engineers to invent an AI system that appreciates the supreme principle of morality.

Until then, we must closely scrutinize the developers, owners, and operators of AI systems and hold them accountable. Biases in AI originate from flaws in human organizations, society, and history--the problem of many hands.

Yet even when AI becomes capable of moral reasoning, how could our feeble hearts and minds rest assured that these artificial moral agents will be unwavering in their respect for human rights? Who serves whom? We will need a human-machine social contract and constitution--etched in the blockchain, of course.

Finally, how minuscule a remainder of human free will and agency will be left in the midst of these autonomous, omniscient AI creatures?

Photo by Mick Haupt on Unsplash