A more moderate approach would be democratic robot learning, in which programmers hard-code a small number of fundamental norms into the robot and let it learn the remaining context-specific norms through its interactions with the community in which it is raised. Fundamental norms must include the prevention of harm (especially to humans), but also politeness and respect, without which social interactions could not succeed. A host of specific norms would then translate the abstract norms into concrete behavior (e.g., what it means to be polite in a particular context) and define the conditions under which one fundamental norm can supersede another (e.g., it’s OK to drop politeness when trying to save someone from harm).
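The two-tier structure described above — hard-coded fundamental norms that can supersede learned, context-specific ones — could be sketched roughly as follows. This is a minimal illustration, not a real robot-ethics implementation; the norm names and priority scheme are hypothetical.

```python
# Hypothetical sketch of a two-tier norm system: fundamental norms are
# hard-coded with a precedence order, so a more fundamental norm (e.g.,
# preventing harm) can override a less fundamental one (e.g., politeness).

from dataclasses import dataclass, field

@dataclass(order=True)
class Norm:
    priority: int                      # lower number = more fundamental
    name: str = field(compare=False)   # name is not used for comparison

# Hard-coded fundamental norms, ordered by precedence (assumed ordering).
PREVENT_HARM = Norm(0, "prevent_harm")
BE_POLITE = Norm(1, "be_polite")

def resolve(active_norms):
    """When several norms apply at once, the most fundamental
    (lowest priority number) governs behavior."""
    return min(active_norms)

# Example: politeness is dropped when someone must be saved from harm.
conflict = [BE_POLITE, PREVENT_HARM]
print(resolve(conflict).name)  # prevent_harm
```

The context-specific norms the robot learns would then fill in what, say, "be polite" means in a given situation, while the precedence rule above stays fixed.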

Democratic robot learning would also guide a robot in dealing with contradictory teachers. Say one person tries to teach the robot to share, and another tries to teach it to steal. In that case, the robot should ask the community at large which teacher is legitimate; after all, the norms and morals of a community are typically held by at least a majority of its members. Just as humans naturally look to their peers for guidance, thoughtful crowdsourcing should be another principle that learning robots obey.
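The crowdsourcing idea — accept a contested lesson only when a majority of the community endorses it — amounts to a simple majority vote. A minimal sketch, with invented function and variable names:

```python
# Hypothetical sketch of "thoughtful crowdsourcing": when teachers
# conflict, poll the community and accept only a strict-majority answer.

from collections import Counter

def crowdsource_norm(community_responses):
    """Return the behavior a strict majority of the community endorses,
    or None when the community itself is split."""
    tally = Counter(community_responses)
    behavior, count = tally.most_common(1)[0]
    return behavior if count > len(community_responses) / 2 else None

# Conflicting teachers: one taught "share", the other "steal".
# The robot polls community members for the legitimate lesson.
responses = ["share", "steal", "share", "share", "share"]
print(crowdsource_norm(responses))  # share
```

Returning None when no majority exists reflects the caution the approach calls for: absent community consensus, the robot withholds judgment rather than adopting either teacher's lesson.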

But won’t such learning robots take over the world and wipe out humanity? They likely won’t, because the community in which they grow up will teach them better.