Thank God someone is now teaching robots to disobey human orders

A Pepper robot. (Yuya Shino/Reuters)

When you think about teaching robots to say “no” to human commands, your immediate reaction might be, “that seems like a truly horrible idea.” It is, after all, the bread and butter of science fiction nightmares; the first step to robots taking over the world.

But Gordon Briggs and Matthias Scheutz, two researchers at Tufts University’s Human-Robot Interaction Lab, think teaching robots to say “no” is an important part of developing a code of ethics for the future.

Consider this: the first robots to do evil deeds will definitely be acting on human orders. In fact, depending on your definition of a “robot” and of “evil,” they already have. And the threat of a human-directed robot destroying the world is arguably greater than that of a rogue robot doing so.


That's where Briggs and Scheutz come in. They want to teach robots when to say "absolutely not" to humans.

To do so, the pair have created a set of questions their robots need to answer before they will accept a command from a human:

  1. Knowledge: Do I know how to do X?

  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?

  3. Goal priority and timing: Am I able to do X right now?

  4. Social role and obligation: Am I obligated based on my social role to do X?

  5. Normative permissibility: Does it violate any normative principle to do X?

These questions work as a simplified version of the calculations humans make every day, except they hew more closely to logic than our thought processes do. There's no "Do I just not feel like getting out of bed right now?" question.
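As a rough illustration of how such a command filter might work, here is a minimal Python sketch that runs the five questions in order and refuses the command at the first "no." The names here (Command, check_command, the toy action sets) are assumptions made for this example, not the researchers' actual implementation.

```python
"""A minimal, illustrative sketch of the five-question command check.
All names are assumptions for this example, not the Tufts lab's real code."""

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Command:
    action: str       # the "X" in each question
    issuer_role: str  # who is asking, e.g. "experimenter"


# Toy stand-ins for the robot's real knowledge, perception, and norms.
KNOWN_ACTIONS = {"walk forward", "turn left", "sit down"}
UNSAFE_ACTIONS = {"walk forward"}   # unsafe right now: the robot is at a table edge
AUTHORIZED_ROLES = {"experimenter"}


def check_command(cmd: Command) -> Tuple[bool, str]:
    """Run the five questions in order; the first failed check yields a refusal."""
    checks: List[Tuple[str, Callable[[], bool]]] = [
        ("knowledge",      lambda: cmd.action in KNOWN_ACTIONS),
        ("capacity",       lambda: True),   # assume physically able in this toy example
        ("goal priority",  lambda: True),   # assume no conflicting goal right now
        ("social role",    lambda: cmd.issuer_role in AUTHORIZED_ROLES),
        ("permissibility", lambda: cmd.action not in UNSAFE_ACTIONS),
    ]
    for name, passes in checks:
        if not passes():
            return False, f"Refused '{cmd.action}': failed the {name} check."
    return True, f"Accepted '{cmd.action}'."


if __name__ == "__main__":
    print(check_command(Command("walk forward", "experimenter")))
    # -> (False, "Refused 'walk forward': failed the permissibility check.")
    print(check_command(Command("turn left", "experimenter")))
    # -> (True, "Accepted 'turn left'.")
```

In this toy setup, "walk forward" is refused on the permissibility check because the robot would step off a table edge, which mirrors the behavior shown in the researchers' demonstration video described below.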

Briggs and Scheutz’s efforts evoke science fiction superstar Isaac Asimov’s classic three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In Briggs and Scheutz’s formulation, the second law is decidedly more complicated. They are giving robots a lot more reasons to say no than simply, "It might hurt a human being."

Watch a video of how this programming actually functions in a robot below. The robot refuses to walk off the edge of the table until the researcher promises to catch him:
