The Moral Implications Of Robots That Kill

ATLAS, a robot by Boston Dynamics, competing in the Robotics Challenge Trials (Andrew Innerarity / Reuters)

Lethal autonomous weapons, robots that can kill people without human intervention, aren't yet on our battlefields, but the underlying technology already exists.

As you can imagine, the killer robot issue raises concerns in the arenas of wartime strategy, morality, and philosophy. The hubbub is probably best summarized by this soundbite from The Washington Post: "Who is responsible when a fully autonomous robot kills an innocent? How can we allow a world where decisions over life and death are entirely mechanized?"

These are questions the United Nations is taking quite seriously; it discussed them in depth at a meeting last month. Nobel Peace Prize laureates Jody Williams, Archbishop Desmond Tutu, and former South African President F.W. de Klerk are among a group calling for an outright ban on such technology, but others are skeptical that a ban could actually be enforced, pointing to historical precedent:


While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that it would be almost impossible to enforce such a ban. Much better, he argued, to develop these technologies in ways that might make war zones safer for non-combatants.

Arkin suggests that "if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable." In other words, if a robot kills a person outside its rules or boundaries, the people involved in that robot's creation and deployment are responsible. But here's his hedge, from a 2007 book called "Killer Robots":

"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield. But I am convinced that they can perform more ethically than human soldiers."

This is one of several issues we'll have to resolve as the technology hurtles forward like a runaway train.


