People tell white lies to protect the feelings of robots: study

Sometimes we tell white lies to spare another person’s feelings, and as, it turns out, some of us may do the same for a robot.
(Google)

Sometimes we tell white lies to spare another person’s feelings, and, as it turns out, some of us may do the same for a robot.

The findings come from a study by researchers at the University of Bristol and University College London, who sought to discover ways to create an effective partnership between robots and humans, given the inevitable future in which they’ll work side by side.

In particular, the researchers were interested in creating a trusting environment, given that robots, despite being machines, aren’t perfect and will make mistakes.

To simulate these conditions, the study asked 23 participants – 12 men and 11 women, between the ages of 22 and 72, with a range of experience with artificial intelligence – to work alongside a robot named BERT2, which was tasked with passing them eggs, salt and oil to make an omelette.


BERT2 has a humanoid face displayed on a digital interface, with large eyes and a mouth, and is capable of multiple expressions.

The BERT2 platform with neutral expression (left) and BERT C's facial expression on egg drop (right)

However, participants worked with three variations of the robot: BERT A, which did not communicate or change expressions but was the most efficient; BERT B, which also had a fixed expression and did not communicate, but made a mistake and attempted to fix it; and BERT C, which could communicate, changed expressions and would attempt to fix its mistake.

The participants and their robot assistants were given four eggs, a bowl and a whisk.

BERT A was the most efficient, passing the eggs to its human counterpart without any issues.

BERT B dropped one of the eggs but then successfully tried the handover again using a different method.

BERT C asked the participants if they were ready to receive the eggs and responded based on a yes-or-no reply. In this case, the robot also dropped an egg, but “appeared conscious of its mistake and apologized,” with a face akin to a sad-faced emoji.

It then informed its partner that it was going to try a different type of handover. When its tasks were complete, it also asked whether it had performed well and whether it would be given the job of kitchen assistant, but participants could only answer yes or no.

“We would suggest that, having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no – they were mindful of the possibility of a display of future human-like distress,” said one of the study’s authors, Adriana Hamacher, in a press release.

In fact, one participant was so worried about hurting its kitchen assistant’s feelings that they complained of “emotional blackmail,” while another lied to the robot.

If you’ve seen the ending of “Wall-E,” you’d have to be made of stone not to sympathize with robots that show emotion.

And most of the study’s participants agreed in post-experiment questionnaires, with 15 of 21 saying they liked working with BERT C the most, despite the fact that it took 50 per cent longer than the other two.

Participants were also less frustrated with BERT C than BERT B when it made a mistake.

“Results suggest that efficiency is not the most important aspect of performance for users; a personable, expressive robot was found to be preferable over a more efficient one,” write the study's authors.

“Our findings also suggest that a robot exhibiting human-like characteristics may make users reluctant to ‘hurt its feelings,’ (and) may even lie in order to avoid this,” they added.

While the study’s sample size is small, the researchers suggest the findings are important because robots will “inevitably” malfunction, and that if they can demonstrate humanlike emotions, such as enthusiasm and regret, it could temper some of the accompanying dissatisfaction.