ChatGPT’s creators say AI has been ‘biased, offensive and objectionable’ – and commits to fix it

(Getty Images)

OpenAI, the creator of ChatGPT, says the system has been “politically biased, offensive” and “otherwise objectionable”, and has committed to changing how it works.

Those changes include making the system more likely to say things that people may strongly disagree with, and introducing new tools that let people customise how it behaves.

Since ChatGPT was released late last year, with millions of people using it each day, some users have begun to complain and worry about the kinds of things it can say. Some have criticised it for appearing to favour certain sides of political debates – with some right-wing commentators suggesting it was biased against Donald Trump and towards Joe Biden, for instance – as well as for seeming to take divisive positions on some topics.


The version of ChatGPT that has been integrated into Bing has drawn even more attention for its unusual and sometimes offensive behaviour. Users have found that it appears to attack them, accuse them of lying and more.

OpenAI said that “in many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address”. It did not say specifically which concerns it agreed with or which examples it was responding to, but said that it would work to fix those problems in the future through tweaks to the system.

At the moment, models such as ChatGPT are built by feeding them a vast amount of text, from which they learn to predict the most likely next word in a sentence. The system then goes through a second phase of fine-tuning, in which human reviewers help narrow down its behaviour so that it behaves appropriately.
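As a rough illustration of that first stage, here is a toy next-word predictor in Python built from simple bigram counts over a few sentences. Real models such as ChatGPT instead train neural networks on vast corpora, so this is only a sketch of the idea, not how OpenAI’s systems are implemented.

```python
from collections import Counter, defaultdict

# Toy illustration only: real models use neural networks trained on
# billions of documents, not bigram counts over a couple of sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which word follows it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently seen next word (ties break by first seen)."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> 'on'  (seen twice after 'sat')
print(predict_next("the"))  # -> 'cat' (all successors tied; first seen wins)
```

The same principle, scaled up enormously, is what lets such systems produce fluent text: each word is chosen because it is statistically likely given what came before.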

OpenAI said that process “is more similar to training a dog than to ordinary programming”. As such, the company does not give it explicit instructions, but rather general guidance that it then follows in its interactions with people.

But OpenAI said it had decided to share some of the guidelines given to those reviewers, which therefore help shape the system’s training. Posted online, they say, in short, that the system should avoid taking sides on divisive topics and should instead try to help users with informational questions.

The company suggested that more could be done to avoid those situations, and said it would work harder to change how the system behaves.

That will include changing the system to “reduce both glaring and subtle biases in how ChatGPT responds to different inputs”. At the moment, ChatGPT “refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should”, and that could be improved, it said.

OpenAI also said that ChatGPT could be improved so that it doesn’t “make things up” quite so often.

Another of OpenAI’s suggestions is likely to prove the most controversial. The company said that it will upgrade the system so that users can “easily customise its behaviour”, such as instructing it to give outputs “that other people (ourselves included) may strongly disagree with”.

OpenAI did not say what kind of things that might include. But at the moment its guidelines explicitly discourage ChatGPT from promoting ideas that lead to massive loss of life such as genocide, slavery and terrorist attacks, for example.

The system will continue to include “hard bounds” that will stop it from ever undertaking certain behaviours, however. OpenAI said that it will look for public input on both those bounds as well as the system’s defaults, with a view to avoiding concentrating power in its own hands.
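Taken together, OpenAI describes three layers: hard bounds that always apply, default behaviour, and user customisation within those bounds. As a minimal sketch of how such layering could work (all names here are hypothetical; none come from OpenAI’s API or announcement), per-user preferences might be merged over defaults and then clipped by the bounds:

```python
# Hypothetical sketch of layered behaviour settings. The keys and values
# below are invented for illustration; OpenAI has not published such a scheme.

HARD_BOUNDS = {"promote_violence": False}          # never user-adjustable
DEFAULTS = {"political_stance": "neutral", "tone": "measured"}

def effective_behaviour(user_prefs: dict) -> dict:
    """Merge user preferences over defaults, then enforce hard bounds."""
    behaviour = {**DEFAULTS, **user_prefs}   # user choices override defaults
    behaviour.update(HARD_BOUNDS)            # hard bounds override everything
    return behaviour

# A user opts into more opinionated output; the hard bound still holds.
print(effective_behaviour({"political_stance": "opinionated"}))
# -> {'political_stance': 'opinionated', 'tone': 'measured', 'promote_violence': False}
```

On this reading, customisation widens what the system will say for a given user, while the hard bounds and publicly consulted defaults set the limits that no preference can override.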

“Sometimes we will make mistakes. When we do, we will learn from them and iterate on our models and systems,” the company concluded.

“We appreciate the ChatGPT user community as well as the wider public’s vigilance in holding us accountable, and are excited to share more about our work in the three areas above in the coming months.”