How AI-generated content is upping the workload for Wikipedia editors

Image Credits: Photo Illustration by Serene Lee/SOPA Images/LightRocket via Getty Images

As AI-generated slop takes over increasing swathes of the user-generated Internet thanks to the rise of large language models (LLMs) like OpenAI's GPT, spare a thought for Wikipedia editors. In addition to their usual job of grubbing out bad human edits, they're having to spend an increasing proportion of their time trying to weed out AI filler.

404 Media has talked to Ilyas Lebleu, an editor at the crowdsourced encyclopedia who helped found "WikiProject AI Cleanup," a group trying to establish best practices for detecting machine-generated contributions. (And no, before you ask, AI is useless for this.)
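For a flavor of what a crude first-pass filter might look like, here is a minimal Python sketch that flags edits containing boilerplate phrases LLMs are known to leave behind in unedited output. Both the function and the phrase list are illustrative assumptions, not WikiProject AI Cleanup's actual tooling:

```python
# Hypothetical sketch: flag edits containing phrases that raw LLM
# output often includes. The phrase list is an illustrative assumption,
# not a list maintained by WikiProject AI Cleanup.
LLM_TELLS = [
    "as an ai language model",
    "as of my last knowledge update",
    "i cannot browse the internet",
    "it is important to note that",
]

def flag_suspect_edit(edit_text: str) -> list[str]:
    """Return any tell-tale LLM phrases found in an edit."""
    lowered = edit_text.lower()
    return [phrase for phrase in LLM_TELLS if phrase in lowered]

if __name__ == "__main__":
    sample = (
        "As of my last knowledge update, the town had a population "
        "of roughly 12,000. It is important to note that..."
    )
    hits = flag_suspect_edit(sample)
    if hits:
        print(f"Suspect edit; matched phrases: {hits}")
```

Phrase matching of this kind catches only the laziest copy-pastes; lightly edited machine text sails straight past it, which is part of why the cleanup work still falls to human editors rather than automated detectors.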

A particular problem with AI-generated content in this context is that it's almost always improperly sourced. The ability of LLMs to instantly produce reams of plausible-sounding text has even led to whole fake entries being uploaded in a bid to sneak hoaxes past Wikipedia's human experts.
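The sourcing problem, at least, is mechanically checkable to a first approximation. Below is a toy heuristic sketch, assuming plain wikitext input and a hypothetical `unsourced_paragraphs` helper; real Wikipedia maintenance tooling is far more involved than this:

```python
import re

# Toy heuristic (an assumption, not Wikipedia's actual tooling):
# split wikitext into paragraphs and flag substantial ones carrying
# no <ref> citation tags, since unsourced reams of plausible-sounding
# text are a hallmark of LLM filler.
REF_TAG = re.compile(r"<ref[ >]", re.IGNORECASE)

def unsourced_paragraphs(wikitext: str, min_chars: int = 200) -> list[str]:
    """Return substantial paragraphs that contain no citation tags."""
    paragraphs = [p.strip() for p in wikitext.split("\n\n") if p.strip()]
    return [
        p for p in paragraphs
        if len(p) >= min_chars and not REF_TAG.search(p)
    ]

if __name__ == "__main__":
    article = (
        "The village was founded in 1862.<ref>Smith 2001</ref>\n\n"
        "It is widely regarded as one of the most historically significant "
        "settlements in the region, with a rich cultural heritage spanning "
        "centuries and a vibrant community that continues to thrive today, "
        "drawing visitors from around the world."
    )
    for p in unsourced_paragraphs(article):
        print("UNSOURCED:", p[:60], "...")
```

A check like this only surfaces candidates for review; deciding whether an unsourced paragraph is an honest draft, a hoax, or machine filler still takes a human editor.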