Thomson Reuters CEO: AI guardrails remain unclear—but all companies can start by pledging to protect their customers’ data privacy

Mark Blinch—Reuters

Artificial Intelligence (AI) has been a hot topic lately—even though AI, machine learning, and large language models are nothing new. Many companies, ours included, have been developing, adapting, and adopting these technologies for decades.

What is new is the attention the topic has received in conversations since the game-changing rollout of ChatGPT. To be sure, the advent of generative AI has leveled the playing field and made artificial intelligence much more accessible. And it is accelerating AI’s adoption in fields where it previously would have been out of reach. Gen AI can produce initial drafts of documents, automate routine tasks, draw from vast amounts of data to improve decision-making, and free up people’s time for a better work-life balance.

Unfortunately, because there are not yet enough generally agreed-upon ethical guardrails around gen AI, there have been controversies surrounding its use. That includes the collection of personal and copyrighted data from online sources to build large language models (LLMs).
 
In this fast-emerging age of AI, there is a social imperative to insist upon the responsible use of the technology. And that social imperative is very much the business imperative of responsible AI. For example, when Apple recently announced it would incorporate ChatGPT into the iPhone, it was good to hear Apple's commitment that a user's queries would remain private.

To retain the public's trust, we must all take a firm, ethical stand. We must insist that the same ethics that have long governed the practice of law, tax and accounting, and other knowledge-industry professions also inform and inspire the professional use of AI. We must adhere to the time-honored practices of client privilege and non-disclosure of the sensitive information that surfaces while doing business with our customers and business partners.

Our company is making the following AI pledge to all our current and potential customers: Your confidential information will not become output for a third party. That means we won’t allow a customer’s data to be used to train a third-party LLM. And we call on other companies to make a similar commitment.

Given the rising ubiquity of professional-grade AI, the time has come to make this a public discussion.

After all, it is companies like Thomson Reuters and our community of tech peers that have invested in and developed AI. It is our collective responsibility to ensure that this powerful technology, with its potential impact on nearly all facets of society, culture, and the economy, is developed and used responsibly. The choices we make now will determine the AI future.

Reasons for optimism, with a caveat

The good news is that people in the knowledge professions tend to be overwhelmingly positive about the implications of AI. And they’re optimistic about the potential for AI to transform the way they do their jobs.

And yet, for all the transformative power of technology, it can’t fundamentally change human nature. There will always be bad actors who want to wield new technology in a manner that may not respect the intellectual property or data privacy rights of customers or the public.

That’s why we continually need the rule of law and responsible, informed public policies to create guardrails. And it’s also why developers of the technology must self-govern when it comes to AI.
 
The business imperative of responsible AI begins with ethics. It means protecting customer and client information. That’s why we’re making our pledge today.

AI can be a force for good. It can free humans from tedium to be their best professional selves. But as we move forward into the AI age, let’s be sure to keep a healthy, ethical human perspective on what even the most powerful technologies can and can’t do—or at least shouldn’t do.

The future of AI is in our hands. It’s humans who have designed this technology to serve us. And in the end, who do people want to interact with? To do business with?

It’s other people. And they deserve our full respect.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

This story was originally featured on Fortune.com