Google CEO Sundar Pichai will join other Big Tech chiefs on Thursday at a Congressional hearing on social media’s role in promoting extremism and misinformation. And in his written testimony ahead of the hearing, Pichai, whose company also owns YouTube, lays out exactly what Google (GOOG, GOOGL) has done to stanch the flow of such content.
But the CEO also warns committee members against making dramatic changes to their public enemy No. 1: Section 230 of the Communications Decency Act, which protects web platforms from liability for content posted by third parties and allows them to freely moderate content.
“Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability,” Pichai notes in his remarks presented to the House Commerce Committee.
“We are, however, concerned that many recent proposals to change Section 230 — including calls to repeal it altogether — would not serve that objective well. In fact, they would have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”
Pichai’s comments echo a common refrain among Section 230 experts, who argue that the law is what allows the web to function as it currently does.
But critics say the law, passed in 1996, needs to be updated — though Republicans and Democrats typically disagree on how it should change. Some Republicans say it allows for too much content moderation, resulting in censorship, while those on the opposite side of the aisle say it lets sites host misinformation without fear of repercussions.
Pichai, however, suggests that social media sites should create concrete content policies for dealing with objectionable user-generated posts.
“Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time,” he notes.
'We strive to have clear and transparent policies'
Thursday’s hearing will mark the second time in the past year that Pichai has sat before Congress to discuss Section 230 and disinformation, and the third for Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey.
Pichai’s recommendations for dealing with content moderation are exactly the kind experts have been making for some time. Rather than tearing up Section 230, they call for tech companies to provide users with a better understanding of how they moderate content and why, as well as a means to protest when their content is removed.
Facebook has already implemented such a strategy with its oversight board, a group of experts who can make the final call on content removal without being overruled by Zuckerberg. It’s currently deciding whether Facebook should restore former President Donald Trump’s accounts after they were revoked following the Jan. 6 Capitol attack.
Pichai also, naturally, paints Google in an admirable light by pointing to how the company has addressed disinformation and misinformation around such hot-button topics as the 2020 election, COVID-19 and vaccines, and the Jan. 6 attack on the Capitol.
He’s also careful to point out that Google made those decisions without giving thought to whether posts that run afoul of its moderation policies come from conservatives or liberals.
“Across all of this work, we strive to have clear and transparent policies and enforce them without regard to political party or point of view,” Pichai writes. “We work to raise up authoritative sources, and reduce the spread of misinformation in recommendations and elsewhere.”
Got a tip? Email Daniel Howley at firstname.lastname@example.org or via encrypted mail at email@example.com, and follow him on Twitter at @DanielHowley.