Ignoring warnings, YouTube let toxic videos run rampant

Since YouTube CEO Susan Wojcicki took the stage at the South by Southwest conference in Austin, Texas, last year, prominent conspiracy theories on the platform have drawn the ire of lawmakers eager to regulate technology companies. (David Paul Morris/Bloomberg News file photo)

A year ago, Susan Wojcicki was onstage to defend YouTube. Her company, hammered for months for fueling falsehoods online, was reeling from another flare-up involving a conspiracy theory video about the Parkland, Fla., high school shooting that suggested the victims were “crisis actors.”

Wojcicki, YouTube’s chief executive officer, is a reluctant public ambassador, but she was in Austin, Texas, at the South by Southwest conference to unveil a solution that she hoped would help quell conspiracy theories: a tiny text box from websites like Wikipedia that would sit below videos that questioned well-established facts like the moon landing and link viewers to the truth.

Wojcicki’s media behemoth, bent on overtaking television, is estimated to rake in sales of more than $16 billion a year. But on that day, Wojcicki compared her video site to a different kind of institution. “We’re really more like a library,” she said, staking out a familiar position as a defender of free speech. “There have always been controversies, if you look back at libraries.”

Since Wojcicki took the stage, prominent conspiracy theories on the platform, including one about child vaccinations and another tying Hillary Clinton to a Satanic cult, have drawn the ire of lawmakers eager to regulate technology companies. And YouTube is, a year later, even more associated with the darker parts of the web.

The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos that fell just short of the hate speech rules and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful over the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat.

Conversations with over 20 people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on internal alarms for fear of throttling “engagement,” a measure of the views, time spent and interactions with online videos.

YouTube turned down Bloomberg News’ requests to speak to Wojcicki, other executives, management at Google and the board of Alphabet Inc., its parent company. Last week, Neal Mohan, its chief product officer, told the New York Times that the company has “made great strides” in addressing its issues with recommendation and radical content.

A YouTube spokeswoman contested the notion that Wojcicki is inattentive to these issues and that the company prioritizes engagement above all else. Instead, the spokeswoman said the company has spent the last two years focused squarely on finding solutions for its content problems. Since 2017, YouTube has recommended clips based on a metric called “responsibility,” which includes input from satisfaction surveys it shows after videos. YouTube declined to describe it more fully, but said it receives “millions” of survey responses each week.

“Our primary focus has been tackling some of the platform’s toughest content challenges,” a spokeswoman said in an emailed statement. “We’ve taken a number of significant steps, including updating our recommendations system to prevent the spread of harmful misinformation, improving the news experience on YouTube, bringing the number of people focused on content issues across Google to 10,000, investing in machine learning to be able to more quickly find and remove violative content, and reviewing and updating our policies — we made more than 30 policy updates in 2018 alone. And this is not the end: responsibility remains our number one priority.”

In response to criticism about prioritizing growth over safety, Facebook has proposed a dramatic shift in its core product. YouTube, by contrast, has struggled to explain any new corporate vision to the public and investors, and sometimes to its own staff. Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure. Within Google, YouTube’s inability to fix its problems has remained a major gripe.

YouTube’s inertia was illuminated again several weeks ago, after a deadly measles outbreak drew public attention to vaccination conspiracies on social media. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than 20 YouTube channels spreading these lies reached over 170 million viewers, many of whom were then recommended other videos laden with conspiracy theories.