Why the EU’s Vice President Isn’t Worried About Moon-Landing Conspiracies on YouTube

When European Commission vice president Věra Jourová met with YouTube CEO Neal Mohan in California last week, they fell to talking about the long-running conspiracy theory that the moon landings were fake. YouTube has faced calls from some users and advocacy groups to remove videos questioning the historic missions. Like other videos that deny accepted science, they have been booted from recommendations and carry a Wikipedia link pointing viewers to debunking context.

But as Mohan spoke about those measures, Jourová made something clear: Fighting lunar lunatics or flat-earthers shouldn’t be a priority. “If the people want to believe it, let them do,” she said. As the official charged with protecting Europe’s democratic values, she thinks it’s more important to make sure YouTube and other big platforms don’t spare a euro that could be invested in fact-checking or product changes to curb false or misleading content that threatens the EU’s security.

“We are focusing on the narratives which have the potential to mislead voters, which could create big harm to society,” Jourová tells WIRED in an interview. Unless conspiracy theories could lead to deaths, violence, or pogroms, she says, don’t expect the EU to be demanding action against them. Content like the recent fake news report announcing that Poland is mobilizing its troops in the middle of an election? That better not catch on as truth online.

In Jourová’s view, her conversation with Mohan and similar discussions she held last week with the CEOs of TikTok, X, and Meta show how the EU is helping companies understand what it takes to counter disinformation, as is now required under the bloc’s tough new Digital Services Act. Among its requirements: starting this year, the internet’s biggest platforms, including YouTube, have to take steps to combat disinformation or risk fines of up to 6 percent of their global sales.

Civil liberties activists have been concerned that the DSA ultimately could enable censorship by the bloc’s more authoritarian regimes. A strong showing by far-right candidates in the EU’s parliamentary elections taking place later this week also could lead to its uneven enforcement.

YouTube spokesperson Nicole Bell says the company is aligned with Jourová on preventing egregious real-world harm and on removing content that misleads voters about how to vote or encourages interference in democratic processes. “Our teams will continue to work around the clock,” Bell says of monitoring problematic videos about this week’s EU elections.

Jourová expects her five-year term to end later this year, in part because her Czech political party, ANO, is no longer in power at home in Czechia and so cannot renominate her. She contends that the DSA is not meant to enable anything more than appropriate moderation of the most egregious content, and she doesn’t expect Mohan or any other tech executive to go a centimeter beyond what the law prescribes. “Overusage, overshooting on the basis of the EU legislation would be a big failure and a big danger,” she says.

On the other hand, she acknowledges that some influential politicians have threatened to seek stiffer rules that could border on outright censorship if the companies aren’t seen to be stepping up against disinformation. “I hate this idea,” she says. “We don’t want this to happen.”

But with the DSA offering guidelines more than bright lines, how are platforms to know when to act? Jourová’s “democracy tour” in Silicon Valley, as she calls it, is part of facilitating a dialog on policy. And she expects social media researchers, experts, and the press to all contribute to figuring out the fuzzy borders between free expression and destructive disinformation. She jokes that she doesn’t want to be seen as the “European Minister of the Truth,” as tempting as that title may be. Leaving it to politicians alone to define what’s acceptable online “would pave the way to hell,” she says.

Jourová does have some clear preferences, though. “We should do everything to guarantee that lies are not the easiest way to get political positions,” she says. “If politicians are lying, there should be somebody to say immediately, ‘Guy, you are lying.’ Using clear lies, especially of the nature that increases the hostility and proliferates hate, should be stopped.”

Political candidates around the world have continued to turn to new technologies and social media to spread potentially misleading content. Jourová says local researchers identified 70 cases of deepfakes ahead of recent elections in Slovakia. Though their impact on the result has not been assessed, some audio deepfakes released on the eve of the vote targeted a pro-Ukraine candidate, who lost his bid to run the country to a pro-Russian opponent. WIRED has so far cataloged about 50 cases of deepfakes across elections globally this year.

Western governments and researchers have attributed some of the deepfake surge to Russia. But though Jourová is concerned about the alleged interference, she also takes it as evidence that democracy is working. There aren’t enough fellow autocrats in Europe for Putin to call up to win favor with the EU, she reasons, and so instead, he has to seed lies and hope they sway electorates towards installing leaders who support him. That’s an expensive strategy for a country in economic straits, Jourová says, and she anticipates it becoming more expensive still for the Kremlin if tech platforms successfully crack down on disinformation.

The DSA includes measures aimed at making clear to officials and the public what action platforms are taking. Companies are required to share data and commentary on their work to limit disinformation such as political deepfakes. So far, the platforms’ compliance with that provision of the DSA and with the EU’s related voluntary Code of Practice on Disinformation has been uneven, making it difficult to draw comparisons or assemble an overall picture of harmful untruths across the internet.

YouTube has told the EU that in the second half of last year 112 deepfake videos each received over 10,000 views before being taken down. By contrast, Meta offered no comparable data and declined to comment to WIRED on its different approach to reporting.

“It’s a bit still moving in the darkness,” Jourová says of monitoring compliance. But she insists that will change. “Comparable data in a structured way” is the goal, she says. Or else big fines—and the crippling of democracy—could follow.

Additional reporting by Morgan Meaker.
