Before Google’s disastrous social network Google+ came the less remembered Google Buzz. Launched in 2010, Buzz survived less than two years. But its mishandling of people’s personal data motivated the first in a series of legal settlements that, though imperfect, are to this day the closest the US has come to establishing extensive rules for protecting privacy online.
When users set up a Buzz account, Google automatically created a friend network made up of the people they emailed, horrifying some by exposing private email addresses and secret relationships. Washington regulators felt compelled to act, but Google had not broken any national privacy law—the US didn’t have one.
The Federal Trade Commission improvised. In 2011 Google reached a 20-year legal settlement with the agency, dubbed a consent decree, over allegations that it had misled users with its policies and settings. The decree created a sweeping privacy standard for just one tech company, requiring Google through 2031 to maintain a “comprehensive privacy program” and allow external assessments of its practices. The next year, the FTC signed Facebook onto a near-identical consent decree, settling allegations that the company now known as Meta had broken its own privacy promises to users.
WIRED interviews with 20 current and former employees of Meta and Google who worked on privacy initiatives show that internal reviews forced by consent decrees have sometimes blocked unnecessary harvesting of and access to users’ data. But current and former privacy workers, from low-level staff to top executives, increasingly view the agreements as outdated and inadequate. Their hope is that US lawmakers engineer a solution that helps authorities keep pace with advances in technology and constrains the behavior of far more companies.
Congress does not look likely to act soon, leaving the privacy of the hundreds of millions of people who entrust personal data to Google and Meta backstopped by the two consent decrees: static barriers of last resort pressed into service in an ever-dynamic era of big tech dominance they were never designed to contain. The FTC is undertaking an ambitious effort to modernize its deal with Meta, but appeals by the company could drag the process out for years and kill the prospect of future decrees.
While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, most tech companies, including some serving more than a billion people globally, such as TikTok and Apple, remain unfettered by any substantial federal rules protecting their users’ data. Amazon entered its first agreement this year, and it covers only its Alexa virtual assistant, following allegations that the service infringed on children’s privacy.
Joseph Jerome, who left privacy advocacy to work on Meta’s augmented reality data policies for two years before being laid off in May, says he grew to appreciate how consent decrees force companies to work on privacy. They add “checks and balances,” he says. But without clear privacy protection rules from lawmakers that bind every company, the limited scope of consent decrees allows too many problematic decisions to be made, Jerome says. They end up providing a false sense of security to users, who might assume the decrees have more bite than they really do. “They certainly haven’t fixed the privacy problem,” he says.
The FTC has sometimes strengthened consent decrees after privacy lapses. In the wake of Facebook’s Cambridge Analytica data-sharing scandal, in 2020 the agency agreed to stepped-up restrictions on the company and extended Meta’s original consent decree by about a decade, to 2040. In May this year, the FTC accused Meta of failing to cut off outside developer access to user data and protect children from strangers in Messenger Kids. As a remedy, the agency wants one of its judges to impose the most drastic restrictions ever sought in a privacy decree, spooking the broader business community. Meta is fighting the proposal, calling it an “obvious power grab” by an “illegitimate decision maker.”
There is more agreement among FTC officials, Meta, Google, and the wider tech industry on one point: a federal privacy law is overdue. Proposals raised and debated by members of Congress would set a standard all companies have to follow, similar to US state and European Union privacy laws, with new rights for users and costly penalties for violators. “Consent decrees pale in comparison,” says Michel Protti, Meta’s chief privacy officer for product.
Some key lawmakers are on board. “The single best way to increase compliance for different business models and practices is by Congress enacting a comprehensive statute that establishes a clear set of rules for collecting, processing, and transferring Americans’ personal information,” says Republican Cathy McMorris Rodgers, the chair of the House committee that has studied potential legislation for years. Until she can rally enough fellow legislators, the privacy of every American on the internet is reliant on the few safeguards offered by consent decrees.
Innocence Lost
At the time Buzz launched in 2010, Google fostered a companywide culture of freewheeling experimentation in which just a couple of employees felt they could launch ideas to the world with few precautions, according to four workers who were there during that time. The search company’s idealistic founders Larry Page and Sergey Brin closely oversaw product decisions, and head count was one-eighth of the nearly 190,000 it is today. Many of the employees “were in a utopia of trying to make information accessible and free,” says Giles Douglas, who started at Google in 2005 as a software engineer and left in 2019 as head of privacy review engineering.
During the earlier era, some former employees recall privacy practices as informal, with no dedicated team. Company spokesperson Matt Bryant says it’s not true that reviews were looser before, but both sides acknowledge that it wasn’t until the FTC settlement that Google started documenting its deliberations over privacy hazards and making a clear commitment to addressing them. “The Buzz decree forced Google to think more critically,” Douglas says.
The settlement required Google to be upfront with people about the collection and use of personal data, including names, phone numbers, and addresses. The former employees, some speaking on condition of anonymity to discuss confidential practices, say Google established a central privacy team for the first time. The company learned early that the FTC’s new invention had sting. It paid $22.5 million, then the agency’s highest-ever penalty, to settle a 2012 charge that Google had violated the Buzz agreement by overriding a cookie-blocking feature on Apple’s Safari browser to track people and serve targeted ads.
Google now has an extensive bureaucracy dedicated to privacy. Its central team has hundreds of employees who oversee privacy policies and procedures, three people who worked with the unit say, like the company’s public privacy principles that promise people control over use of their data. A web of hundreds of privacy experts scattered across Google’s many divisions reviews every product launch, from a minor tweak, to the debut of an entirely new service like the AI chatbot Bard, to a marketing survey sent to fewer than a thousand people.
Though a public agency forced many of those changes, there is diminishing transparency about how Google’s consent decree operates. The agreement requires an outside consulting firm such as EY (commonly known as Ernst & Young) to certify in an FTC filing every two years that Google’s guardrails are reasonable. Yet public copies of the filings have been increasingly redacted by the agency to protect company “trade secrets,” preventing any insight into the results of the assessments or the recent evolution of Google’s safeguards. Google’s Bryant says the assessments have led to program improvements, process discipline, and well-informed feedback but declines to provide details.
Unredacted segments of older filings show that Google’s compliance with the FTC has involved measures such as training employees on best practices, expanding data-related user settings, and, most importantly in the view of former employees, analyzing the implications of everything the company releases into the world.
Inside Google today, the privacy and legal review is the only step that a team cannot remove or mark as optional in the company’s main internal tracking system for project launches, commonly referred to as Ariane, the former employees say—unlike for security assessments or quality assurance. And only someone from Google’s privacy team can mark the privacy review as completed, the people say.
Reviewers must pore through an internal management tool known as Eldar to compare product code and documentation against company guidelines about uses and storage of data. With tens of thousands of product launches or more each year, many updates Google considers “privacy non-impacting” or “privacy trivial” get only a cursory examination, former employees say, and Google is trying to automate triaging of the most important reviews.
Privacy reviewers have considerable power to shape Google’s products and business, according to five people who formerly held the role. One of their most common actions is to block projects from retaining user data indefinitely without any justification besides “because we can,” the sources say. More exhaustive reviews, according to the sources, have prevented YouTube from displaying viewing statistics that threatened to reveal the identities of viewers from vulnerable populations, and required workers involved in developing Google Assistant to justify every playback of users’ audio conversations with the chatbot.
Entire acquisitions have died at the hands of Google’s privacy reviewers, former employees say. The company evaluates the privacy risks of potential targets such as data retained unnecessarily or collected without permission, and sometimes commissions independent assessments of software code. If the privacy risks are too high, Google has canceled purchases, sources say, and efforts are underway to apply a similar process to divestitures and strategic investments.
For some Google employees, the changes demanded by privacy reviewers can be frustrating, the former reviewers say, delaying projects or limiting improvements. After a review restricted access to location data on users of Google Assistant, engineers struggled to assess the technology, one former employee involved says. For instance, they could no longer be sure whether the virtual helper’s responses to queries involving ambiguous street names, like Brown or Browne, were accurate.
Proponents of consent decrees say the roadblocks and dead ends show the settlements working as intended. “Google and its users are better off for the decree,” says Al Gidari, an attorney who handled the FTC’s Buzz deal for Google. “One might say but for it, nothing would be left of our privacy.”
For some of the Google sources and privacy experts more critical of the decrees, the sprawling compliance apparatus Google developed over the past decade is privacy theater—activity that fulfills the FTC’s demands without providing public proof that people who use its services are better off. Some former employees say that while staffing and funds for the consent decree’s “comprehensive privacy program” have ramped up, more technical projects that would give people greater protection or transparency have withered.
For instance, the Google Dashboard, which shows the type of data people have stored with different services, like the total number of emails in their Gmail account, has gotten little investment as engineers have had to focus elsewhere, two former company privacy managers say. A privacy-focused “red team,” distinct from a similar squad for cybersecurity issues, that has snuffed out unintended over-collection of data and inadequate anonymization in services available to users is still staffed by just a handful of employees, three sources claim.
New Threats
Meta’s privacy scandals show the limited power of consent decrees to encourage good behavior. The company signed its first agreement with the FTC in 2012 after disclosing some users’ friend lists and personal details to partner apps or the public without notice or consent. Like Google, the company pledged to establish a “comprehensive privacy program.” But it took a different tack from Google and didn’t build the staff and tools needed to review everything it does today, says Protti, the product-focused chief privacy officer. The decree-mandated assessments didn’t catch the shortcomings.
In 2018, media reports made clear that Facebook had for years allowed partner apps to misuse personal information. Personal data such as users’ interests and friends got into the hands of election consultancies such as Cambridge Analytica, which attempted to create psychological profiles marketed to political campaigns. Facebook re-settled with the FTC and agreed to a $5 billion penalty in 2020. The updated consent decree imposed firm new requirements, including making privacy central to the work of many more employees, tightening security around personal data, and limiting the company’s use of sensitive technologies such as facial recognition. Meta has spent $5.5 billion to comply with the revised deal, including growing staff focused on privacy to 3,000 people from hundreds, representing “a step change for the company in terms of the importance, the investment, the prioritization of privacy,” Protti says.
Meta is now required to conduct a privacy review of every launch that affects user data. The company performs more than 1,200 such reviews each month, Protti says, deploying automation and audits to increase their consistency and rigor while ensuring orders are followed post-launch.
Each unit of the company has to certify internally on a quarterly basis how it’s protecting users’ data. After the $5 billion fine, people don’t take these certifications lightly, the former employees say. New hires have to review and agree to the consent decree before they can even get to work. Failing to complete regular privacy training locks employees out of corporate systems indefinitely, employees say. “I don’t think you will find an employee that doesn’t believe that privacy is absolutely mission critical for Meta,” Protti says.
The FTC contends that Meta has failed on that mission. In May, the agency alleged that Meta misled its users about the meaning of privacy settings on the Messenger Kids chat app and failed to block its business partners’ access to Facebook data as quickly as promised. The FTC wants to ban Meta from profiting off the data of people under 18 years old and require it to apply privacy commitments to companies it acquires, so no unit escapes scrutiny. Protti says the accusations and demands are unfounded.
No matter the outcome, the legal battle could be the breaking point for consent decrees.
FTC chair Lina Khan has made taking on big tech a priority, and if she wins the case the agency may feel emboldened to pursue more consent decrees and to successively tighten them to keep companies in line. But an FTC win could also weaken decrees by making companies more likely to take the chance of going to court instead of signing an agreement that could later be unilaterally revised, says Maureen Ohlhausen, an FTC commissioner from 2012 to 2018 and now a section chair at the law firm Baker Botts who has represented Meta and Google in other matters. “That changes the calculus of whether to enter a settlement,” she says.
If Meta stops the FTC’s updates to the consent decree, other companies might be encouraged to fight the agency instead of settling. Either outcome in the Meta case will likely increase the pressure on US lawmakers to establish universal restrictions and precisely define the agency’s power. In the process, they could empower Americans for the first time with rights beyond the consent decrees, such as the rights to delete, transfer, and block sales of personal data held by internet giants.
Jan Schakowsky, a Democratic representative from Illinois involved in the congressional talks, says that though the FTC has forced reform at “formerly lawless companies” through consent decrees, “a comprehensive privacy law is needed to improve Americans’ privacy across the internet and from new types of threats.” Even so, there are no clear signs that years of inaction in Congress on privacy are set to end, despite vocal support from companies including Meta and Google for a law that would not only cover their competitors but also prevent a patchwork of potentially conflicting state privacy rules.
The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter, Levine says. “But it’s no substitute for legislation,” he says. “There are massive amounts of data collected on people not just from these biggest tech companies but from companies not under any consent decree.”