Last year, the White House struck a landmark safety deal with AI developers that saw companies including Google and OpenAI promise to consider what could go wrong when they create software like that behind ChatGPT. Now a former domestic policy adviser to President Biden who helped forge that deal says that AI developers need to step up on another front: protecting their secret formulas from China.
“Because they are behind, they are going to want to take advantage of what we have,” Susan Rice said of China. Rice, who left the White House last year, spoke Wednesday on a panel about AI and geopolitics at an event hosted by Stanford University’s Institute for Human-Centered AI. “Whether it’s through purchasing and modifying our best open source models, or stealing our best secrets. We really do need to look at this whole spectrum of how do we stay ahead, and I worry that on the security side, we are lagging.”
The concerns raised by Rice, who was formerly President Obama’s national security adviser, are not hypothetical. In March the US Justice Department announced charges against a former Google software engineer for allegedly stealing trade secrets related to the company’s tensor processing unit (TPU) AI chips and planning to use them in China.
Legal experts at the time warned that the case could be just one of many examples of China trying to compete unfairly in what’s been termed an AI arms race. Government officials and security researchers fear advanced AI systems could be abused to generate deepfakes for convincing disinformation campaigns, or even recipes for potent bioweapons.
There isn’t universal agreement among AI developers and researchers that their code and other components need protecting. Some don’t view today’s models as sophisticated enough to need locking down, and companies like Meta, which develops open source AI models, freely release much of what government officials such as Rice would suggest keeping under wraps. Rice acknowledged that stricter security measures could end up setting US companies back by shrinking the pool of people working to improve their AI systems.
Interest in—and concern about—securing AI models appears to be picking up. Just last week, the US think tank RAND published a report identifying 38 ways secrets could leak out from AI projects, including bribes, break-ins, and exploitation of technical backdoors.
RAND’s recommendations included that companies should encourage staff to report suspicious behavior by colleagues and allow only a few employees access to the most sensitive material. Its focus was on securing so-called model weights, the values inside an artificial neural network that get tuned during training to imbue it with useful functionality, such as ChatGPT’s ability to respond to questions.
Under a sweeping executive order on AI signed by President Biden last October, the US National Telecommunications and Information Administration is expected to release a similar report this year analyzing the benefits and downsides of keeping weights under wraps. The order already requires companies developing advanced AI models to report to the US Commerce Department on the “physical and cybersecurity measures taken to protect those model weights.” And the US is considering export controls that would restrict AI sales to China, Reuters reported last month.
Google, in public comments to the NTIA ahead of its report, said it expects “to see increased attempts to disrupt, degrade, deceive, and steal” models. But it added that its secrets are guarded by a “security, safety, and reliability organization consisting of engineers and researchers with world-class expertise” and that it was working on “a framework” that would involve an expert committee to help govern access to models and their weights.
Like Google, OpenAI said in comments to the NTIA that there was a need for both open and closed models, depending on the circumstances. OpenAI, which develops models such as GPT-4 and the services and apps built on them, like ChatGPT, last week formed a security committee on its board and this week published details on its blog about the security of the technology it uses to train models. The post expressed hope that the transparency would inspire other labs to adopt protective measures, though it didn’t specify from whom the secrets needed protecting.
Speaking alongside Rice at Stanford, RAND CEO Jason Matheny echoed her concerns about security gaps. By using export controls to limit China’s access to powerful computer chips, the US has hampered Chinese developers’ ability to develop their own models, Matheny said. He claimed that has increased their need to steal AI software outright.
By Matheny’s estimate, spending a few million dollars on a cyberattack to steal AI model weights that might have cost an American company hundreds of billions of dollars to create is well worth it for China. “It’s really hard, and it’s really important, and we’re not investing enough nationally to get that right,” Matheny said.
China’s embassy in Washington, DC, did not immediately respond to WIRED’s request for comment on theft accusations, but in the past has described such claims as baseless smears by Western officials.
Google has said that it tipped off law enforcement about the incident that became the US case alleging theft of AI chip secrets for China. While the company has described maintaining strict safeguards to prevent the theft of its proprietary data, court papers show it took considerable time for Google to catch the defendant, Linwei Ding, a Chinese national who has pleaded not guilty to the federal charges.
The engineer, who also goes by Leon, was hired in 2019 to work on software for Google’s supercomputing data centers, according to prosecutors. Over roughly a year starting in 2022, he allegedly copied more than 500 files containing confidential information to his personal Google account. The scheme worked in part, court papers say, because the employee pasted information into Apple’s Notes app on his company laptop, converted the files to PDFs, and uploaded them elsewhere, evading the technology Google uses to catch that sort of exfiltration.
While the alleged theft was under way, the US claims, the employee was in touch with the CEO of an AI startup in China and had taken steps to start his own Chinese AI company. If convicted, he faces up to 10 years in prison.