Joe Biden wants the US government to make wider use of artificial intelligence—and to keep commercial AI on a tighter leash. Those are two prominent themes of a sprawling executive order Biden will sign today, which issues dozens of directives for federal agencies to complete within the next year, on topics ranging from national security and immigration to housing and health care.
The order places reporting requirements on companies developing powerful AI technology, such as that behind OpenAI’s ChatGPT. Biden will use the Defense Production Act, a law that can compel businesses to take actions in the interest of national security, to require the makers of large AI models to report key information to the government, including when they are training a new model and what cybersecurity protections they have.
That will include disclosing the results of so-called red-teaming exercises, which probe AI models for vulnerabilities, including flaws that could let users bypass safeguards against malicious uses like generating malware. The goal is to monitor the potential threats AI technology can pose to national security, public health, and the economy.
Another part of the order requires companies that acquire, develop, or possess large-scale computing clusters, essential to training the most powerful AI systems, to report their activity to the federal government. This rule is intended to help the government understand which entities, including those from nations competing with the US, have strong AI capabilities.
The executive order also directs the Department of Energy to evaluate how AI outputs could contribute to biological or chemical attacks, as well as cyberattacks on critical infrastructure. The UK government included the possibility of advanced AI enabling biological and chemical attacks in a report published last week on potential threats posed by the technology.
White House deputy chief of staff Bruce Reed, who is chair of a newly formed White House AI Council to ensure compliance with the order, calls it “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
Help Wanted
The measures in Biden’s executive order aimed at powering up US government AI include the creation of a dedicated job portal, hosted at AI.gov, to draw more experts and researchers familiar with the technology into government. Another calls for a new training program to produce 500 AI researchers by 2025.
Divyansh Kaushik, an associate director at policy research group the Federation of American Scientists, who helped draft portions of the executive order, says those could be among the most influential pieces. “People often forget that talent is the biggest bottleneck in the federal government,” he says.
Kaushik also welcomes the way Biden’s order demands changes to immigration policy to make it easier for AI talent to come to the US. A plan to allow immigrant workers to renew their visas inside the US, for example, could remove the need for hundreds of thousands of STEM students to travel to their home countries for in-person interviews.
Although the US is home to a majority of the world’s top AI talent today, only 20 percent of those experts received undergraduate degrees in the US, Kaushik says, indicating that many are immigrants. He says making it easier for AI experts to come from overseas is in the US interest, helping the country compete against rival destinations such as China, Canada, and the UK.
Biden’s new executive order acknowledges that AI projects can harm citizens if not carefully implemented, singling out the potential for discrimination or other unintended effects in housing or health care. The order calls for the White House’s Office of Management and Budget to develop guides and tools that help government employees make good choices when purchasing AI services from private companies.
Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University, says those procurement rules could have a significant impact. Within the federal government, procurement is “number one on everyone’s agenda because everyone understands that is the way to effect change,” he says. He previously helped the White House develop its Blueprint for an AI Bill of Rights, issued last year.
However, Venkatasubramanian says some of the most critical government uses of AI in the US will go largely untouched by the new executive order. Biden’s directives apply to federal agencies, but much of the AI used in criminal justice and policing is deployed by state and local law enforcement. False positives from AI-powered technology such as ShotSpotter gunshot detection and face recognition have led to wrongful arrests, and police departments continue to use predictive policing software that doesn’t work as advertised.
To force state agencies to adopt the standards in the executive order as well, Venkatasubramanian says, federal lawmakers could make compliance a condition of federal funding for state and local law enforcement.
This is the first executive order of the Biden presidency solely focused on artificial intelligence, and it follows two by former president Trump, in 2019 and 2020. So far, government agencies have a spotty record of complying with them.
The 2019 order focused on investments in AI research and development. A December 2020 executive order, together with the Advancing American AI Act passed last year, requires federal agencies to disclose an annual inventory of the algorithms they use. But a Stanford Law School study found a pattern of inconsistent compliance and warned of a national AI “capacity gap.” If Biden’s new order, the most ambitious presidential directive on the technology to date, works as intended, it will significantly expand that capacity.