How a Trump Win Could Unleash Dangerous AI

If Donald Trump wins the US presidential election in November, the guardrails could come off artificial intelligence development, even as the dangers of defective AI models grow increasingly serious.

Trump’s election to a second term would dramatically reshape—and possibly cripple—efforts to protect Americans from the many dangers of poorly designed artificial intelligence, including misinformation, discrimination, and the poisoning of algorithms used in technology like autonomous vehicles.

The federal government has begun overseeing and advising AI companies under an executive order that President Joe Biden issued in October 2023. But Trump has vowed to repeal that order, with the Republican Party platform saying it “hinders AI innovation” and “imposes Radical Leftwing ideas” on AI development.

Trump’s promise has thrilled critics of the executive order who see it as illegal, dangerous, and an impediment to America’s digital arms race with China. Those critics include many of Trump’s closest allies, from X owner Elon Musk and venture capitalist Marc Andreessen to Republican members of Congress and nearly two dozen GOP state attorneys general. Trump’s running mate, Ohio senator JD Vance, is staunchly opposed to AI regulation.

“Republicans don’t want to rush to overregulate this industry,” says Jacob Helberg, a tech executive and AI enthusiast who has been dubbed “Silicon Valley’s Trump whisperer.”

But tech and cyber experts warn that eliminating the EO’s safety and security provisions would undermine the trustworthiness of AI models that are increasingly creeping into all aspects of American life, from transportation and medicine to employment and surveillance.

The upcoming presidential election, in other words, could help determine whether AI becomes an unparalleled tool of productivity or an uncontrollable agent of chaos.

Oversight and Advice, Hand in Hand

Biden’s order addresses everything from using AI to improve veterans’ health care to setting safeguards for AI’s use in drug discovery. But most of the political controversy over the EO stems from two provisions in the section dealing with digital security risks and real-world safety impacts.

One provision requires owners of powerful AI models to report to the government about how they’re training the models and protecting them from tampering and theft, including by providing the results of “red-team tests” designed to find vulnerabilities in AI systems by simulating attacks. The other provision directs the Commerce Department’s National Institute of Standards and Technology (NIST) to produce guidance that helps companies develop AI models that are safe from cyberattacks and free of biases.

Work on these projects is well underway. The government has proposed quarterly reporting requirements for AI developers, and NIST has released AI guidance documents on risk management, secure software development, synthetic content watermarking, and preventing model abuse, in addition to launching multiple initiatives to promote model testing.

Supporters of these efforts say they’re essential to maintaining basic government oversight of the rapidly expanding AI industry and nudging developers toward better security. But to conservative critics, the reporting requirement is illegal government overreach that will crush AI innovation and expose developers’ trade secrets, while the NIST guidance is a liberal ploy to infect AI with far-left notions about disinformation and bias that amount to censorship of conservative speech.

At a rally in Cedar Rapids, Iowa, last December, Trump took aim at Biden’s EO after alleging without evidence that the Biden administration had already used AI for nefarious purposes.

“When I’m reelected,” he said, “I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on Day One.”

Due Diligence or Undue Burden?

Biden’s effort to collect information about how companies are developing, testing, and protecting their AI models sparked an uproar on Capitol Hill almost as soon as it debuted.

Congressional Republicans seized on the fact that Biden justified the new requirement by invoking the 1950 Defense Production Act, a wartime measure that lets the government direct private-sector activities to ensure a reliable supply of goods and services. GOP lawmakers called Biden’s move inappropriate, illegal, and unnecessary.

Conservatives have also blasted the reporting requirement as a burden on the private sector. The provision “could scare away would-be innovators and impede more ChatGPT-type breakthroughs,” Representative Nancy Mace said during a March hearing she chaired on “White House overreach on AI.”

Helberg says a burdensome requirement would benefit established companies and hurt startups. He also says Silicon Valley critics fear the requirements “are a stepping stone” to a licensing regime in which developers must receive government permission to test models.

Steve DelBianco, the CEO of the conservative tech group NetChoice, says the requirement to report red-team test results amounts to de facto censorship, given that the government will be looking for problems like bias and disinformation. “I am completely worried about a left-of-center administration … whose red-teaming tests will cause AI to constrain what it generates for fear of triggering these concerns,” he says.

Conservatives argue that any regulation that stifles AI innovation will cost the US dearly in the technology competition with China.

“They are so aggressive, and they have made dominating AI a core North Star of their strategy for how to fight and win wars,” Helberg says. “The gap between our capabilities and the Chinese keeps shrinking with every passing year.”

“Woke” Safety Standards

By including social harms in its AI security guidelines, NIST has outraged conservatives and set off another front in the culture war over content moderation and free speech.

Republicans decry the NIST guidance as a form of backdoor government censorship. Senator Ted Cruz recently slammed what he called NIST’s “woke AI ‘safety’ standards” for being part of a Biden administration “plan to control speech” based on “amorphous” social harms. NetChoice has warned NIST that it is exceeding its authority with quasi-regulatory guidelines that upset “the appropriate balance between transparency and free speech.”

Many conservatives flatly dismiss the idea that AI can perpetuate social harms, along with the notion that models should be designed to avoid doing so.

“This is a solution in search of a problem that really doesn’t exist,” Helberg says. “There really hasn’t been massive evidence of issues in AI discrimination.”

Studies and investigations have repeatedly shown that AI models contain biases that perpetuate discrimination, including in hiring, policing, and health care. Research suggests that people who encounter these biases may unconsciously adopt them.

Conservatives worry more about AI companies’ overcorrections to this problem than about the problem itself. “There is a direct inverse correlation between the degree of wokeness in an AI and the AI’s usefulness,” Helberg says, citing an early controversy in which Google’s Gemini generated ahistorical images.

Republicans want NIST to focus on AI’s physical safety risks, including its ability to help terrorists build bioweapons (something Biden’s EO does address). If Trump wins, his appointees will likely deemphasize government research on AI’s social harms. Helberg complains that the “enormous amount” of research on AI bias has dwarfed studies of “greater threats related to terrorism and biowarfare.”

Defending a “Light-Touch Approach”

AI experts and lawmakers offer robust defenses of Biden’s AI safety agenda.

These projects “enable the United States to remain on the cutting edge” of AI development “while protecting Americans from potential harms,” says Representative Ted Lieu, the Democratic cochair of the House’s AI task force.

The reporting requirements are essential for alerting the government to potentially dangerous new capabilities in increasingly powerful AI models, says a US government official who works on AI issues. The official, who requested anonymity to speak freely, points to OpenAI’s admission about its latest model’s “inconsistent refusal of requests to synthesize nerve agents.”

The official says the reporting requirement isn’t overly burdensome. They argue that, unlike AI regulations in the European Union and China, Biden’s EO reflects “a very broad, light-touch approach that continues to foster innovation.”

Nick Reese, who served as the Department of Homeland Security’s first director of emerging technology from 2019 to 2023, rejects conservative claims that the reporting requirement will jeopardize companies’ intellectual property. And he says it could actually benefit startups by encouraging them to develop “more computationally efficient,” less data-heavy AI models that fall under the reporting threshold.

AI’s power makes government oversight imperative, says Ami Fields-Meyer, who helped draft Biden’s EO as a White House tech official.

“We’re talking about companies that say they’re building the most powerful systems in the history of the world,” Fields-Meyer says. “The government’s first obligation is to protect people. ‘Trust me, we’ve got this’ is not an especially compelling argument.”

Experts praise NIST’s security guidance as a vital resource for building protections into new technology. They note that flawed AI models can produce serious social harms, including rental and lending discrimination and improper loss of government benefits.

Trump’s own first-term AI order required federal AI systems to respect civil rights, a mandate that depends on research into social harms.

The AI industry has largely welcomed Biden’s safety agenda. “What we’re hearing is that it’s broadly useful to have this stuff spelled out,” the US official says. For new companies with small teams, “it expands the capacity of their folks to address these concerns.”

Rolling back Biden’s EO would send an alarming signal that “the US government is going to take a hands-off approach to AI safety,” says Michael Daniel, a former presidential cyber adviser who now leads the Cyber Threat Alliance, an information-sharing nonprofit.

As for competition with China, the EO’s defenders say safety rules will actually help America prevail by ensuring that US AI models work better than their Chinese rivals and are protected from Beijing’s economic espionage.

Two Very Different Paths

If Trump wins the White House next month, expect a sea change in how the government approaches AI safety.

Republicans want to prevent AI harms by applying “existing tort and statutory laws” as opposed to enacting broad new restrictions on the technology, Helberg says, and they favor “much greater focus on maximizing the opportunity afforded by AI, rather than overly focusing on risk mitigation.” That would likely spell doom for the reporting requirement and possibly some of the NIST guidance.

The reporting requirement could also face legal challenges now that the Supreme Court has overturned the Chevron doctrine, weakening the deference that courts long gave agencies in interpreting the statutes they administer.

And GOP pushback could even jeopardize NIST’s voluntary AI testing partnerships with leading companies. “What happens to those commitments in a new administration?” the US official asks.

This polarization around AI has frustrated technologists who worry that Trump will undermine the quest for safer models.

“Alongside the promises of AI are perils,” says Nicol Turner Lee, the director of the Brookings Institution’s Center for Technology Innovation, “and it is vital that the next president continue to ensure the safety and security of these systems.”
