The Senate Is Considering An AI Bill That Could Radically Alter The Future Of The Internet

The Senate could soon take up a bipartisan bill defining whether liability protections extend to artificial intelligence-generated content, a move that could significantly affect online speech and the development of AI technology.

Republican Missouri Sen. Josh Hawley and Democratic Connecticut Sen. Richard Blumenthal in June introduced the No Section 230 Immunity for AI Act, which would clarify that liability protections under Section 230 of the Communications Decency Act do not apply to text and visual content created by artificial intelligence. Hawley may attempt to hold a vote on the bill in the coming weeks, his office told the Daily Caller News Foundation.

Section 230 of the Communications Decency Act of 1996 states that internet companies cannot be held liable for third-party speech posted on their platforms. Whether those same protections apply to content created by artificial intelligence could have a dramatic impact on online speech, especially as AI tools such as ChatGPT come to play a larger role online; without Section 230 immunity, major tech companies could face a deluge of lawsuits over AI-generated content. (RELATED: White House Announces Artificial Intelligence Vows From Big Tech … But Experts Are Unimpressed)

The bill would allow Americans to file lawsuits against AI firms whose technology is used to produce damaging content. It would target AI-generated content such as deepfakes, which are false but realistic-looking visual imitations, often of a real person. Deepfakes are becoming much more widespread, leading lawmakers to raise concerns that they could enable financial fraud and intellectual property theft.

The legislation defines generative AI as “an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.”

Democratic Oregon Sen. Ron Wyden, one of the authors of Section 230, said in March comments to The Washington Post that the law should not apply to AI.

“AI tools like ChatGPT, Stable Diffusion and others being rapidly integrated into popular digital services should not be protected by Section 230,” he told the Post. “And it isn’t a particularly close call … Section 230 is about protecting users and sites for hosting and organizing users’ speech” and it “has nothing to do with protecting companies from the consequences of their own actions and products.”

“The reality is Section 230 was not written with artificial intelligence in mind, or the idea that artificial intelligence creating content is the same thing as user-generated content,” Jon Schweppe, director of policy for the American Principles Project, told the DCNF. “And so, obviously, we need to consider what we want to do with AI before we just grant immunity from civil liabilities to all these firms.”

We need to put power in the hands of consumers and parents, and that’s what Senators Hawley and Blumenthal’s A.I. bill does. pic.twitter.com/6KgX3XYsuo

— Senator Hawley Press Office (@SenHawleyPress) July 6, 2023

However, some tech experts warn that removing Section 230 protections from AI-generated content could have deleterious effects on both internet users and the burgeoning AI industry.

Making AI companies liable for their models’ output could make them far more reluctant to train those models on controversial content, James Czerniawski, senior policy analyst at Americans for Prosperity, told the DCNF.

“AI models draw information from publicly available content (Twitter, e-books, articles, etc.),” Czerniawski explained. “If you expose companies to liability for the results of the outputs of their chatbots, which is using third-party content, companies will more heavily scrutinize what makes it into the model for training to minimize the risk.”

When certain sources say content is harmful, AI companies may not want to feed it to their models, Czerniawski told the DCNF, pointing to examples of censorship of conservative news sites at the behest of organizations like the Global Disinformation Index.

Section 230 may be a key reason technology companies are willing to host controversial speech on their platforms, so limiting liability protections for AI-generated content could lead AI systems to rely on a narrower subset of information, Cato Institute Technology Policy Research Fellow Jennifer Huddleston told the DCNF.

“The internet has provided important opportunities for a wide range of voices but particularly those who may have lacked opportunity in traditional media outlets, including conservatives,” Huddleston said. “Section 230 has been critical in providing legal certainty for platforms that may make businesses more comfortable in carrying user-generated content and particularly in cases of controversial or sensitive discussions.”

Hawley and Blumenthal argued that extending Section 230 liability protections to AI would shield tech companies from accountability for the perceived harms of their products.

“We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” Hawley stated. “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”

“AI companies should be forced to take responsibility for business decisions as they’re developing products—without any Section 230 legal shield,” Blumenthal stated. “This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era.”

NetChoice, a group whose members include companies like Google and TikTok, said the definition of artificial intelligence in Hawley and Blumenthal’s bill is so vague and broad that it could apply to far more technology than is typically considered AI.

“The devil is in the definitions which open the door to frivolous lawsuits against search engines, spam blocking, and removal of lawful but awful content — the very thing that Section 230 was designed to prevent,” NetChoice’s Vice President & General Counsel Carl Szabo told the DCNF.

Conservatives, in particular, should not advocate removing Section 230 as “it will come back to haunt them in the future,” R Street Institute Senior Fellow Adam Thierer told the DCNF.

“Some conservatives keep pushing to revoke Section 230 as a way to lash out against tech companies that they feel are biased against their perspectives,” Thierer told the DCNF. “But this is a misguided strategy, especially when it comes from supporters of Donald Trump, a man who owes his success to his ability to evade traditional media outlets and use new digital platforms to build a massive following and win the White House.”

Blumenthal did not respond to the DCNF’s requests for comment.
