Big Tech teams up to take action against AI deepfakes in 2024 election

Some of the technology industry’s leading companies have teamed up to combat the deceptive use of artificial intelligence to spread misinformation in the 2024 elections.

A group of 20 companies announced the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” on Friday during the Munich Security Conference. The pact is a series of commitments from companies including Google, Meta, Amazon, and Adobe to work together on tools that can detect AI-generated images or videos intended to deceive voters, also known as deepfakes. It is the latest effort by Silicon Valley to crack down on AI-generated misinformation in the primary and general elections as Congress runs out of time to pass substantive legislation.

“Elections are the beating heart of democracies,” MSC Chairman Christoph Heusgen said in a statement. “The [Accord] is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”

The accord includes commitments to developing technology to identify deceptive AI content, assessing models used to create the content, identifying and addressing the distribution of the content, and providing transparency about their efforts.

The signatories include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.


State and federal lawmakers are weighing new regulations to curb deceptive AI-generated media. Senate Majority Leader Chuck Schumer (D-NY) has hosted several hearings on AI and told reporters in September 2023 that he would prioritize a vote on legislation addressing AI-powered misinformation. Gov. Kathy Hochul (D-NY) proposed a ban on the deceptive use of AI in political ads.

However, the lack of legislation has led technology companies to take the initiative. Meta officials announced last week that the company would begin labeling AI-generated images on its platforms and would use built-in detection tools to determine whether an image is synthetic. OpenAI also added hidden signals to its DALL-E image generator so that its images can be readily identified as AI-generated. These signals, also known as watermarks, are part of an industry-wide push to recognize deepfakes, although some researchers believe watermarks can easily be stripped.
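That fragility is easiest to see with watermarks stored as file metadata, which travel in the image container rather than in the pixels themselves. As a minimal sketch (in Python with the Pillow library; the filenames are hypothetical), re-encoding an image writes a fresh file with no metadata by default, silently discarding any such signal:

```python
from PIL import Image

# Open an AI-generated image; any embedded metadata chunks
# (e.g., provenance records) are exposed via img.info.
img = Image.open("generated.png")  # hypothetical filename
print(img.info)  # the hidden signal lives only in the file's metadata

# Saving the pixels to a new file writes no metadata by default,
# so the watermark is dropped without altering the visible image.
img.save("stripped.png")
```

Pixel-level watermarks embedded in the image data itself survive a simple re-encode, though researchers note they too can be degraded by cropping, rescaling, or added noise.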
