Big Tech explains its efforts to crack down on malicious use of AI in elections – MeriTalk

After months of effort, 19 major technology companies have told Senate Intelligence Committee Chairman Mark Warner (D-Virginia) of their efforts to combat the malicious use of AI this election cycle.

In February, a group of technology companies – including big names like Amazon, Google and TikTok – signed a pact at the Munich Security Conference to combat the use of malicious, AI-generated content aimed at deceiving voters in the 2024 election.

In May, Senator Warner urged the companies to provide concrete answers about the measures they are taking to implement the Tech Accord.

Less than 100 days before the U.S. presidential election, Senator Warner on August 7 made public responses to his inquiry from 19 companies: Adobe, Amazon, Anthropic, Arm, Google, IBM, Intuit, LG, McAfee, Microsoft, Meta, OpenAI, Snap, Stability AI, TikTok, Trend, True Media, Truepic and X.

For example, Google said it was the first technology company to introduce new disclosure requirements for election ads containing synthetic content.

Amazon said its baseline generative AI model – Titan Image Generator – is watermarked “to help curb the spread of disinformation by providing a mechanism to identify AI-generated images.”
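Invisible watermarking of this kind generally works by embedding a signal in the image data that is imperceptible to viewers but recoverable by a detector. Amazon's actual Titan scheme is proprietary; as a loose, hypothetical illustration only, a toy least-significant-bit version might look like this:

```python
# Toy illustration of invisible image watermarking (NOT Amazon's actual
# Titan scheme, which is proprietary): embed a bit string in the least
# significant bits of pixel values, then recover it for verification.

def embed_watermark(pixels, bits):
    """Overwrite the LSB of each leading pixel with one watermark bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, n_bits):
    """Read back the LSBs that carry the watermark."""
    return [p & 1 for p in pixels[:n_bits]]

# Example: an 8-pixel grayscale strip carrying a 4-bit mark.
image = [200, 131, 54, 77, 18, 255, 90, 63]
mark = [1, 0, 1, 1]
stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, 4) == mark
```

Real production watermarks are far more robust than this sketch (they survive cropping, compression, and re-encoding), but the verification idea is the same: a detector checks for the embedded signal to flag an image as AI-generated.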

TikTok told Senator Warner that it requires creators to label AI-generated content and has developed a tool that enables them to do so, “which has so far been used by more than 37 million creators worldwide.”

Meta noted that it has changed its approach to identifying and labeling AI-generated content. “This includes labeling a broader range of video, audio, and image content when we identify industry-standard AI image indicators or when people indicate they are uploading AI-generated content,” Meta wrote. “If we determine that digitally created or altered image, video, or audio content poses a particularly high risk of materially misleading the public on an important matter, we may add a more prominent label.”

Senator Warner stressed that the responses from technology companies “showed promising avenues for collaboration, information sharing and standards development, but also highlighted areas where there is still significant room for improvement.”

However, the senator also said there was a “very concerning lack of specificity and resources for enforcement” of these policies, and noted that the companies had fallen short in building relationships with local institutions – including local media, civic organizations and election officials – to “equip them with resources to identify and address the misuse of generative AI tools in their communities.”

“With the election less than 100 days away, we must prioritize real action and robust communications to systematically catalog malicious AI-generated content,” Senator Warner said on August 7. “While this technology shows promise, generative AI still poses a serious threat to the integrity of our elections, and I am fully focused on continuing to work with public and private partners to get ahead of these real and credible threats.”

By Jasper
