
Amidst worries, tech titans promise to reduce the impact of AI on elections.

On Friday, February 16, twenty companies that develop artificial intelligence (AI) announced a voluntary agreement to prevent their software from interfering in elections, including those held in the US. The agreement does not outright prohibit election-related AI content.

The agreement acknowledges that the companies’ products carry substantial risk, particularly given that roughly 4 billion people are expected to vote worldwide this year. The document highlights concerns that deceptive AI election content could mislead voters and undermine the credibility of voting procedures.

The accord also recognises that the tech industry has turned to self-regulation because international legislators have been slow to respond to the rapid advances in generative AI. Microsoft Vice Chair and President Brad Smith endorsed the accord in a statement:

“We have a responsibility to ensure that these tools don’t become weaponized in elections as society embraces the benefits of AI,” Smith said.

The twenty signatories include Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X.

The agreement, however, is voluntary and stops short of outright forbidding the use of AI in election content. The 1,500-word document lists eight actions the companies pledge to take this year, including developing tools to distinguish AI-generated images from authentic content and keeping the public informed about noteworthy developments.

Free Press, an advocacy group for an open internet, dismissed the pledge as meaningless, arguing that tech companies broke their earlier promises to protect election integrity after the 2020 election. The group instead favours greater oversight by human reviewers.

Congresswoman Yvette Clarke of New York’s 9th District, who has sponsored legislation to regulate deepfakes and AI-generated content in political advertisements, expressed her support for the tech deal and her hope that Congress will build upon it. She emphasised the significance of the accord:

“This could be a defining moment for this Congress, and this may be the one unifying issue where we can band together to protect this nation and future generations of Americans to come,” Clarke said.

On January 31, the Federal Communications Commission moved to outlaw robocalls that use AI-generated voices. The decision came in response to a spoofed robocall impersonating President Joe Biden ahead of the New Hampshire primary in January, which raised concerns about the potential for fake voices, images, and videos in politics.
