To prevent false information from spreading ahead of the forthcoming European elections, the European Commission is requiring tech platforms such as Facebook, X, and TikTok to identify content generated by artificial intelligence (AI). The commission has opened a public consultation on proposed election security guidelines for very large online platforms (VLOPs) and very large online search engines (VLOSEs). The proposals are meant to lessen the dangers that deepfakes and generative AI pose to democracy.
The draft guidelines provide examples of potential responses to election-related risks, such as targeted initiatives addressing generative AI content, risk-mitigation plans to be put in place before or after an election, and specific guidance for elections to the European Parliament.
Generative AI has the ability to fabricate and spread fake, deceptive synthetic content about political figures, election polls, scenarios, or storylines in order to manipulate voting processes or mislead voters.
The draft election security guidelines, which are open for public comment in the EU until March 7, suggest alerting users on relevant platforms to potential inaccuracies in content produced by generative AI. The proposed guidelines also recommend that users be directed to reliable sources of information, and stipulate that digital companies must put in place measures to prevent the creation of deceptive content capable of significantly influencing user behavior. To allow users to verify the accuracy of information and place it in context, the guidance advises that VLOPs and VLOSEs producing AI-generated text “indicate, where possible, in the outputs generated the concrete sources of the information used as input data.”
The European Union’s recently adopted legal framework, the AI Act, and its non-binding counterpart, the AI Pact, serve as models for the risk-reduction “best practices” suggested in the draft guidance. Since generative AI went mainstream in 2023 with tools like OpenAI’s ChatGPT, concerns about sophisticated AI systems, such as large language models, have grown.
The Digital Services Act, the EU’s content moderation regulation, requires companies to label manipulated content, although the commission has not specified when exactly this obligation will take effect. Meta, however, revealed in a blog post that it will roll out new rules in the coming months for AI-generated content on Facebook, Instagram, and Threads. Content identified as AI-generated, whether through deliberate watermarking or metadata, will be visibly labeled.