
US legislators swiftly propose legislation in reaction to Taylor Swift deepfakes

X said in a statement that it is removing the photographs and taking appropriate action against the accounts that disseminated them.

After numerous sexually explicit deepfake images of Taylor Swift appeared online, US politicians are pushing for legislation that would make creating such images illegal. The images circulated on several social media platforms, including Telegram and X.

US Congressman Joe Morelle strongly objected to the sharing of the photographs, calling them “appalling” in a post on the social media site X. He has drafted the Preventing Deepfakes of Intimate Images Act, which would make non-consensual deepfakes a federal crime, and pointed to the legislation while calling for immediate action on the issue.

“Deepfakes” are images or videos in which a person’s face or body has been altered using artificial intelligence (AI). No federal law currently covers the production or distribution of deepfake images, a gap some legislators are attempting to close.

Democratic Representative Yvette D. Clarke said on X that what happened to Taylor Swift is nothing new. She emphasised that women have long been targets of this technology and that, as AI has advanced, creating deepfakes has become easier and cheaper.

X said in a statement that it is actively removing the photographs and taking appropriate action against the accounts that disseminated them. The platform added that it is monitoring the situation closely to ensure any further violations are dealt with quickly and the content is removed.

In the UK, the sharing of deepfake pornography was made illegal in 2023 under the Online Safety Act. According to last year’s State of Deepfakes study, women make up about 99% of the targets of pornographic deepfakes posted online.

Concerns over AI-generated content have grown following the World Economic Forum’s (WEF) 19th Global Risks Report, which highlighted the negative effects of AI technologies, including generative AI, on people, businesses, ecosystems, and economies, whether intended or unintended. The Canadian Security Intelligence Service (CSIS), Canada’s main national intelligence agency, has likewise voiced concern about online disinformation campaigns that use AI-generated deepfakes.

In a study released on June 12, the United Nations identified AI-generated media as a serious and urgent threat to information integrity, particularly on social media. The UN said the threat of online misinformation has grown with the rapid development of technology, especially generative AI and deepfakes.
