The Council of Europe has approved new rules for ethical AI application in journalism.
On December 29, the Council of Europe announced guidelines for the "responsible implementation" of artificial intelligence (AI) in journalism. The recommendations were first announced on November 30, when the Council's Intergovernmental Steering Committee on Media and the Information Society endorsed them, calling them a "significant contribution" to the development of a public communication sector that is compatible with human rights and grounded in the rule of law. "They provide practical guidance to the relevant actors, in particular news media organizations, but also states, technology providers and digital platforms that disseminate news, detailing how AI systems should be used to support the production of journalism."

The recommendations cover the use of AI systems at several stages of journalistic production, from the initial decision to adopt AI to the acquisition and integration of AI tools by media companies within the newsroom. A major focus of the rules is the impact AI will have on audiences and society; accordingly, they assign specific duties to member states, platforms, and technology providers.

The Council of Europe, headquartered in Strasbourg, France, has 46 member states. Its goals are to advance human rights, democracy, and the rule of law.

Journalists have responded to AI in differing ways as it has become more widely used over the past year. On one side, Channel 1 AI has announced plans to build a newsroom run entirely by AI journalists in 2024 in order to deliver personalized news to viewers. In mid-December, the German media conglomerate Axel Springer announced that it would collaborate with OpenAI to incorporate ChatGPT into its reporting.
Meanwhile, copyright concerns have been mounting in traditional newsrooms, with some alleging that AI models are being trained unlawfully on media companies' content. The most recent example is The New York Times' lawsuit against Microsoft and OpenAI, filed on December 27, over the misuse of its content in model training.