Technology

Former employees of OpenAI and Anthropic demand a ‘right to warn’ on AI risks.

Former staff members of leading artificial intelligence (AI) developers are urging these frontier AI firms to strengthen their whistleblower protections so that employees can raise “risk-related concerns” about increasingly capable AI systems with the public. The “Right to Warn AI” petition was launched on June 4 by a group of 13 former and current employees of OpenAI (ChatGPT), Anthropic (Claude) and Google DeepMind, and endorsed by the pioneering AI researchers Stuart Russell and Yoshua Bengio. The declaration calls on frontier AI companies to commit to letting staff members voice concerns about AI risks both internally and with the public.

William Saunders, a former OpenAI employee and a supporter of the movement, said that when dealing with potentially dangerous new technologies, there should be ways to share information about risks with independent experts, governments and the public.

“Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements.”

The petition sets out four main recommendations for AI developers. The first is to eliminate non-disparagement clauses concerning risks, meaning that companies could no longer silence employees’ concerns about AI hazards through agreements that forbid such criticism or penalize those who voice it.

The signatories also want firms to create anonymous reporting mechanisms through which employees can voice their concerns, and to foster a culture that welcomes open criticism of AI risks. Finally, the petition asks companies to promise not to retaliate against employees who disclose information raising serious concerns about AI. According to Saunders, adopting the proposed principles is a “proactive way” to engage with AI companies in building AI that is both needed and safe.

The petition comes amid growing concern that AI laboratories are “deprioritizing” the safety of their newest models, particularly in the pursuit of artificial general intelligence (AGI), the effort to build software that can learn on its own and perform tasks the way a human would. Daniel Kokotajlo, a former OpenAI employee, said he “lost hope that they would act responsibly” when it came to the development of AGI.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood.”

In an episode of the TED AI podcast released on May 28, Helen Toner, a former OpenAI board member, said that the company’s CEO, Sam Altman, had been fired for allegedly withholding information from the board.
