Sam Altman and Greg Brockman, the CEO and president, respectively, of OpenAI, the maker of ChatGPT, recently spoke publicly on X.com about the departure of Jan Leike, who is reported to have served as the company's head of alignment and, effectively, its top safety executive. On May 17, Leike announced his resignation, citing irreconcilable differences with the organization's leadership. Among other things, Leike said that OpenAI's "safety culture and processes have taken a backseat to shiny products." After Leike's post, Brockman and Altman responded quickly, posting on X.com within 24 hours of each other. For his part, Brockman published a lengthy post outlining a three-pronged approach to the company's safety alignment. He began by thanking Leike for his contributions to the organization and pushed back on the idea that OpenAI didn't prioritize safety.
"First, we have increased understanding of the opportunities and concerns associated with artificial general intelligence (AGI)," Brockman wrote, pointing to the company's calls for international governance of AGI. "Secondly, we have been laying the groundwork required for the safe implementation of progressively more powerful systems. It's difficult to figure out how to make a new piece of technology safe the first time." The last point in Brockman's post was that "the future is going to be harder than the past": the company, he added, must continuously improve its safety efforts to keep up with the stakes of each new model.
He also hinted that the company wasn't following the familiar big-tech playbook of moving fast and breaking things: "We're not sure when we'll reach our safety bar for releases as we build in this direction, and it's okay if that pushes out release timelines." CEO and co-founder Sam Altman kept his response brief but said he would have more to say in the coming days. Responding to Leike's remarks, Altman wrote that Leike "is correct" and that "we have a lot more to do; we are committed to doing it."