OpenAI has been under the spotlight lately because of significant internal tensions. You may have heard that Peter Thiel, the well-known investor, warned Sam Altman, OpenAI’s CEO, about brewing disagreements over AI safety. The story surfaced in a Wall Street Journal article drawn from Keach Hagey’s upcoming book, “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future.” During a dinner in Los Angeles, Thiel reportedly advised Altman that the company’s commercial ambitions were on a collision course with the concerns of AI safety advocates.
Thiel’s warning was quite direct. He said, “You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” referring to AI researcher Eliezer Yudkowsky. Altman, however, seemed to brush off these concerns, recalling how Elon Musk had left OpenAI back in 2018 over similar safety worries.
By 2024, these tensions had escalated. Several key figures in AI safety, including Chief Scientist Ilya Sutskever and Jan Leike, left the company. Leike openly criticized OpenAI’s approach to safety on his way out, saying his team had struggled to get the computing resources it needed. The Wall Street Journal also reported that CTO Mira Murati and Sutskever had gathered substantial evidence against Altman’s management practices, including claims that he misled the board and bypassed safety protocols, most notably around the GPT-4 Turbo launch. Murati’s attempts to raise these problems internally were reportedly sidelined by HR.
The situation came to a head when four board members, including Sutskever, voted to remove Altman after compiling extensive documentation of his alleged misconduct. The decision also affected Greg Brockman, who was accused of undermining Murati’s authority. Initially, the board offered little public explanation beyond citing Altman’s lack of candor. But when faced with the prospect of mass staff resignations, the board reversed course and reinstated both Altman and Brockman. The reversal caught many by surprise, including Sutskever, who had expected employees to side with the board.
Despite their earlier roles in Altman’s removal, both Murati and Sutskever ultimately supported his return. Both have since left OpenAI to pursue new ventures in the AI industry. The whole episode shows just how deeply AI safety concerns have shaped OpenAI’s leadership and strategic direction, prompting significant changes in how the company approaches safety.