Former OpenAI Members Launch New AI Firm
OpenAI co-founder and former chief scientist Ilya Sutskever has announced the launch of a new AI firm called Safe Superintelligence Inc. The company, co-founded by Daniel Levy and Daniel Gross, will focus on developing a “safe superintelligence” as its primary goal.
In its launch announcement, the firm stated that superintelligence is “within reach” and that ensuring its safety for humans is the most important technical problem of our time. Safe Superintelligence Inc. aims to become a leading safe superintelligence (SSI) lab, with safety as its primary focus and technology as its sole product.
“We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
The company plans to advance its capabilities quickly while maintaining a strong focus on safety. It aims to avoid distractions such as management overhead and short-term commercial pressures that could divert it from its goal.
“This way, we can scale in peace.”
Safe Superintelligence Inc. emphasized that investors support its approach of prioritizing safe development above all else. The company will be based in Palo Alto, California, with offices in Tel Aviv, Israel.
Launch Follows Safety Concerns at OpenAI
The launch of Safe Superintelligence Inc. comes after a dispute at OpenAI, where Sutskever was part of a group that attempted to remove CEO Sam Altman from his role in November 2023. Reports at the time pointed to safety concerns at OpenAI and to a communication breakdown between Altman and the board of directors.
Sutskever left OpenAI in May without citing specific reasons, but a series of departures has since underscored tensions over AI safety at the firm. Jan Leike and Gretchen Krueger, along with other employees, have also left OpenAI, citing safety concerns.
In a recent interview, Sutskever said that he remains on good terms with Altman and that OpenAI is aware of Safe Superintelligence Inc. in general terms.