Ilya Sutskever, one of OpenAI’s co-founders, has launched a new company, Safe Superintelligence Inc. (SSI), just a month after leaving OpenAI.
Sutskever, who was OpenAI’s longtime chief scientist, founded SSI with former YC partner Daniel Gross and ex-OpenAI engineer Daniel Levy.
At OpenAI, Sutskever was integral to the company’s efforts to improve AI safety with the rise of “superintelligent” AI systems, an area he worked on alongside Jan Leike. Yet Sutskever, and then Leike, left the company dramatically in May after falling out with OpenAI’s leadership over how to approach AI safety. Leike now heads a team at Anthropic.
Sutskever has been focused on the thornier aspects of AI safety for some time. In a blog post published in 2023, he and Leike predicted that AI with intelligence exceeding that of humans could arrive within the decade, and that when it does, it won’t necessarily be benevolent, necessitating research into ways to control and restrict it.
The tweet announcing the company opens: “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence…”
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the tweet continues.
“We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
SSI has offices in Palo Alto and Tel Aviv, where it is currently recruiting technical talent.