
Safe Superintelligence (SSI), a newly formed AI company co-founded by OpenAI's former chief scientist Ilya Sutskever, has secured a remarkable $1 billion in funding to advance the development of safe artificial intelligence systems that surpass human capabilities.
The company, currently a lean team of 10, plans to use the funding to bolster its computing power and attract top talent in AI research and engineering. SSI will operate from hubs in Palo Alto, California, and Tel Aviv, Israel. The company declined to disclose its valuation, but sources close to the deal suggest SSI is valued at $5 billion.
This substantial investment signals continued confidence in exceptional AI talent, even as funding for foundational AI research declines overall. The departure of many AI startup founders for larger tech companies has contributed to that decline.
Prominent Investors Backing SSI's Vision
The funding round included investments from leading venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership led by Nat Friedman and SSI's CEO Daniel Gross, also participated.
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," said Gross.
Addressing AI Safety Concerns
AI safety, a crucial aspect of AI development, aims to prevent AI systems from causing harm or acting against human interests. The topic has gained prominence amid concerns that rogue AI could pose existential threats to humanity.
Sutskever, a highly influential figure in AI, co-founded SSI in June with Gross, former head of AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. The team is focused on building a small, highly trusted group of researchers and engineers with a strong emphasis on cultural fit and shared values.
A New Direction for Sutskever
Sutskever, who played a key role in the development of OpenAI's powerful AI models, explained his motivation for starting SSI: "I identified a mountain that's a bit different from what I was working on."
His departure from OpenAI followed a turbulent period involving the attempted ousting of CEO Sam Altman, a move Sutskever initially supported before reversing course. After he left, OpenAI disbanded his "Superalignment" team, which had been dedicated to ensuring AI aligns with human values.
Sutskever, an early proponent of the "scaling hypothesis" that vast computing power drives AI model improvements, indicated that SSI would take a different approach to scaling.
"Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said. "Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."