In an interview at the Aspen Ideas Festival on Tuesday, Mustafa Suleyman, CEO of Microsoft AI, made it very clear that he admires OpenAI CEO Sam Altman.
CNBC’s Andrew Ross Sorkin asked what Microsoft’s plan will be when its enormous AI future is no longer so closely dependent on OpenAI, using the metaphor of winning a bicycle race. Suleyman sidestepped the question.
“I don’t buy the metaphor that there is a finish line. This is another false frame,” he said. “We have to stop framing everything as a ferocious race.”
He then proceeded to toe the Microsoft corporate line about his company’s arrangement with OpenAI, in which Microsoft invested a reported $10 billion through some combination of cash and cloud credits. The deal gives Microsoft a big stake in OpenAI’s for-profit business, allows it to embed OpenAI’s models into Microsoft products, and lets it sell OpenAI’s tech to Microsoft cloud customers. Some reports indicate that Microsoft may also be entitled to a share of OpenAI’s profits.
“It is true that we have ferocious competition with them,” Suleyman said about OpenAI. “They are an independent company. We don’t own or control them. We don’t even have any board members. So they do entirely their own thing. But we have a deep partnership. I’m very good friends with Sam, have huge respect and trust and faith in what they’ve done. And that’s how it’s going to roll for many, many years to come.”
This close-but-distant relationship is important for Suleyman to profess. Microsoft’s investors and enterprise customers appreciate the close relationship, but regulators have taken an interest: in April, the EU concluded that Microsoft’s investment did not amount to a true takeover. Should that assessment change, the regulatory scrutiny in all likelihood would too.
Suleyman says he trusts Altman on AI safety
In a sense, Suleyman was the Sam Altman of AI before OpenAI. He has spent much of his career in competition with OpenAI, and is known for an ego of his own.
Suleyman was a co-founder of AI pioneer DeepMind, which he and his co-founders sold to Google in 2014. He was put on administrative leave following allegations of bullying employees, as Bloomberg reported in 2019, then moved to other Google roles before leaving the company in 2022 to join Greylock Partners as a venture partner. A few months later, he and Greylock’s Reid Hoffman, a Microsoft board member, launched Inflection AI to build its own LLM chatbot, among other goals.
Microsoft CEO Satya Nadella tried but failed to hire Sam Altman last fall, when OpenAI fired him and then quickly reinstated him. After that, Microsoft hired Suleyman and much of Inflection’s staff in March, leaving behind a shell of a company and a big check. In his new role at Microsoft, Suleyman has been auditing OpenAI code, Semafor reported earlier this month. As one of OpenAI’s former big rivals, he now gets to dive deep inside the frenemy that holds Microsoft’s crown jewels.
There’s yet another wrinkle to all of this. OpenAI was founded on a premise of doing AI safety research, to keep some future malevolent AI from destroying humankind. In 2023, when he was still an OpenAI competitor, Suleyman released a book with researcher Michael Bhaskar called “The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma,” which discusses the dangers of AI and how to prevent them.
A group of former OpenAI employees signed a letter earlier this month outlining their fears that OpenAI and other AI companies are not taking safety seriously enough.
When asked about that, Suleyman again proclaimed his love and trust for Altman, but said he also wants to see both regulation and a slower pace.
“Maybe it’s because I’m a Brit with European tendencies, but I don’t fear regulation in the way that sort of everyone seems to by default,” he said, describing the finger-pointing by the former employees as a “healthy dialogue.” He added, “I think it’s a great thing that technologists and entrepreneurs and CEOs of companies like myself and Sam, who I love dearly and think is awesome” are talking about regulation. “He is not cynical, he is sincere. He believes it genuinely.”
But he also said, “Friction is going to be our friend here. These technologies are becoming so powerful, they will be so intimate, they’ll be so ever-present, that this is a moment where it’s fine to take stock.” If all of this dialogue slows down AI development by 6 to 18 months or longer, “it’s time well spent.”
It’s all very cozy between these players.
Suleyman wants cooperation with China, AI in classrooms
Suleyman also made some interesting comments on other issues. On the AI race with China:
“With all due respect to my good friends in DC and the military industrial complex, if it’s the default frame that it can only be a new Cold War, then that is exactly what it will be because it will become a self-fulfilling prophecy. They will fear that we fear that we’re going to be adversarial so they have to be adversarial and this is only going to escalate,” he said. “We have to find ways to cooperate, be respectful of them, whilst also acknowledging that we have a different set of values.”
Then again, he also said that China is “building their own technology ecosystem, and they’re spreading that around the world. We should really pay close attention.”
When asked his opinion on kids using AI for schoolwork, Suleyman, who said he doesn’t have kids, shrugged it off. “I think we have to be slightly careful about fearing the downside of every tool, you know, just as when calculators came in, there was a kind of this gut reaction, oh, no, everyone’s gonna be able to sort of solve all the equations instantly. And it’s gonna make us dumber because we weren’t able to do mental arithmetic.”
He also envisions a time, very soon, when AI is like a teacher’s aide, perhaps chatting live in the classroom as AI’s verbal skills improve. “What would it look like for a great teacher or educator to have a profound conversation with an AI that is live and in front of their audience?”
The upshot is that, if we want the people who are building and profiting from AI to govern it and protect humanity from its worst effects, we may be setting unrealistic expectations.