Even the ‘godmother of AI’ has no idea what AGI is

Are you confused about artificial general intelligence, or AGI? It’s the thing OpenAI is obsessed with ultimately building in a way that “benefits all of humanity.” You may want to take the company seriously, since it just raised $6.6 billion to get closer to that goal.

But if you’re still wondering what the heck AGI even is, you’re not alone.

In a wide-ranging discussion on Thursday at Credo AI’s responsible AI leadership summit, Fei-Fei Li, a world-renowned researcher often called the “godmother of AI,” said she doesn’t know what AGI is either. At other points, Li discussed her role in the birth of modern AI, how society should protect itself against advanced AI models, and why she thinks her new unicorn startup, World Labs, will change everything.

But when asked what she thought about an “AI singularity,” Li was just as lost as the rest of us.

“I come from academic AI and have been educated in the more rigorous and evidence-based methods, so I don’t really know what all these words mean,” said Li to a packed room in San Francisco, beside a big window overlooking the Golden Gate Bridge. “I frankly don’t even know what AGI means. Like people say you know it when you see it, I guess I haven’t seen it. The truth is, I don’t spend much time thinking about these words because I think there’s so many more important things to do…”

If anyone would know what AGI is, it’s probably Fei-Fei Li. In 2006, she created ImageNet, the world’s first large-scale AI training and benchmarking dataset, which was critical to catalyzing our current AI boom. From 2017 to 2018, she served as Chief Scientist of AI/ML at Google Cloud. Today, Li leads the Stanford Human-Centered AI Institute (HAI), and her startup, World Labs, is building “large world models.” (That term is nearly as confusing as AGI, if you ask me.)

OpenAI CEO Sam Altman took a stab at defining AGI in a profile with The New Yorker last year. Altman described AGI as the “equivalent of a median human that you could hire as a coworker.”

Evidently, that definition wasn’t a precise enough target for a $157 billion company to work toward, so OpenAI created five levels it uses internally to gauge its progress towards AGI. The first level is chatbots (like ChatGPT), then reasoners (apparently, OpenAI o1 was this level), agents (that’s supposedly coming next), innovators (AI that can help invent things), and the last level, organizational (AI that can do the work of an entire organization).

Still confused? So am I, and so is Li. Also, this all sounds like a lot more than a median human coworker could do.

Earlier in the talk, Li said she’s been fascinated by the idea of intelligence ever since she was a young girl. That led her to study AI long before it was profitable to do so. In the early 2000s, Li says, she and a few others were quietly laying the foundation for the field.

“In 2012, my ImageNet combined with AlexNet and GPUs – many people call that the birth of modern AI. It was driven by three key ingredients: big data, neural networks, and modern GPU computing. And once that moment hit, I think life was never the same for the whole field of AI, as well as our world.”

When asked about California’s controversial AI bill, SB 1047, Li spoke carefully so as not to rehash a controversy that Governor Newsom put to bed by vetoing the bill last week. (We recently spoke to the author of SB 1047, and he was more keen to reopen his argument with Li.)

“Some of you might know that I have been vocal about my concerns about this bill [SB 1047], which was vetoed, but right now I’m thinking deeply, and with a lot of excitement, to look forward,” said Li. “I was very flattered, or honored, that Governor Newsom invited me to participate in the next steps of post-SB 1047.”

California’s governor recently tapped Li, along with other AI experts, to form a task force to help the state develop guardrails for deploying AI. Li said she’s using an evidence-based approach in this role, and will do her best to advocate for academic research and funding. However, she also wants to ensure California doesn’t punish technologists.

“We need to really look at potential impact on humans and our communities rather than putting the burden on technology itself… It wouldn’t make sense if we penalize a car engineer – let’s say Ford or GM – if a car is misused purposefully or unintentionally and harms a person. Just penalizing the car engineer will not make cars safer. What we need to do is to continue to innovate for safer measures, but also make the regulatory framework better – whether it’s seatbelts or speed limits – and the same is true for AI.”

That’s one of the better arguments I’ve heard against SB 1047, which would have held tech companies liable for the harms caused by dangerous AI models.

Although Li is advising California on AI regulation, she’s also running her startup, World Labs, in San Francisco. It’s the first time Li has founded a startup, and she’s one of the few women leading an AI lab on the cutting edge.

“We are far away from a very diverse AI ecosystem,” said Li. “I do believe that diverse human intelligence will lead to diverse artificial intelligence, and will just give us better technology.”

In the next couple of years, she’s excited to bring “spatial intelligence” closer to reality. Li says language, which today’s large language models are built on, probably took a million years to develop, whereas vision and perception likely took 540 million years. In her view, that makes creating large world models a much more complicated task.

“It’s not only making computers see, but really making computers understand the whole 3D world, which I call spatial intelligence,” said Li. “We’re not just seeing to name things… We’re really seeing to do things, to navigate the world, to interact with each other, and closing that gap between seeing and doing requires spatial knowledge. As a technologist, I’m very excited about that.”
