Symbolica hopes to head off the AI arms race by betting on symbolic models


In February, Demis Hassabis, the CEO of Google's DeepMind AI research lab, warned that throwing increasing amounts of compute at the types of AI algorithms in wide use today could lead to diminishing returns. Getting to the "next level" of AI, as it were, Hassabis said, will instead require fundamental research breakthroughs that yield viable alternatives to today's entrenched approaches.

Ex-Tesla engineer George Morgan agrees. So he founded a startup, Symbolica AI, to do just that.

“Traditional deep learning and generative language models require unimaginable scale, time and energy to produce useful outcomes,” Morgan told TechCrunch. “By building [novel] models, Symbolica can accomplish greater accuracy with lower data requirements, lower training time, lower cost and with provably correct structured outputs.”

Morgan dropped out of college at Rochester to join Tesla, where he worked on the team developing Autopilot, Tesla’s suite of advanced driver-assistance features.

While at Tesla, Morgan says that he came to realize that current AI methods — most of which revolved around scaling up compute — wouldn’t be sustainable over the long term.

“Current methods only have one dial to turn: increase scale and hope for emergent behavior,” Morgan said. “However, scaling requires more compute, more memory, more money to train and more data. But eventually, [this] doesn’t get you significantly better performance.”

Morgan isn’t the only one to reach that conclusion.

In a memo this year, two executives at TSMC, the semiconductor fabricator, said that, if the AI trend continues at its current pace, the industry will need a 1-trillion-transistor chip — a chip containing 10x as many transistors as the average chip today — within a decade.

It’s unclear whether that’s technologically feasible.

Elsewhere, a report co-authored by Stanford and Epoch AI, an independent AI research institute, finds that the cost of training cutting-edge AI models has increased substantially over the past year and change. The report's authors estimate that OpenAI and Google spent around $78 million and $191 million, respectively, training GPT-4 and Gemini Ultra.

With costs poised to climb higher still — see OpenAI's and Microsoft's reported plans for a $100 billion AI data center — Morgan began investigating what he calls "structured" AI models. Rather than approximating insights from enormous data sets the way conventional models do, these structured models encode the underlying structure of data — hence the name — which Morgan says lets them attain better performance using less overall compute.

“It’s possible to produce domain-tailored structured reasoning capabilities in much smaller models,” he said, “marrying a deep mathematical toolkit with breakthroughs in deep learning.”

Structured models, better known as symbolic AI, aren’t exactly a new concept. They date back decades, rooted in the idea that AI can be built on symbols that represent knowledge using a set of rules.

Symbolic AI solves tasks by defining symbol-manipulating rule sets dedicated to particular jobs, such as editing lines of text in word processor software. That’s as opposed to neural networks, which try to solve tasks through statistical approximation and learning from examples.
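To make that contrast concrete, here is a minimal, hypothetical sketch of the rule-based paradigm: a toy rewrite system that simplifies arithmetic expressions by applying hand-written symbol-manipulation rules rather than learning from examples. It illustrates the general symbolic approach only — the expression encoding and rule set are invented for this example and are not Symbolica's technology.

```python
# A toy symbolic system: expressions are nested tuples, e.g.
# ("add", ("mul", "x", 1), 0) represents x * 1 + 0, and hand-written
# rewrite rules encode the "knowledge" the system can apply.

RULES = [
    # (matcher, rewriter, description)
    (lambda e: isinstance(e, tuple) and e[0] == "mul" and e[2] == 1,
     lambda e: e[1], "x * 1 -> x"),
    (lambda e: isinstance(e, tuple) and e[0] == "mul" and e[2] == 0,
     lambda e: 0, "x * 0 -> 0"),
    (lambda e: isinstance(e, tuple) and e[0] == "add" and e[2] == 0,
     lambda e: e[1], "x + 0 -> x"),
]

def simplify(expr):
    """Recursively apply rewrite rules until no rule matches."""
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(simplify(arg) for arg in expr[1:])
    for matches, rewrite, _name in RULES:
        if matches(expr):
            return simplify(rewrite(expr))
    return expr

print(simplify(("add", ("mul", "x", 1), 0)))  # -> "x"
```

Every step is an explicit rule application, so the system can show exactly which rules produced its answer — the kind of traceability symbolic approaches are prized for — but it only handles what its rules describe.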

Neural networks are the cornerstone of powerful AI systems like OpenAI's DALL-E 3 and GPT-4. But they're not the be-all and end-all, Morgan argues; symbolic AI might in fact be better positioned to efficiently encode the world's knowledge, reason its way through complex scenarios and "explain" how it arrives at an answer.

“Our models are more reliable, more transparent and more accountable,” Morgan said. “There are immense commercial applications of structured reasoning capabilities, particularly for code generation — i.e. reasoning over large codebases and generating useful code — where existing offerings fall short.”

Symbolica’s product, designed by its 16-person team, is a toolkit for creating symbolic AI models and models pre-trained for specific tasks, including generating code and proving mathematical theorems. The exact business model is in flux. But Symbolica might provide consulting services and support for companies that wish to build bespoke models using its technologies, Morgan said.

Today marks Symbolica’s launch out of stealth, so the company doesn’t have customers — at least none that it’s willing to talk about publicly. Morgan did, however, reveal that Symbolica landed a $33 million investment earlier this year led by Khosla Ventures. Other investors included Abstract Ventures, Buckley Ventures, Day One Ventures and General Catalyst.

$33 million is no small figure; Symbolica’s backers evidently have confidence in the startup’s science and roadmap. Vinod Khosla, Khosla Ventures’ founder, told me via email that he believes Symbolica is “tackling some of the most important challenges facing the AI industry today.”

“To enable large-scale commercial AI adoption and regulatory compliance, we need models with structured outputs that can achieve greater accuracy with fewer resources,” Khosla said. “George has amassed one of the best teams in the industry to do just that.”

But others are less convinced that symbolic AI is the right path forward.

Os Keyes, a Ph.D. candidate at the University of Washington focusing on law and data ethics, notes that symbolic AI models depend on highly structured data, which makes them both “extremely brittle” and dependent on context and specificity. Symbolic AI needs well-defined knowledge to function, in other words — and defining that knowledge can be highly labor-intensive.
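Continuing the toy rewrite-system sketch above (again, an invented illustration rather than anything Keyes or Symbolica describes), that brittleness shows up as soon as an input falls outside the hand-written rules:

```python
# No rule covers "0 + x", so the toy system leaves it untouched,
# even though it is just as simplifiable as "x + 0".
print(simplify(("add", 0, "x")))  # -> ('add', 0, 'x')  (no matching rule)
print(simplify(("add", "x", 0)))  # -> 'x'
```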

“This could still be interesting if it marries the advantages of deep learning and symbolic approaches,” Keyes said, referring to DeepMind’s recently published AlphaGeometry, which combined neural networks with a symbolic AI-inspired algorithm to solve challenging geometry problems. “But time will tell.”

Morgan countered that current training methods soon won't be able to meet the needs of companies that wish to harness AI for their purposes, making any promising alternative worth investing in. And, he claimed, Symbolica is strategically well positioned for this future, given that it has "several years" of runway from its latest funding tranche and that its models are relatively small (and therefore cheap) to train and run.

"Tasks like automating software development, for example, at scale will require models with formal reasoning capabilities and cheaper operating costs to parse large code databases and produce and iterate on useful code," he said. "Public perception around AI models is still very much that 'scale is all you need.' Thinking symbolically is absolutely necessary to make progress in the field — structured and explainable outputs with formal reasoning capabilities will be required to meet demands."

There’s not much to prevent a big AI lab like DeepMind from building its own symbolic AI or hybrid models and — setting aside Symbolica’s points of differentiation — Symbolica is entering an extremely crowded and well-capitalized AI field. But Morgan’s anticipating growth all the same, and expects San Francisco-based Symbolica’s staff to double by 2025.
