AI systems and large language models need to be trained on massive amounts of data to be accurate, but they shouldn’t train on data they don’t have the rights to use. OpenAI’s licensing deals last week with The Atlantic and Vox show that both sides of the table are interested in landing these AI-training content licensing agreements.
Human Native AI is a London-based startup building a marketplace to broker such deals between the many companies building LLM projects and those willing to license data to them.
Its goal is to help AI companies find data to train their models on while ensuring that rights holders opt in and are compensated. Rights holders upload their content for no charge and connect with AI companies to land revenue-share or subscription deals. Human Native AI also helps rights holders prepare and price their content and monitors for copyright infringement. The startup takes a cut of each deal and charges AI companies for its transaction and monitoring services.
James Smith, CEO and co-founder, told TechCrunch that he got the idea for Human Native AI from his past experience working at Google’s DeepMind, which also ran into issues with not having enough good data to properly train its systems. He then watched other AI companies run into the same problem.
“It feels like we are in the Napster-era of generative AI,” Smith said. “Can we get to a better era? Can we make it easier to acquire content? Can we give creators some level of control and compensation? I kept thinking, why is there not a marketplace?”
He pitched the idea to his friend Jack Galilee, an engineer at GRAIL, on a walk in the park with their respective kids, as Smith had done with many other potential startup ideas. But unlike past times, Galilee said they should go for it.
The company launched in April and is currently operating in beta. Smith said demand from both sides has been encouraging, and the startup has already signed a handful of partnerships that will be announced in the near future. Human Native AI announced a £2.8 million seed round this week led by LocalGlobe and Mercuri, two British micro VCs. Smith said the company plans to use the funding to build out its team.
“I’m the CEO of a two-month-old company and have been able to get meetings with CEOs of 160-year-old publishing companies,” Smith said. “That suggests to me there is a high demand on the publishing side. Equally, every conversation with a big AI company goes exactly the same way.”
While it is still very early days, what Human Native AI is building does seem to be a missing piece of infrastructure in the growing AI industry. The big AI players need lots of data to train on, and giving rights holders an easier way to work with them, while retaining full control of how their content is used, seems like an approach that could make both sides of the table happy.
“Sony Music just sent letters to 700 AI companies asking that they cease and desist,” Smith said. “That is the size of the market and potential customers that could be acquiring data. The number of publishers and rights holders, it could be thousands if not tens of thousands. We think that’s the reason we need infrastructure.”
This could be even more beneficial to smaller AI systems that don’t necessarily have the resources to ink a deal with Vox or The Atlantic but still need data to train on. Smith said he hopes for that too, noting that all of the notable licensing deals thus far have involved the larger AI players. He hopes Human Native AI can help level the playing field.
“One of the major challenges with licensing content is you have large upfront costs and you massively restrict who you can work with,” Smith said. “How do we increase the number of buyers for your content and reduce the barriers to entry? We think that is really exciting.”
The other interesting piece here is the future potential of the data Human Native AI collects. Smith said that in the future the company will be able to give rights holders more clarity on how to price their content, based on a history of deal data on the platform.
It is also a smart time for Human Native AI to launch. Smith said that with the European Union AI Act evolving, and potential AI regulation in the U.S. down the road, AI companies ethically sourcing their data — and having the receipts to prove it — will only become more pressing.
“We are optimistic about the future of AI and what it will do, but we have to make sure as an industry we are responsible and don’t decimate industries that have gotten us to this point,” Smith said. “That would not be good for human society. We need to make sure we find the correct ways to enable people to participate. We are AI optimists on the side of humans.”