Runware uses custom hardware and advanced orchestration for fast AI inference


Sometimes, a demo is all you need to understand a product. And that’s the case with Runware. If you head over to Runware’s website, enter a prompt and hit enter to generate an image, you’ll be surprised by how quickly Runware generates the image for you — it takes less than a second.

Runware is a newcomer in the AI inference startup landscape. The company is building its own servers and optimizing the software layer on those servers to remove bottlenecks and improve inference speeds for image generation models. The startup has already secured $3 million in funding from Andreessen Horowitz’s Speedrun, LakeStar’s Halo II and Lunar Ventures.

The company doesn’t want to reinvent the wheel. It just wants to make it spin faster. Behind the scenes, Runware manufactures its own servers with as many GPUs as possible on the same motherboard. It has its own custom-made cooling system and it manages its own data centers.

When it comes to running AI models on those servers, Runware has tuned the orchestration layer, along with BIOS and operating system settings, to improve cold start times. It has also developed its own algorithms to allocate inference workloads.
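Runware hasn't published how that allocation works, but one common idea behind this kind of scheduling is to route each request to a GPU worker that already has the requested model warm in memory, avoiding a cold start, and to fall back to the least-loaded worker otherwise. The sketch below is purely illustrative and is not Runware's algorithm:

```python
# Conceptual sketch only: Runware has not published its scheduling algorithms.
# Idea: prefer a GPU worker that already has the requested model loaded
# ("warm"), otherwise pick the least busy worker.

from dataclasses import dataclass, field

@dataclass
class GPUWorker:
    name: str
    loaded_models: set = field(default_factory=set)  # models resident in GPU memory
    queued_requests: int = 0                          # simple load metric

def pick_worker(workers: list[GPUWorker], model_id: str) -> GPUWorker:
    # Prefer workers that can skip the cold start entirely.
    warm = [w for w in workers if model_id in w.loaded_models]
    candidates = warm if warm else workers
    # Among candidates, pick the least busy one.
    return min(candidates, key=lambda w: w.queued_requests)

workers = [
    GPUWorker("gpu-0", {"flux.1-dev"}),
    GPUWorker("gpu-1", {"sdxl-base"}),
]
chosen = pick_worker(workers, "sdxl-base")
chosen.queued_requests += 1
print(chosen.name)  # gpu-1: already has the model warm
```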

The demo is impressive on its own. Now, the company wants to turn all of that research and development work into a business. Unlike many GPU hosting companies, Runware isn’t going to rent out its GPUs based on GPU time.

Instead, it believes providers should have an incentive to speed up workloads. That’s why Runware is offering an image generation API with a traditional cost-per-API-call fee structure, running popular models such as Stable Diffusion and Flux.
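In practice, a per-call image API means a single request per generated image, billed per call rather than per second of GPU time. The sketch below shows what such a request typically looks like; the endpoint, field names, and authentication are hypothetical placeholders, not Runware's documented interface:

```python
# Hypothetical example of a per-call image generation request.
# The URL, field names, and auth header are illustrative placeholders,
# not Runware's published API.
import requests

response = requests.post(
    "https://api.example.com/v1/image/generate",   # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "flux.1-dev",          # e.g. a Flux or Stable Diffusion model
        "prompt": "a lighthouse at dusk, photorealistic",
        "width": 1024,
        "height": 1024,
        "steps": 28,
    },
    timeout=30,
)
response.raise_for_status()
image_url = response.json().get("imageURL")  # placeholder response field
print(image_url)
```

Because the fee is attached to the call rather than the clock, every millisecond shaved off inference widens the provider's margin instead of shrinking its revenue.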

“If you look at Together AI, Replicate, Hugging Face — all of them. They are selling compute based on GPU time. If you compare the amount of time it takes for us to make an image versus them. And then you compare the pricing, you will see that we are so much cheaper, so much faster,” co-founder and CEO Flaviu Radulescu told TechCrunch.

“And it’s going to be impossible for them to match this performance. Especially in a cloud provider, you have to run on a virtualized environment, which adds additional delays,” he added.

As Runware is optimizing the entire inference pipeline, both hardware and software, the company hopes it will be able to use GPUs from multiple vendors in the near future. This has been an important endeavor for several startups, as Nvidia is the clear leader in the GPU space and its GPUs tend to be quite expensive.

“Right now, we use just Nvidia GPUs. But this should be an abstraction of the software layer . . . We can switch a model from GPU memory in and out very, very fast, which allow us to put multiple customers on the same GPUs,” Radulescu said. “So we are not like our competitors. They just load a model into the GPU and then the GPU does a very specific type of task. In our case, we’ve developed this software solution, which allow us to switch a model in the GPU memory as we do inference.”
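Radulescu doesn't detail the mechanism, but the general technique he describes, keeping several models available on one GPU by moving weights between host and device memory on demand, can be sketched roughly as an LRU cache of models. This is a simplified illustration using PyTorch, not Runware's implementation, which would involve far more aggressive memory management and transfer optimizations:

```python
# Simplified illustration of multiplexing models on one GPU by moving
# weights between CPU (host) and GPU memory on demand. Real systems
# pin host memory, stream transfers, and schedule far more carefully.
from collections import OrderedDict

import torch

class GPUModelCache:
    def __init__(self, max_resident: int = 1, device: str = "cuda"):
        self.max_resident = max_resident   # models allowed in GPU memory at once
        self.device = device
        self.cpu_models = {}               # model_id -> model kept in host RAM
        self.resident = OrderedDict()      # model_id -> model on the GPU, in LRU order

    def register(self, model_id: str, model: torch.nn.Module):
        self.cpu_models[model_id] = model.cpu()

    def get(self, model_id: str) -> torch.nn.Module:
        if model_id in self.resident:
            self.resident.move_to_end(model_id)   # mark as most recently used
            return self.resident[model_id]
        if len(self.resident) >= self.max_resident:
            _, evicted = self.resident.popitem(last=False)
            evicted.cpu()                          # swap the coldest model out of GPU memory
        model = self.cpu_models[model_id].to(self.device)
        self.resident[model_id] = model
        return model
```

The point of the design is that a single GPU is never idle waiting for "its" model's traffic: whichever customer's request arrives next, the right weights are swapped in quickly enough that utilization stays high.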

If AMD and other GPU vendors can create compatibility layers that work with typical AI workloads, Runware is well positioned to build a hybrid cloud that would rely on GPUs from multiple vendors. And that will certainly help if it wants to remain cheaper than competitors at AI inference.
