Sometimes a demo is all you need to understand a product. That's the case with Runware. If you head over to Runware's website, type in a prompt and hit enter to generate an image, you'll be struck by how quickly the image appears. It takes less than a second.
Runware is a newcomer in the AI inference, or generative AI, startup landscape. The company builds its own servers and optimizes the software layer on those servers to remove bottlenecks and improve inference speeds for image generation models. The startup has already secured $3 million in funding from Andreessen Horowitz's Speedrun, Lakestar's Halo II and Lunar Ventures.
The company doesn't want to reinvent the wheel; it just wants to make it spin faster. Behind the scenes, Runware manufactures its own servers with as many GPUs as possible on the same motherboard. It has its own custom-made cooling system and manages its own data centers.
When it comes to running AI models on those servers, Runware has optimized the orchestration layer with BIOS and operating system tweaks to reduce cold start times. It has also developed its own algorithms for allocating inference workloads.
The demo itself is impressive. Now the company wants to turn all of this research and development work into a business.
Unlike many GPU hosting companies, Runware doesn't intend to rent out its GPUs by the GPU hour. Instead, it believes companies should be incentivized to speed up workloads. That's why Runware offers an image generation API with a traditional per-API-call pricing structure, based on popular AI models from Flux and Stable Diffusion.
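To make the difference between the two pricing models concrete, here is a minimal Python sketch comparing billing by GPU time with flat per-call billing. Every number in it (hourly GPU rate, throughput, per-image price) is a made-up assumption for illustration, not an actual rate from Runware or any other provider.

```python
# Hypothetical comparison of GPU-hour billing vs. per-API-call billing for
# image generation. All figures below are illustrative assumptions, not
# real prices from Runware or its competitors.

gpu_hour_rate = 2.50      # assumed cost in $ to rent a GPU for one hour
images_per_hour = 1200    # assumed throughput of an image model on that GPU
per_call_price = 0.0015   # assumed flat price charged per generated image

cost_per_image_gpu_time = gpu_hour_rate / images_per_hour
print(f"GPU-hour billing: ${cost_per_image_gpu_time:.4f} per image")
print(f"Per-call billing: ${per_call_price:.4f} per image")

# Under per-call pricing, speeding up the pipeline improves the provider's
# margin rather than raising the customer's bill, which is the incentive
# structure described above.
```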
“If you look at companies like Together AI, Replicate, and Hugging Face, they're all selling compute based on GPU time,” co-founder and CEO Flaviu Radulescu told TechCrunch. “If you compare the time it takes us and them to create an image, and you compare the prices, you'll see that we're much cheaper and much faster.”
“It will be impossible for them to match this performance,” he added. “Especially with cloud providers, they have to run in a virtualized environment, which introduces additional delays.”
Because Runware looks at the entire inference pipeline and optimizes both hardware and software, the company hopes to use GPUs from multiple vendors in the near future. This has been an important endeavor for several startups, as Nvidia is the clear leader in the GPU space, which means Nvidia GPUs tend to be quite expensive.
“Right now, we use only Nvidia GPUs, but this should be an abstraction of the software layer,” Radulescu said. “We can swap a model in and out of GPU memory very quickly, which allows us to put multiple customers on the same GPUs.
“So we're not like our competitors, who just load a model into the GPU and then the GPU performs a very specific type of task. In our case, we've developed a software solution that allows us to switch between models in GPU memory as we perform inference.”
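As a rough illustration of the technique Radulescu describes (keeping several models available and moving the one that's needed into GPU memory so multiple customers can share the same card), here is a minimal PyTorch sketch. The class, the toy models, and the eviction policy are hypothetical; this is a conceptual sketch, not Runware's implementation.

```python
# Minimal sketch of swapping models in and out of GPU memory between requests,
# so several models can share one GPU. Conceptual illustration only; the names
# and the simple evict-one policy are hypothetical.
import torch
import torch.nn as nn


class ModelSwapper:
    """Keeps models resident in host memory and moves the requested one onto the GPU."""

    def __init__(self, models: dict, device: str = "cuda"):
        self.device = device
        # Hold every model in CPU RAM; only the active one lives on the GPU.
        self.models = {name: m.to("cpu").eval() for name, m in models.items()}
        self.active = None

    def _activate(self, name: str) -> nn.Module:
        if self.active != name:
            if self.active is not None:
                # Evict the previously active model back to host memory.
                self.models[self.active].to("cpu")
            self.models[name].to(self.device)
            self.active = name
        return self.models[name]

    @torch.no_grad()
    def infer(self, name: str, x: torch.Tensor) -> torch.Tensor:
        model = self._activate(name)
        return model(x.to(self.device))


if __name__ == "__main__":
    # Two toy "models" standing in for different image generators.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    swapper = ModelSwapper({"model-a": nn.Linear(8, 8), "model-b": nn.Linear(8, 8)}, device)
    out_a = swapper.infer("model-a", torch.randn(1, 8))
    out_b = swapper.infer("model-b", torch.randn(1, 8))  # triggers a swap
```

The point of the sketch is the design choice: the mapping of models to GPUs happens in software, per request, rather than by pinning one model to one GPU for its whole lifetime.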
If AMD and other GPU vendors can create a compatibility layer that works with common AI workloads, Runware is well-positioned to build hybrid clouds that rely on GPUs from multiple vendors. And it will certainly help if it wants to remain cheaper than its competitors in AI inference.