SambaNova, an AI chip startup that has raised more than $1.1 billion in VC funding to date, is aiming to rival OpenAI with new generative AI products for enterprise customers.
SambaNova today announced Samba-1, an AI-powered system designed for tasks such as text rewriting, coding, and language translation. The company calls the architecture a “composition of experts” — its term for a bundle of 56 open source generative AI models in total.
Samba-1 will allow enterprises to fine-tune models for, and address, multiple AI use cases while avoiding the challenges of implementing AI systems ad hoc, said Rodrigo Liang, SambaNova’s co-founder and CEO.
“Samba-1 is fully modular, allowing enterprises to add new models asynchronously without wasting previous investments,” Liang told TechCrunch in an interview. “Likewise, it’s iterative, extensible, and easy to update, giving customers room to adjust as new models are integrated.”
Liang is a persuasive salesman, and his pitch sounds promising. But is Samba-1 really better than the many other AI systems aimed at business tasks, including OpenAI’s models?
It depends on your use case.
The ostensible main benefit of Samba-1 is that, because it’s a collection of independently trained models rather than a single large model, customers control how prompts and requests are routed. A request made to a large model such as GPT-4 travels one way: to GPT-4. A request made to Samba-1, by contrast, can go to any one of its 56 constituent models, depending on the rules and policies the customer specifies.
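To make the routing idea concrete, here is a minimal sketch of what customer-defined prompt routing could look like. It is not SambaNova’s actual API; the model names, the Route class, and the route_prompt function are all hypothetical, illustrating only the general pattern of sending a prompt to one specialist model among many based on declared rules.

```python
# Illustrative sketch only -- not SambaNova's API. It shows the general idea of
# routing a prompt to one of several specialist models using customer-defined
# rules, instead of sending every request to a single large model.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Route:
    """A customer-defined rule: if `matches` returns True, use `model_name`."""
    matches: Callable[[str], bool]
    model_name: str


# Hypothetical routing policy: pick a specialist model per task type.
ROUTES = [
    Route(lambda p: "translate" in p.lower(), "translation-expert"),
    Route(lambda p: "code" in p.lower(), "coding-expert"),
    Route(lambda p: "rewrite" in p.lower(), "rewriting-expert"),
]

DEFAULT_MODEL = "general-expert"


def route_prompt(prompt: str) -> str:
    """Return the name of the model that should handle this prompt."""
    for route in ROUTES:
        if route.matches(prompt):
            return route.model_name
    return DEFAULT_MODEL


if __name__ == "__main__":
    print(route_prompt("Please translate this contract into German."))
    # -> translation-expert
    print(route_prompt("Rewrite this paragraph in a friendlier tone."))
    # -> rewriting-expert
```

In a real deployment the rules would presumably be richer than keyword checks — per-department policies, data-sensitivity tags, cost ceilings — but the control flow is the point: the customer, not the vendor, decides which model sees which request.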
This multi-model strategy also lowers fine-tuning costs for customers, Liang claims, because they only have to fine-tune individual models or small groups of models rather than one massive model. And, in theory, answers from one model can be compared against answers from the others — at the cost of extra compute — yielding more reliable (i.e., less hallucination-prone) responses to prompts, he says.
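A toy sketch of that cross-checking idea, under my own assumptions rather than anything SambaNova has published: query several models with the same prompt and only accept an answer if enough of them agree. The query_model function below is a hypothetical placeholder for whatever inference call a deployment actually exposes.

```python
# Illustrative sketch only -- a toy version of cross-checking answers from
# several models to flag likely-unreliable responses. `query_model` is a
# hypothetical stand-in for a real inference API.

from collections import Counter


def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical placeholder: call the named model and return its answer."""
    canned = {
        "expert-a": "Paris",
        "expert-b": "Paris",
        "expert-c": "Lyon",
    }
    return canned[model_name]


def cross_checked_answer(prompt: str, models: list[str], min_agreement: int = 2) -> str:
    """Ask several models, then return the most common answer if enough agree."""
    answers = [query_model(m, prompt) for m in models]
    best, count = Counter(answers).most_common(1)[0]
    if count >= min_agreement:
        return best
    raise ValueError(f"Models disagree too much to trust an answer: {answers}")


if __name__ == "__main__":
    print(cross_checked_answer("What is the capital of France?",
                               ["expert-a", "expert-b", "expert-c"]))
    # -> Paris (two of three models agree)
```

The trade-off Liang alludes to is visible even in the toy: every extra model consulted is an extra inference pass, so reliability is bought with compute.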
“This … architecture allows us to train many small models because we don’t have to break up large tasks into smaller ones,” Liang said, adding that Samba-1 can be deployed on premises or in a hosted environment, depending on a customer’s needs. “With one big model, every [request] is expensive, so the cost of training is high as well. [Samba-1’s] architecture collapses training costs.”
I’d counter that many vendors, including OpenAI, offer attractive pricing for fine-tuning generative models at scale, and that several startups, such as Martian and Credal, already provide tools to route prompts between third-party models based on defined or automated rules.
But SambaNova’s selling points aren’t new in and of themselves. Rather, it’s the packaging: a set-it-and-forget-it, full-stack solution that includes everything needed to build AI applications, right down to the AI chips. And for some companies, that may be more attractive than the alternatives on the table.
“Samba-1 ‘privatizes’ data, giving every company its own custom GPT model tailored to the needs of its organization,” Liang said. “The models are trained on the customer’s private data, hosted on a single [server] rack, and cost 10 times less than alternative solutions.”