AI is becoming increasingly expensive to develop and run. OpenAI's operating costs could reach $7 billion this year, while Anthropic's CEO recently suggested that models costing $10 billion-plus could soon arrive.
Naturally, then, the industry is looking for ways to make AI cheaper to build and run.
Some researchers focus on techniques to optimize existing model architectures, the structures and components that make a model work. Some companies are developing new architectures that they believe will allow them to scale up affordably.
Karan Goel belongs to the latter camp. At Cartesia, the startup he co-founded, Goel works on state-space models (SSMs), a newer, more efficient architecture that can process large amounts of data, such as text and images, at once.
“We believe that building truly useful AI models requires new model architectures,” Goel told TechCrunch. “The AI industry is competitive, both commercial and open source, and building the best models is critical to success.”
Academic roots
Before joining Cartesia, Goel was a Ph.D. candidate in Stanford University's AI Lab, where he worked under computer scientist Christopher Ré, among others. It was at Stanford that Goel met Albert Gu, a fellow Ph.D. candidate in the lab; together, the two sketched out what would become SSMs.
Goel went on to work at Snorkel AI and then Salesforce, while Gu became an assistant professor at Carnegie Mellon University. But the two continued their research on SSMs, publishing several key papers on the architecture.
In 2023, Gu and Goel, along with former Stanford colleagues Arjun Desai and Brandon Yang, teamed up to launch Cartesia to commercialize their research.
Cartesia's founding team. From left to right: Brandon Yang, Karan Goel, Albert Gu, Arjun Desai. Image credit: Cartesia
Cartesia's founding team also includes Ré, who has developed many derivatives of Mamba, perhaps the most popular SSM today. Gu and Princeton University professor Tri Dao launched Mamba as an open research project last December and have continued to improve it through subsequent releases.
Cartesia builds on top of Mamba in addition to training its own SSMs. Like all SSMs, Cartesia's models give AI something like working memory, allowing them to use computing power faster and potentially more efficiently.
SSMs vs. transformers
From ChatGPT to Sora, most AI apps today are powered by models with a transformer architecture. As a transformer processes data, it adds entries to what is called a “hidden state” to “remember” what it has processed. If the model is reading a book, for example, the hidden state values might be representations of the words in the book.
The hidden state is one of the reasons transformers are so powerful. But it is also a source of their inefficiency. For a transformer to “say” even a single word about a book it just ingested, the model must scan its entire hidden state, a task as computationally demanding as rereading the whole book.
In contrast, an SSM compresses every previous data point into something like a summary of everything it has seen so far. As new data arrives, the model updates this “state” and discards most of the older data.
The result? SSMs can handle large amounts of data and still outperform transformers on certain data-generation tasks. With inference costs what they are, that is an attractive proposition.
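The difference between the two update rules can be sketched in a few lines of purely illustrative Python. This toy code is not Cartesia's implementation, and real transformers use attention rather than a running average; it only shows why per-token cost grows with the context for one approach and stays constant for the other.

```python
import random

def transformer_step(context, new_token):
    """Transformer-style memory: the full context is kept and
    re-scanned at every step, so per-token work grows with length."""
    context = context + [new_token]
    # Stand-in for attention: every stored entry is revisited.
    summary = sum(context) / len(context)
    return context, summary

def ssm_step(state, new_token, decay=0.9):
    """SSM-style memory: a fixed-size state is updated in place,
    so per-token work stays constant regardless of input length."""
    # Older information is compressed (and partly forgotten) into `state`.
    return decay * state + (1 - decay) * new_token

random.seed(0)
tokens = [random.random() for _ in range(1000)]

context, state = [], 0.0
for t in tokens:
    context, _ = transformer_step(context, t)  # memory grows to 1,000 entries
    state = ssm_step(state, t)                 # memory stays a single number

print(len(context))  # 1000 stored tokens for the transformer-style loop
print(state)         # one scalar "summary" for the SSM-style loop
```

The forgetting built into the SSM update is exactly the trade-off the article describes: constant memory and compute per token, at the cost of discarding most of the raw history.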
Ethical concerns
Cartesia operates something like a communal research lab, developing SSMs both internally and in partnership with outside organizations. The company's latest project, Sonic, is an SSM that can replicate a person's voice, generate new voices, and adjust the tone and rhythm of recordings.
Goel claims that Sonic, available through an API and web dashboard, is the fastest model in its class. “Sonic demonstrates how SSMs excel with long-context data such as voice, while maintaining the highest performance standards in terms of stability and accuracy,” he said.
Cartesia's Sonic model allows for a significant degree of audio customization, including prosody. Image credit: Cartesia
Although Cartesia has been successful in shipping its product quickly, it has fallen into many of the same ethical pitfalls that have plagued other AI model makers.
Cartesia trained at least some of its SSMs on The Pile, an open dataset known to contain unauthorized copyrighted books. Many AI companies argue that the fair-use doctrine protects them from infringement claims, but that hasn't stopped authors from suing Meta, Microsoft, and other companies that allegedly trained models on The Pile.
And Cartesia has few obvious safeguards against misuse of its Sonic-powered voice cloning tool. A few weeks ago, I was able to clone former Vice President Kamala Harris' voice using a campaign speech (see below). With Cartesia's tool, all it takes is checking a box indicating that you agree to the startup's terms of service.
In this respect, Cartesia is no worse than other voice cloning tools on the market. But the optics aren't great, given reports of voice clones defeating banks' security checks.
Goel wouldn't say whether Cartesia is still training models on The Pile. But he did address the moderation issue, telling TechCrunch that Cartesia has “automated and manual review” systems in place and is “working on a system for audio verification and watermarking.”
“We have a dedicated team that tests aspects such as technical performance, misuse, and bias,” Goel said. “We have also established partnerships with external auditors to provide additional independent verification of the safety and reliability of our models. We recognize that this is an ongoing process that requires continuous improvement.”
An up-and-coming business
Goel said “hundreds” of customers, including the automated calling app Goodcall, pay for access to the Sonic API, Cartesia's main source of revenue. The API is free for up to 100,000 characters; the most expensive plan costs $299 per month for 8 million characters. (Cartesia also offers an enterprise tier with dedicated support and custom limits.)
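For a rough sense of scale, and assuming the $299 plan's quota is priced linearly (an assumption on our part, not Cartesia's published billing model), the per-character math works out as follows:

```python
# Back-of-the-envelope math from the pricing cited above.
# Linear per-character scaling is our assumption, not Cartesia's billing model.
pro_price_usd = 299.00   # most expensive self-serve plan, per month
pro_chars = 8_000_000    # characters included in that plan

cost_per_char = pro_price_usd / pro_chars
cost_per_million = cost_per_char * 1_000_000

print(f"${cost_per_char:.6f} per character")              # ~ $0.000037
print(f"${cost_per_million:.2f} per million characters")  # ~ $37.38
```

At that rate, a heavy user would pay on the order of a few cents per thousand characters of generated speech, before any enterprise discounts.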
By default, Cartesia uses customer data to train its models. That's not an unheard-of policy, but it is unlikely to sit well with privacy-conscious users. Goel notes that users can opt out if they wish, and that Cartesia offers custom retention policies for large organizations.
Cartesia's data practices don't appear to be hurting the business, at least not while the company holds a technological edge. Goodcall CEO Bob Summers said he chose Sonic because it was the only voice generation model with latency under 90 milliseconds.
“[It] outperformed the next best alternative by a factor of four,” Summers added.
Goodcall's AI “agent” service relies on Cartesia's Sonic API. Image credit: Goodcall
Today, Sonic is used in games, dubbing, and more. But Goel believes this only scratches the surface of what SSMs can do.
His vision is a model that runs on any device and understands and generates data in any format, including text, images, and video, almost instantaneously. As a small step toward that vision, Cartesia this summer released a beta of Sonic On-Device, a version of Sonic optimized to run on phones and other mobile devices for applications such as real-time translation.
Alongside Sonic On-Device, Cartesia released Edge, a software library for optimizing SSMs for different hardware configurations, and Rene, a compact language model.
“We have a big long-term vision to become the go-to multimodal foundation model for any device,” said Goel. “Our long-term roadmap includes developing multimodal AI models with the goal of creating real-time intelligence that can infer context at scale.”
If that happens, Cartesia will need to convince potential new customers that its architecture is worth the learning curve. It also needs to stay ahead of other vendors experimenting with transformer alternatives.
Startups Zyphra, Mistral, and AI21 Labs have trained hybrid Mamba-based models. Elsewhere, Liquid AI, led by robotics guru Daniela Rus, is developing its own alternative architecture.
But Goel insists Cartesia, which employs 26 people, is positioned to succeed, thanks in part to the new cash injection. Earlier this month, the company closed a $22 million funding round led by Index Ventures, bringing Cartesia's total funding to $27 million.
Shardul Shah, partner at Index Ventures, sees Cartesia's technology one day powering apps in customer service, sales and marketing, robotics, security, and more.
“By challenging the traditional reliance on transformer-based architectures, Cartesia has unlocked new ways to build real-time, cost-effective, and scalable AI applications,” he said. “The market is demanding faster, more efficient models that can run anywhere, from the data center to the device. Cartesia's technology delivers on that promise, and the company is uniquely positioned to power the next wave of AI innovation.”
A* Capital, Conviction, General Catalyst, Lightspeed, and SV Angel also participated in San Francisco-based Cartesia's latest funding round.