AI-focused companies from the Alchemist Accelerator are demoing their products today, so take a look if you're interested. The program itself is also adding an international twist with Tokyo and Doha. Read on for a few picks from the cohort.
Speaking with Alchemist CEO and founder Ravi Belani about the group ahead of Demo Day (10:30 a.m. PT today), it's clear that AI startups' ambitions are shrinking, and that's not a bad thing.
Right now, there's no real chance that an early-stage startup becomes the next OpenAI or Anthropic; those companies simply have too much of a lead in foundational large language models.
“The cost of building a foundational LLM is prohibitively high. It would cost hundreds of millions of dollars just to pull that off. So the question is, how can you compete as a startup? VCs don't want LLM wrappers. We're looking for companies that have a vertical, that own the end users, that have network effects and long-term lock-in.”
That tracks with this cohort: the companies selected are all very specific in their applications, using AI to solve particular problems in particular domains.
One example is medicine. Healthcare is increasingly testing AI models for things like diagnosis and treatment planning, but cautiously. The shadow of liability and bias looms over this highly regulated industry, yet it is also full of legacy processes that could be replaced for real, tangible benefit.
Equality AI isn't trying to revolutionize something like cancer treatment. Its goal is to ensure that the models being used don't violate important non-discrimination protections in AI regulation. That's a serious risk: if your care or diagnostic model turns out to be biased against protected classes (for example, assigning higher risk to Muslims or gay people), it could sink your product and expose you to lawsuits.
Do you trust the maker or vendor of the model to catch that? Or do you want a disinterested expert who knows the ins and outs of policy and how to properly evaluate a model?
Image credit: Equality AI
“We all have the right to expect that the AI working behind the scenes in healthcare is safe and effective,” CEO and founder Maia Hightower told TechCrunch. “Healthcare industry leaders are struggling to keep up with a complex regulatory environment and rapidly changing AI technologies. In the coming years, AI compliance and litigation risks will continue to rise, and responsible AI practices in healthcare will be put to the test. Our solution is very timely given the risks of non-compliance, with penalties as severe as decertification.”
It's a similar story for Cerevox, which is committed to rooting out hallucinations and other errors in today's LLMs — though not in the general sense. The company works with businesses to build data pipelines and structures that let them minimize and observe these bad habits in their AI models. This isn't about keeping ChatGPT from inventing a physicist when you ask about a nonexistent discovery from the 1800s; it's about keeping a risk-assessment engine from extrapolating from data in columns that should be there but aren't.
They're starting with fintech and insurtech companies, which Belani acknowledged is “a less glamorous use case, but a path to building a product.” It's a path to paying customers, a way to build a business.
Quickr Bio operates in the new world of biotech built on CRISPR-Cas9 gene editing, which brings new risks along with new opportunities. How do you know the edits you're making are correct? Being 99% sure isn't enough (again: regulation and liability), but the testing needed to raise that confidence takes time and money. Quickr claims that its method of quantifying and characterizing the actual edits made (as opposed to the theoretical, ideally identical ones) is up to 100 times faster than existing methods.
In other words, they aren't creating a new paradigm, just aiming to be the best tool for reinforcing the existing one. If it can demonstrate even a fraction of its claimed efficacy, it could become a staple in many labs.
You can check out the rest of the cohort here; the companies above are representative of the overall vibe. Demos begin at 10:30 a.m. Pacific time.
As for the program itself, Alchemist has gotten a lot of buy-in for its new programs in Tokyo and Doha.
“We think this is a turning point in Japan. Japan is going to be an exciting place to source stories and for companies to come in,” he said. Recent tax changes should free up early-stage capital for startups, and investment flowing out of China is landing in Japan, particularly Tokyo, where he expects a new (or rather, revitalized) tech hub to emerge. The fact that OpenAI is setting up a satellite office there is really all you need to know, he suggested.
Mitsubishi is investing through one arm or another, and the Japan External Trade Organization is also involved. I'll certainly be interested to see what Japan's reawakened startup economy produces.
In an interesting twist, Alchemist Doha has secured a sizable $13 million commitment from the government there.
“Our mission there is to focus on emerging-market founders — 90 percent of the world is overlooked, and a lot of innovation is happening there,” Belani said. “We've found that some of the best companies in the U.S. come from outside the U.S. You need an outside perspective to create great companies, and with so much uncertainty out there, this talent needs a home.”
He noted that the Doha program plans to make larger investments, from $200,000 up to $1 million, which could change the type of company that applies.