Cyril Gorlla, who grew up in an immigrant household, taught himself how to code and practiced relentlessly.
“When I was 11, I completed a programming course at my mom's community college at a time when our utilities were regularly being cut off,” he told TechCrunch.
In high school, Gorlla learned about AI and became fascinated with the idea of training models of his own, going so far as to take apart his laptop to upgrade its internal cooling. That work led to an internship at Intel during his second year of college, where he researched AI model optimization and interpretability.
Gorlla's college years coincided with the AI boom, as companies like OpenAI raised billions of dollars for their AI technology. Gorlla believed AI had the potential to transform entire industries, but he also believed that safety work was taking a backseat to shiny new products.
“We felt we needed to fundamentally change the way we understand and train AI,” he said. “Lack of certainty and trust in model outputs is a major barrier to adoption in industries like healthcare and finance, where AI can make the biggest difference.”
So Gorlla and Trevor Tuttle, whom he met as an undergraduate, dropped out of their graduate program to found CTGT, a company that helps organizations adopt AI more thoughtfully. CTGT exhibited today at TechCrunch Disrupt 2024 as part of the Startup Battlefield competition.
“My parents believe I'm in school,” he said. “They might be shocked to read this.”
CTGT works with companies to identify biased outputs and hallucinations from their models, and seeks to address the root causes of both.
Completely eliminating errors from a model is impossible, but Gorlla argues that CTGT's auditing approach lets companies mitigate them.
“We reveal the model's internal understanding of concepts,” he explained. “A model that tells users to put glue in a recipe may be humorous, but a response that recommends a competitor when a customer asks for a product comparison isn't so harmless. Serving users inaccurate information, or making credit decisions based on hallucinated information, is unacceptable.”
A recent Cnvrg.io poll found that reliability is the top concern shared by companies deploying AI apps. A separate survey by risk management software provider Riskonnect found that more than half of executives are worried about employees making decisions based on inaccurate information from AI tools.
The idea of a dedicated platform for evaluating the decisions of AI models is not new. TruEra and Patronus AI are among the startups, along with Google and Microsoft, that are developing tools to interpret model behavior.
However, Gorlla claims that CTGT's technology performs better, in part because it doesn't rely on training “judge” AI models to monitor models in production.
“Our mathematically guaranteed interpretability differs from current state-of-the-art methods, which are inefficient and involve training hundreds of other models to gain insight into a single model,” he said. “As businesses grow more conscious of compute costs, and as enterprise AI moves from demos to delivering real value, our value proposition is giving companies the ability to rigorously test the safety of advanced AI without having to train additional models or rely on other models as judges.”
To ease potential customers' concerns about data breaches, CTGT offers an on-premises option in addition to its managed plans. Both have the same annual fee.
“Because we have no access to our customers' data, they retain full control over how and where it's used,” Gorlla said.
A Character Labs accelerator alumnus, CTGT is backed by former GV partners Jake Knapp and John Zeratsky (co-founders of Character VC), Mark Cuban, and Zapier co-founder Mike Knoop.
“AI that cannot explain its reasoning is not intelligent enough in many areas where complex rules and requirements apply,” Cuban said in a statement. “I invested in CTGT because it solves this problem. More importantly, we are seeing results in our own use of AI.”
And despite being in its early stages, CTGT has multiple customers, including three unnamed Fortune 10 brands. Gorlla said CTGT worked with one of these companies to minimize bias in facial recognition algorithms.
“We identified a bias in the model, which was relying too heavily on hair and clothing to make its predictions,” he said. “Our platform gave practitioners immediate insight, without the guesswork and wasted time of traditional interpretability methods.”
Over the next few months, CTGT will focus on strengthening its engineering team (currently just Gorlla and Tuttle) and improving its platform.
If CTGT can gain a foothold in the burgeoning market for AI interpretability, it could certainly prove lucrative. Analytics firm Markets and Markets predicts that the explainable AI segment will be worth $16.2 billion by 2028.
“Model size has far outpaced Moore's Law and advances in AI training chips,” Gorlla said. “This means we need to focus on a fundamental understanding of AI to address both these inefficiencies and the increasingly complex nature of model decisions.”