Y Combinator president and CEO Garry Tan told an audience at the Economic Club of Washington, D.C. this week that artificial intelligence “will likely require regulation.”
Tan spoke one-on-one with General Catalyst board member Teresa Carlson about everything from how to get into Y Combinator to AI, saying, “There's never been a better time to be working in technology.”
Tan said he “overall supports” the National Institute of Standards and Technology's (NIST) efforts to build a GenAI risk mitigation framework, adding that “most of the Biden administration's executive order is probably a step in the right direction.”
The NIST framework proposes, among other things, that GenAI comply with existing laws governing data privacy, copyright, and the like; that end users disclose their use of GenAI; and that regulations prohibit GenAI from creating child sexual abuse material. President Biden's executive order covers a wide range of mandates, from requiring AI companies to share safety data with the government to ensuring fair access for small developers.
But like many Silicon Valley venture capitalists, Tan is wary of other regulatory efforts, calling AI bills under consideration in California and San Francisco “very concerning.”
One bill causing a stir in California, Politico reports, was introduced by state Senator Scott Wiener and would allow the attorney general to sue AI companies if they offer harmful products.
“The broad policy debate right now is whether this is actually a good thing,” Tan said. “We can look to the thoughtfulness of people like Ian Hogarth in the UK, who are also mindful of this idea of concentrated power, but at the same time looking at how do we support innovation while mitigating the worst of the damage.”
Hogarth is a former YC founder and AI expert who was appointed to lead the UK's Foundation Model Taskforce.
“My fear is that by trying to address science fiction concerns, we'll end up with problems that don't actually exist,” Tan said.
On how YC manages accountability, Tan said that if the organization disagrees with a startup's mission or with the impact its product would have on society, “YC won't fund it.” More than once, he noted, he has read media coverage of a company and recognized it as one that had applied to YC.
“Looking back at the interview transcripts, I don't think this is a good thing for society. Thankfully, we didn't fund this,” he said.
AI leaders keep stumbling
Tan's guidelines still leave room for Y Combinator to graduate plenty of AI startups: As my colleague Kyle Wiggers reported, the Winter 2024 batch included 86 AI startups, nearly double the number in the Winter 2023 batch and nearly triple the number in Winter 2021, according to YC's official startup directory.
Recent news events have also left people questioning whether companies selling AI products can be trusted to define responsible AI: Last week, TechCrunch reported that OpenAI was dissolving its AI responsibility team.
Then came the uproar over the company's use of a voice resembling that of actress Scarlett Johansson in a demo of its new GPT-4o model. Johansson said she had been asked to lend her voice and had declined. OpenAI subsequently removed the voice, known as Sky, but denied it was based on Johansson's. That controversy, along with reports that OpenAI could claw back former employees' vested equity, was among the factors that led people to openly question Sam Altman's judgment.
Meanwhile, Meta made AI headlines of its own when it announced the formation of its AI Advisory Board, an all-white, all-male body that effectively excluded women and people of color, many of whom have played key roles in creating and innovating the AI industry.
Tan didn't cite any of these examples. Like most Silicon Valley venture capitalists, he sees an opportunity for big, profitable new businesses.
“We like to think of startups as a maze of ideas,” Tan said. “When a new technology like large language models comes along, the whole maze of ideas gets shaken up. ChatGPT itself was probably one of the fastest-growing consumer products released in recent memory, and that's good news for founders.”
The future of AI
Tan also noted that San Francisco is the epicenter of the AI movement: companies like Anthropic, co-founded by a YC alum, and OpenAI, which spun out of YC, both got their start there.
Tan also joked that he has no plans to follow in Altman's footsteps by starting an AI lab, quipping that “Altman had my job a few years ago.”
Other YC success stories include legal tech startup Casetext, which was sold to Thomson Reuters for $600 million in 2023. Tan said he believed Casetext was one of the first companies in the world to adopt generative AI, and one of the first to exit in the generative AI space.
On the future of AI, Tan said “we definitely need to be smart about this technology” when it comes to the risks of bioterrorism and cyberattacks, but added that a “more cautious approach” was needed.
He also predicts that there will not be a “winner take all” model, but rather “freedom of consumer choice and a great garden of founders who can create something that resonates with a billion people.”
At least, that's his hope. It's in his and YC's best interest to have a lot of startups succeed and return lots of cash to investors. So what Tan fears most isn't an evil AI gone wild, but a lack of AI to choose from.
“We may find ourselves in a totally monopolistic situation where there is a huge concentration in just a few models. Then we're talking about rent extraction, and that's a world I don't want to live in.”