Confused about Artificial General Intelligence (AGI)? That's what OpenAI is obsessed with building in a way that ultimately “benefits all of humanity.” The company just raised $6.6 billion to get closer to that goal, so it's worth taking it seriously.
But if you're still wondering what exactly AGI is, you're not alone.
In a wide-ranging discussion at Credo AI's Responsible AI Leadership Summit on Thursday, Fei-Fei Li, a world-renowned researcher often referred to as the “Godmother of AI,” said that even she doesn't know what AGI is. Elsewhere in the talk, Li discussed her role in the birth of modern AI, how society should protect itself from advanced AI models, and why she thinks her new unicorn startup, World Labs, will change everything.
But when we asked her what she thought about the “AI singularity,” Li was just as confused as the rest of us.
“I come from academic AI and was educated in more rigorous, evidence-based methods, so I don't really know what all these words mean,” Li told a packed room in San Francisco, next to a large window overlooking the Golden Gate Bridge. “Frankly, I don't even know what AGI means. People say you know it when you see it, but I guess I haven't seen it yet. The truth is, I don't spend much time thinking about these words, because I think there are so many more important things to do…”
If anyone should know what AGI is, it's probably Fei-Fei Li. In 2006, she created ImageNet, one of the world's first large-scale AI training and benchmarking datasets, which was critical to fueling the current AI boom. From 2017 to 2018, she was the lead AI/ML scientist at Google Cloud. Today, Li heads the Stanford Human-Centered AI Institute (HAI), and her startup, World Labs, is building “large world models.” (If you ask me, that term is almost as confusing as AGI.)
OpenAI CEO Sam Altman took his own crack at defining AGI in a New Yorker profile last year, describing it as the “equivalent of a median human that you could hire as a co-worker.”
Evidently, that definition wasn't quite sufficient for a company now valued at $157 billion. So OpenAI created five levels it uses internally to gauge its progress toward AGI: first chatbots (like ChatGPT), then reasoners (apparently, OpenAI o1 was this level), agents (supposedly coming next), innovators (AI that can help invent things), and finally the organizational level (AI that can do the work of an entire organization).
Still confused? So am I, and so is Li. Besides, this all sounds like far more than a median human co-worker could do.
At the beginning of her talk, Li said she has been fascinated by the concept of intelligence since she was a child. That led her to research AI long before it was profitable. In the early 2000s, Li said, she and a few others were quietly laying the groundwork for the field.
“In 2012, my ImageNet combined with AlexNet and GPUs, a moment many people call the birth of modern AI,” she said. “And once that moment arrived, I don't think life was ever the same, not just for the field of AI, but for the world as a whole.”
When asked about California's controversial AI bill, SB 1047, Li was careful not to reignite a controversy that Governor Newsom had only just quelled by vetoing the bill last week. (We recently spoke with the author of SB 1047, and he was more willing to reopen the debate with Li.)
“Some of you may know that I have been vocal about my concerns with this bill [SB 1047], which was vetoed. But now I'm thinking ahead with great excitement,” Li said. “I was very flattered, and honored, that Governor Newsom invited me to participate in the next steps post-SB 1047.”
California's governor recently appointed Li, along with other AI experts, to a task force that will help the state develop guardrails for deploying AI. Li said she will take an evidence-based approach in that role and will champion academic research and funding. But she also wants to make sure California doesn't punish engineers.
“We should seriously consider the potential impact on humans and our communities, rather than putting the burden on the technology itself… If a car is misused, purposefully or unintentionally, and harms a person, it would make no sense to punish the automotive engineers (say, at Ford or GM). Simply punishing the automotive engineers will not make cars safer. What we need to do is keep innovating on better safety measures, but also on better regulatory frameworks, such as seatbelts and speed limits, and the same applies to AI.”
That's one of the better arguments I've heard against SB 1047, which would have penalized tech companies for dangerous AI models.
While Li advises California on AI regulation, she is also running her startup, World Labs, in San Francisco. It's her first time founding a startup, and she is one of the few women leading a cutting-edge AI lab.
“We are far from a very diverse AI ecosystem,” Li said. “I believe that diverse human intelligence will lead to diverse artificial intelligence, which will lead to better technology.”
In the coming years, she is excited to bring “spatial intelligence” closer to reality. Li said that language, the basis of today's large language models, probably took a million years to develop, whereas vision and perception likely took 540 million years. That makes creating large world models a far more complicated task.
“It's not just making computers see, it's really making computers understand the whole 3D world, which I call spatial intelligence,” Li said. “We don't just see to name things… We really see to do things, to navigate the world, to interact with each other, and closing that gap between seeing and doing requires spatial knowledge. As an engineer, I'm really excited about that.”