OpenAI CEO Sam Altman said in a post on his personal blog that he believes OpenAI "know[s] how to build [artificial general intelligence]" as traditionally understood, and is beginning to set its sights on "superintelligence."
"We love our current products, but we are here for the glorious future," Altman wrote in the post, published late Sunday night. "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."
Altman has previously said superintelligence could be "a few thousand days" away and that its arrival will be "more intense than people think."
AGI, or artificial general intelligence, is a nebulous term, but OpenAI has its own definition: "highly autonomous systems that outperform humans at most economically valuable work." OpenAI and Microsoft, the startup's close collaborator and investor, also define AGI as an AI system that can generate at least $100 billion in profits. (Once OpenAI achieves this, an agreement between the two companies cuts off Microsoft's access to the technology.)
So which definition is Altman referring to? He doesn't say for certain, but the former seems likeliest. In the post, Altman wrote that he thinks AI agents (AI systems that can perform certain tasks autonomously) may "join the workforce," in a manner of speaking, and "materially change the output of companies" this year.
"We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes," Altman wrote.
That's possible. But it's also true that today's AI technology has significant technical limitations. It hallucinates. It makes mistakes that are obvious to any human. And it can be enormously expensive.
Altman seems confident that all of this can be quickly overcome. But if there's anything we've learned about AI over the past few years, it's that timelines can change.
"We are pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important," Altman continued. "Given the possibilities of our work, OpenAI cannot be a normal company. How lucky and humbling it is to be able to play a role in this work."
As OpenAI signals a shift in focus to what it considers superintelligence, one would hope the company devotes sufficient resources to ensuring that superintelligent systems behave safely.
OpenAI has written several times about how the transition to a world with superintelligence is "far from guaranteed," and that it doesn't have all the answers. "[W]e don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company said in a blog post dated July 2023. "[H]umans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence."
Since that post was published, OpenAI has disbanded teams focused on AI safety, including the safety of superintelligent systems, and has seen several influential safety-focused researchers depart. Some of these staffers cited OpenAI's increasingly commercial ambitions as the reason for leaving; the company is currently undergoing a corporate restructuring to make itself more attractive to outside investors.
Asked in a recent interview about critics who say OpenAI doesn't focus enough on safety, Altman said, “I'd point to our track record.”