Anthropic CEO Dario Amodei wants you to know that he's not an AI "doomer."
At least, that's my read of the "mic drop" of an essay, roughly 15,000 words long, that Amodei published on his blog late Friday. (I asked Anthropic's Claude chatbot whether it agreed, but unfortunately the post exceeded the length limit of the free plan.)
Broadly speaking, Amodei paints a picture of a world in which all AI risks are mitigated and the technology delivers prosperity, social uplift, and abundance never before possible. He insists this isn't about minimizing AI's downsides; at the outset, he takes aim (without naming names) at AI companies that oversell and generally hype their technology's capabilities. But one might argue, and I do, that the essay errs too far in the direction of techno-utopianism, making claims that simply aren't supported by the facts.
Amodei believes "powerful AI" could arrive as early as 2026. By this he means AI that is "smarter than a Nobel Prize winner" in fields such as biology and engineering, and that can accomplish tasks such as proving unsolved mathematical theorems and writing novels. According to Amodei, this AI will be able to control any software or hardware imaginable, including industrial machinery, and will essentially do most of the jobs humans do today, only better.
"[This AI can take actions] including taking actions on the internet, giving directions to or taking directions from humans, ordering materials, directing experiments, watching videos, and making videos," he writes. "It has no physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in principle, it could even design robots or equipment for itself to use."
A lot needs to happen to get to that point.
Even today's best AI can't "think" in the way we understand it. Rather than reasoning, models reproduce the patterns they've observed in their training data.
Assuming, for the sake of Amodei's argument, that the AI industry does soon "solve" human-like thinking, would robotics catch up so that future AIs could run lab experiments and manufacture their own tools? The brittleness of today's robots suggests it's a long shot.
But Amodei is optimistic, very optimistic.
He believes that within the next seven to 12 years, AI could help treat nearly all infectious diseases, eliminate most cancers, cure genetic disorders, and halt Alzheimer's disease at its earliest stages. Within five to 10 years, Amodei thinks conditions such as PTSD, depression, schizophrenia, and addiction will be cured with AI-concocted drugs or genetically prevented through embryo screening (a controversial opinion). And AI-developed drugs, he says, will also "tune cognitive function and emotional state" to get "[our brains] to behave a bit better and live a more fulfilling life."
If this becomes a reality, Amodei predicts that the average human lifespan will double to 150 years.
"My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50 to 100 years into five to 10 years," he writes. "I call this the 'compressed 21st century': the idea that within a few years after powerful AI is developed, we will make all the progress in biology and medicine that we would have made in the whole 21st century."
These predictions seem far-fetched considering that AI has yet to radically transform healthcare, and may not for quite some time, or ever. Even if AI reduces the labor and cost involved in getting a drug into pre-clinical testing, that drug may still fail at a later stage, just like drugs designed by humans. Consider, too, that the AI being deployed in healthcare today has proven biased, risky, and incredibly difficult to implement in existing clinical and lab settings. Suggesting that all these problems and more will be solved within roughly a decade seems, in a word, aspirational.
But Amodei doesn't stop there.
He argues that AI could solve world hunger. It could reverse the course of climate change. And it could transform the economies of most developing countries; Amodei believes AI could bring sub-Saharan Africa's per-capita GDP ($1,701 in 2022) to within reach of China's ($12,720 in 2022) within five to 10 years.
These are bold declarations, to say the least, though they'll sound familiar to anyone who has heard from adherents of the "Singularity" movement, which anticipates similar outcomes. To Amodei's credit, he acknowledges that all this would require a massive effort in "global health, philanthropy, [and] political advocacy," which he argues will happen because it's in the world's biggest economic interest.
I'd point out, however, that in one important respect this has historically not been the case. Many of the workers responsible for labeling the datasets used to train AI are paid well below minimum wage, while their employers reap tens or even hundreds of millions in capital from the results.
Amodei briefly acknowledges the dangers AI poses to civil society, proposing that a coalition of democracies secure the AI supply chain and block adversaries who intend to use AI for harmful ends from the means of powerful AI production (semiconductors, for example). In the same breath, he suggests that AI, in the right hands, could be used to "undermine repressive governments" and even reduce bias in the legal system. (AI has historically exacerbated bias in the legal system.)
"A truly mature and successful implementation of AI has the potential to reduce bias and make things fairer for everyone," Amodei writes.
So if AI takes over every conceivable job and does it better and faster, won't that leave humans in the lurch economically? Amodei concedes that it will, and that at that point society will need to have a conversation about "how we should organize our economy."
But he doesn't offer any solutions.
"People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies," he writes. "The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don't seem to me to matter very much."
In wrapping up, Amodei advances the notion that AI is merely a technological accelerant and that humans naturally trend toward "the rule of law, democracy, and Enlightenment values." But in doing so, he ignores many of AI's costs. AI is forecast to have, and is already having, an enormous impact on the environment. And it is creating inequality. Nobel Prize-winning economist Joseph Stiglitz and others have noted that labor disruption caused by AI could further concentrate wealth in the hands of companies and leave workers more powerless than ever.
These companies include Anthropic, loath as Amodei is to admit it. (He mentions Anthropic just six times throughout the essay.) Anthropic is a business, after all, one reportedly worth close to $40 billion. And the beneficiaries of its AI technology are, by and large, corporations whose only responsibility is to boost returns for shareholders, not to better humanity.
In fact, the essay seems ironically timed given that Anthropic is reportedly in the process of raising billions of dollars in venture funding. OpenAI CEO Sam Altman similarly published a techno-optimist manifesto just before OpenAI closed a $6.5 billion funding round. Perhaps it's a coincidence!
Again, Amodei is not a philanthropist. Like any CEO, he has a product to sell. It just so happens that his product will "save the world," and anyone who thinks otherwise risks being left behind. Or so he would have us believe.