Anthropic is launching a program to fund the development of new types of benchmarks that can evaluate the performance and impact of AI models, including generative models like its own Claude.
Anthropic's program, announced Monday, will award grants to third-party organizations that can “effectively measure the advanced capabilities of AI models,” as the company put it in a blog post. Interested parties can submit applications at any time.
“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote. “Developing high-quality, safety-relevant evaluations remains challenging, and demand continues to outstrip supply.”
As we've noted before, AI has a benchmarking problem: the most commonly cited benchmarks today do a poor job of capturing how the average person actually uses the systems being tested, and some benchmarks, particularly those released before the current wave of generative AI, may no longer measure what they purport to measure.
Anthropic's proposed solution, which is very high-level and perhaps harder than it sounds, is to create challenging benchmarks focused on AI security and societal impact, built on new tools, infrastructure, and methods.
The company specifically seeks tests to evaluate a model's ability to accomplish tasks such as conducting cyberattacks, “powering” weapons of mass destruction (such as nuclear weapons), and manipulating or deceiving the public (e.g. through deepfakes and disinformation). Regarding AI risks related to national security and defense, Anthropic said it is working on developing an “early warning system” to identify and assess risks, though the blog post did not specify what such a system would entail.
Anthropic also said the new program is intended to support research into benchmarks and “end-to-end” tasks that probe AI's potential for aiding scientific research, conversing in multiple languages, and mitigating ingrained biases, as well as self-censoring toxicity.
To make all this happen, Anthropic envisions a new platform where subject-matter experts can develop their own evaluations and run large-scale trials of models involving “thousands” of users. The company says it has hired a full-time coordinator for the program and that it may buy or expand projects it believes have the potential to scale.
“We offer a range of funding options tailored to the needs and stage of each project,” Anthropic said in the post, though an Anthropic spokesperson declined to provide further details about those options. “Teams will have the opportunity to engage directly with Anthropic's domain experts from the Frontier Red Team, Fine-Tuning, Trust & Safety, and other relevant teams.”
Anthropic's efforts to support new AI benchmarks are commendable — assuming, of course, that it has enough funding and talent committed to them — but it may be hard to fully trust the company given its commercial ambitions in the AI race.
In the blog post, Anthropic is fairly forthright about the fact that it wants certain evaluations it funds to align with an AI safety taxonomy it developed (with input from third parties such as the nonprofit AI research organization METR). That's within the company's purview, but it could force program applicants to accept definitions of “safe” or “unsafe” AI that they don't entirely agree with.
Some in the AI community are also likely to take issue with Anthropic's references to “catastrophic” and “deceptive” AI risks, such as nuclear weapons risks. Many experts say there is little evidence to suggest that AI as we know it will gain the ability to destroy the world or outsmart humans anytime soon. Claims of impending “superintelligence” only serve to distract from the pressing AI regulatory issues of today, such as AI's tendency to hallucinate, these experts add.
In its post, Anthropic writes that it hopes its program will be “a catalyst for progress toward a future where comprehensive AI evaluation is an industry standard,” a mission that resonates with many open, non-corporate efforts to create better AI benchmarks. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose ultimate loyalty lies with shareholders.