Meta today announced that it is making its Llama series of AI models available to U.S. government agencies and national security contractors, a move meant to counter perceptions that its “open” AI is aiding foreign adversaries.
“We are pleased to confirm that we are making Llama available to U.S. government agencies, including those working on defense and national security applications, as well as private sector partners supporting that work,” Meta said in a blog post. “We are partnering with companies like Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies.”
For example, Meta says Oracle uses Llama to process aircraft maintenance documents. Scale AI is fine-tuning Llama to support specific missions for national security teams. And Lockheed Martin is offering Llama to defense customers for use cases such as computer code generation.
Meta’s policy generally prohibits developers from using Llama for projects related to military, warfare, or espionage. But the company told Bloomberg it has made an exception in this case, and has made similar exceptions for government agencies (and contractors) in the U.K., Canada, Australia, and New Zealand.
Last week, Reuters reported that Chinese researchers affiliated with the People’s Liberation Army (PLA), the military wing of China’s ruling party, used an older Llama model, Llama 2, to develop tools for defense applications. The researchers, including two affiliated with a PLA R&D group, built a military-focused chatbot designed to gather and process intelligence to inform operational decision-making.
Meta told Reuters in a statement that the use of the “single and outdated” Llama model was “unauthorized” and contrary to its terms of service. Still, the report added considerable fuel to the ongoing debate over the benefits and risks of open AI.
The use of AI for defense, whether open or “closed,” is controversial.
A recent study by the nonprofit AI Now Institute found that AI currently deployed for military intelligence, surveillance, and reconnaissance poses a danger because it relies on personal data that adversaries can exfiltrate and weaponize. Such systems also have vulnerabilities, including bias and a tendency to hallucinate, for which there is currently no remedy, the co-authors write, recommending that military AI be developed independently of “commercial” models.
Employees at several Big Tech companies, including Google and Microsoft, have protested their employers’ contracts to build AI tools and infrastructure for the U.S. military.
Meta argues that open AI can accelerate defense research while advancing America’s “economic and security interests.” But the U.S. military has been slow to adopt the technology and remains skeptical of its ROI; so far, the Army is the only branch of the U.S. military to have deployed generative AI.