In late September, Shield AI co-founder Brandon Tseng vowed that weapons in the U.S. would never be fully autonomous, with an AI algorithm making the final decision to kill someone. “Congress doesn't want that,” the defense technology founder told TechCrunch. “Nobody wants that.”
But Tseng spoke too soon. Five days later, Anduril co-founder Palmer Luckey expressed an openness to autonomous weapons, or at least a deep skepticism of the arguments against them. America's adversaries “use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?” Luckey said during a talk earlier this month at Pepperdine University. “And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?”
When asked for further comment, Anduril spokeswoman Shannon Pryor said Luckey was not saying that robots should be programmed to kill people on their own, just that he was concerned about “bad people using bad AI.”
Silicon Valley has erred on the side of caution in the past. Take it from Luckey's co-founder, Trae Stephens: “I think the technologies that we're building are making it possible for humans to make the right decisions about these things,” he told Kara Swisher last year. “Obviously, there should be an accountable, responsible party in the loop for any decision that could involve lethality.”
An Anduril spokesperson denied any daylight between Luckey's and Stephens' views, saying Stephens did not mean that a human should always make the call, only that someone must be accountable.
To be fair, the U.S. government's own position is similarly ambiguous. The U.S. military does not currently purchase fully autonomous weapons, but it does not prohibit companies from manufacturing them, nor does it explicitly bar them from selling such weapons abroad. Last year, the U.S. released updated guidelines for AI safety in the military, endorsed by many U.S. allies, that require top military officials to approve any new autonomous weapon. Yet the guidelines are voluntary (Anduril says it is committed to following them), and U.S. officials have repeatedly said that “now is not the right time” to consider any binding ban on autonomous weapons.
Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. Speaking at an event hosted by the Hudson Institute, a think tank, Lonsdale expressed frustration that the question is framed as a yes-or-no at all. He instead presented a hypothetical in which China has embraced AI weapons while the United States “has to press a button every time it fires.” He encouraged policymakers to adopt a more flexible approach to how much AI is incorporated into weapons.
“If I just put in a stupid top-down rule, I'd quickly find out my assumptions were wrong, because I'm a staffer who's never played this game before,” he said. “It could destroy us in battle.”
When TechCrunch asked Lonsdale for further comment, he stressed that defense technology companies should not be the ones setting the agenda on lethal AI. “The important context for what I said is that our companies don't make policy, and we don't want to make policy: setting policy is the job of elected officials,” he said. “But they have to educate themselves on the nuances to do that job well.”
He also reiterated a willingness to consider more autonomy in weapons. “It's not a binary, as you suggest; 'fully autonomous or not' is not the right policy question. There's a sophisticated dial along several different dimensions,” he said. “Before policymakers put these rules in place and decide where the dials need to be set, and in what circumstances, they need to learn the game, understand what the bad guys might be doing, and understand what it takes to win with American lives on the line.”
Activists and human rights groups have tried for years, without success, to establish an international ban on lethal autonomous weapons, a ban the United States has resisted signing onto. But the war in Ukraine may have turned the tide against activists, providing defense tech founders with both a trove of combat data and a testing ground. Today, companies are integrating AI into weapons systems, though a human still makes the final decision to kill.
Meanwhile, Ukrainian officials are pushing to automate their weapons in hopes of gaining an advantage over Russia. “We need maximum automation,” Mykhailo Fedorov, Ukraine's Minister of Digital Transformation, said in an interview with the New York Times. “These technologies are the foundation of our victory.”
For many in Silicon Valley and Washington, D.C., the biggest fear is that China or Russia will deploy fully autonomous weapons first, forcing the United States' hand. At a United Nations debate on AI weapons last year, one Russian diplomat was notably coy. “We understand that for many delegations the priority is human control,” he said. “For the Russian Federation, the priorities are somewhat different.”
Speaking at the Hudson Institute event, Lonsdale said it is up to the tech industry to “teach the Navy, the Pentagon, and Congress” about the potential of AI, “hopefully ahead of China.”
The companies affiliated with Lonsdale and Luckey are working to get their views heard by Congress. Anduril and Palantir have cumulatively spent more than $4 million on lobbying this year, according to OpenSecrets.