About two months ago, Brandon Tseng, co-founder of Shield AI, and one of his employees were in an Uber in Kyiv, Ukraine. They were on their way to a meeting with military officials to sell AI pilot systems and drones when a warning suddenly flashed on the employee's cell phone: Russian bombs were incoming. Tseng shrugged off the possibility of death. “When it's time to go, it's your time to go,” he said.
If anything, Tseng, a former Navy SEAL, wanted more action. Shield AI employees had previously traveled to more dangerous regions of Ukraine to train troops on the company's software and drones. “I'm so jealous of where they ended up going,” Tseng said. “Just from an adventure standpoint.”
Tseng embodies the quiet machismo that pervades many defense technology founders. When I met him last month at the company's Arlington office, he showed off a knife on display engraved with the SEAL slogan “Suffer in Silence.” The tops of the white walls were lined with fluorescent lights (to give the office the look of a spaceship, Tseng said) and covered with slogans such as “Live by the Code of Honor” and “Earn a Shield Every Day.” I pointed out that they were pretty intense. “Really?” Tseng replied.
Tseng founded Shield AI in 2015 with his brother Ryan Tseng, an electrical engineer with patents to his name, around a clear mission: to build “the world's best AI pilot” and “put 1 million AI pilots in the hands of our customers.”
To that end, the brothers have raised more than $1 billion from investors including Riot Ventures and the US Innovative Technology Fund. The company develops AI software to make aircraft autonomous, but Tseng said he wants to bring Shield AI's software to underwater and ground systems as well. It also builds hardware, such as its V-BAT drone.
Shield AI is also part of the rare breed of defense technology startups that actually win decent-sized government contracts, like this year's $198 million contract from the Coast Guard. Poised for a bigger future, the founders chose a new office surrounded by three floors occupied by Raytheon, one of the major defense contractors.
Ukraine: a laboratory for US defense technology startups
September 16th was a sign of changing times. Instead of defense technology founders flying to Capitol Hill, putting on suits, and groveling to politicians, Washington, D.C., came to them.
Members of the U.S. House Armed Services Committee gathered at the University of California, Santa Cruz's Silicon Valley campus, along with Palantir CTO Shyam Sankar, Brandon Tseng, and executives from Skydio, Applied Intuition, and Saildrone. They discussed acquisition reform at the US Department of Defense (DoD) and, inevitably, the role of US technology in Ukraine. It was the first hearing the committee had held outside of Washington, D.C. since 2006.
Tseng told policymakers that Ukraine “was a great laboratory.” “I think the Ukrainians have figured out that they won't use things that aren't useful on the battlefield.”
Defense technology founders, including Anduril co-founder Palmer Luckey and Skydio co-founder Adam Bry, have flocked to the country to sell relatively new technology for a rapidly deteriorating battlefield. Unfortunately, not all of America's technology is working: The Wall Street Journal has reported that American startups' drones have almost universally failed in Ukraine's electronic warfare environment, ceasing to function under Russian GPS-jamming technology.
“Ukraine is at war and people are being killed. But…I want to use the lessons learned,” Tseng told me a week later, reflecting on the hearing. “There is no need to relearn those lessons. The United States should not want to relearn those lessons.”
Not surprisingly, he believes that Shield AI's drones are working better than other drones in Ukraine because they can operate without relying on GPS. He declined to say specifically how many drones Shield AI has sent, but added, “We're working to build on our success to date and get more drones over there.”
A Terminator-like AI killer? Or “Ender’s Game”?
Tseng's corner office is empty except for a framed copy of the Constitution hanging crooked on the wall. He cited it as one of his biggest inspirations. “It's not because we're perfect, it's because we aspire to values that I would argue are perfect values,” he said. “That's the most important thing. We're always moving in that direction.”
He straightened the frame before running through a brief history of warfare. Deterrence, he said, tends to emerge alongside radically new technologies such as the atomic bomb, stealth technology, and GPS. AI will usher in a new era of deterrence, he argued, provided the Department of Defense funds it appropriately. “Private companies are spending more money on AI and autonomy than the entire defense budget,” he said.
The potential value of AI-related federal contracts has grown from $335 million in 2022 to $4.6 billion in 2023, according to a report from the Brookings Institution. But that's only a fraction of the more than $70 billion that venture capital has invested in defense technology during roughly the same period, according to PitchBook.
Still, the biggest issue with military AI use is not budget but ethics. Founders and policymakers alike are grappling with whether to allow fully autonomous weapons, in which the AI itself decides when to kill. Lately, some founders' rhetoric appears to have tilted toward developing such weapons.
For example, a few days ago Anduril's Luckey claimed that “many adversaries are currently running a shadow campaign at the United Nations” to trick Western countries into not actively pursuing AI. He implied that fully autonomous AI is no worse than a landmine. But he did not mention that the United States is among more than 160 countries that have agreed to ban the use of antipersonnel mines in most locations.
Tseng is adamantly opposed to fully autonomous weapons. “You have to make a moral decision about using deadly force on the battlefield,” he said. “It's a human decision, and it will always be a human decision. That's the position of Shield AI. That's the position of the U.S. military.”
While it is true that the U.S. military is not currently purchasing fully autonomous weapons, companies are not prohibited from developing them. What if the US changed its position? “I think that's a strange hypothetical,” he replied. “Congress doesn't want that. Nobody wants that.”
So if he didn't foresee an army of Terminator-like killers, what did he imagine? “There are no technological limits to how effectively a person can command on the battlefield,” Tseng said.
He said it would look more like “Ender's Game,” the 1985 science fiction classic in which a boy military prodigy can unleash entire space fleets with a wave of his hand.
“Except he'd be commanding robots, not actual people,” Tseng said.