As part of TechCrunch's ongoing “Women in AI” series, which aims to shine a well-deserved and long-overdue spotlight on female academics and others focused on AI, TechCrunch interviewed Lakshmi Raman, Director of AI at the CIA. We spoke about her journey to the position, the CIA's use of AI, and the balance that needs to be struck in embracing new technologies while deploying them responsibly.
Raman is a longtime intelligence professional, having earned a bachelor's degree from the University of Illinois at Urbana-Champaign and a master's in computer science from the University of Chicago before joining the CIA as a software developer in 2002. A few years later, she moved into management at the agency, eventually coming to lead its enterprise data science efforts.
Raman said she was fortunate to have female role models and mentors as resources at the CIA, given that the intelligence community has historically been a male-dominated hierarchy.
“I still have people I can turn to, people I can go to for advice, people I can talk to about what the next level of leadership looks like,” she said. “I think there are things every woman has to navigate over the course of her career.”
As the CIA's director of AI, Raman coordinates, integrates and drives AI activities across the agency. “We believe that AI is here to support our mission,” she said. “Humans and machines together are at the forefront of our use of AI.”
AI is not new to the CIA. The agency has been exploring applications of data science and AI since around 2000, particularly in the areas of natural language processing (analyzing text), computer vision (analyzing images) and video analytics, according to Raman. The agency strives to stay on top of emerging trends, such as generative AI, with a roadmap informed by both industry and academia, she added.
“When you think about the sheer volume of data that we have to consume within the agency, content triage is an area where generative AI can make a difference,” Raman said. “We're looking at things like helping with search and discovery, helping with ideation, and helping us generate counterarguments to push back against any analytic bias we might have.”
There is a palpable sense of urgency within the U.S. intelligence community to deploy any tools that might help the CIA counter rising geopolitical tensions around the world, from terror threats motivated by the war in Gaza to disinformation campaigns mounted by foreign actors such as China and Russia. Last year, the Special Competitive Studies Project, a powerful advisory group focused on AI in national security, set a two-year timeline for U.S. intelligence services to move beyond experiments and limited pilot projects and deploy generative AI at scale.
One generative AI-powered tool developed by the CIA, Osiris, is a bit like OpenAI’s ChatGPT, but tailored for intelligence use: It summarizes data (currently only unclassified and publicly available data) and allows analysts to dig deeper by asking follow-up questions in plain English.
Osiris is currently used by thousands of analysts, not only within the CIA's walls but also across the 18 agencies of the U.S. intelligence community. Raman declined to say whether it was developed in-house or built on technology from third-party companies, but said the CIA has partnerships with well-known vendors.
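To make the workflow the article describes more concrete — summarize a body of unclassified text, then let an analyst dig deeper with follow-up questions in plain English — here is a minimal, purely illustrative sketch. It has no connection to Osiris or any CIA system; it simply shows the general shape of such a tool, using OpenAI's public chat API as a stand-in model (the model name and prompts are assumptions for demonstration).

```python
# Illustrative only: a generic "summarize, then answer follow-ups" loop.
# NOT Osiris; OpenAI's public API is used here purely as a stand-in.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def _complete(messages: list[dict]) -> str:
    """Send the running conversation to the model and return its reply."""
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


def start_session(document: str) -> list[dict]:
    """Seed a conversation with a document and produce an initial summary."""
    messages = [
        {"role": "system", "content": "You summarize documents and answer follow-up questions about them."},
        {"role": "user", "content": f"Summarize this document:\n\n{document}"},
    ]
    messages.append({"role": "assistant", "content": _complete(messages)})
    return messages


def ask(messages: list[dict], question: str) -> str:
    """Ask a plain-English follow-up; conversation state enables digging deeper."""
    messages.append({"role": "user", "content": question})
    answer = _complete(messages)
    messages.append({"role": "assistant", "content": answer})
    return answer


# Usage: session = start_session(open("report.txt").read())
#        print(ask(session, "Which organizations are mentioned, and why?"))
```

The key design point the sketch illustrates is that follow-up questions only work because the full conversation history, including the source document, is carried forward with each request.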
“We are leveraging commercial services,” Raman said, adding that the CIA is also employing AI tools for tasks like translation and alerting analysts to potentially important developments during off-hours. “We need to work closely with private companies to provide not only the larger services and solutions that you've heard of, but also more niche services from non-traditional vendors that you might not have thought of yet.”
A fraught technology
There are many reasons to be skeptical and concerned about the CIA's use of AI.
In February 2022, Senators Ron Wyden (D-Oregon) and Martin Heinrich (D-New Mexico) revealed in an open letter that the CIA maintains secret, private data repositories containing information collected on U.S. citizens, despite the agency being generally barred from investigating Americans and U.S. companies. And a report from the Office of the Director of National Intelligence last year showed that U.S. intelligence agencies, including the CIA, purchase data on Americans from data brokers such as LexisNexis and Sayari Analytics, with little oversight.
Many Americans would undoubtedly be opposed if the CIA were to use AI to scrutinize this data: it would be a clear infringement of civil liberties and, due to the limitations of AI, could lead to wildly unfair outcomes.
Some studies have found that crime-prediction algorithms from companies like Geolitica are easily skewed by arrest rates and tend to disproportionately flag Black communities, while other studies suggest facial recognition has a higher rate of misidentification for people of color than for white people.
In addition to bias, even today's best AI can hallucinate or make up facts and figures when asked — Microsoft's meeting summarization software, for example, sometimes quotes people who don't exist — and it's easy to see how this could be problematic in intelligence work, where accuracy and verifiability are paramount.
Raman insisted that the CIA not only complies with all U.S. laws, but also “follows all ethical guidelines” and uses AI “in a way that reduces bias.”
“I'd call it a thoughtful approach [to AI],” she said. “The approach we're taking is one where we want our users to understand as much as possible about the AI systems they're using. Building responsible AI requires the engagement of all stakeholders — that means AI developers, that means our privacy and civil liberties office [and so on].”
Regardless of what an AI system is designed to do, it's important that its designers identify the areas where it could fall short, Raman noted. In a recent study, researchers at North Carolina State University found that AI tools, including facial recognition and gunshot detection algorithms, were being used by police officers who didn't understand how the technologies worked or their shortcomings.
In one particularly egregious example of law enforcement misuse of AI, perhaps born of ignorance, the NYPD reportedly used celebrity photos, distorted images and sketches to generate facial recognition matches for suspects in cases where surveillance stills had yielded no results.
“AI-generated output needs to be clearly understandable to users, which of course means labeling AI-generated content and providing clear explanations of how our AI systems work,” Raman said. “Everything the agency builds complies with legal requirements, and we make sure our users, our partners and our stakeholders are aware of all the relevant laws, regulations and guidelines governing the use of our AI systems.”
This reporter certainly hopes that's true.