To shine a much-deserved and long-overdue spotlight on women researchers and others focused on AI, TechCrunch has been publishing a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we'll be publishing these stories throughout the year, highlighting important work that often goes unrecognized. Find the other profiles here.
Arati Prabhakar is the White House Director of Science and Technology Policy and science advisor to President Joe Biden. She previously served as the first woman to head the National Institute of Standards and Technology (NIST) and director of the Defense Advanced Research Projects Agency (DARPA).
Prabhakar earned her Bachelor of Science in Electrical Engineering from Texas Tech University and her Master of Science in Electrical Engineering from the California Institute of Technology, where, in 1984, she became the first woman to earn a PhD in Applied Physics from the institute.
In simple terms, what inspired you to start working in AI?
I took over as head of DARPA in 2012, just as machine learning-based AI was booming. We did great work with AI, and it was everywhere; that was my first clue that something big was happening. I took over the role at the White House in October 2022, and a month later ChatGPT came on the scene and captured everyone's imagination with generative AI. That became the moment that President Biden and Vice President Kamala Harris seized to get AI on the right track, and that's the work we've been doing for the past year.
What attracted you to this field?
I love big, powerful technology. Technology always has its bright and dark sides, and that's definitely true here. The most interesting work I do as a technologist is creating, managing, and driving these technologies, because ultimately, if we do it right, that's where progress comes from.
What advice do you have for women looking to enter the AI field?
This is the same advice I would give to anyone who wants to get involved in AI: there are many ways to contribute, from immersing yourself in and building the technology, to using it in various applications, to working to ensure that the risks and harms of AI are managed. Whatever you do, understand that this is a technology with both bright and dark sides. Above all, the time is now, so do something big and useful.
What are the most pressing issues facing AI as it evolves?
What I'm really interested in is, what are the most pressing issues for us as a nation moving forward with this technology? A lot of good work has been done to steer AI in the right direction and manage the risks. We have a lot more work to do, but the President's executive order and the White House Office of Management and Budget providing guidance to agencies on how to use AI responsibly are critical steps that move us in the right direction.
And now, I see the job as twofold. One is to ensure that AI is deployed in a responsible way, that it's safe, effective, and trustworthy. The other is to leverage AI to solve big challenges. AI has the potential to enable everything from health to education to decarbonizing the economy to weather forecasting. This won't happen automatically, but I think the journey is worthwhile.
What issues should AI users be aware of?
AI is already in our lives. It powers the ads you see online and decides what to show next in your feed. It influences the price of an airline ticket. It influences the "yes" or "no" decision on a mortgage application. The first step is to recognize how pervasive AI is in our environment. AI can be a good thing, with the potential for greater creativity and scale. But it also comes with significant risks. In an AI-powered world, we all need to be smart users.
What is the best way to build AI responsibly?
Like any powerful technology, if we have ambitions to do something with it, we must do so responsibly. That starts with recognizing that the power of these AI systems comes with significant risks, and that the types of risks vary depending on the application. For example, we know that generative AI can be used to enhance creativity. But we also know that it can distort the information environment, and that it can raise safety and security issues.
There are many applications where AI can greatly improve efficiency and provide scope, scale, and reach that we have never seen before. But before we can scale, we need to make sure it doesn't embed bias or violate privacy. This has huge implications for jobs and workers. If we get this right, it can empower workers by allowing them to do more and earn more, but it won't happen if we're not careful. President Biden has been very clear about making sure these technologies empower workers, not replace them.