To give women researchers and others focused on AI a much-deserved and long-overdue moment in the spotlight, TechCrunch has been publishing a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we'll publish these pieces throughout the year, calling attention to important work that often goes unrecognized. Find the other profiles here.
Sarah Myers West is Managing Director of the AI Now Institute, a U.S. research institute that studies the social impacts of AI and conducts policy research addressing the concentration of power in the tech industry. She previously served as Senior Advisor on AI at the U.S. Federal Trade Commission, and is a Visiting Scholar at Northeastern University and a Research Associate at Cornell University's Institute for Civic Technology.
Briefly, how did you get your start in AI? What attracted you to the field?
For the past 15 years, I have been researching the role of tech companies as powerful political actors as they have emerged on the front lines of international governance. Early in my career, I had a front-row seat to how U.S. tech companies emerged and transformed political landscapes around the world, including in Southeast Asia, China, and the Middle East, and I wrote a book delving into how industry lobbying and regulation shaped the origins of the internet's surveillance business model, despite technologies that offered alternatives in theory but never materialized in practice.
Many times throughout my career, I have asked myself, “Why are we trapped in this dystopian vision of the future?” The answer has little to do with technology itself and a lot to do with public policy and commercialization.
That's been my project ever since, both in my career as a researcher and in my policy work as co-director of AI Now: If AI is part of the infrastructure of our everyday lives, then we need to critically examine the institutions that are producing it, and make sure that as a society there's enough friction (whether through regulation or through institutionalization) so that ultimately the needs of ordinary citizens are met, not the needs of tech companies.
What work in AI are you most proud of?
I'm really proud of my work at the FTC, the U.S. government agency that has been at the forefront of regulatory enforcement on artificial intelligence. I loved rolling up my sleeves and working on cases. I got to use the methods I was trained in as a researcher to do investigative work, because the toolkit is fundamentally the same. It was satisfying to see that work have a direct impact on the public, using those tools to hold the powerful directly accountable, whether that meant addressing how AI is used to devalue workers and drive up prices, or fighting the anti-competitive behavior of big tech companies.
We're fortunate to have an incredible team of technologists working out of the White House Office of Science and Technology Policy, and it's been exciting to see how the groundwork we laid there relates directly to the emergence of generative AI and the importance of cloud infrastructure.
What are the most pressing issues facing AI as it evolves?
First of all, AI technologies are widely used in highly sensitive contexts, such as hospitals, schools, and borders, yet they remain poorly tested and validated. The technology is prone to errors, and we know from independent studies that those errors are not evenly distributed: they disproportionately harm communities that have long borne the brunt of discrimination. We need to set a much higher standard. But what concerns me is how powerful institutions are using AI (whether it works or not) to justify their actions, from the use of weapons against civilians in Gaza to the stripping of labor rights. This is not a problem of technology; it's a problem of discourse: how we orient our culture around technology, and the idea that certain choices and behaviors become more “objective” or somehow acceptable when AI is involved.
What is the best way to build AI responsibly?
We should always start with the question: Why build AI at all? Is the use of artificial intelligence necessary, and is AI technology fit for that purpose? Sometimes the answer is to build something better. In that case, developers should comply with the law, document and validate their systems thoroughly, and make them as open and transparent as possible so that independent researchers can scrutinize them. But sometimes the answer is not to build anything at all. We don't need any more “responsibly built” weapons or surveillance technologies. The end use really matters to this question, and we need to start there.