To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.
Urvashi Aneja is the founding director of the Digital Futures Lab, an interdisciplinary research effort that seeks to explore the interaction of technology and society in the Global South. She is also an Associate Research Fellow in the Asia-Pacific Program at Chatham House, an independent policy institute based in London.
Aneja's current research focuses on the societal impact of algorithmic decision-making systems and platform governance in India, where she is based. Aneja recently authored a study on the current uses of AI in India, reviewing use cases across sectors such as policing and agriculture.
Q&A
Briefly, how did you get started in AI? What attracted you to the field?
I started my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. I quickly learned that there's a fine line between innovation and experimentation, particularly when dealing with vulnerable populations. What I learned from this experience left me deeply concerned about the techno-solutionist narratives around the potential of digital technologies, particularly AI. Around the same time, India launched its Digital India Mission and National Strategy for Artificial Intelligence. I was troubled by the dominant discourse that framed AI as a silver bullet for India's complex socio-economic problems, and by the complete lack of critical discussion around it.
What work in the AI field are you most proud of?
I'm proud of the work we've done to draw attention not only to the political economy of AI production, but also to its broader implications for social justice, labor relations, and environmental sustainability. Narratives about AI all too often focus on the gains of a specific application, or at best on the benefits and risks of that application. But this misses the forest for the trees. A product-oriented lens obscures the broader structural impacts, such as AI's contribution to epistemic injustice, the deskilling of labor, and the entrenchment of unaccountable power in the majority world. I'm also proud that we've translated these concerns into concrete policy and regulation, from developing procurement guidelines for AI use in the public sector to delivering evidence in litigation against Big Tech companies in the Global South.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
By letting my work speak for itself. And by constantly asking “Why?”
What advice would you give to women looking to enter the AI field?
Develop knowledge and expertise. Make sure you understand the problem technically, but don't focus narrowly on AI alone. Study broadly so that you can draw connections across fields and disciplines. Not enough people understand AI as a socio-technical system that is a product of history and culture.
What are the most pressing issues facing AI as it evolves?
I think the most pressing issue is the concentration of power in a handful of technology companies. This problem isn't new, but it has been exacerbated by new developments in large language models and generative AI. Many of these companies are now stoking fears about the existential risks of AI. Not only does this distract from existing harms, it also positions these companies as the only ones capable of addressing AI-related harms. In many ways, we're losing some of the momentum of the "techlash" that emerged following the Cambridge Analytica scandal. I'm also concerned that in places like India, AI is being positioned as necessary for socio-economic development, promising an opportunity to leapfrog persistent challenges. Not only does this overstate AI's potential, it also ignores that it isn't possible to leapfrog the institutional development needed to build safeguards. Another issue we don't take seriously enough is AI's environmental impact. The current trajectory is likely unsustainable, and in the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to benefit from AI innovations.
What issues should AI users be aware of?
Users need to be made aware that AI is not magic, nor anything close to human intelligence. It is a form of computational statistics that has many useful applications, but is ultimately just a probabilistic guess based on historical patterns. There are several other issues users need to be aware of, but I want to caution against attempts to shift responsibility downstream, onto users. We've seen this recently in the use of generative AI tools in low-resource settings in the majority world: rather than scrutinizing these experimental and unreliable technologies, the focus often shifts to how end users, such as farmers or frontline health workers, need to upskill.
What is the best way to build AI responsibly?
This must start with assessing the need for AI. Is there a problem that AI can uniquely solve, or are other means possible? And if we're building AI, do we need a complex black-box model, or will a simpler logic-based model do just as well? We also need to re-center domain knowledge in how we build AI. In our obsession with big data, we've sacrificed theory. We need to build a theory of change grounded in domain knowledge, and that, not just big data, should be the basis of the models we build. This is of course in addition to key issues such as participation, inclusive teams, and labor rights.
How can investors more effectively promote responsible AI?
Investors need to consider the entire lifecycle of AI production, not just the outputs and outcomes of AI applications. This requires looking at a range of issues, such as whether labor is valued fairly, the environmental impacts, whether the company's business model is built on commercial surveillance, and the company's internal accountability measures. Investors also need to demand better and more rigorous evidence about the supposed benefits of AI.