To give women researchers and others focused on AI their much-deserved and long-overdue time in the spotlight, TechCrunch has been publishing a series of interviews with notable women who have contributed to the AI revolution. As the AI boom continues, we'll be publishing these pieces throughout the year, highlighting important work that often goes unrecognized. Find more profiles here.
Chinasa T. Okolo is a Fellow in the Governance Studies Program at the Brookings Institution's Center for Technology Innovation. Previously, she served on the Ethics and Social Impact Committee that helped develop Nigeria's National Artificial Intelligence Strategy and has served as an AI policy and ethics advisor to various organizations, including the African Union Development Agency and the Quebec Institute for Artificial Intelligence. She recently completed her PhD in Computer Science at Cornell University, where she studied the impacts of AI on the Global South.
Just to briefly ask, how did you get started working in AI? What attracted you to this field?
I turned to AI because I believe computational techniques can advance biomedical research and democratize access to healthcare for marginalized communities. When I began working with a professor specializing in human-computer interaction at Pomona College, I saw the challenges of bias in AI first-hand. During my PhD, I became interested in understanding how these issues affect people in the Global South, who make up the majority of the world's population yet are often excluded from and underrepresented in AI development.
What work in AI are you most proud of?
I am very proud to have worked with the African Union (AU) to develop the AU-AI Continental Strategy for Africa, which aims to help AU member states prepare for the responsible adoption, development, and governance of AI. The strategy took over a year and a half to draft and was released in late February 2024. It is currently in an open feedback period, with the aim of being formally adopted by AU member states in early 2025.
As a first-generation Nigerian American who grew up in Kansas City, Missouri, and never left the US until studying abroad in college, I have always sought to make Africa the centerpiece of my career. I am excited to be involved in such impactful work so early in my career and to pursue similar opportunities to help shape inclusive, global AI governance.
How do you address the challenges of a male-dominated tech industry, and even a male-dominated AI industry?
Finding a community with people who share my values was essential to navigating the male-dominated tech and AI industry.
I have been fortunate to see a lot of notable work advancing responsible AI and exposing its harms led by Black women scholars, including Timnit Gebru, Safiya Noble, Abeba Birhane, Ruha Benjamin, Joy Buolamwini, and Deb Raji, many of whom I have had the pleasure of interacting with over the past few years.
Their leadership has inspired me to continue working in this field and has taught me the value of working “against the wind” to make a meaningful impact.
What advice do you have for women looking to enter the AI field?
Don't be intimidated if you don't have a technical background. The field of AI is multifaceted and requires expertise from many different disciplines. My work is heavily influenced by sociologists, anthropologists, cognitive scientists, philosophers, and others from the humanities and social sciences.
What are the most pressing issues facing AI as it evolves?
One of the most prominent challenges is improving the equitable representation of non-Western cultures in large language and multimodal models. The majority of AI models are trained on English-language data that primarily reflects Western contexts, leaving out valuable perspectives from the majority of the world.
Moreover, the race to build larger models will exacerbate the effects of natural resource depletion and climate change, which already affect countries in the Global South disproportionately.
What issues should AI users be aware of?
Many publicly available AI tools and systems overstate their capabilities or don't work at all, and many of the tasks AI is marketed for could be handled by simpler algorithms or basic automation.
Moreover, generative AI has the potential to exacerbate the harms we've observed with previous AI tools: Over the years, we've seen these tools demonstrate bias and lead to harmful decisions against vulnerable communities, and this trend is likely to increase as generative AI grows in scale and scope.
However, empowering people to understand the limitations of AI can help improve the responsible adoption and use of these tools. As AI tools are rapidly integrated into society, improving AI and data literacy among the general public will be essential.
What is the best way to build AI responsibly?
The best way to build AI responsibly is to be critical of the intended and unintended use cases of these tools. Those building AI systems have a responsibility to speak out against the use of AI in harmful scenarios, such as warfare or policing, and should seek outside guidance on whether AI is appropriate for the other use cases they may target. Because AI often amplifies existing social inequalities, developers and researchers must also be careful about how they construct and curate the datasets used to train AI models.
How can investors promote responsible AI?
Many argue that VCs' growing interest in cashing in on the current AI wave is fueling the rise of "AI snake oil," a term coined by Arvind Narayanan and Sayash Kapoor. I agree, and I believe investors need to take on leadership roles alongside academics, civil society stakeholders, and industry players to drive responsible AI development. As an angel investor myself, I have seen a lot of questionable AI tools on the market. Investors should also invest in AI expertise to vet companies and commission external audits of the tools demoed in pitch decks.
Anything else you'd like to add?
The ongoing "AI summer" has led to a proliferation of self-styled "AI experts" who often stifle important conversations about the current risks and harms of AI and present misleading information about the capabilities of AI-enabled tools. I encourage anyone seeking to learn about AI to be critical of these voices and to learn from trusted sources.