To give women academics and others focused on AI their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year spotlighting key work that often goes unrecognized.
Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She is also a Senior Research Fellow at Churchill College, a Fellow of the Association for Computational Linguistics (ACL), and a Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS).
Korhonen was previously a fellow at the Alan Turing Institute, and she holds a PhD in computer science and master's degrees in both computer science and linguistics. Her research focuses on NLP and on how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible, "human-centered" NLP that, in her own words, "draws on the understanding of human cognitive, social and creative intelligence."
Q&A
In short, how did you get started in AI? What attracted you to the field?
I have always been fascinated by the beauty and complexity of human intelligence, especially human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all these interests.
What work are you most proud of in the AI field?
The science of building intelligent machines is fascinating, and it's easy to get lost in the world of language modeling, but the ultimate reason we build AI is its practical potential. I am most proud of my work where fundamental research in natural language processing has led to the development of tools that can support societal and global benefits. Examples include tools that help us better understand how diseases like cancer and dementia develop and can be treated, and apps that can support education.
Much of my current research is driven by a mission to develop AI that can improve human life for the better. AI has huge positive potential for societal and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.
How do we overcome the challenges of a male-dominated tech industry and, by extension, a male-dominated AI industry?
I am fortunate to work in an area of AI where we have a significant female presence and established support networks. I have found these immensely helpful in navigating career and personal challenges.
The biggest issue for me is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at all costs is a case in point. This has a profound impact on the priorities of both academia and industry, and far-reaching socioeconomic and environmental implications. Do we really need bigger models, and what are their global costs and benefits? I think we would have asked these questions much earlier in the game if we had better gender balance in the field.
What advice would you give to women looking to enter the AI field?
AI desperately needs more women at all levels, but especially at the leadership level. The current leadership culture isn't necessarily attractive to women, but active engagement can change that culture, and by extension the culture of AI. Women are, notoriously, not always good at supporting one another. I would really like to see a change in attitude here: we need to actively network and help each other if we want to achieve better gender balance in this field.
What are the most pressing issues facing AI as it evolves?
AI has developed at incredible speed, evolving from an academic field into a global phenomenon in less than a decade. During this time, most effort has gone into scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it best serves humanity. People have good reason to worry about the safety and reliability of AI and its impact on jobs, democracy, the environment, and other areas. We urgently need to put human needs and safety at the center of AI development.
What issues should AI users be aware of?
Current AI, even when it appears highly fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms we operate in. Even today's best technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I wouldn't trust it to educate my children or make important decisions on my behalf. We humans must remain responsible.
What is the best way to build AI responsibly?
AI developers tend to think about ethics as an afterthought, after the technology has already been built. The best time to think about it is before development begins. Questions such as "Do I have a diverse enough team to develop a fair system?", "Is my data really free to use and representative of the entire user population?" and "Are my techniques robust?" should be among the first ones asked.
While part of this problem can be addressed through education, some of it can only be enforced through regulation. Recent developments in national and global AI regulation are important, and they must continue in order to guarantee that future technologies will be safer and more trustworthy.
How can investors more effectively promote responsible AI?
Regulation of AI is emerging, and companies will eventually need to comply. We can think of responsible AI as sustainable AI, and it is truly worth investing in.