To give female academics and others focused on AI their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting notable women who have contributed to the AI revolution. We'll publish these pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.
Catherine Breslin is the founder and director of Kingfisher Labs, which helps companies develop their AI strategies. She has spent over 20 years as an AI scientist, working at the University of Cambridge, Toshiba Research, and Amazon Alexa. She previously served as an advisor to the VC fund Deeptech Labs and as a solutions architect and director at Cobalt Speech & Language.
She completed her undergraduate degree at the University of Oxford, followed by her master's and doctorate at the University of Cambridge.
In short, how did you get started in AI? What attracted you to this field?
I loved math and physics at school and decided to study engineering at university. That's where I first learned about AI (it wasn't called AI at the time). I was intrigued by the idea of using computers to process speech and language, something that seems so easy for humans. From there, I ended up studying for a PhD in speech technology and working as a researcher. AI has come a long way in recent years, and I see a huge opportunity to build technology that improves people's lives.
What work are you most proud of in the AI field?
In 2020, at the beginning of the pandemic, I founded a consulting firm with a mission to bring real-world AI expertise and leadership to organizations. I'm proud of the work I've done with clients across a variety of interesting projects, and of having been able to do it in a really flexible way around my family.
How do you navigate the challenges of a male-dominated tech industry and, by extension, a male-dominated AI industry?
Although it's hard to measure exactly, about 20% of people in the AI field are women, and my understanding is that the more senior the role, the lower that percentage becomes. For me, one of the best ways to navigate this is to build a supportive network. Of course, support can come from anyone, regardless of gender. But it can be reassuring to talk to other women who are facing similar situations or have dealt with the same issues, and it's great to not feel so alone.
Another thing for me is to be careful about where I spend my energy. I believe that we will only see lasting change when more women are in senior and leadership positions. That won't happen if women spend all their energy fixing the system instead of advancing their careers. There needs to be a realistic balance between driving change and focusing on your day-to-day work.
What advice would you give to women looking to enter the AI field?
AI is a huge and exciting field with a lot going on. It also generates a lot of noise, with papers, products, and models seemingly released nonstop. It's impossible to keep up with everything, and not every paper or result matters in the long run, no matter how flashy the press release. My advice is to find a niche you're genuinely interested in and want to make progress in, learn everything you can about it, and work on the problems you want to solve. Doing so will give you the solid foundation you need.
What are the most pressing issues facing AI as it evolves?
Progress has been rapid over the past 15 years, and we have seen AI move from the lab into products without stepping back to properly assess the situation and anticipate the consequences. One example that comes to mind is how much better our speech and language technology performs in English than in other languages. It's not that researchers have ignored other languages; significant effort has gone into language technology beyond English. But an unintended consequence of the improvements in English is that we are building and deploying technologies that do not serve everyone equally.
What issues should AI users be aware of?
I think people should realize that AI is not a silver bullet that will solve all problems in the next few years. Building an impressive demo is easy, but building an AI system that consistently works well takes a lot of effort. We must not forget the fact that AI is designed and built by humans, for humans.
What is the best way to build AI responsibly?
Building AI responsibly means incorporating diverse voices from the beginning, including input from your customers and everyone affected by your product. It is important to test your system thoroughly to see how well it performs in different scenarios. Testing has a reputation for being boring compared to the excitement of devising new algorithms. However, it is important to know if the product really works. Next, you need to be honest with yourself and your customers about both the capabilities and limitations of what you're building to prevent your system from being exploited.
How can investors more effectively promote responsible AI?
Startups are building many new applications of AI, and investors have a responsibility to be thoughtful about what they fund. I'd like to see more investors be vocal about the vision of the future we are building and how responsible AI fits into it.