To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who've contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.
Francine Bennett is a founding member of the board of the Ada Lovelace Institute and currently serves as the Institute's interim director. Prior to that, she worked in biotech, using AI to find treatments for rare diseases. She also co-founded a data science consultancy and is a founding board member of DataKind UK, which provides data science support to UK charities.
Briefly, how did you get your start in AI? What attracted you to the field?
I started out in pure mathematics and wasn't much interested in anything applied: I enjoyed tinkering with computers, but I thought applied maths was just calculation and not very intellectually interesting. I came to AI and machine learning later, when it started to become obvious to me and to everyone else that, because data was becoming much more abundant in many contexts, exciting new possibilities were opening up to solve all sorts of problems with AI. Machine learning turned out to be far more interesting than I had expected.
What work (in the AI field) are you most proud of?
The work I'm most proud of is never the most technically sophisticated, but the work that delivers real improvements for people. For example, using ML to discover previously unnoticed patterns in hospital patient safety incident reports, so that healthcare professionals can improve future patient outcomes. And I'm proud of representing the importance of putting people and society, rather than technology, at the center at events like this year's UK AI Safety Summit. I think it's only possible to do that with authority because I've had experience both working with the technology and getting excited by it, and getting deeply into how it actually affects people's lives in practice.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Mainly by choosing to work in places and with people who are interested in the person and their skills over their gender, and by trying to use what influence I have to make that the norm. I also try to work within as diverse a team as possible; being in a balanced team, rather than being an exceptional “minority,” creates a really different atmosphere and makes it much more possible for everyone to reach their potential. More broadly, because AI is so multifaceted and likely to have an impact on so many walks of life, especially on people in marginalized communities, it's obvious that people from all walks of life need to be involved in building and shaping it if it's going to work well.
What advice would you give to women looking to enter the AI field?
Enjoy it! This is such an interesting, intellectually challenging, and ever-changing field; you'll always find something useful and rewarding to do, and there are plenty of important applications that nobody has even thought of yet. Also, don't be too anxious about needing to know every technical thing (literally nobody knows every technical thing); just start with something that intrigues you and work from there.
What are the most pressing issues facing AI as it evolves?
I believe the biggest one is the lack of a shared vision of what we want AI to do for us and what it can and can't do for us as a society. There's a lot of technical advancement going on at the moment, likely with very high environmental, financial, and social impacts, and a lot of excitement about rolling out those new technologies without a well-founded understanding of the potential risks or unintended consequences. Most of the people building the technology and talking about the risks and consequences also come from a pretty narrow demographic. We have a window of opportunity now to decide what we want to see from AI and to work to make that happen. We can think back to other types of technology and how we handled their evolution, or what we wish we'd done better: what are the equivalents for AI products of crash-testing new cars, holding a restaurant liable for accidentally giving you food poisoning, consulting affected people during planning permission, or appealing an AI decision as you could a human bureaucracy?
What issues should AI users be aware of?
I'd like people who use AI technologies to be confident about what the tools are and what they can do, and to talk about what they want from AI. It's easy to see AI as something unknowable and uncontrollable, but in reality it's just a set of tools, and I want humans to feel able to take charge of what they do with those tools. But it shouldn't only be the responsibility of the people using the technology; government and industry should be creating the conditions under which people who use AI can be confident.
What is the best way to build AI responsibly?
We ask this question a lot at the Ada Lovelace Institute, which aims to make data and AI work for people and society. It's a tough one, and there are hundreds of angles you could take, but from my perspective there are two really big ones.
The first is to be willing sometimes not to build, or to stop. All the time we see AI systems with great momentum, where the builders try to add on “guardrails” afterward to mitigate problems and harms, but don't put themselves in a situation where stopping is a possibility.
The second is to really engage with, and try to understand, how all kinds of people will experience what you're building. If you can genuinely understand their experiences, you have a much better chance of achieving the positive kind of responsible AI: building something that really solves people's problems, based on a shared vision of what good looks like. And you're better placed to avoid the negative kind: accidentally making someone's life worse because their day-to-day situation is just very different from yours.
For example, the Ada Lovelace Institute worked with the NHS to develop an algorithmic impact assessment that developers must complete as a condition of accessing health data. It requires developers to assess the possible societal impacts of their AI systems before implementation and to bring in the lived experiences of the people and communities who could be affected.
How can investors more effectively promote responsible AI?
By asking questions about their investments and their possible futures: for this AI system, what does it look like to work brilliantly and be responsible? Where could things go off the rails? What are the potential knock-on effects for people and society? How would we know if we need to stop building or to change things significantly, and what would we do then? There's no one-size-fits-all prescription, but just by asking these questions and signaling that being responsible matters, investors can change where their companies put attention and effort.