To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year, spotlighting key work that often goes unrecognized. Read more profiles here.
Sandra Wachter is Professor and Senior Researcher in data ethics, AI, robotics, algorithms and regulation at the Oxford Internet Institute. She is also a former research fellow at the Alan Turing Institute, the UK's national institute for data science and AI.
During her time at the Turing Institute, Wachter evaluated the ethical and legal aspects of data science, highlighting cases in which opaque algorithms have become racist and sexist. She also looked at ways to audit AI to combat misinformation and promote fairness.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I don't remember a time in my life when I didn't think that innovation and technology have incredible potential to make people's lives better. But I also know that technology can have devastating effects on people's lives. So, driven not least by my strong sense of justice, I have always looked for ways to guarantee the perfect middle ground: enabling innovation while protecting human rights.
I have always felt that law has a very important role to play here. Law can be the middle ground that protects people while allowing innovation. Law as a discipline came very naturally to me. I like a challenge, and I like understanding how a system works so I can see how it can be gamed, find the loopholes and close them.
AI is an incredibly transformative force. It is being deployed in finance, employment, criminal justice, immigration, health and the arts. This can be good or bad, and whether it is good or bad is a matter of design and policy. I was naturally drawn to law because I felt it could make a meaningful contribution to ensuring that innovation benefits as many people as possible.
What work (in the AI field) are you most proud of?
I think the work I'm most proud of at the moment is work I co-authored with Brent Mittelstadt (a philosopher) and Chris Russell (a computer scientist), with me as the lawyer.
Our latest work on bias and fairness, “The Unfairness of Fair Machine Learning,” reveals the harmful impact of enforcing many “group fairness” measures in practice. Specifically, fairness is achieved not by helping disadvantaged groups, but by “levelling down,” that is, by making everyone worse off. This approach is not only ethically questionable but also highly problematic under EU and UK anti-discrimination law. In a Wired article, we discussed how levelling down can cause real harm: in medicine, for example, enforcing group fairness could mean missing more cancer cases than strictly necessary while also making the system less accurate overall.
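To make the levelling-down point concrete, here is a minimal, purely hypothetical Python sketch; it is not code from the paper, and the numbers are invented. It shows how an equal-accuracy fairness constraint can be satisfied by dragging the better-served group down rather than lifting anyone up:

```python
# Hypothetical illustration of "levelling down" (not from the paper).
# A classifier is more accurate for group_a than for group_b.
accuracy = {"group_a": 0.92, "group_b": 0.80}

def enforce_equal_accuracy(acc):
    """Satisfy an equal-accuracy fairness constraint the cheap way:
    degrade the better-served group to match the worse-served one.
    The fairness metric is now perfect, yet nobody is better off."""
    floor = min(acc.values())
    return {group: floor for group in acc}

print(enforce_equal_accuracy(accuracy))
# {'group_a': 0.8, 'group_b': 0.8} -- the accuracy gap is zero,
# but group_a's (say) cancer cases are now missed more often than
# before, and group_b has gained nothing.
```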
This was startling for us, and something we felt technologists, policymakers and everyone else needed to know. In fact, we have engaged with UK and EU regulators and shared our alarming results with them. We strongly hope this gives policymakers the leverage they need to put new policies in place that prevent AI from causing such serious harm.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Interestingly, I never saw technology as something that “belongs” to men. It wasn't until I started school that society told me tech had no room for someone like me. I still remember that when I was 10 years old, the curriculum had girls knitting and sewing while the boys built birdhouses. I wanted to build a birdhouse too, so I asked to be transferred to the boys' class, but the teacher told me, “Girls don't do that.” I went to the head of the school to try to overturn the decision but, unfortunately, failed at the time.
It is very hard to fight a stereotype that says you shouldn't be part of this community. I wish I could say things like that don't happen anymore, but unfortunately that isn't true.
But I've been incredibly lucky to work with allies like Brent Mittelstadt and Chris Russell. I was blessed with incredible mentors, such as my Ph.D. supervisor, and I have a growing network of like-minded people of all genders doing their best to steer the path forward and improve the situation for everyone interested in tech.
What advice would you give to women looking to enter the AI field?
Above all, try to find like-minded people and allies. Finding your allies and supporting each other is crucial. My most impactful work has always come from talking with open-minded people from other backgrounds and disciplines about the common problems we face. Established wisdom alone cannot solve novel problems, so women and other groups that have historically faced barriers to entering AI and other tech fields hold the tools to truly innovate and offer something new.
What are the most pressing issues facing AI as it evolves?
I think there is a wide range of issues that need serious legal and policy consideration. To name just a few: AI suffers from biased data that leads to discriminatory and unfair outcomes. AI is inherently opaque and difficult to understand, yet it is tasked with deciding who gets a loan, who gets a job, who goes to prison and who is allowed to go to university.
Generative AI has related problems: it contributes to misinformation, suffers from hallucinations, violates data protection and intellectual property rights, puts people's jobs at risk and contributes more to climate change than the airline industry.
We have no time to lose; we needed to address these issues yesterday.
What issues should AI users be aware of?
I think there's a tendency to believe a certain narrative along the lines of “AI is here to stay, get on board or be left behind.” I think it's important to consider who is pushing this narrative and who profits from it. It's important to remember where the actual power lies. The power is not with those who innovate; it is with those who buy and implement AI.
So consumers and businesses should ask themselves, “Does this technology actually help me, and in what way?” There are electric toothbrushes now with AI embedded in them. Who is this for? Who needs it? What is being improved here?
In other words, ask yourself what is broken and needs fixing, and whether AI can actually fix it.
This type of thinking will shift market power and, hopefully, steer innovation in a direction that focuses on usefulness for communities rather than simply on profit.
What is the best way to build AI responsibly?
Putting laws in place that demand responsible AI. Here, too, a very unhelpful and untrue narrative tends to dominate: that regulation stifles innovation. That is not true. Regulation stifles harmful innovation. Good laws encourage and nourish ethical innovation; this is why we have safe cars, planes, trains and bridges. Society does not lose out if regulation prevents the development of AI that violates human rights.
Traffic and safety rules for cars were also said to “stifle innovation” and “limit autonomy.” These laws prevent people from driving without a license, keep cars without seatbelts or airbags off the market and punish drivers who ignore speed limits. Imagine what the automotive industry's safety record would look like without laws regulating vehicles and drivers. AI is currently at a similar tipping point, and with heavy industry lobbying and political pressure, the path it will take remains uncertain.
How can investors more effectively promote responsible AI?
A few years ago I wrote a paper called “How Fair AI Can Make Us Richer.” I deeply believe that AI that respects human rights and is unbiased, explainable and sustainable is not only the legally, ethically and morally right thing to build, but can also be profitable.
I really hope investors understand that responsible research and innovation leads to better products. Bad data, bad algorithms and bad design choices produce worse products. Even if I can't convince you to do the ethical thing because it's the right thing to do, I hope you can see that the ethical thing is also more profitable. Ethics should be seen as an investment, not a hurdle to overcome.