To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. We'll publish these pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.
Today's spotlight: Rachel Coldicutt is the founder of Careful Industries, which studies the social impact technology has on society. Its clients include Salesforce and the Royal Academy of Engineering. Before founding Careful Industries, Coldicutt was CEO of the think tank Doteveryone, where she also conducted research into how technology affects society.
Before Doteveryone, she spent decades working in digital strategy at organizations including the BBC and the Royal Opera House. She attended the University of Cambridge and was awarded an OBE (Order of the British Empire) in recognition of her work in digital technology.
Briefly, how did you get your start in AI? What attracted you to the field?
I started working in the technology industry in the mid-'90s. My first proper tech job was on Microsoft Encarta in 1997, where I helped build content databases for reference works such as encyclopedias and dictionaries. Over the past three decades I've worked with all kinds of new technologies, so since the early 2000s I've been using automated processes and data to drive decisions, create experiences, and produce works. Instead, I think the better question is probably, "When did AI become the technology everyone wanted to talk about?" And the answer, I think, is probably around 2014, when DeepMind was acquired by Google. That was the moment in the UK when AI overtook everything else, even though many of the underlying technologies we now call "AI" were already in fairly widespread, general use.
I got into tech almost by accident in the 1990s, and what has kept me in the field through its many changes is the fact that it is full of fascinating contradictions. I love how empowering it is to learn new skills and make things, I'm fascinated by what can be discovered from structured data, and I could happily spend the rest of my life observing and trying to understand how people build and shape the technologies we use.
What work are you most proud of in the AI field?
Much of my AI work has been in policy development and social impact assessment: working with government departments, charities, and businesses of all kinds to help them use AI and related technologies in deliberate and trustworthy ways.
In the 2010s I ran Doteveryone, a responsible-technology think tank that helped reframe how UK policymakers thought about emerging technologies. Our research showed that AI is not a consequence-free set of technologies but one with far-reaching real-world impacts on people and societies. I'm particularly proud of the free Consequence Scanning tool we developed, which is now used by teams and businesses all over the world to help them anticipate the social, environmental, and political impacts of the choices they make when they ship new products and features.
More recently, the 2023 AI and Society Forum was another proud moment. In the run-up to the industry-dominated UK Government AI Safety Summit, my team at Careful Industries quickly convened and curated a gathering of 150 people from across civil society to collectively make the case that it's possible to make AI work for 8 billion people, not just eight billionaires.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a relative veteran of the tech industry, I feel like some of the gains made in gender representation in tech have been lost over the past five years. Research from the Turing Institute shows that less than 1% of the investment made in the AI space has gone to startups led by women, and women still make up only about a quarter of the overall tech workforce. When I go to AI conferences and events, the gender mix, particularly in terms of who gets a platform to share their work, reminds me of the early 2000s, which I find really sad and shocking.
I'm able to sidestep sexist attitudes in the tech industry because I have the enormous privilege of having founded and running my own organization. I spent much of my early career experiencing sexism and sexual harassment on a daily basis; it gets in the way of good work and is an unnecessary cost of entry for many women. Instead, I've prioritized building a feminist business where, collectively, we strive for equity in everything we do, and my hope is that we can show that other ways are possible.
What advice would you give to women looking to enter the AI field?
Don't feel like you have to work on "women's issues," don't be put off by the hype, and seek out allies and build friendships with other people so you have an active support network. What has kept me going all these years is my network of friends, former colleagues, and allies: we offer each other mutual support, a never-ending supply of pep talks, and sometimes a shoulder to cry on. Without that, it can feel very lonely; you're often going to be the only woman in the room, so it's vital to have somewhere you can feel safe.
And as soon as you get the chance, hire. Don't replicate the structures you've seen before, or entrench the expectations and norms of an elitist, sexist industry. Challenge the status quo and support your new hires with every appointment; that way, you can start building a new normal wherever you are.
And seek out the work of some of the great women pioneering AI research and practice: start by reading the work of pioneers like Abeba Birhane, Timnit Gebru, and Joy Buolamwini, who have all produced foundational research that has shaped our understanding of how AI changes and interacts with society.
What are the most pressing issues facing AI as it evolves?
AI is an amplifier. While some uses may feel inevitable, as a society we need to be empowered to make clear choices about what is worth amplifying. Right now, increased use of AI is mostly working to increase the power and bank balances of a relatively small number of male CEOs, and it seems unlikely that [it] is shaping a world in which many people want to live. I would love to see more people, particularly in industry and policymaking, engaging with the questions of what a more democratic and accountable AI looks like, and whether it's even possible.
The climate impacts of AI, including the use of water, energy, and critical minerals, and the health and social-justice impacts on people and communities affected by the exploitation of natural resources, should be at the top of the list of concerns for responsible development. The fact that LLMs are so energy-intensive speaks to the fact that the current models are not fit for purpose; in 2024 we need innovation that protects and restores the natural world, and extractive models and ways of working should be phased out.
We also need to be realistic about the surveillance impacts of a more datafied society and the fact that, in an increasingly volatile world, any general-purpose technologies are likely to be used for unimaginable horrors in warfare. Everyone who works in AI needs to be realistic about the historic, long-standing links between tech R&D and military development. We need to champion, support, and demand innovation that starts with, and is stewarded by, communities, so we achieve outcomes that strengthen society rather than lead to greater destruction.
What issues should AI users be aware of?
As well as the environmental and economic extraction built into many of today's AI business and technology models, it's really important to think about the day-to-day impacts of increased use of AI and what that means for everyday human interactions.
While some of the issues making headlines concern more existential risks, it's worth looking at how the technologies we already use are helping or hindering our daily lives: which automations can be turned off and avoided, and where can we, as consumers, vote with our feet to make the case that we really want to keep talking with real people, not bots? We don't have to settle for poor-quality automation; we should band together and ask for better outcomes.
What is the best way to build AI responsibly?
Responsible AI starts with good strategic choices: instead of just throwing in an algorithm and hoping for the best, it's possible to be intentional about what to automate and how. I've been talking about the idea of "just enough internet" for a few years now, and it feels like a really useful concept to guide how we think about building any new technology. Rather than pushing the envelope all the time, can we build AI in a way that maximizes benefits and minimizes harm for people and the planet?
We've developed a robust process for this at Careful Industries. It starts with working with boards and senior teams to map how AI can and can't support their vision and values; understanding where problems are too complex and variable to be improved by automation, and where automation would create benefits; and, lastly, developing a proactive risk-management framework. Responsible development is not a one-off application of a set of principles but a process of continuous monitoring and mitigation. Continuous deployment and social adaptation mean quality assurance can't end once a product has shipped; as AI developers, we need to build the capacity for iterative social sensing and treat responsible development and deployment as a living process.
How can investors more effectively promote responsible AI?
Invest more patiently, support more diverse founders and teams, and stop chasing exponential returns.