To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.
Claire Leibowicz is director of the AI and Media Integrity program at the Partnership on AI (PAI), an industry group backed by Amazon, Meta, Google, Microsoft and others that is committed to the “responsible” deployment of AI technology. She also oversees PAI's AI and Media Integrity Steering Committee.
In 2021, Leibowicz was a journalism fellow at Tablet Magazine, and in 2022 she was a fellow at the Rockefeller Foundation's Bellagio Center focused on AI governance. Leibowicz, who holds a bachelor's degree in psychology and computer science from Harvard University and a master's degree from Oxford, has advised businesses, governments, and nonprofits on AI governance, generative media, and digital information.
Q&A
In short, how did you get started in AI? What attracted you to this field?
It may sound paradoxical, but I came to the AI field from an interest in human behavior. I grew up in New York and was always captivated by the many ways people there interact and how such a diverse society takes shape. I was interested in big questions that bear on truth and justice: How do we choose to trust others? What causes conflict between groups? Why do people believe certain things to be true and not others? I began exploring these questions in my academic life through cognitive science research, and I quickly realized that technology was shaping the answers to them. I was also intrigued by how artificial intelligence could serve as a metaphor for human intelligence.
That led me into computer science classrooms, where I have to shout out Professor Barbara Grosz, a pioneer in natural language processing, and Professor Jim Waldo, who blended his philosophy and computer science backgrounds. Both emphasized the importance of filling their classrooms with majors other than computer science and engineering in order to focus on the social impact of technologies, including AI. And this was before “AI ethics” became a distinct and popular field. They made clear that while technical understanding is useful, technology touches vast areas including geopolitics, economics, and social engagement, so these issues need to be considered by people from many backgrounds.
Whether you're an educator thinking about how generative AI tools might impact pedagogy, a museum curator experimenting with predictive routes for exhibits, or a doctor researching new image detection methods for reading lab reports, AI can impact your field. This reality, that AI touches so many areas, intrigued me: the intellectual diversity inherent in work in the AI field brings with it the opportunity to impact many facets of society.
What work (in the AI field) are you most proud of?
I'm most proud of the work we do in AI that integrates different perspectives in surprising, action-oriented ways, and that not only accommodates but encourages disagreement. I joined PAI six years ago as the organization's second staff member, and I quickly sensed that it was a trailblazer in its commitment to diverse perspectives. PAI believed that such an approach is a key prerequisite for AI governance that reduces harm and leads to real adoption and impact in the AI field. This has proven true, and I have been heartened to help shape PAI's embrace of multidisciplinarity and to watch the organization grow alongside the AI field.
Our work on synthetic media over the past six years began long before generative AI entered the public consciousness, and it demonstrates the potential of multi-stakeholder AI governance. In 2020, we collaborated with nine different organizations from civil society, industry, and media to create the Facebook Deepfake Detection Challenge, a machine learning competition to build models for detecting AI-generated media. Those outside perspectives helped shape the fairness criteria and goals of the winning models, showing how human rights experts and journalists can contribute to a seemingly technical problem like deepfake detection. Last year, we published a set of prescriptive guidance on responsible synthetic media, PAI's Responsible Practices for Synthetic Media. We now have 18 backers from very different backgrounds, from OpenAI to TikTok, Code for Africa, Bumble, the BBC and WITNESS. It's one thing to put practical guidance on paper that reflects technical and social realities; it's another to secure actual institutional support. In this case, the institutions committed to providing transparent reports on how they navigate the synthetic media field. AI projects that feature specific guidance and demonstrate how to implement that guidance across institutions are among the most meaningful to me.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Throughout my career, I have had great mentors, both men and women. Finding people who simultaneously support and challenge me has been key to all the growth I've experienced. I find that focusing on shared interests and discussing the questions that animate the AI field can bring together people with different backgrounds and perspectives. Interestingly, more than half of PAI's team is women, and many of the organizations working on AI and society, or responsible AI, have large numbers of female staff. That often contrasts with the makeup of engineering and AI research teams, and it's a step in the right direction for representation in the AI ecosystem.
What advice would you give to women looking to enter the AI field?
As I mentioned in the previous question, some of the predominantly male-dominated areas within AI that I've encountered are also the most technical. While technical acumen should not be prioritized over other forms of literacy in the AI field, I have found that technical training benefits both confidence and effectiveness in such spaces. We need equal representation in technical roles, along with openness to the expertise of people in other fields such as civil rights and politics. At the same time, equipping more women with technical literacy is key to balancing representation in the AI field.
I have also found it incredibly meaningful to connect with women in the AI field who have navigated balancing family and professional life. Finding role models to talk with about big questions around career and parenting, as well as the distinct challenges women still face in the workplace, has made me feel better equipped to handle those challenges when they arise.
What are the most pressing issues facing AI as it evolves?
As AI evolves, questions of truth and trust, both online and offline, become increasingly difficult. With content from images to videos to text able to be generated or modified by AI, is seeing still believing? How can we rely on evidence if documents can be easily and realistically altered? If it's so easy to imitate real people, can we have human-only spaces online? How do AI companies navigate the trade-offs between freedom of expression and the potential for their systems to cause harm? And more broadly, how do we ensure that AI governance incorporates the perspectives of stakeholders around the world, including the public, rather than being shaped by a select few?
Beyond these specific questions, PAI has also been involved in other facets of AI and society, including how we think about fairness and bias in an age of algorithmic decision-making, how labor impacts AI and how AI impacts labor, how to navigate the responsible deployment of AI systems, and even how to make AI systems more reflective of myriad perspectives. At a structural level, we need to consider how AI governance can navigate vast trade-offs by incorporating many different perspectives.
What issues should AI users be aware of?
First, AI users need to know that if something sounds too good to be true, it probably is.
The generative AI boom of the past year has, of course, reflected tremendous ingenuity and innovation, but it has also led to public messages about AI that are often hyperbolic and inaccurate.
AI users also need to understand that AI is not revolutionary so much as it exacerbates and amplifies existing problems and opportunities. This does not mean we should take AI less seriously; rather, we should use this understanding as a helpful foundation for navigating an increasingly AI-infused world. For example, if you're concerned that people can misconstrue the context of a pre-election video by changing its caption, you should also be concerned about the speed and scale at which deepfake technology can mislead. If you're concerned about the use of surveillance in the workplace, you should also consider how AI will make such surveillance easier and more pervasive. Maintaining healthy skepticism about the novelty of AI problems, while staying honest about what is distinct about the current moment, provides a helpful frame for users' encounters with AI.
What is the best way to build AI responsibly?
Building AI responsibly requires us to broaden our notion of who plays a role in “building” AI. Of course, influencing technology companies and social media platforms is a key way to affect the impact of AI systems, and those institutions are vital to building technology responsibly. At the same time, we must recognize that building responsible AI that serves the public interest requires the ongoing involvement of diverse institutions from civil society, industry, media, academia, and the general public.
For example, consider the responsible development and deployment of synthetic media.
Technology companies may be weighing their liability when determining how synthetic videos could influence users before an election, while journalists may worry about fake videos posing as content from trusted news brands. Human rights defenders might consider the responsibilities tied to how AI-generated media diminishes the impact of video as evidence of abuses. And artists might be excited by the opportunity to express themselves through generative media while worrying that their work could be used, without their consent, to train the AI models that generate new media. These diverse considerations highlight how important it is to involve a variety of stakeholders in efforts to build AI responsibly, and how countless institutions are affected by, and in turn influence, the way AI is integrated into society.
How can investors more effectively promote responsible AI?
Years ago, I heard DJ Patil, the former chief data scientist in the White House, describe a revision of the “move fast and break things” mantra that was so prevalent in the early days of social media, and it has stuck with me. He suggested those in the field “move purposefully and fix things.”
I love this because it doesn't imply stagnation or abandoning innovation, but rather intentionality and the possibility of innovating responsibly. Investors can help set this tone, giving their portfolio companies more time and space to embed responsible AI practices without stifling progress. Companies often describe limited time and tight deadlines as the limiting factors for doing the “right” thing, and investors can be a major catalyst for changing this dynamic.
The more I work in AI, the more I find myself grappling with deeply human questions. And these are questions we all need to answer.