
Women in AI: Claire Leibowicz, AI and Media Integrity Expert at PAI

By TechBrunch | March 9, 2024 | 10 Mins Read


To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year spotlighting key work that often goes unrecognized. Read more profiles here.

Claire Leibowicz is the head of the AI and Media Integrity program at the Partnership on AI (PAI), an industry group backed by Amazon, Meta, Google, Microsoft and others that is committed to the "responsible" deployment of AI technology. She also oversees PAI's AI and Media Integrity steering committee.

In 2021, Leibowicz was a journalism fellow at Tablet Magazine, and in 2022 she was a fellow at the Rockefeller Foundation's Bellagio Center focused on AI governance. Leibowicz, who holds a bachelor's degree in psychology and computer science from Harvard and a master's degree from Oxford, has advised companies, governments and nonprofits on AI governance, generative media and digital information.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

It may sound paradoxical, but I came to the AI field from an interest in human behavior. I grew up in New York, and I was always captivated by the many ways people interact there and how such a diverse society takes shape. I was interested in big questions that affect truth and justice, like how we choose to trust others, what causes conflict between groups, and why people believe certain things to be true and not others. I began exploring these questions in my academic life through cognitive science research, and I quickly realized that technology was shaping the answers to them. I was also intrigued by how artificial intelligence could serve as a metaphor for human intelligence.

That interest led me to computer science classrooms, where I was lucky to learn from Professor Barbara Grosz, a pioneer in natural language processing, and Professor Jim Waldo, who blended philosophy and computer science backgrounds. Both emphasized the importance of filling their classrooms with students from majors beyond computer science and engineering in order to focus on the social impact of technologies, including AI. And this was before "AI ethics" was a distinct and popular field. They made clear that while technical understanding is beneficial, technology affects vast realms, including geopolitics, economics, social engagement and more, and that those broader consequences need to be considered too.

Whether you're an educator wondering how generative AI tools will affect pedagogy, a museum curator experimenting with predictive routes through exhibits, or a doctor investigating new image detection methods for reading lab reports, AI can impact your field. This reality, that AI touches so many domains, intrigued me: the intellectual variety inherent in working in the AI field comes with the opportunity to impact many facets of society.

What work (in the AI ​​field) are you most proud of?

I'm most proud of the work we do in AI that brings different perspectives together in surprising, action-oriented ways, and that not only tolerates but encourages disagreement. I joined PAI six years ago as the organization's second staff member, and I quickly sensed that it was a trailblazer in its commitment to diverse perspectives. PAI believed such efforts were a key prerequisite for AI governance that reduces harm and leads to real adoption and impact in the AI field. This has proven true, and I have been heartened to help shape PAI's embrace of multidisciplinarity and to watch the organization grow alongside the AI field.

Our work on synthetic media, which began more than six years ago, long before generative AI entered the public consciousness, exemplifies the possibilities of multistakeholder AI governance. In 2020, we worked with nine organizations from civil society, industry and media to shape Facebook's Deepfake Detection Challenge, a machine learning competition to build models for detecting AI-generated media. Those outside perspectives helped shape the fairness criteria and goals of the winning models, showing how human rights experts and journalists can contribute to a seemingly technical problem like deepfake detection. Last year, we published a set of prescriptive guidance on responsible synthetic media, PAI's Responsible Practices for Synthetic Media, which now has 18 backers from very different backgrounds, from OpenAI to TikTok, Code for Africa, Bumble, the BBC and WITNESS. Being able to put practical guidance on paper that is informed by technical and social realities is one thing; actually securing institutional support is another. In this case, the institutions committed to providing transparency reports on how they navigate the synthetic media field. AI projects that feature concrete guidance and show how that guidance can be implemented across institutions are among the most meaningful to me.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I have had wonderful mentors throughout my career, both men and women. Finding people who simultaneously support and challenge me has been key to every bit of growth I've experienced. I find that focusing on shared interests and discussing the questions that animate the AI field brings people with different backgrounds and perspectives together. Interestingly, more than half of PAI's team is women, and many of the organizations working on AI and society, or responsible AI, have many female staff members. This often stands in contrast to engineering and AI research teams, and it is a step in the right direction for representation in the AI ecosystem.

What advice would you give to women looking to enter the AI field?

As I alluded to in the previous question, some of the predominantly male-dominated spaces within AI that I've encountered have also been the most technical. While technical acumen should not be prioritized over other forms of literacy in the AI field, I have found that technical training benefits both my confidence and my effectiveness in such spaces. We need equal representation in technical roles alongside openness to the expertise of people from other fields, like civil rights and politics, that have more balanced representation. At the same time, equipping more women with technical literacy is key to balancing representation in the AI field.

I have also found it enormously rewarding to connect with women in the AI field who have navigated balancing family and professional life. Finding role models to talk with about big questions related to career and parenthood, as well as the unique challenges women still face in the workplace, has made me feel better equipped to handle those challenges when they arise.

What are the most pressing issues facing AI as it evolves?

As AI evolves, questions of truth and trust online and offline become increasingly tricky. With content ranging from images to videos to text able to be generated or modified by AI, is seeing still believing? How can we rely on evidence if documents can be easily and realistically doctored? Can we have human-only spaces online if it is so easy to imitate a real person? How do we navigate the trade-offs AI presents between freedom of expression and the possibility that AI systems cause harm? More broadly, how do we ensure that the information environment is not shaped by a select few companies and those who work for them, but instead incorporates the perspectives of stakeholders from around the world, including the public?

Alongside these specific questions, PAI has been involved in other facets of AI and society, including how we consider fairness and bias in an age of algorithmic decision-making, how labor impacts and is impacted by AI, how to navigate the responsible deployment of AI systems, and even how to make AI systems more reflective of myriad perspectives. At a structural level, we must consider how AI governance can navigate vast trade-offs by incorporating varied perspectives.

What issues should AI users be aware of?

First, AI users need to know that if something sounds too good to be true, it probably is.

The generative AI boom of the past year has, of course, reflected tremendous ingenuity and innovation, but it has also led to public messages about AI that are often hyperbolic and inaccurate.

AI users should also understand that AI is not revolutionary so much as it exacerbates and amplifies existing problems and opportunities. This does not mean taking AI less seriously, but rather using this knowledge as a useful foundation for navigating an increasingly AI-infused world. For example, if you are concerned that people can decontextualize a video from before an election by changing its caption, you should also be concerned about the speed and scale at which deepfake technology can mislead. If you are concerned about surveillance in the workplace, you should also consider how AI will make such surveillance easier and more pervasive. Maintaining healthy skepticism about the novelty of AI problems, while being honest about what is distinct about the present moment, provides a helpful frame for users' encounters with AI.

What is the best way to build AI responsibly?

Building AI responsibly requires broadening our notions of who plays a role in “building” AI. Of course, influencing technology companies and social media platforms is an important way to influence the impact of AI systems, and these institutions are essential to building technology responsibly. At the same time, we need to recognize how building responsible AI that serves the public interest requires continuing to engage diverse institutions from civil society, industry, media, academia, and the general public.

For example, consider the responsible development and deployment of synthetic media.

While a technology company may be concerned about its liability in determining how a synthetic video could influence users before an election, a journalist may worry about fakes posing as content from trusted news brands. Human rights defenders might consider responsibility related to how AI-generated media reduces the impact of videos as evidence of abuses. And artists might be excited by the opportunity to express themselves through generative media while also worrying about how their creations could be leveraged without their consent to train AI models that produce new media. These varied considerations show how vital it is to involve different stakeholders in efforts to build AI responsibly, and how countless institutions both affect, and are affected by, the way AI is integrated into society.

How can investors more effectively promote responsible AI?

Years ago, I heard DJ Patil, the former chief data scientist in the White House, describe a revision to the "move fast and break things" mantra of the early social media era that has stuck with me ever since. He suggested that those in the field "move purposefully and fix things."

I love this because it implies not stagnation or abandoning innovation, but intentionality, and the possibility of innovating while being responsible. Investors should help steer this mentality, giving their portfolio companies more time and space to embed responsible AI practices without stifling progress. Founders often describe limited time and tight deadlines as the limiting factor for doing the "right" thing, and investors can be a major catalyst for changing this dynamic.

The longer I have worked in AI, the more I have found myself grappling with deeply human questions. And these are questions all of us need to answer.


