As part of TechCrunch's ongoing "Women in AI" series, which gives AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch interviewed Dr. Rebecca Portnoff, vice president of data science at Thorn, a nonprofit that builds technology to protect children from sexual abuse.
She attended Princeton University and then earned a PhD in computer science from the University of California, Berkeley. She has steadily risen through the ranks at Thorn, where she has worked since 2016. She started as a volunteer researcher, and now, eight years later, she leads what is probably the only team in the world dedicated to building machine learning and artificial intelligence to deter and prevent child sexual abuse and protect children from it.
“When I was a senior at Princeton University and thinking about what I was going to do after graduation, my sister recommended that I read Half the Sky by Nicholas Kristof and Sheryl WuDunn, which introduced me to the subject of sexual abuse,” she told TechCrunch. She said the book inspired her to learn how she could make a difference in this field, and she went on to write her doctoral thesis focusing specifically on the use of machine learning and AI in this space.
Mission to protect children
At Thorn, Portnoff's team works to identify victims, stop re-victimization and prevent the viral spread of sexual abuse material. She led Thorn and All Tech Is Human's joint Safety by Design initiative last year, which seeks to prevent people from using generative AI to sexually harm children.
“The initiative works to prevent generative models from producing abuse material, to make such material more reliably detectable, and to prevent the distribution of the models, services and apps used to create it. It was a huge step forward in jointly defining principles and mitigations for the industry and in aligning industry leaders to commit to those standards,” she recalled. She said she met many people dedicated to the cause. “But I have more gray hair than when it all started.”
As TechCrunch previously reported, the use of AI to create non-consensual sexual images has become a hot topic of discussion, especially as AI-generated pornography grows more sophisticated. Currently, there is no comprehensive federal law that protects against or prevents AI-generated sexual images made without a person's consent. However, states such as Florida, Louisiana and New Mexico have passed their own legislation specifically targeting AI-generated child abuse material.
In fact, she said this is one of the most pressing issues facing AI as it evolves. “One in 10 minors reported knowing of an incident in which a classmate produced nude images of another child,” she said.
“We don't have to live in this reality, and it is unacceptable that we have already allowed it to get this far,” she said. Still, she said there are mitigations that can be put in place to prevent and reduce this abuse. For example, Thorn advocates that tech companies publicly share how they are adopting safety-by-design principles and mitigations to keep their generative AI technologies and products from being misused to facilitate child sexual abuse. The nonprofit also works with the Institute of Electrical and Electronics Engineers (IEEE) and the National Institute of Standards and Technology (NIST) to help set standards that companies can use to audit their progress, and it engages with policymakers to explain why this matters.
“Legislation based on impact will be needed to bring all businesses and stakeholders on board,” she said.
Working as a woman in the AI field
As she rose through the ranks of building AI, Portnoff recalled people dismissing her advice and asking to speak with someone who had a technical background instead. “My response? ‘Don't worry, you're talking to someone with a technical background,’” she said.
She said several things have helped her work in a male-dominated field: being prepared, acting with confidence and assuming good intent. Being prepared allows her to walk into a room with more confidence, and that confidence lets her meet challenges with curiosity and boldness, “seeking first to understand and then to be understood,” she continued.
“Assuming good intent allows us to approach challenges with kindness rather than defensiveness,” she said. “If that good intent truly isn't there, it will eventually become apparent.”
Her advice to women looking to enter AI is to always believe in your ability and your significance. She said it's easy to fall into the trap of letting people's assumptions about you define your potential, but everyone's voice will be needed in the current AI revolution.
“As ML/AI becomes more integrated into our human systems, all of us need to work together to ensure it is done in a way that builds our collective prosperity and prioritizes the most vulnerable among us.”
Building ethical AI
Portnoff said there are many aspects to responsible AI, including the need for transparency, fairness, trust and safety. “But they all have one thing in common,” she continued. “Building ML/AI responsibly requires engaging more stakeholders than just your fellow engineers.”
This means more active listening and collaboration. “If you're following a roadmap for building responsible AI and find yourself not talking to anyone outside of your organization or engineering team along the way, you're probably heading in the wrong direction.”
And as investors continue to pour billions of dollars into AI startups, Portnoff suggested they should start considering responsibility as early as the due diligence stage, examining a company's commitment to ethics before making an investment and requiring that certain standards be met. This, she said, can “prevent harm and enable positive growth.”
“There is a lot of work to be done,” she said, speaking broadly. “And you are the person who can make it happen.”