“Move carefully and be on the red team” is unfortunately not as catchy as “move fast and break things.” But three AI safety advocates made it clear to startup founders that moving too quickly could lead to ethical issues in the long run.
“We're at a tipping point where a lot of resources are moving into this space,” said Sarah Myers West, co-executive director of the AI Now Institute, on stage at TechCrunch Disrupt 2024. She warned against rushing products into the world without considering the long-standing questions of what kind of world we really want to live in, and whether the technology being produced serves that world or actively causes harm.
This conversation comes at a time when the issue of AI safety feels more pressing than ever. In October, the family of a child who died by suicide sued chatbot company Character.AI for its role in the child's death.
“This story illustrates the significant risks posed by the very rapid deployment of AI-based technologies that we have seen,” said Myers West. “Some of these are long-standing, largely intractable problems of content moderation and online abuse.”
But beyond these life-or-death issues, the risks of AI remain high, from misinformation to copyright infringement.
“We're building something that has great power and the ability to really, really impact people's lives,” said Jingna Zhang, founder of Cara, a social platform for artists. “When you talk about something like Character.AI that engages someone emotionally, it makes sense to me that there should be guardrails on how that product is built.”
Zhang's platform Cara took off after Meta revealed that it could use any user's public posts to train its AI. For artists like Zhang herself, that policy was a slap in the face. Artists need to post their work online to build a following and attract potential clients, but in doing so, their work can be fed into the very AI models that could one day cost them their jobs.
“Copyright protects us and allows us to earn a living,” Zhang said. Even if artwork is available online, that doesn't mean it's free to use: digital news publications, for example, must license images from photographers. “As generative AI starts to become more mainstream, what we're seeing is that it doesn't follow what we're normally used to, which is established in law. If they want to use our work, they have to license it.”
Aleksandra Pedraszewska (AI safety, Eleven Labs), AI Now Institute co-executive director Sarah Myers West, and Cara founder and CEO Jingna Zhang speak at TechCrunch Disrupt 2024 on Wednesday, October 30, 2024. Image credit: Katelyn Tucker/Slava Blade Photography
Artists could also be affected by products like those of Eleven Labs, an AI voice cloning company valued at more than $1 billion. Aleksandra Pedraszewska, head of safety at Eleven Labs, is responsible for making sure the company's sophisticated technology isn't used for things like non-consensual deepfakes.
“I think red-teaming models, understanding undesirable behaviors, and the unintended consequences of any new launch by a generative AI company is again becoming [a top priority],” she said. “Eleven Labs currently has 33 million users, which is a huge community that is affected by every change we make to the product.”
Pedraszewska said one way people in her role can be more proactive in keeping the platform safe is by building closer relationships with the user community.
“We cannot operate between two extremes, one being entirely anti-AI and anti-GenAI, and the other effectively trying to persuade the space to adopt zero regulation. I think we need to meet in the middle when it comes to regulation,” she said.