To shine a well-deserved and long-overdue spotlight on women academics and others focused on AI, TechCrunch is launching an interview series highlighting notable women who have contributed to the AI revolution.
Annika Collier Navaroli is a senior research associate at Columbia University’s Tow Center for Digital Journalism and a Technology Public Voices Fellow at the OpEd Project in collaboration with the MacArthur Foundation.
She is known for her research and advocacy in the technology space. Previously, she served as a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society, and before that she led trust and safety work at Twitter and Twitch. Navaroli may be best known for her congressional testimony about Twitter, in which she described how the company ignored internal warnings of imminent violence on social media that presaged the January 6 attack on the Capitol.
Briefly, how did you get your start in AI? What attracted you to the field?
Nearly 20 years ago, I was working as a copy clerk in the newsroom of my local newspaper during the summer it went digital. At the time, I was an undergraduate journalism student. Social media sites like Facebook were sweeping across my campus, and I became obsessed with understanding how laws built around the printing press would evolve with emerging technologies. That curiosity led me to law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements play out. I brought it all together to write my master's thesis on how new technologies were changing the flow of information and how society exercised freedom of expression.
After graduating, I worked at a few law firms before landing at the Data & Society Research Institute, where I led the new think tank's research on what was then called “big data,” civil rights, and fairness. My work there examined how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and creating unintended consequences that harmed marginalized communities. I then went on to work at Color of Change, where I led the first civil rights audit of a tech company, wrote the organization's playbook for tech accountability campaigns, and advocated for tech policy changes with governments and regulators. From there, I became a senior policy practitioner on the Trust & Safety teams at Twitter and Twitch.
What work in AI are you most proud of?
What I am most proud of is my work inside tech companies, using policy to practically shift the balance of power and correct bias within the algorithmic systems that produce culture and knowledge. At Twitter, I led campaigns to verify individuals who had previously been excluded from the platform's exclusive verification process, including Black women, people of color, and queer folks. This included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. At the time, verification meant that your name and content became part of Twitter's core algorithm, because tweets from verified accounts were injected into recommendations, search results, and home timelines, and contributed to the creation of trends. So our work to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and brought new ideas into the public conversation at a critical moment.
I'm also extremely proud of the research I conducted at Stanford, which came together as Black in Moderation. When I was working in tech, I realized that no one was writing or talking about what I was experiencing every day as a Black person working in Trust & Safety. So when I left the industry and returned to academia, I decided to speak with Black tech workers and bring their stories to light. The study was the first of its kind and has sparked many new and important conversations about the experiences of tech employees with marginalized identities.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a Black queer woman, navigating male-dominated spaces and spaces where I am othered has been part of my entire life journey. Within tech and AI, I find the most challenging aspect to be what I call in my research “forced identity labor,” a term I coined to describe the frequent situations in which employees with marginalized identities are treated as the voices of, and representatives for, entire communities that share their identities.
Because of the high stakes that come with developing new technology like AI, that labor can feel nearly impossible to escape. I've had to learn to set very specific boundaries for myself about what issues I am willing to engage with and when.
What are the most pressing issues facing AI as it evolves?
Investigative reports suggest that current generative AI models are gobbling up all the data on the internet and will soon run out of available data, so the world's largest AI companies are turning to synthetic data — information generated by the AI itself, rather than humans — to keep training their systems.
This idea sent me down a rabbit hole, so much so that I recently wrote an op-ed arguing that the use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their output replicates bias and creates false information. Training new systems with synthetic data would therefore mean constantly feeding biased and inaccurate outputs back into those systems as new training data. I described this as a potential feedback loop to hell.
After I wrote this article, Mark Zuckerberg hailed Meta’s updated Llama 3 chatbot, which is partially powered by synthetic data, as the “most intelligent” generative AI product on the market.
What issues should AI users be aware of?
AI is ubiquitous in our modern lives, from spell check and social media feeds to chatbots and image generators. In many ways, society has become a guinea pig for this new and untested technology. But AI users should not feel powerless.
I have argued that technology advocates should band together and organize AI users to call for a public pause on AI. I think the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that AI does not have to be an existential threat to our future if we pause now to right the wrongs of the past and create new ethical guidelines and regulations.
What is the best way to build AI responsibly?
My experience working inside tech companies showed me how much it matters who is in the room writing policies, presenting arguments, and making decisions. My path also showed me that the skills I developed in journalism school equipped me to succeed in the tech industry. Now I'm back working at Columbia Journalism School, and I'm interested in holding technology accountable and training the next generation of people who will responsibly develop AI, both inside tech companies and as outside watchdogs.
I think [journalism] school gives people unique training in interrogating information, seeking truth, considering multiple viewpoints, making logical arguments, and distilling fact and reality from opinion and misinformation. That is a solid foundation for the people who will be responsible for writing the rules for what the next generation of AI can and cannot do, and I look forward to paving a smoother path for those who come next.
I also believe that, in addition to a skilled trust and safety workforce, the AI industry needs outside regulation. In the U.S., I have argued that this should come in the form of a new agency to regulate American technology companies, with the power to establish and enforce baseline safety and privacy standards. I also want to continue working to connect current and future regulators with former tech workers who can help those in power ask the right questions and craft new solutions that are nuanced and practical.