To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year spotlighting key work that often goes unrecognized. Read more profiles here.
Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at the University of California, Berkeley, which supports interdisciplinary research addressing questions about the role of regulation in promoting innovation. She also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms, and society, as well as the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.
In her spare time, Nonnecke hosts TecHype, a video and podcast series that analyzes emerging technology policies, regulations, and laws, provides insight into their benefits and risks, and identifies strategies for leveraging technology for good.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I've been working on responsible AI governance for nearly a decade. My training in technology and public policy, and how the two intersect to shape societal impact, drew me to the field. AI is already here, and it's having a major impact on our lives, for better or worse. What matters to me is not sitting on the sidelines but contributing meaningfully to help society use this technology for good.
What work (in the AI field) are you most proud of?
I'm really proud of two things we've accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure the responsible procurement and use of AI. We take seriously our commitment to serving the public responsibly. I had the honor of co-chairing the University of California Presidential Working Group on AI and, subsequently, the permanent AI Council. In these roles, I've gained first-hand experience thinking through how best to operationalize our responsible AI principles to protect our faculty, students, and the broader communities we serve. Second, I think it's critical that the public understands emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Be curious, be persistent, and don't let imposter syndrome get in your way. I've found it important to seek out leaders who support diversity and inclusion, and to provide that same support to others entering the field. Building inclusive communities in tech is a powerful way to share experiences, advice, and encouragement.
What advice would you give to women looking to enter the AI field?
My advice for women entering the AI field is threefold: Seek out knowledge constantly, because AI is a rapidly evolving field. Leverage networking, as connections open doors to opportunities and provide valuable support. And advocate for yourself and others, because your voice is essential to shaping an inclusive and equitable future for AI. Remember that your unique perspectives and experiences enrich the field and drive innovation.
What are the most pressing issues facing AI as it evolves?
I think one of the most pressing challenges facing AI as it evolves is avoiding getting caught up in the latest hype cycle. We're seeing this now with generative AI. Certainly, generative AI will bring significant advances and have enormous impact, both good and bad. But other forms of machine learning are already being used today to quietly make decisions that directly affect everyone's ability to exercise their rights. Rather than fixating on the latest marvels of machine learning, it's more important to focus on where and how machine learning is being applied, regardless of its technical sophistication.
What issues should AI users be aware of?
AI users need to be aware of data privacy and security issues, the potential for bias in AI decision-making, and the importance of transparency in how AI systems operate and make decisions. Understanding these issues will enable users to demand more responsible and fair AI systems.
What is the best way to build AI responsibly?
Building AI responsibly requires integrating ethical considerations at every stage of development and deployment. This includes engaging diverse stakeholders, using transparent methodologies, adopting bias-mitigation strategies, and conducting ongoing impact assessments. It is fundamental to prioritize the public interest and to ensure that AI technologies are developed with human rights, equity, and inclusion at their core.
How can investors more effectively promote responsible AI?
This is such an important question! For too long, the role of investors hasn't been explicitly discussed, and I can't stress enough how much influence investors have. I believe the trope that regulation stifles innovation is overused and often untrue. Rather, I strongly believe smaller firms can enjoy a late-mover advantage, learning from the larger AI companies that have developed responsible AI practices and from the guidance emerging from academia, civil society, and government. Investors have the power to shape the direction of the industry by making responsible AI practices a key factor in their investment decisions. That includes supporting initiatives that address societal challenges through AI, promoting diversity and inclusion within the AI workforce, and advocating for strong governance and technology strategies that ensure AI technologies benefit society as a whole.