To give women academics and others focused on AI their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year spotlighting key work that often goes unrecognized.
Kristin Gloria leads the Aspen Institute's Emergent and Intelligent Technology Initiative; the Aspen Institute is a Washington, D.C.-based think tank focused on values-based leadership and policy expertise. Gloria earned her Ph.D. in cognitive science and a master's degree in media studies, and her past work includes research at MIT's Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab, and UC Berkeley's Center for Society, Technology and Policy.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
To be honest, I absolutely did not start my career with the aim of working in AI. What came first was an interest in understanding the intersection of technology and public policy. At the time, I was working on my master's in media studies, exploring ideas around remix culture and intellectual property. I was also living and working in Washington, D.C. as an Archer Fellow with the New America Foundation. I distinctly remember sitting in a room full of public policymakers and politicians one day as they threw around a term that had no real technical definition behind it. It was shortly after that meeting that I realized that to change the direction of public policy, I would need the credentials. So I went back to school and earned my Ph.D. in cognitive science with a focus on semantic technologies and online consumer privacy. I was extremely fortunate to find mentors, advisors, and labs that encouraged an interdisciplinary understanding of how technology is designed and built. There, I sharpened my technical skills while developing a more critical perspective on the many ways technology intersects with our lives. In my role as director of AI at the Aspen Institute, I later had the privilege of ideating, engaging, and collaborating with some of the leading thinkers in AI. And I always found myself drawn to those who take the time to deeply question if and how AI will impact our day-to-day lives.
Over the years, I've led various AI initiatives, but one of the most meaningful is just getting started. Now, as a founding team member and director of strategic partnerships and innovation at Young Futures, a new nonprofit, I'm excited to weave this kind of thinking into our mission of making the digital world an easier place to grow up. Specifically, as generative AI becomes table stakes and new technologies come online, it is both urgent and critical that we help preteens, teens, and their support units navigate this vast digital wilderness together.
What work (in the AI field) are you most proud of?
There are two initiatives I'm most proud of. The first is my work surfacing the tensions, pitfalls, and effects of AI on marginalized communities. “The Power and Progress of Algorithmic Bias,” published in 2021, distills months of stakeholder engagement and research on this issue. In the report, we pose one of my all-time favorite questions: “How can we (data and algorithm operators) rebuild our own models to forecast a different future, one that centers the needs of the most vulnerable?” Safiya Noble originally posed that question, and it is a consideration I carry throughout my work. The second important initiative came recently, during my time as head of data at Blue Fever, a company on a mission to improve youth well-being in a judgment-free, inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in the process. Most saliently, I gained a profound new appreciation for the impact virtual companions can have on people who are struggling and who don't have support systems in place. Blue was designed and built to bring a “sibling energy” that guides users to reflect on their mental and emotional needs.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Unfortunately, the challenges are real and still very current. I've experienced my fair share of disbelief in my skills and experience from all types of colleagues in the space. But for every one of those negative challenges, I can point to an example of a male colleague being my fiercest cheerleader. It's a tough environment, and I hold on to those examples to help me navigate it. I also think so much has changed in this space even in the past five years. The necessary skill sets and professional experience that qualify as part of “AI” are no longer strictly computer science focused.
What advice would you give to women looking to enter the AI field?
Enter and follow your curiosity. This space is in constant motion, and the most interesting (and perhaps most productive) pursuit is to remain critically optimistic about the field itself.
What are the most pressing issues facing AI as it evolves?
I actually think some of the most pressing issues facing AI are the same ones we haven't quite gotten right since the web was first introduced: issues of agency, autonomy, privacy, fairness, and equity. These are core to how we position ourselves amid the machines. Yes, AI can make these vastly more complicated, but so can sociopolitical shifts.
What issues should AI users be aware of?
AI users should be aware of how these systems complicate or enhance their own agency and autonomy. In addition, as the debate continues over how technology, and AI in particular, affects our well-being, it's important to remember that there are tried-and-true tools for managing the more negative outcomes.
What is the best way to build AI responsibly?
Responsible AI building is more than just the code. A truly responsible build also takes into account design, governance, policy, and business model. Each drives the others, and we will continue to fall short if we try to address only one part of the build.
How can investors better promote responsible AI?
One specific task, which I admire Mozilla Ventures for requiring in its diligence, is the AI model card. Developed by Timnit Gebru and others, this practice of creating model cards lets teams, funders included, assess the risks and safety issues of the AI models used in a system. Also related to the above, investors should holistically evaluate a system's capacity and ability to be built responsibly. For example, if you have trust and safety features baked into the build and publish model cards, but your revenue model exploits vulnerable population data, there's a misalignment with your intention as an investor. I do believe you can build responsibly and still profit. Lastly, I would love to see more collaborative funding opportunities among investors. In the space of well-being and mental health, the solutions will be varied and vast, because no one person is the same and no single solution will work for everyone. Collective action from investors interested in solving the problem would be welcome.