To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we'll be publishing several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.
Allison Cohen is a senior applied AI project manager at Mila, a Quebec-based community of more than 1,200 researchers specializing in AI and machine learning. She collaborates with researchers, social scientists, and external partners to develop AI projects that benefit society. Cohen's portfolio of work includes a tool to detect misogyny, an app to identify the online activity of suspected human trafficking victims, and an agricultural app to encourage sustainable farming practices in Rwanda.
Previously, she was co-director of AI drug discovery at the Global Partnership on Artificial Intelligence, an organization that guides the responsible development and use of AI. She was also an AI strategy consultant at Deloitte and a project consultant at the International Digital Policy Center, an independent Canadian think tank.
Q&A
In short, how did you get started in AI? What attracted you to this field?
The realization that everything from facial recognition to negotiating trade deals can be modeled mathematically changed the way I saw the world. That's what made AI so appealing to me. Ironically, now that I work in the field of AI, I've learned that algorithms can't, and often shouldn't, capture this type of phenomenon.
I was exposed to the field while completing my master's degree in global affairs at the University of Toronto. The program was designed to teach students how to navigate the systems that shape the world order, from macroeconomics to international law to human psychology. As I learned more about AI, though, I realized how important it would become to world politics and how important it was to educate myself on the subject.
What pushed me to enter the field was an essay contest. For the contest, I wrote a paper describing how psychedelic drugs would help humans stay competitive in a labor market filled with AI, which qualified me to attend the 2018 St. Gallen Symposium (it was a creative writing piece). My invitation and subsequent participation in that event gave me the confidence to keep pursuing my interest in the field.
What work are you most proud of in the AI field?
One of the projects I managed involved building a dataset containing examples of both subtle and overt expressions of bias against women.
For this project, I staffed and managed a multidisciplinary team of natural language processing experts, linguists, and gender studies specialists throughout the entire project lifecycle, and that's something I'm very proud of. I learned firsthand why this process is fundamental to building responsible applications, and also why it isn't done enough: it's hard work. If you can support each of these stakeholders in communicating effectively across disciplines, you can facilitate work that blends decades-long traditions from the social sciences with cutting-edge developments in computer science.
I'm also proud that this project was well received by the community. One of our papers received a spotlight at the Socially Responsible Language Modeling Workshop at NeurIPS, one of the leading AI conferences. The work also inspired a similar interdisciplinary process managed by AI Sweden, which adapted it to fit Swedish notions and expressions of misogyny.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
It's unfortunate that such a cutting-edge industry still has problematic gender dynamics. And it's not just women who are negatively affected; it's a loss for all of us. I've been quite inspired by a concept called “feminist standpoint theory,” which I learned about in Sasha Costanza-Chock's book Design Justice.
This theory asserts that marginalized communities, whose knowledge and experience do not benefit from the same privileges as others, have an awareness of the world that can bring about fair and inclusive change. Of course, not all marginalized communities are the same, nor are the experiences of individuals within those communities.
That said, diverse perspectives among these groups are essential to navigating, challenging, and dismantling structural challenges and inequities of all kinds. That's why failing to include women risks making the field of AI exclusionary for an even broader swath of the population, while also reinforcing power dynamics outside the field.
I have found allies to be extremely important in navigating a male-dominated industry. These allyships are the product of strong relationships built on trust. For example, I'm very lucky to have friends like Peter Kurzwelly, who shared his podcasting expertise to help create a female-led and female-centered podcast, The World We're Building. The podcast allows us to elevate the work of even more women and non-binary people in AI.
What advice would you give to women looking to enter the AI field?
Find the open door. It doesn't have to be paid, it doesn't have to be a career, it doesn't even have to match your background or experience. Once you find an opening, you can use it to hone your voice in that space and build from there. If you are volunteering, give it your all. This will help you stand out and hopefully get paid for your work as soon as possible.
Of course, being able to volunteer is a privilege, and I want to acknowledge that.
When I lost my job during the pandemic, Canada's unemployment rate was at an all-time high. Very few companies were hiring AI talent, and those that were hiring were not looking for global affairs students with eight months of consulting experience. While applying for jobs, I started volunteering with an AI ethics organization.
One of the projects I worked on while volunteering examined whether copyright protection should apply to AI-generated art. To better understand the space, I reached out to a lawyer at a Canadian AI law firm. She connected me with someone at CIFAR, who connected me with Benjamin Prud'homme, the executive director of Mila's AI for Humanity team. It's amazing to think that, through this series of interactions about AI art, I learned about a career opportunity that went on to change my life.
What are the most pressing issues facing AI as it evolves?
My answer to this question is threefold, and the three parts are somewhat interrelated. I think we need to figure out the following:
First, how do we reconcile the fact that AI is built to scale with the need to adapt the tools we build to local knowledge, experience, and needs?
Second, if we want to build tools that are adapted to local contexts, we need to bring anthropologists and sociologists into the AI design process. But many incentive structures and other obstacles impede meaningful interdisciplinary collaboration. How can we overcome them?
Third, how can we shape the design process even more profoundly than simply incorporating multidisciplinary expertise? Specifically, how can we change the incentives so that we design tools for those who need them most urgently, rather than for those whose data or business is most profitable?
What issues should AI users be aware of?
I think labor exploitation is one of the issues that doesn't get enough coverage. Many AI models learn from labeled data using supervised learning techniques. If a model relies on labeled data, someone has to do that tagging (i.e., someone adds the label “cat” to an image of a cat). These people, known as annotators, are often subject to exploitative practices. For models that don't require data to be labeled during training (as is the case with some generative AI and other foundation models), datasets can be built exploitatively because developers often don't obtain consent from, compensate, or credit the data creators.
I encourage you to check out the work of Krystal Kauffman, who I was thrilled to see featured in this TechCrunch series. She advocates for annotators' labor rights, including a living wage, an end to “mass rejection” practices, and engagement practices that align with fundamental human rights (in response to developments such as intrusive surveillance).
What is the best way to build AI responsibly?
People often turn to ethical principles for AI to argue that their technology is responsible. Unfortunately, ethical considerations can only begin after many decisions have already been made, including but not limited to:
What are you making? How are you building it? How will it be rolled out?
If we wait until these decisions are made, we will miss countless opportunities to build responsible technology.
In my experience, the best way to build responsible AI is to recognize early in the process how the problem is defined and whose interests it serves. How does that orientation support or challenge existing power dynamics? And which communities will be empowered or disempowered by the use of AI?
If we want to create meaningful solutions, we must carefully navigate these power systems.
How can investors more effectively promote responsible AI?
Ask about the team's values. Teams are more likely to embed responsible practices when their values are at least partially defined by the local community and there is some level of accountability to that community.