To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish pieces throughout the year spotlighting key work that often goes unrecognized. Read more profiles here.
Emilia Gomez is a Principal Scientist at the European Commission's Joint Research Centre and Scientific Coordinator of AI Watch, the EC's initiative to monitor the progress, diffusion and impact of AI in Europe. Her team contributes scientific and technical knowledge to EC AI policy, including the recently proposed AI Act.
Gomez's research is grounded in computational music, a field that contributes to our understanding of how humans describe music and how it can be modeled digitally. Starting from the music domain, Gomez investigates the impact of AI on human behavior, in particular on jobs, decision-making, and children's cognitive and socio-emotional development.
Q&A
In short, how did you get started in AI? What attracted you to this field?
I started my research in AI as a developer of algorithms for automatically describing music audio signals in terms of melody, tonality, similarity, style, and emotion, which are used in applications ranging from music platforms to education. I began investigating how to design novel machine learning approaches for different computational tasks in the music field, and the relevance of the data pipeline, including dataset creation and annotation. What I loved about machine learning at the time was its modeling capabilities and the shift from knowledge-driven to data-driven algorithm design. For example, instead of designing descriptors based on acoustic or musical knowledge, we now use our know-how to design datasets, architectures, and training and evaluation procedures.
From my experience as a machine learning researcher, and from seeing my algorithms “in action” in a variety of domains, from music platforms to symphonic concerts, I came to recognize the significant impact those algorithms have on people, such as listeners and musicians, and I redirected my research toward AI evaluation rather than development, with particular emphasis on studying the impact of AI on human behavior and on how to evaluate systems in terms of aspects like fairness, human oversight, and transparency. This is the current research topic of my team at the Joint Research Centre.
What work (in the AI field) are you most proud of?
Academically and technically, I am proud of my contributions to music-specific machine learning architectures at the Music Technology Group in Barcelona, which advanced the state of the art in the field, as reflected in my citation record. For example, during my Ph.D. I proposed a data-driven algorithm to extract tonality from an audio signal (e.g., whether a piece of music is in C major or D minor), which became an important reference in the field. Later, I co-designed machine learning methods for the automatic description of music signals in terms of melody (used, for example, to search for songs by humming) and tempo, and for modeling the emotions that music evokes. Most of these algorithms are now integrated into Essentia, an open-source library for audio and music analysis, description, and synthesis, and are leveraged in many recommender systems.
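For readers curious what this kind of analysis looks like in practice, here is a minimal sketch, assuming Essentia's Python bindings are installed, that estimates the key (tonality) of an audio file. The file path is a placeholder and algorithm defaults may differ across Essentia versions.

```python
# Minimal sketch: estimating the key (tonality) of a track with Essentia's
# Python bindings. "track.mp3" is a placeholder path.
import essentia.standard as es

# Load the audio as a mono signal resampled to 44.1 kHz.
audio = es.MonoLoader(filename="track.mp3", sampleRate=44100)()

# KeyExtractor returns the estimated key, its scale (major/minor) and a
# confidence value for the estimate.
key, scale, strength = es.KeyExtractor()(audio)

print(f"Estimated key: {key} {scale} (confidence {strength:.2f})")
```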
I am particularly proud of Banda Sonora Vital (LifeSoundTrack), a Red Cross Award-winning humanitarian technology project in which we developed a personalized music recommender for elderly Alzheimer's patients. I also coordinated PHENICX, a large project funded by the European Union (EU) on the use of AI to create enriched symphonic music experiences.
I love the music computing community and was thrilled to serve as the first female president of the International Society for Music Information Retrieval. I have a special interest in increasing diversity in the field and have contributed to that effort throughout my career.
Currently, in my role at the Commission, which I joined as Chief Scientist in 2018, I provide scientific and technical support to the AI policies developed in the EU, in particular the AI Act. From this recent work, which is less visible in terms of publications, I am proud of my modest technical contributions to the AI Act. I say “modest” because, as you can imagine, many people are involved here. As an example, I have contributed to the harmonization between legal and technical terminology (e.g., proposing definitions based on the existing literature) and to assessing the practical implementation of legal requirements, such as transparency and technical documentation, for high-risk AI systems, general-purpose AI models, and generative AI.
I'm also extremely proud of my team's work in support of the EU AI Liability Directive, where we studied particular characteristics that make AI systems intrinsically risky, such as lack of causality, opacity, unpredictability, and their self- and continuous-learning capabilities, and assessed the associated difficulties these present when it comes to proving causation.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
It's not only tech; I also work in male-dominated AI research and policy environments. I don't have any particular technique or strategy, as this is the only environment I know; I don't know what it is like to work in a diverse or female-dominated workplace. As the Beach Boys sing, “Wouldn't it be nice?” Honestly, I try to avoid frustration and to have fun in this challenging scenario, enjoying the opportunity to work in a world dominated by very assertive men and to collaborate with the excellent women in the field.
What advice would you give to women looking to enter the AI field?
I tell them two things:
You are sorely needed: come into our field, because there is an urgent need for diversity of visions, approaches, and ideas. For instance, according to the divinAI project (a project I co-founded to monitor diversity in the AI field), only 23% of the author names at the 2023 International Conference on Machine Learning were women, and the share at another major AI conference the project tracks was only 29%, regardless of gender identity.
You are not alone. There are many women, non-binary colleagues, and male allies in this field, even if we are not always as visible or recognized. Seek them out for guidance and support. In this context, there are many affinity groups in the research field. For instance, when I was president of the International Society for Music Information Retrieval, I was actively involved in the Women in Music Information Retrieval initiative, a pioneer of diversity efforts in music computing with a highly successful mentoring program.
What are the most pressing issues facing AI as it evolves?
In my opinion, researchers should devote as much effort to AI evaluation as they do to AI development, as there is currently an imbalance. The research community is so busy advancing the state of the art in terms of AI capabilities and performance, and so excited to see its algorithms used in the real world, that it forgets to carry out proper evaluation, impact assessment, and external audits. The more intelligent an AI system becomes, the more intelligent its evaluation should be. The field of AI evaluation is understudied, and this explains many of the incidents that give AI a bad reputation, such as gender and racial biases present in datasets and algorithms.
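As a deliberately simplified illustration of one slice of such evaluation, the sketch below computes two common group-fairness statistics, the demographic parity difference and the true positive rate gap, from a model's binary predictions; the data, group labels, and function names are hypothetical and chosen only for the example.

```python
# Simplified sketch of bias evaluation: group-fairness metrics computed
# from a model's binary predictions. All data here are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def true_positive_rate_gap(y_true, y_pred, group):
    """Largest gap in recall (true positive rate) across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Hypothetical labels and predictions for eight individuals in two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("True positive rate gap:", true_positive_rate_gap(y_true, y_pred, group))
```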
What issues should AI users be aware of?
Citizens using AI-powered tools such as chatbots need to know that AI is not magic. Artificial intelligence is a product of human intelligence. To challenge AI algorithms and use them responsibly, people need to learn about their working principles and limitations. It is also important for the public to be informed about the quality of AI products, how they are evaluated and certified, and which ones they can trust.
What is the best way to build AI responsibly?
In my opinion, the best way to develop an AI product in a responsible manner, with positive social and environmental impact, is to invest the necessary resources in evaluating its social impact and assessing its risks (e.g., to fundamental rights) before placing the system on the market. This benefits not only the trust placed in companies and their products, but also society.
Responsible or trustworthy AI is a way of building algorithms in which aspects such as transparency, fairness, human oversight, and social and environmental well-being are addressed from the very beginning of the AI design process. In this sense, the AI Act not only sets a standard for the regulation of artificial intelligence around the world, but also reflects the European emphasis on trustworthiness and transparency, enabling innovation while protecting citizens' rights. I feel this will increase public trust in the products and the technology.