To shine a well-deserved and long-overdue spotlight on women academics and others focused on AI, TechCrunch is launching an interview series highlighting notable women who have contributed to the AI revolution.
Sarah Bitamazile is Chief Policy Officer at boutique consulting firm Lumiera and writes for the Lumiera Loop, a newsletter focused on AI literacy and responsible AI adoption.
She previously worked as a policy advisor in Sweden, focusing on gender equality, foreign policy, and security and defense policy.
To start, how did you get started working in AI? What attracted you to the field?
AI found me! AI is having an ever-increasing impact on the fields I am deeply involved in, and understanding its value and its challenges has become essential to providing sound advice to senior decision-makers.
First, in defense and security, where AI is being used both in research and development and in actual warfare. Second, in arts and culture, where creators were among the first groups to recognize both the added value and the challenges of AI. They helped bring to light the copyright issues that have surfaced, such as the ongoing lawsuits in which several newspapers are suing OpenAI.
You know something is having a major impact when leaders from very different backgrounds and issues are increasingly asking their advisors, “Can you explain this to me? Everyone's talking about it.”
What work in AI are you most proud of?
We recently worked with a client whose attempt to integrate AI into their R&D workstream was unsuccessful. Lumiera developed an AI integration strategy with a roadmap aligned to the client's specific needs and challenges. The combination of a curated AI project portfolio, a structured change management process, and leadership that recognized the value of interdisciplinary thinking made this project a huge success.
How do you address the challenges of a male-dominated tech industry, and even a male-dominated AI industry?
It's about getting clear on the why. I'm active in the AI industry because it has a deeper purpose and problems to solve. Lumiera's mission is to provide comprehensive guidance to leaders, empowering them to make confident and responsible decisions in the age of technology. This sense of purpose remains the same no matter what field you go into. Male-dominated or not, the AI industry is huge and increasingly complex. No one can see the whole picture. We need more perspectives to learn from each other. The challenges that exist are big, and we need all of us to work together.
What advice do you have for women looking to enter the AI field?
Working with AI is like learning a new language or a new skill. AI has great potential to solve challenges across many fields. What problem do you want to solve? Find out how AI can be a solution and focus on solving that problem. Keep learning and interact with people who inspire you.
What are the most pressing issues facing AI as it evolves?
The rapid pace at which AI is evolving presents a challenge in itself, and I believe asking this question often and regularly is key to honestly navigating the world of AI. At Lumiera, we ask this question in our weekly newsletter.
Here are some of the most popular ones right now:
AI hardware and geopolitics: Public sector investment in AI hardware (GPUs) will likely increase as governments around the world become more AI-savvy and start making strategic and geopolitical moves. So far, we have seen movement in countries like the UK, Japan, the UAE, and Saudi Arabia. This is an area to watch.
AI benchmarking: As we rely more and more on AI, it is essential to understand how to measure and compare AI performance. Choosing the right model for a specific use case requires careful consideration; the model that best suits your needs is not necessarily the one at the top of the leaderboard. Because models are changing rapidly, benchmark accuracy will fluctuate as well.
Balancing automation and human oversight: Believe it or not, too much automation does exist. Decisions require human judgment, intuition, and an understanding of context, which cannot be replicated through automation.
Data quality and governance: Where is the good data? Data flows in and out of organizations every second. If that data is not properly managed, organizations will not be able to reap the benefits of AI, and in the long run this can be detrimental. A data strategy is an AI strategy. Data system architecture, management, and ownership must also be part of the discussion.
What issues should AI users be aware of?
Algorithms and data are not perfect: It is important that users stay critical and not blindly trust the output, especially when using off-the-shelf technologies. These technologies and tools are new and still evolving, so keep that in mind and apply common sense.
Energy consumption: The amount of computation required to train large AI models, combined with the energy needed to operate and cool the necessary hardware infrastructure, consumes a lot of electricity. Gartner predicts that AI could consume up to 3.5% of the world's electricity by 2030.
Educate yourself and use a variety of sources: AI literacy is key. To make good use of AI in your life and work, you need to make informed decisions about its use. AI is meant to help you make decisions, not make them for you.
Density of perspectives: To understand what kinds of solutions can be created with AI, you need to involve people who know the problem domain well, and you need to do this throughout the entire AI development lifecycle. The same goes for ethics. Ethics is not something that can be added "on top" of an AI product after it has already been built. Ethical considerations must begin at the research stage and be embedded early and throughout the development process, by conducting social and ethical impact assessments, mitigating bias, and promoting accountability and transparency.
When building AI, it's essential to recognize skill limitations within your organization. Gaps are opportunities for growth. They allow you to prioritize areas where you need to seek external expertise and develop robust accountability mechanisms. Factors such as current skill sets, team capabilities, and available funding all need to be evaluated. These factors will impact your AI roadmap, among other things.
How can investors promote responsible AI?
First and foremost, as an investor, you want to be sure that your investment is solid and will last for the long term. Investing in responsible AI protects your financial returns and mitigates risks related to trust, regulation, and privacy concerns.
Investors can promote responsible AI by looking for indicators of responsible AI leadership and use. A clear AI strategy, dedicated responsible AI resources, publicly disclosed responsible AI policies, strong governance practices, and the integration of human feedback are all factors to consider. These indicators should be part of a sound due diligence process: more science, less subjective decision-making. Divesting from unethical AI practices is another way to promote responsible AI solutions.