To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Heidy Khlaaf is the Director of Engineering at the cybersecurity company Trail of Bits. She specializes in evaluating software and AI implementations within “safety-critical” systems such as nuclear power plants and self-driving cars.
Khlaaf received her Ph.D. in computer science from University College London and a bachelor's degree in computer science and philosophy from Florida State University. She has led safety and security audits, provided consultations and reviews of assurance cases, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.
Q&A
In short, how did you get started in AI? What attracted you to the field?
I was drawn to robotics at a very young age and started programming at 15, fascinated by the prospect of using robotics and AI (as the two are inextricably linked) to automate workloads where they are most needed. Like in manufacturing, I saw robotics being used to assist the elderly and to automate dangerous manual labor in our society. I did, however, receive my Ph.D. in a different subfield of computer science, because I believe that a strong theoretical foundation in computer science lets you make educated, science-based decisions about where AI may or may not be suitable, and where its pitfalls lie.
What work (in the AI field) are you most proud of?
Leveraging my expertise and background in safety engineering and safety-critical systems, I provide context and critique on the emerging field of AI “safety.” Although the field has attempted to adapt and cite well-established safety and security techniques, various terms have been misconstrued in their use and meaning. This lack of consistent or intentional definitions compromises the integrity of the safety techniques the AI community currently uses. I'm particularly proud of my papers “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems” and “A Hazard Analysis Framework for Code Synthesis Large Language Models,” in which I deconstruct false narratives around safety and AI evaluations and provide concrete steps toward bridging the safety gap within AI.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Acknowledging how little the status quo has changed is not something we discuss often, but I believe it's actually important for myself and other women in tech to understand our position within the industry and to hold a realistic view of the changes required. Retention rates and the proportion of women in leadership positions have remained relatively unchanged since I joined the field more than a decade ago. And as TechCrunch has aptly pointed out, despite tremendous breakthroughs and contributions by women in AI, we remain sidelined from conversations we ourselves have defined. Recognizing this lack of progress helped me understand that building a strong personal community is a far more valuable source of support than relying on DEI initiatives, which unfortunately have not moved the needle, given that bias and skepticism toward technical women remain pervasive in tech.
What advice would you give to women looking to enter the AI field?
Don't appeal to authority; find a line of work you truly believe in, even if it contradicts popular narratives. Given the political and economic power AI labs currently hold, there is an instinct to take whatever AI “thought leaders” state as fact, when many AI claims are marketing speak that overstates AI's abilities to benefit a bottom line. Yet I'm seeing significant hesitancy, especially among junior women in the field, to express skepticism against unsubstantiated claims made by their male peers. Imposter syndrome has a strong hold on women in tech, leading many to doubt their own scientific integrity. But it's more important than ever to challenge claims that exaggerate AI's capabilities, especially those that cannot be falsified under the scientific method.
What are the most pressing issues facing AI as it evolves?
Regardless of its advancements, AI will never be the sole solution to our problems, technologically or socially. Currently, there is a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment human capabilities rather than replace them, yet we are witnessing a complete disregard of AI's pitfalls and failure modes, which are causing real, tangible harm. Just recently, the AI system ShotSpotter led to a police officer firing at a child.
What issues should AI users be aware of?
That AI is simply not reliable. AI algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy, and safety. The way AI systems are trained embeds human bias and discrimination within their outputs, which become “de facto” and automated. This is because the nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data, and not on any type of reasoning, factual evidence, or “causation.”
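To make that last point concrete, here is a minimal sketch in Python (with hypothetical hiring data, not drawn from the interview): a “model” that does nothing but tally outcome frequencies in historical records will faithfully reproduce whatever disparities those records contain.

```python
from collections import Counter, defaultdict

# Hypothetical "historical" hiring records: (group, hired).
# Group B was hired less often for reasons unrelated to merit.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 40 + [("B", 0)] * 60)

# "Training" here is just tallying outcome frequencies per group:
# pure correlation from historical data, no reasoning or causation.
counts = defaultdict(Counter)
for group, hired in history:
    counts[group][hired] += 1

def predict(group):
    # Return the historically most frequent outcome for this group.
    return counts[group].most_common(1)[0][0]

print(predict("A"))  # 1: hire
print(predict("B"))  # 0: reject -- the past disparity, now automated
```

Nothing in this sketch “decides” to discriminate; the disparity is simply the strongest correlation in the data, which is the dynamic Khlaaf describes.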
What is the best way to build AI responsibly?
Ensuring that AI is developed in a way that protects people's rights and safety through the construction of verifiable claims, and holding AI developers accountable to them. These claims should be scoped to a regulatory, safety, ethical, or technical application and must be falsifiable; otherwise, there is a significant lack of scientific integrity to appropriately evaluate these systems. Independent regulators should also assess AI systems against these claims, as is currently required of many products and systems in other industries (e.g., those evaluated by the FDA). AI systems should not be exempt from the standard auditing processes that are well established to ensure public and consumer protection.
How can investors more effectively promote responsible AI?
Investors should engage with and fund organizations that are establishing and advancing AI auditing practices. Currently, most funds are invested in AI labs themselves, with the belief that their safety teams alone are sufficient for the advancement of AI evaluations. However, independent auditors and regulators are key to public trust. Independence allows the public to trust in the accuracy and integrity of assessments, and in the integrity of regulatory outcomes.