To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.
As an AI expert with the Organization for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI), an international initiative promoting the responsible use of AI, Lee Tiedrich helps develop approaches for assessing and managing AI risks through law, policy, and practice informed by science. She serves on the faculty of Duke University, has advised numerous companies, and was a longtime partner at the law firm Covington & Burling LLP.
Tiedrich, a tech transactions and intellectual property lawyer, also serves on a Biden campaign policy committee and is registered to practice before the United States Patent and Trademark Office (USPTO).
Lee Tiedrich, Global Partnership on AI
In short, how did you get started in AI? What attracted you to this field?
I've worked at the intersection of technology, law, and policy for decades, beginning with mobile phones and continuing through the internet and e-commerce. I'm drawn to helping organizations optimize the benefits of new technologies while reducing risks in a rapidly changing and complex legal environment. I've worked on AI issues for years, starting when I was a partner at Covington & Burling LLP, long before AI occupied the headlines. In 2018, amid the growing commercial use of AI and the legal challenges it raises, I became co-chair of Covington's global, interdisciplinary Artificial Intelligence Initiative, which further focused my work on AI, including AI governance, compliance, transactions, and government affairs.
What work (in the AI field) are you most proud of?
Unlocking the benefits and mitigating the risks of AI requires global and multidisciplinary solutions. I'm proud of the breadth of my work bringing together different disciplines, geographies, and cultures to help solve these pressing challenges. This work began at Covington, where I collaborated with clients' lawyers, engineers, and business teams on AI governance and other issues. More recently, I've served on global expert groups for both the Organization for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI), working on high-stakes, interdisciplinary AI problems such as AI governance, responsible AI data and model sharing, and how to address climate, intellectual property, and privacy issues in an AI-driven world. I co-lead both GPAI's Intellectual Property Committee and its Responsible AI Strategy for the Environment (RAISE) Committee. My interdisciplinary work extends to Duke University, where I design and teach courses that bring together graduate students from various programs to work with the OECD, companies, and others on real-world responsible technology problems. I'm excited to help the next generation of AI leaders address interdisciplinary AI challenges.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I've spent much of my life in male-dominated fields, starting as an undergraduate at Duke University, where I was one of the few women studying electrical engineering. I was also the 22nd woman elected to the Covington partnership, and my practice focused on technology.
Navigating a male-dominated industry starts with doing great, innovative work and communicating it with confidence. This increases demand for your work and usually creates more opportunities. Women should also focus on building good relationships across the AI ecosystem. These relationships help develop not only clients and customers but also important mentors and sponsors. I also encourage women to leverage their networks and actively pursue opportunities to expand their knowledge, profile, and experience, such as participating in industry associations and other activities.
Lastly, I encourage women to invest in themselves. There are many resources and networks that can help women thrive and advance in AI and other industries. Women should set goals, then identify and use the resources that will help them achieve those goals.
What advice would you give to women looking to enter the AI field?
There are so many opportunities in the AI field, for engineers, data scientists, lawyers, economists, business and government affairs experts, and others. I encourage women to find the aspect of AI they're passionate about and pursue it. People usually perform better when they work on things they care about.
Women also need to invest in developing and promoting their expertise. That includes joining professional associations, attending networking events, writing articles, speaking publicly, and pursuing continuing education. Because AI presents a wide range of new and difficult problems, there are now many opportunities for young professionals to become experts quickly. Women should actively pursue these opportunities; building expertise and a strong professional network will help.
What are the most pressing issues facing AI as it evolves?
AI has great potential to advance global prosperity, security, and social well-being, including addressing climate change and achieving the United Nations Sustainable Development Goals. However, if not developed or used properly, AI can pose safety and other risks, including to individuals and the environment. Society faces major challenges in developing frameworks that unlock the benefits and reduce the risks of AI. This requires multidisciplinary cooperation, as laws and policies need to take into account not only market and social realities, but also relevant technologies. International harmonization is also important because technology crosses borders. Standards and other tools can help advance international harmonization, especially when legal frameworks vary from jurisdiction to jurisdiction.
What issues should AI users be aware of?
In an article I recently co-published with the OECD, I called for a global AI learning campaign. It describes the urgent need for users to understand the benefits and risks of the AI applications they intend to use. Armed with that knowledge, users can make better decisions about whether and how to use AI applications, including how to mitigate risks.
AI users should also be aware that AI is increasingly subject to regulation and litigation. Government enforcement around AI is growing as well, and AI users can be held liable for harms caused by AI systems supplied by third-party vendors. To mitigate potential liability and other risks, AI users should establish proactive AI governance and compliance programs to manage their AI deployments, and they should carefully vet third-party AI systems before agreeing to use them.
What is the best way to build AI responsibly?
Building and deploying AI responsibly requires many important steps. It starts with publicly embracing and upholding sound responsible AI values, such as those embodied in the OECD AI Principles, which serve as a north star. Given the complexity of AI, it's also essential to develop and implement an AI governance framework that applies across the lifecycle of AI systems and fosters interdisciplinary collaboration among technical, legal, business, sustainability, and other experts. In addition to ensuring compliance with applicable laws, a governance framework should consider the NIST AI Risk Management Framework and other important guidance. Because the legal and technology landscape for AI is changing rapidly, the governance framework must enable the organization to respond to new developments with agility.
How can investors more effectively promote responsible AI?
Investors typically have a variety of ways to promote responsible AI within their portfolio companies. First, they should embrace responsible AI as an investment priority. Not only is it the right thing to do, it's also good for business. Market demand for responsible AI is increasing, which should improve profitability for portfolio companies. Additionally, in an increasingly regulated and litigious AI landscape, responsible AI practices should reduce the risk of litigation and the potential reputational damage caused by poorly designed AI.
Investors can also promote responsible AI by providing oversight through board appointments, as corporate boards are increasing their scrutiny of technology issues. They should also consider structuring their investments to incorporate other oversight mechanisms.
Additionally, even when it isn't required by the investment agreement, investors can introduce portfolio companies to responsible AI advisers or consultants, encouraging and supporting their engagement with the ever-expanding responsible AI ecosystem.