Women in AI: Sarah Kreps, Professor of Government, Cornell University

By TechBrunch | March 8, 2024 | 7 min read

To give women academics and others focused on AI their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year spotlighting key work that often goes unrecognized.

Sarah Kreps is a political scientist, U.S. Air Force veteran, and analyst focused on U.S. foreign and defense policy. She is a professor of government at Cornell University, an adjunct professor at Cornell Law School, and an adjunct scholar at West Point's Modern War Institute.

Kreps' recent research explores both the opportunities and risks of AI technologies such as OpenAI's GPT-4, particularly in the political realm. In an opinion column for the Guardian last year, she argued that as more money pours into AI, the AI arms race will intensify not just between companies but between nations, even as the policy challenges become harder.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I got my start working on emerging technologies with national security implications. I was an Air Force officer at the time the Predator drone was being deployed, and I worked on advanced radar and satellite systems. After four years in that world, it was natural that, as a Ph.D. student, I would be interested in studying the national security implications of emerging technologies. I first wrote about drones, and as the drone debate moved toward questions of autonomy, it inevitably came to involve artificial intelligence.

In 2018, I attended an artificial intelligence workshop at a D.C. think tank and saw a presentation on the new capabilities of GPT-2, which OpenAI had developed. We had just been through the 2016 election and foreign election interference, which had been relatively easy to spot because of small tells like the grammatical errors typical of non-native English speakers. Those errors were not surprising given that the interference came from the Russia-backed Internet Research Agency. Watching that presentation, I immediately realized that these models could generate credible disinformation at scale and, through microtargeting, manipulate the psychology of American voters far more effectively than any individual trying to write such content by hand, for whom scale would always be a problem.

I reached out to OpenAI and became one of their early academic collaborators under their staged-release strategy. My research aimed specifically at investigating the potential for misuse: whether GPT-2, and later GPT-3, could be credible as a generator of political content. In a series of experiments, I assessed whether the public would find this content credible, and then I conducted a large field experiment in which legislators were sent machine-generated "constituency letters" randomized with letters from actual constituents, to see whether they would respond at the same rates. That told us whether legislators could be fooled, and whether malicious actors could shape the legislative agenda through mass letter-writing campaigns.

These questions go to the heart of what it means to be a sovereign democracy, and I concluded unequivocally that these new technologies do represent a new threat to our democracy.

What work (in the AI field) are you most proud of?

I'm very proud of that field experiment. No one had done anything similar, and we were the first to demonstrate the disruptive potential in the context of the legislative agenda.

But I'm also proud of a tool that unfortunately never made it to market. I worked with several computer science students at Cornell to develop an application that would process incoming constituent mail to Congress and help offices respond in a meaningful way. We were working on this before ChatGPT, using AI to digest the huge volume of email and help time-pressed officials communicate with people in their districts and states. We thought these tools mattered not only because constituents are frustrated with politics, but also because demands on legislators' time keep growing. Developing AI in that public-interest way seemed like a valuable contribution and interesting interdisciplinary research for political scientists and computer scientists. We ran a number of experiments on the behavioral question of how people would feel about AI-assisted responses, and we concluded that perhaps society wasn't ready for something like this. But a few months after we pulled the plug, ChatGPT came along, and AI became so pervasive that I now wonder why we were so concerned about whether it was ethically dubious or legitimate. I still think it was right to ask hard ethical questions about the legitimate use case.

How do we overcome the challenges of a male-dominated tech industry and, by extension, a male-dominated AI industry?

As a researcher, I haven't felt these challenges especially acutely. I was recently out in the Bay Area, and there were literally guys doing elevator pitches in hotel elevators, a routine I can see would be intimidating. My advice would be to find mentors (men and women), develop skills and let those skills speak for themselves, take on challenges, and stay resilient.

What advice would you give to women looking to enter the AI ​​field?

I think the opportunities for women are there. They need to develop skills and have the confidence to succeed.

What are the most pressing issues facing AI as it evolves?

I worry that the AI community has developed so many research initiatives focused on things like "superalignment" that they obscure the deeper, and really the true, question of whose values, or which values, we are trying to align AI with. Google Gemini's troubled rollout showed the caricature that can result from aligning with a developer's narrow set of values, producing (almost laughably) inaccurate historical depictions in its outputs. I believe those developers' values were sincere, but the episode revealed that these large language models are programmed with a particular set of values that shape how people think about politics, social relations, and all sorts of sensitive topics. These issues are not existential risks, but they shape the fabric of society and confer considerable power on the large firms (OpenAI, Google, Meta, and so on) in charge of the models.

What issues should AI users be aware of?

As AI becomes more pervasive, I believe we have entered a "trust but verify" world. It would be nihilistic to believe nothing, but there is a lot of AI-generated content out there, and users really need to be careful about what they instinctively trust. It's worth seeking out alternative sources to verify authenticity before assuming everything is accurate. But I think we've already learned that lesson from social media and misinformation.

What is the best way to build AI responsibly?

I recently wrote a piece for the Bulletin of the Atomic Scientists, which started out covering nuclear weapons but has moved on to address disruptive technologies like AI. I had been thinking about how scientists can be better public stewards, and I wanted to connect that to the historical cases I had been researching for a book project. In addition to outlining a set of steps I would recommend for responsible development, I discuss why some of the questions AI developers are asking are wrong, incomplete, or misguided.

