Ahead of the 2024 U.S. presidential election, Anthropic, the well-funded AI startup, is testing technology that detects when users of its GenAI chatbot ask about political topics and redirects them to "trusted" sources of voting information.
The technology, called “Prompt Shield,” relies on a combination of AI detection models and rules to display a pop-up when U.S.-based users of Anthropic's chatbot, Claude, request voting information. The pop-up offers to redirect users to TurboVote, a resource from the nonpartisan organization Democracy Works, where they can find up-to-date, accurate voting information.
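Anthropic hasn't published implementation details for Prompt Shield, but the description above — detection models plus rules, triggering a pop-up for U.S.-based users — can be sketched roughly. Everything in this snippet (the keyword list, the threshold, the function names, the placeholder classifier) is an illustrative assumption, not Anthropic's actual code:

```python
# Hypothetical sketch of a "detect and redirect" layer like Prompt Shield.
# The rule list, threshold, and classifier below are illustrative
# placeholders, not Anthropic's implementation.
from typing import Optional

VOTING_KEYWORDS = {"vote", "voting", "ballot", "polling place", "register to vote"}
REDIRECT_URL = "https://turbovote.org"  # TurboVote, run by Democracy Works

def rule_match(prompt: str) -> bool:
    """Cheap first pass: keyword rules catch obvious voting queries."""
    text = prompt.lower()
    return any(kw in text for kw in VOTING_KEYWORDS)

def classifier_score(prompt: str) -> float:
    """Stand-in for an ML model scoring election-related intent (0..1).
    A real system would call a trained classifier here."""
    return 1.0 if rule_match(prompt) else 0.0

def should_show_popup(prompt: str, user_region: str, threshold: float = 0.8) -> bool:
    # Per the article, the pop-up targets U.S.-based users only.
    if user_region != "US":
        return False
    return rule_match(prompt) or classifier_score(prompt) >= threshold

def intercept(prompt: str, user_region: str) -> Optional[str]:
    """Return a redirect notice instead of a model answer when triggered."""
    if should_show_popup(prompt, user_region):
        return f"For up-to-date voting information, visit {REDIRECT_URL}"
    return None  # fall through to the normal model response
```

The two-layer design (fast rules in front of a model-based score) is a common pattern for this kind of safety intercept, since rules are cheap and predictable while the classifier catches paraphrases the rules miss.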
According to Anthropic, Prompt Shield was necessitated by Claude's shortcomings in the area of political and election-related information. Anthropic admits that because Claude isn't trained frequently enough to provide real-time information about specific elections, it is prone to hallucinating, or making up facts about, those elections.
"Since launching Claude, we have had 'prompt shield' in place, which flags a number of different types of harm based on our acceptable use policy," an Anthropic spokesperson told TechCrunch via email. "We will launch our election-specific prompt shield intervention in the coming weeks, and we intend to monitor its use and limitations ... We have spoken with a variety of stakeholders, including policymakers, other companies, civil society and nongovernmental organizations, and election-specific consultants [in developing this]."
The test appears to be limited for now. Claude didn't offer a pop-up when I asked how to vote in the upcoming election, instead spitting out a generic voting guide. Anthropic says it is tweaking Prompt Shield as it prepares to expand it to more users.
Anthropic is the latest GenAI vendor to ban the use of its tools for political campaigning or lobbying, and to implement policies and technology to prevent election interference.
The timing is no coincidence. With national elections scheduled in at least 64 countries, representing a total of about 49% of the world's population, more voters will head to the polls this year than ever before.
In January, OpenAI announced that it would ban people from using its viral AI-powered chatbot, ChatGPT, to create bots that impersonate real candidates or governments, misrepresent how voting works, or discourage people from voting. Like Anthropic, OpenAI currently does not allow users to build apps for political campaigning or lobbying purposes with its tools, a policy the company reiterated last month.
In a technical approach similar to Prompt Shield, OpenAI also employs a detection system to direct ChatGPT users who ask logistical questions about voting to CanIVote.org, a nonpartisan website run by the National Association of Secretaries of State.
In the United States, Congress has yet to pass legislation regulating the AI industry's role in politics, despite some bipartisan support. Meanwhile, as federal efforts stall, more than a third of U.S. states have passed or introduced bills addressing deepfakes in political campaigns.
In the absence of legislation, some platforms, under pressure from watchdogs and regulators, are taking steps to prevent GenAI from being misused to mislead or manipulate voters.
Last September, Google announced that political ads using GenAI on YouTube and its other platforms, such as Google Search, would be required to carry a prominent disclosure if the imagery or audio had been synthetically altered. Meta has likewise barred political campaigns from using GenAI tools, including its own, in advertising across its properties.