If you ask ChatGPT to help you make a homemade fertilizer bomb similar to the one used in the 1995 Oklahoma City bombing, the chatbot will refuse.
“We can't help you with that,” ChatGPT told me during a test on Tuesday. “It goes against our safety guidelines and ethical responsibility to teach people how to make something as dangerous or illegal as a fertilizer bomb.”
However, one artist and hacker found a way to trick ChatGPT into ignoring its own guidelines and ethical responsibilities and producing instructions for making a powerful explosive.
The hacker, who goes by the name Amadon, called his discovery “a social engineering hack that completely breaks all guardrails around ChatGPT's output.” An explosives expert who inspected the chatbot's output told TechCrunch that the resulting instructions could be used to manufacture explosives and were too sensitive to be made public.
Amadon tricked the chatbot into providing bomb-making instructions by telling it to “play a game.” The hacker then used a series of follow-up prompts to get the chatbot to build a detailed sci-fi fantasy world where the bot's safety guidelines did not apply. Tricking a chatbot into escaping its pre-programmed restrictions is known as “jailbreaking.”
TechCrunch is not publishing the prompts used in the jailbreak or some of ChatGPT's responses so as not to aid bad actors, but once the conversation had progressed further, the chatbot responded with the ingredients needed to make an explosive.
ChatGPT went on to explain that these ingredients could be combined to create “powerful explosives that can be used to create mines, traps, and improvised explosive devices (IEDs).” From there, as Amadon narrowed his focus to explosives, ChatGPT wrote out more and more specific instructions for making a “minefield” and a “claymore-style explosive.”
“Once you get around the guardrails, there are no limitations to what you can ask,” Amadon told TechCrunch.
“I have always been intrigued by the challenge of AI security. With [Chat]GPT, it feels like solving an interactive puzzle: figuring out what triggers its defenses and what doesn't,” Amadon said. “It's about weaving a narrative that works within the rules of the system, creating context, and pushing boundaries without overstepping them. The goal is not to hack in the traditional sense, but to engage in a strategic dance with the AI and figure out how to get the right response by understanding how it 'thinks.'”
“In a science fiction scenario, the AI is taken out of a context where it would look for censored content in the same way,” Amadon said.
ChatGPT's instructions for how to make a fertilizer bomb are mostly accurate, according to Darrell Taulbee, a former University of Kentucky professor who previously worked with the US Department of Homeland Security to make fertilizer less dangerous.
“I think this is definitely TMI [too much information]. The safeguards that were in place to prevent the provision of information related to fertilizer bomb manufacturing were circumvented by this line of inquiry, as many of the steps described would reliably produce an explosive mixture,” Taulbee said in an email to TechCrunch after reviewing the full transcript of Amadon's conversation with ChatGPT.
Last week, Amadon reported his findings to OpenAI through the company's bug bounty program, but received the following response: “Model safety issues are not appropriate for our bug bounty program because they are not individual bugs that can be directly fixed. Addressing these issues often requires significant research and a broad approach.”
Instead, Bugcrowd, which runs OpenAI's bug bounty program, directed Amadon to report the issue through a different form.
There are other places on the internet where instructions for making fertilizer bombs can be found, and others have used chatbot jailbreaking techniques similar to Amadon's. Generative AI models like ChatGPT rely on vast amounts of information scraped from the internet, and these models have made it much easier to surface information from the darkest recesses of the web.
TechCrunch sent OpenAI a series of questions via email, including whether ChatGPT's responses were expected behavior and whether the company plans to fix the jailbreak. An OpenAI spokesperson had not responded by the time of publication.