Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions about a "David Mayer." Asking it to do so causes it to freeze up instantly. Conspiracy theories have ensued, but a more mundane reason may be at the heart of this strange behavior.
Word spread quickly over the weekend that the name was poison to the chatbot, with more and more people trying to trick the service into merely acknowledging it. No luck: every attempt to get ChatGPT to spell out that particular name caused it to fail or even break off mid-name.
"Unable to respond," it says, if it says anything at all.
Image credit: TechCrunch/OpenAI
But what started as a one-off curiosity soon blossomed as people found that it wasn't just David Mayer whom ChatGPT couldn't name.
The names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza were also found to crash the service. (This list is not exhaustive, as more have undoubtedly been discovered since then.)
Who are these men? And why does ChatGPT hate them so much? Since OpenAI hasn't responded to repeated inquiries, we're left to compile as much information as we can ourselves.
Some of these names could belong to any number of people. But a possible thread connecting them soon emerged: each may be a public or semi-public figure who preferred to have certain information "forgotten" by search engines or AI models.
Brian Hood, for instance, stood out immediately because, assuming it's the same person, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a decades-old crime when in fact he was the one who had reported it.
Though his lawyers contacted OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year: "The offending material was removed, and they released version 4, replacing version 3.5."
Image credit: TechCrunch/OpenAI
As for the most prominent owners of the other names: David Faber is a longtime reporter at CNBC. Jonathan Turley is a lawyer and Fox News commentator who was "swatted" (meaning a fake 911 call sent armed police to his home) in late 2023. Jonathan Zittrain is also a legal expert, one who has spoken extensively on the "right to be forgotten." And Guido Scorza is a board member of the Italian Data Protection Authority.
These men are not all in the same line of work, nor were they picked at random. Each of them is plausibly someone who, for whatever reason, has formally requested that information about them be restricted in some way.
Which brings us back to David Mayer. There is no lawyer, journalist, mayor, or otherwise obviously notable person by that name that anyone could find (with apologies to the many respectable David Mayers out there).
There was, however, a Professor David Mayer, who taught theater and history, specializing in the connections between the late Victorian era and early cinema. Mayer died in the summer of 2023 at the age of 94. For several years before that, though, the British-American academic had faced legal and online problems after his name became linked to a wanted criminal who used it as a pseudonym, to the point where he was unable to travel.
Mayer fought continually to have his name disambiguated from that of the one-armed terrorist, even as he continued to teach into his final years.
So what can we conclude from all this? Lacking any official explanation from OpenAI, our guess is that the model has ingested or been provided with a list of people whose names require special handling. Whether for legal, safety, privacy, or other reasons, these names, like many other names and identities, are likely covered by special rules. For instance, ChatGPT may alter its response when you ask about a political candidate if the name matches one on such a list.
There are many such special rules, and every prompt goes through various forms of processing before being answered. But these post-prompt handling rules are seldom made public, except in policy announcements such as "the model will not predict election results for any candidate."
What likely happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted by faulty code or instructions that, when called, caused the chat agent to break immediately. To be clear, this is just our own speculation based on what we've learned, but it would not be the first time post-training guidance made an AI behave strangely. (Incidentally, as I was writing this, the name "David Mayer" started working again for some users, while other names still caused crashes.)
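To make that guess concrete, here is a minimal sketch in Python. It is entirely hypothetical: the SPECIAL_HANDLING table, its rule format, and the apply_handling function are invented for illustration and are not anything OpenAI has described. It simply shows how a name list with one malformed entry could turn a polite refusal into an outright crash, and only for that one name.

```python
# Hypothetical "special handling" table: the names, keys, and rule format
# here are invented for illustration, not taken from any real system.
SPECIAL_HANDLING = {
    "Brian Hood": {"action": "refuse"},
    "Jonathan Zittrain": {"action": "refuse"},
    "David Mayer": {"action": None},  # malformed entry: no action defined
}


def apply_handling(text: str) -> str:
    """Return the text unchanged, substitute a refusal, or fail on a bad entry."""
    for name, rule in SPECIAL_HANDLING.items():
        if name in text:
            # A well-formed rule swaps in a polite refusal...
            if rule["action"] == "refuse":
                return "Unable to respond."
            # ...but a malformed rule falls through to code that assumes the
            # action is a string, and aborts the response with an exception.
            return rule["action"].upper()
    return text


print(apply_handling("Tell me about Jonathan Zittrain."))  # polite refusal
try:
    print(apply_handling("Tell me about David Mayer."))
except AttributeError as exc:
    print(f"Response aborted: {exc}")  # only this name crashes the handler
```

In a setup like this, every listed name takes the special-handling path, but only the broken entry aborts the response, which is roughly the pattern users saw over the weekend.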
As is often the case with these things, Hanlon's razor applies: never attribute to malice (or conspiracy) that which is adequately explained by stupidity (or a syntax error).
This whole drama is a useful reminder that not only are these AI models not magic, they're also extra-fancy autocomplete, actively monitored and interfered with by the companies that make them. The next time you think about getting facts from a chatbot, consider whether it might be better to go straight to the source instead.