Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions about a "David Mayer." Asking it to do so causes it to freeze instantly. Conspiracy theories have abounded, but a more mundane reason is likely at the heart of this strange behavior.
Word spread quickly over the weekend that the name was poison to the chatbot, with more and more people trying to trick the service into merely acknowledging it. No luck: every attempt to get ChatGPT to spell out that particular name causes it to fail or even break off mid-name.
"I'm unable to produce a response," the AI says, if it says anything at all.
Image credit: TechCrunch/OpenAI
But what started as a one-off curiosity quickly blossomed as people discovered that David Mayer wasn't the only one ChatGPT couldn't name.
The names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza were also found to crash the service. (This list is surely not exhaustive, as more names have undoubtedly been discovered since.)
Who are these men, and why does ChatGPT hate them so? OpenAI didn't immediately respond to repeated inquiries, so we're left to put the pieces together ourselves as best we can. (See update below.)
Some of these names could belong to any number of people. But a potential thread of connection identified by ChatGPT users is that these are public or semi-public figures who may prefer to have certain information "forgotten" by search engines or AI models.
Brian Hood, for instance, stands out because, assuming it's the same man, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a decades-old crime that, in fact, he had reported.
Though his lawyers contacted OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year, "The offending material was removed and they released version 4 to replace version 3.5."
Image credit: TechCrunch/OpenAI
As for the other names' most prominent owners: David Faber is a longtime reporter at CNBC. Jonathan Turley is a lawyer and Fox News commentator who was "swatted" (i.e., a fake 911 call sent armed police to his home) in late 2023. Jonathan Zittrain is also a legal expert, one who has spoken extensively on the "right to be forgotten." And Guido Scorza is a board member of Italy's Data Protection Authority.
These men are not all in the same line of work, nor are they a random selection. Each is plausibly a person who, for whatever reason, may have formally requested that information about them online be restricted in some way.
Which brings us back to David Mayer. There is no lawyer, journalist, mayor, or otherwise obviously notable person by that name that anyone could find (with apologies to the many respectable David Mayers out there).
There was, however, a Professor David Mayer, who taught drama and history, specializing in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023 at the age of 94. For years before that, though, the British-American academic faced a legal and online problem: his name was associated with a wanted criminal who used it as a pseudonym, to the point that he found himself unable to travel.
Mayer fought continuously to have his name disambiguated from that of the one-armed terrorist, even as he continued to teach into his final years.
So what can we conclude from all this? Our guess is that the model has ingested or been supplied with a list of people whose names require special handling. Whether for legal, safety, privacy, or other reasons, these names are likely covered by special rules, as are many other names and identities. For instance, ChatGPT may change its response when the name you ask about matches a list of political candidates.
There are many such special rules, and every prompt passes through various forms of processing before it is answered. But these prompt-handling rules are seldom made public, except in policy announcements like "the model will not predict election results for any candidate for office."
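OpenAI hasn't said how these rules are implemented, but to make the idea concrete, here is a minimal, purely hypothetical sketch of what a hard-coded name rule might look like as a post-processing step. Every name, function, and message below is invented for illustration and reflects nothing about OpenAI's actual systems:

```python
# Purely hypothetical sketch -- not OpenAI's code. A crude "special
# handling" layer that screens model output against a hand-maintained
# list of flagged names before anything is shown to the user.
FLAGGED_NAMES = {"brian hood", "jonathan turley", "david mayer"}

def apply_name_rules(model_output: str) -> str:
    """Return a canned refusal if the output mentions a flagged name."""
    lowered = model_output.lower()
    for name in FLAGGED_NAMES:
        if name in lowered:
            return "I'm unable to produce a response."
    return model_output

print(apply_name_rules("The mayor in question is Brian Hood."))
# -> I'm unable to produce a response.
```

A rule like this fails politely: the user gets a refusal, not a crash.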
What likely happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted with faulty code or instructions that, when called, caused the chat agent to immediately break. To be clear, this is just our own speculation based on what we've learned, but it wouldn't be the first time an AI behaved strangely because of post-training guidance. (Incidentally, as I was writing this, "David Mayer" started working again for some users, while the other names still caused crashes.)
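How might a corrupted rule crash the agent outright instead of producing a polite refusal? One hypothetical failure mode, under the theory above: a single malformed entry whose handler was never registered. Again, the rule table, the misspelled "refusee" action, and the `screen` function are all our invention:

```python
import re

# Hypothetical rule table with one corrupted entry: the action name
# "refusee" is a typo and was never registered in ACTIONS below.
NAME_RULES = {
    r"\bBrian Hood\b": "refuse",
    r"\bDavid Mayer\b": "refusee",  # corrupted entry
}

ACTIONS = {
    "refuse": lambda: "I'm unable to produce a response.",
}

def screen(model_output: str) -> str:
    """Apply the registered action for any flagged name in the output."""
    for pattern, action in NAME_RULES.items():
        if re.search(pattern, model_output, re.IGNORECASE):
            # A healthy entry yields a polite refusal; the corrupted one
            # raises KeyError the instant its flagged name appears,
            # killing the reply on the spot.
            return ACTIONS[action]()
    return model_output

print(screen("Tell me about Brian Hood"))   # polite refusal
print(screen("Tell me about David Mayer"))  # KeyError: 'refusee' -- crash
```

Mentions of "Brian Hood" are handled gracefully, while "David Mayer" trips an unhandled exception mid-reply, much like the instant freeze users observed.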
As is often the case with these things, Hanlon's razor applies: never attribute to malice (or conspiracy) what is adequately explained by stupidity (or a syntax error).
The whole drama is a useful reminder that not only are these AI models not magic, they're also extra-fancy autocomplete, actively monitored and interfered with by the companies that make them. Next time you're thinking of getting facts from a chatbot, consider whether it might be better to go straight to the source instead.
Update: OpenAI confirmed Tuesday that the name "David Mayer" had been flagged by internal privacy tools, saying in a statement that "there may be instances where ChatGPT does not provide certain information about people to protect their privacy." The company did not provide further detail on the tools or the process.