OpenAI quietly removed language supporting “politically unbiased” AI from one of its recently published policy documents.
In its draft “economic blueprint” for the U.S. AI industry, OpenAI said AI models “should aim to be politically unbiased by default.” A new draft released Monday removes that language.
Asked for comment, an OpenAI spokesperson said the edits are part of an effort to “streamline” the document and that other OpenAI documents, including OpenAI's Model Spec, “make[s] the point of objectivity.” The Model Spec, which OpenAI released in May, is intended to shed light on the behavior of the company's various AI systems.
But the edits also point to the political minefield that the discourse around “biased AI” has become.
Many of President-elect Donald Trump's allies, including Elon Musk and crypto and AI “czar” David Sacks, have accused AI chatbots of censoring conservative views. Sacks has specifically called out OpenAI's ChatGPT as “programmed to be woke” and untruthful about politically sensitive subjects.
Musk has blamed both the data used to train AI models and the “wokeness” of companies in the San Francisco Bay Area.
“A lot of the AI that's being trained in the San Francisco Bay Area is taking on the philosophy of the people around them,” Musk said at a Saudi government-backed event last October. “So you have a woke nihilistic philosophy, and I think that's built into these AIs.”
In reality, bias in AI is an intractable technical problem. Musk's own AI company, xAI, has struggled to develop chatbots that don't endorse certain political views over others.
A paper published in August by UK-based researchers suggested that ChatGPT has a liberal bias on topics such as immigration, climate change, and same-sex marriage. For its part, OpenAI has said that any bias that shows up in ChatGPT is “a bug, not a feature.”