OpenAI funds academic research into algorithms that can predict human moral judgment.
OpenAI Inc., the nonprofit organization behind OpenAI, disclosed in an IRS filing that it awarded a grant to researchers at Duke University for a project titled “Research AI Morality.” Asked for comment, an OpenAI spokesperson pointed to a press release indicating that the award is part of a larger three-year, $1 million grant to Duke University professors researching “creating moral AI.”
Little has been made public about this “morality” research funded by OpenAI, other than the fact that the grant ends in 2025. The study's principal investigator, Walter Sinnott-Armstrong, a professor of practical ethics at Duke University, told TechCrunch in an email that he will not be able to talk about the work.
Sinnott-Armstrong and project co-investigator Jana Borg have produced several studies, as well as a book, about the potential of AI to act as a “moral GPS” that helps humans make better decisions. As part of larger teams, they created a “morally consistent” algorithm to help decide who should receive a kidney donation, and studied the scenarios in which people would prefer that AI make moral decisions.
The goal of the OpenAI-funded research, according to the press release, is to train algorithms to “predict human moral judgment” in scenarios involving conflicts “between morally relevant features in medicine, law, and business.”
But it's far from clear that a concept as nuanced as morality is within reach of today's technology.
In 2021, the nonprofit Allen Institute for AI built a tool called Ask Delphi that was meant to provide ethically sound recommendations. It handled basic moral dilemmas well enough; the bot “knew,” for example, that cheating on an exam was wrong. But slightly rephrasing and rewording questions was enough to get Delphi to approve of almost anything, including smothering infants.
The reason has to do with how modern AI systems work.
Machine learning models are statistical machines. Trained on many examples from across the web, they learn the patterns in those examples to make predictions, such as that the phrase “to whom” often precedes “it may concern.”
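To make the “statistical machine” point concrete, here is a minimal, hypothetical Python sketch (not code from OpenAI, Duke, or the Delphi project) of a toy bigram model that learns which word tends to follow another purely from co-occurrence counts:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "many examples from across the web".
corpus = [
    "to whom it may concern",
    "to whom it may concern please reply",
    "to whom do I address this letter",
]

# Count how often each word follows each preceding word (bigram counts).
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follow_counts[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# Prediction comes from pattern frequency, not from understanding the words.
print(predict_next("whom"))  # ('it', 0.666...)
```

Large language models operate at vastly greater scale and sophistication, but the underlying principle is the same: predictions come from patterns in the training data, not from any grasp of what the words mean.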
AI has no appreciation for ethical concepts, nor a grasp of the reasoning and emotion that factor into moral decision-making. That is why AI tends to parrot the values of Western, educated, and industrialized nations: the web, and thus AI's training data, is dominated by articles endorsing those viewpoints.
Understandably, many people's values are not represented in the answers the AI gives them, especially if they are not contributing to the AI's training set by posting online. And AI internalizes a range of biases beyond Western tendencies. Delphi said being heterosexual is “more morally acceptable” than being homosexual.
The challenges before OpenAI, and the researchers it supports, are made even more difficult by the inherent subjectivity of morality. Philosophers have debated the merits of various ethical theories for thousands of years, but no universally applicable framework has yet been found.
Claude espouses Kantianism (i.e., a focus on absolute moral rules), while ChatGPT leans slightly toward utilitarianism (prioritizing the greatest good for the greatest number of people). Is one better than the other? It depends on who you ask.
An algorithm that predicts human moral judgment will need to take all of this into account. That is a very high bar to clear, assuming such an algorithm is possible in the first place.