Microsoft has changed its policy to prohibit U.S. law enforcement from using generative AI through Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI's technology.
Language added Wednesday to the Azure OpenAI Service terms of service prohibits integrations with the service from being used “by or for” police departments in the U.S., including integrations with OpenAI's text- and speech-analyzing models.
Another new bullet point covers “any law enforcement agency worldwide,” and explicitly prohibits the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dash cams, to “attempt to identify individuals in uncontrolled, ‘in-the-wild’” environments.
The changes come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that leverages OpenAI's GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, such as hallucinations (even today's best generative AI models fabricate facts) and racial bias introduced from training data, which is especially worrying given that people of color are far more likely than their white peers to be stopped by police.
It is unclear whether Axon was using GPT-4 via Azure OpenAI Service and, if so, whether the updated policy was a response to Axon's product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We reached out to Axon, Microsoft, and OpenAI, and will update this post if we hear back.
The new terms leave Microsoft with some flexibility.
The complete ban on the use of Azure OpenAI Service applies only to U.S. police, not to international police forces. It also excludes facial recognition performed with stationary cameras in controlled environments, such as a back office (although the terms prohibit any use of facial recognition by U.S. police).
This is consistent with the recent approach of Microsoft and its close partner OpenAI to AI-related law enforcement and defense contracts.
In January, a Bloomberg report revealed that OpenAI was working with the Department of Defense on a number of projects, including cybersecurity capabilities, a departure from the company's earlier prohibition on providing its AI to militaries. Elsewhere, Microsoft has pitched using OpenAI's image generation tool, DALL-E, to help the Department of Defense (DoD) build software to conduct military operations, according to The Intercept.
Azure OpenAI Service became available in Microsoft's Azure Government product in February, adding compliance and management capabilities geared toward government agencies, including law enforcement. In a blog post, Candice Ling, senior vice president of Microsoft Federal, Microsoft's government-focused division, pledged that Azure OpenAI Service would be submitted for “additional authorization” to the DoD for workloads supporting the Pentagon's missions.
Microsoft and OpenAI did not respond to requests for comment.