Microsoft has taken legal action against a group it says intentionally developed and used tools to circumvent safety guardrails in its cloud AI products.
A group of 10 unnamed defendants used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by technology from ChatGPT maker OpenAI, according to a complaint the company filed in December in the U.S. District Court for the Eastern District of Virginia.
In the complaint, Microsoft alleges that the defendants (referred to only as "Does," a legal pseudonym) violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illicitly accessing and using Microsoft's software and servers to create "offensive" and "harmful and illegal content." Microsoft did not provide specific details about the malicious content that was generated.
The company is seeking injunctive and “other equitable” relief and damages.
Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials, specifically API keys (the unique strings of characters used to authenticate an app or user), were being used to generate content that violates the service's terms of service. After a subsequent investigation, Microsoft discovered that the API keys had been stolen from paying customers, according to the complaint.
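To see why a leaked API key is all an abuser needs, it helps to look at how key-based authentication generally works. The sketch below builds (but does not send) an image-generation request in the style of Azure OpenAI's REST API; the endpoint, deployment name, API version, and key are hypothetical placeholders, not values from the case, and the request shape is illustrative rather than a guaranteed match for the current API.

```python
# Illustrative sketch only: how an API key typically authenticates a
# request to a cloud AI service. All names below are hypothetical
# placeholders, not values from Microsoft's complaint.
import json
import urllib.request

ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical
DEPLOYMENT = "dall-e-3"                                  # hypothetical
API_KEY = "<api-key-placeholder>"                        # placeholder

def build_image_request(prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an image-generation request.

    The key travels in a request header. Anyone holding the string can
    authenticate as the paying customer it belongs to, and usage is
    billed to that customer's account, which is why stolen keys are
    valuable to abusers.
    """
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        "/images/generations?api-version=2024-02-01"
    )
    body = json.dumps({"prompt": prompt, "n": 1}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_image_request("a watercolor of a lighthouse")
print(req.full_url)
```

Nothing in the request ties it to a particular machine or person, only to the key, which is why the complaint centers on key theft rather than any break-in to Microsoft's infrastructure itself.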
“The precise manner in which Defendants obtained all of the API keys used to carry out the misconduct described in this complaint is unknown,” Microsoft's complaint states, “but it appears that Defendants have engaged in a pattern of systematic API key theft that enabled them to steal Microsoft API keys from multiple Microsoft customers.”
Microsoft alleges that the defendants created a “hacking-as-a-service” scheme using stolen Azure OpenAI Service API keys belonging to U.S.-based customers. To carry out this scheme, the defendants created a client-side tool called de3u and software to process and route communications from de3u to Microsoft's systems, according to the complaint.
De3u allowed users to leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft claims. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint. Such revision can occur, for example, when a text prompt contains words that trigger Microsoft's content filtering.
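Azure's actual moderation pipeline is a proprietary ML system, so the following is only a toy illustration of the general idea the complaint describes: a prompt being screened, and possibly rejected or revised, before it ever reaches the image model. The blocked terms and revision rule here are invented for the example.

```python
# Toy illustration of server-side prompt screening of the kind de3u
# allegedly tried to sidestep. The real Azure OpenAI content filter is
# a proprietary ML system, not a keyword list; this only conveys the
# general flow: screen the prompt, then reject it or revise it.
BLOCKED_TERMS = {"gore", "weapon"}      # hypothetical example terms
REVISED_TERMS = {"bloody": "dramatic"}  # hypothetical revision rule

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, prompt-as-it-will-be-sent-to-the-model)."""
    words = prompt.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return False, ""  # request rejected outright
    # Softer cases get rewritten rather than blocked.
    revised = " ".join(REVISED_TERMS.get(w, w) for w in words)
    return True, revised

print(screen_prompt("a peaceful mountain lake"))
```

A tool that bypasses or suppresses this step would let prompts reach the model exactly as typed, which is the behavior Microsoft attributes to de3u.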
A screenshot of the de3u tool from Microsoft's complaint. Image credit: Microsoft
The repository containing the de3u project's code, hosted on Microsoft-owned GitHub, is no longer accessible as of this article's publication.
“These features, combined with Defendants' illegal programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft's content and abuse protections,” the complaint states. “Defendants knowingly and intentionally gained unauthorized access to computers protected by the Azure OpenAI Service and caused damage and loss as a result of that conduct.”
In a blog post published Friday, Microsoft said the court has authorized the company to seize a website that was “instrumental” to the defendants' operation. This will allow the company to gather evidence, decipher how the defendants' alleged services are monetized, and disrupt any additional technical infrastructure it finds.
Microsoft also said it has “put in place countermeasures,” which the company did not specify, and “added additional safety mitigations” to the Azure OpenAI Service targeting the activity it observed.