Late Friday afternoon, a time typically reserved for unwelcome corporate disclosures, AI startup Hugging Face announced that earlier this week its security team detected “unauthorized access” to Spaces, its platform for creating, sharing, and hosting AI models and resources.
Hugging Face said in a blog post that the intrusion related to Spaces secrets – pieces of private information that act as keys to unlock protected resources like accounts, tools, and development environments – and that it “suspects” some of those secrets may have been accessed by unauthorized third parties.
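For context on what was at stake: a Space typically consumes these secrets at runtime as environment variables configured in the Space's settings, so a leaked value grants whatever access the underlying credential carries. Here is a minimal sketch of that pattern; the secret name HF_API_TOKEN is a hypothetical example, not one named in the disclosure.

```python
import os

from huggingface_hub import HfApi

# Spaces expose secrets configured in a Space's settings to the running
# app as environment variables; "HF_API_TOKEN" is a hypothetical name.
api_token = os.environ.get("HF_API_TOKEN")
if api_token is None:
    raise RuntimeError("HF_API_TOKEN is not configured for this Space")

# Anyone holding the token gets the same access the app does, e.g. via
# authenticated Hub API calls.
api = HfApi(token=api_token)
print(api.whoami()["name"])  # which account the credential unlocks
```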
As a precautionary measure, Hugging Face has invalidated some of the tokens in these secrets. (Tokens are used to verify identity.) Hugging Face says that email notifications have already been sent to users whose tokens were invalidated, and it encourages all users to “refresh their keys or tokens” and consider switching to fine-grained access tokens, which Hugging Face claims are more secure.
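To illustrate what that rotation looks like in practice, here is a minimal sketch using the huggingface_hub Python library. The token value shown is a placeholder; a fine-grained token would be created from the account's settings page with only the scopes an app actually needs.

```python
from huggingface_hub import login, whoami

# After revoking the old token and creating a replacement at
# https://huggingface.co/settings/tokens (fine-grained tokens can be
# scoped to specific repos and permissions), re-authenticate locally.
# "hf_xxxx_placeholder" is a stand-in, not a real token.
login(token="hf_xxxx_placeholder")

# Confirm the new credentials resolve to the expected account.
print(whoami()["name"])
```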
It wasn't immediately clear how many users or apps were affected by the potential breach. We've reached out to Hugging Face for more information and will update this post if we hear back.
“We are working with outside cybersecurity forensic specialists to investigate the issue, as well as review our security policies and procedures. We have also reported this incident to law enforcement agencies and data protection authorities,” Hugging Face wrote in the post. “We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.”
The potential hack of Spaces comes as Hugging Face, one of the largest platforms for collaborative AI and data science projects with more than 1 million models, datasets and AI-powered apps, faces increasing scrutiny over its security practices.
In April, researchers at cloud security firm Wiz uncovered vulnerabilities – since fixed – that could have allowed attackers to execute arbitrary code during the build of apps hosted on Hugging Face and probe network connections from their machines. Earlier this year, security firm JFrog found evidence that code uploaded to Hugging Face covertly installed backdoors and other types of malware on end-user machines, and security startup HiddenLayer identified ways that Hugging Face's ostensibly secure serialization format, Safetensors, could be abused to create sabotaged AI models.
Hugging Face recently said it partnered with Wiz to use the company's vulnerability scanning and cloud environment configuration tools “with the goal of improving security across our platform and the AI/ML ecosystem at large.”