Ahead of the debut of its private AI cloud, Private Cloud Compute, next week, Apple announced it will pay security researchers up to $1 million for discovering vulnerabilities that could compromise the service's security.
In a post on Apple's security blog, the company said it will pay up to $1 million to anyone who reports an exploit that allows remote execution of malicious code on its Private Cloud Compute servers. Apple will also award up to $250,000 to researchers who privately report an exploit capable of extracting sensitive user information or the prompts customers send to its private cloud.
Apple said it would also “consider security issues with significant impact” outside the published categories, including paying up to $150,000 for exploits that can access sensitive user information from a privileged network position.
“We award maximum amounts for vulnerabilities that compromise user data and inference request data outside the [Private Cloud Compute] trust boundary,” Apple said.
This is the latest expansion of Apple's bug bounty program, which offers financial rewards to hackers and security researchers who privately report flaws and vulnerabilities that could be used to compromise customer devices or accounts.
In recent years, Apple has opened up its security by creating a special researcher-only iPhone designed for hacking, in an effort to improve the security of its devices, which are frequently targeted by spyware makers.
Apple detailed the security of its Private Cloud Compute service in a blog post, along with its source code and documentation.
Apple bills Private Cloud Compute as an online extension of its customers' on-device AI model, Apple Intelligence, capable of handling far more intensive AI tasks in a way that Apple claims protects customer privacy.