According to a press release on Friday, Apple has signed on to a voluntary White House initiative to develop safe, secure, and trustworthy AI. The company will soon roll out its generative AI offering, Apple Intelligence, across its core products, putting generative AI in front of Apple's 2 billion users.
Apple joins 15 other technology companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, that committed to following the White House's ground rules for developing generative AI in July 2023. At the time, Apple had not revealed how deeply it planned to embed AI in iOS. But at WWDC in June, Apple made clear it was going all in on generative AI, starting with a partnership with OpenAI to bring ChatGPT to the iPhone. As a frequent target of federal regulators, Apple wants to signal early that it is willing to play by the White House's AI rules, perhaps in an attempt to curry favor before a regulatory battle over AI erupts.
But how much does Apple's voluntary commitment to the White House actually accomplish? Not much, but it's a start. The White House called it a "first step" toward ensuring that Apple and 15 other AI companies develop AI that is safe, secure, and trustworthy. The second step was President Biden's executive order on AI in October, and several bills are now pending in Congress and state legislatures to better regulate AI models.
The agreement commits AI companies to red-team AI models before public release (acting as adversarial hackers to stress-test a model's safeguards) and to share that information with the public. It also requires them to treat unreleased model weights as confidential: Apple and the other companies agreed to work on model weights only in secure environments and to limit access to as few employees as possible. Finally, the companies agreed to develop content labeling systems, such as watermarking, to help users distinguish AI-generated content from everything else.
Meanwhile, the Commerce Department said it will soon release a report on the potential benefits, risks, and implications of open-source foundation models. Open-source AI has become an increasingly contentious regulatory battleground: some camps want to restrict access to the weights of powerful AI models for safety reasons, but doing so could severely constrain the startup and research ecosystem. Whatever stance the White House takes could have major implications for the broader AI industry.
The White House also noted that federal agencies have made significant progress on the tasks laid out in the October executive order: they have hired more than 200 people with AI expertise, given more than 80 research teams access to computing resources, and released several frameworks for AI development (governments love frameworks).