Apple has published a technical paper detailing the models it developed to power Apple Intelligence, the suite of generative AI features headed to iOS, macOS, and iPadOS over the next few months.
In the paper, Apple pushes back against accusations that it took an ethically questionable approach to training some of its models, reiterating that it does not train on private user data and instead relies on a combination of public and licensed data for Apple Intelligence.
“[The] pre-training datasets consist of data licensed from publishers, curated public or open-source datasets, and public information crawled by our web crawler, Applebot,” Apple wrote in the paper. “Because we are committed to protecting user privacy, please note that the data mix does not contain any private data of Apple users.”
In July, Proof News reported that Apple had used a dataset called The Pile, which contains subtitles from hundreds of thousands of YouTube videos, to train a family of models designed for on-device processing. Many YouTube creators whose subtitles ended up in The Pile were unaware of this and had not consented to it. Apple later issued a statement saying it had no intention of using those models to power any AI features in its products.
The technical paper details the models, called Apple Foundation Models (AFM), that Apple first unveiled at WWDC 2024 in June, and emphasizes that the AFM models' training data was obtained in a “responsible” manner, at least by Apple's definition of responsible.
The AFM models' training data includes publicly available web data as well as data licensed from undisclosed publishers. According to The New York Times, Apple approached several publishers, including NBC, Condé Nast, and IAC, in late 2023 about multi-year deals worth at least $50 million to train its models on the publishers' news archives. The AFM models were also trained on open source code hosted on GitHub, specifically Swift, Python, C, Objective-C, C++, JavaScript, Java, and Go code.
Training models on code, even open code, without permission has been a contentious issue among developers. Some developers point out that certain open source codebases are unlicensed, or carry terms of use that don't permit AI training. But Apple says it “filtered by license” to include code only from repositories with minimal usage restrictions, such as those under MIT, ISC, or Apache licenses.
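The paper doesn't publish Apple's pipeline, but license filtering of this kind is a common preprocessing step. Here is a minimal sketch, assuming repositories arrive as records carrying an SPDX-style license identifier (the record format and helper below are hypothetical, not Apple's code):

```python
# Hypothetical sketch of "filter by license": keep only repositories whose
# detected license is on a permissive allowlist. Illustrative only; the
# paper does not describe Apple's actual implementation.

PERMISSIVE_LICENSES = {"MIT", "ISC", "Apache-2.0"}  # licenses named in the paper

def filter_by_license(repos):
    """Yield only repositories with a permissive license.

    `repos` is assumed to be an iterable of dicts with a "license" key
    holding an SPDX identifier (e.g. from a license-detection tool).
    """
    for repo in repos:
        license_id = (repo.get("license") or "").strip()
        if license_id in PERMISSIVE_LICENSES:
            yield repo

# Example: only the MIT-licensed repository survives the filter.
repos = [
    {"name": "example/tool", "license": "MIT"},
    {"name": "example/app", "license": "GPL-3.0"},  # copyleft: excluded
    {"name": "example/lib", "license": None},       # unlicensed: excluded
]
print([r["name"] for r in filter_by_license(repos)])  # ['example/tool']
```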
To improve the AFM models' mathematical skills, Apple specifically included in its training set math problems and answers from web pages, math forums, blogs, tutorials, and seminars, according to the paper. The company also leveraged “high-quality, publicly available” datasets (not named in the paper) with “licenses that allow them to be used to train the model,” filtered to remove sensitive information.
Overall, the AFM models' training set comes to about 6.3 trillion tokens. (Tokens are bite-sized pieces of data that are typically easy for generative AI models to ingest.) For comparison, that's less than half the number of tokens (15 trillion) Meta used to train its flagship text generation model, Llama 3.1 405B.
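For a concrete sense of what's being counted, here is a minimal sketch using OpenAI's open source tiktoken tokenizer; Apple hasn't published AFM's tokenizer, so this stands in purely to illustrate the unit:

```python
# Rough illustration of what a "token" is, using OpenAI's open source
# tiktoken library (pip install tiktoken). Not Apple's tokenizer; token
# counts vary by tokenizer, but the idea is the same.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Apple trained its foundation models on about 6.3 trillion tokens."
tokens = enc.encode(text)

print(tokens)              # a list of integer token IDs
print(len(tokens))         # a short sentence is only a dozen or so tokens
print(enc.decode(tokens))  # decoding round-trips to the original text
```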
With additional data, including human feedback and synthetic data, Apple fine-tuned the AFM models in an attempt to mitigate undesirable behaviors, such as toxic outputs.
“Our models are designed to help users carry out their everyday activities with Apple products. This is a core value at Apple, and we are rooted in responsible AI principles every step of the way,” the company said.
There's no smoking gun in the paper, and no stunning insights either, but that's by careful design: such papers rarely go into much detail, not just because of competitive pressures but because disclosing too much could land companies in legal trouble.
Some companies that scrape public web data to train their models argue that the practice is protected by the fair use doctrine, but that's a hotly contested question, and the number of lawsuits over it keeps growing.
In its paper, Apple says it lets webmasters block its crawlers from harvesting their data, but that puts individual creators in a tricky position: What can an artist do, for example, if their portfolio is hosted on a site that doesn't block Apple's scraping?
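The mechanism Apple documents for this is a robots.txt rule aimed at its Applebot-Extended user agent, which opts a site's content out of model training without removing it from Applebot-powered search features. A minimal example follows; the exact paths and policies will vary by site:

```
# robots.txt — opt content out of Apple's AI training
User-agent: Applebot-Extended
Disallow: /

# Applebot itself may still crawl for search features
User-agent: Applebot
Allow: /
```

Note that robots.txt is honored voluntarily by crawlers and does nothing for data collected before the rule was added, which is part of why creators publishing on third-party platforms remain exposed.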
Legal battles will likely decide the fate of generative AI models and how they're trained. For now, though, Apple is trying to position itself as an ethical player while avoiding unwanted legal scrutiny.