After months of speculation, Apple Intelligence made headlines at WWDC 2024 in June. The platform was announced following a torrent of generative AI news from companies such as Google and OpenAI, amid concerns that the notoriously tight-lipped tech giant had missed out on the latest tech boom.
Contrary to such speculation, however, Apple had a team working on what turned out to be a very Apple-like approach to artificial intelligence. There was still plenty of flash in the demos (Apple always loves to put on a show), but Apple Intelligence is ultimately a very pragmatic take on the category.
Apple Intelligence (yes, AI for short) is not a standalone feature. Rather, it is woven into existing offerings. While the name is very much a branding exercise, large language model (LLM)-driven technology does the work behind the scenes. As far as consumers are concerned, the technology mostly shows up as new features in existing apps.
We learned more at Apple's iPhone 16 event on September 9th. During the event, Apple touted a number of AI-powered features coming to its devices, including translation on the Apple Watch Series 10, visual search on the iPhone, and a range of tweaks to Siri's capabilities. The first wave of Apple Intelligence arrives at the end of October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. A second wave of features is available as part of the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 developer betas.
The features first launched in U.S. English. Apple has since added English localizations for Australia, Canada, New Zealand, South Africa, and the United Kingdom.
In 2025, support will be added for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese. Notably, users in China and the EU may not get access to Apple Intelligence capabilities at all, due to regulatory hurdles in those regions.
What is Apple Intelligence?
Image credit: Apple
Cupertino marketing executives have dubbed Apple Intelligence “AI for the rest of us.” The platform is designed to improve existing capabilities by leveraging what generative AI is already good at, such as text and image generation. Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large information models. These systems use deep learning to form connections between text, images, video, music, and more.
The LLM-driven text generation shows up as Writing Tools. The feature is available in a variety of Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to summarize long text, proofread, and even compose messages for you using content and tone prompts.
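For apps built on the standard system text views, most of this comes along for free. As a minimal sketch, assuming UIKit's iOS 18 writingToolsBehavior opt-in (treat the exact API surface as an assumption, not a definitive reference):

```swift
import UIKit

// Rough sketch: opting a text view in to the Writing Tools experience.
final class NoteViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(textView)

        if #available(iOS 18.0, *) {
            // .complete allows the full experience (proofread, rewrite,
            // summarize); .limited or .none would restrict or disable it.
            textView.writingToolsBehavior = .complete
        }
    }
}
```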
Image generation is integrated in a similar way, albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emoji (Genmoji) in the Apple house style. Image Playground, meanwhile, is a standalone image generation app that uses prompts to create visual content that can be used in Messages and Keynote or shared on social media.
Apple Intelligence also brings a long-awaited makeover to Siri. The smart assistant was early to the game but has been largely neglected over the past several years. Siri is now integrated much more deeply into Apple's operating systems; for example, instead of the familiar icon, you'll see a glowing light around the edge of your iPhone screen when the assistant is doing its thing.
More importantly, the new Siri works across multiple apps. So, for example, you can ask Siri to edit a photo and then insert it directly into a text message, a frictionless experience the assistant has previously lacked. Onscreen awareness means Siri uses the context of the content you're currently engaged with to provide appropriate answers.
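The article doesn't name the plumbing involved, but cross-app requests like this depend on apps exposing their actions to the system, and Apple's App Intents framework is the published mechanism for that. A minimal, hypothetical sketch (the intent name and parameter are illustrative, not Apple's):

```swift
import AppIntents

// Hypothetical example: an app exposes a photo-editing action that Siri
// can invoke as part of a cross-app request.
struct ApplyPhotoFilterIntent: AppIntent {
    static var title: LocalizedStringResource = "Apply Photo Filter"

    @Parameter(title: "Filter Name")
    var filterName: String

    func perform() async throws -> some IntentResult {
        // The app's own editing logic would run here.
        return .result()
    }
}
```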
Who gets Apple Intelligence and when?
Image credit: Darrell Etherington
The first wave of Apple Intelligence arrives in October via the iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 updates. It includes the integrated Writing Tools, Clean Up for images, article summaries, and typed input for the redesigned Siri experience.
Many of the remaining features arrive with the second wave, as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.
The offering is free to use, so long as you have one of the following pieces of hardware:
All iPhone 16 models
iPhone 15 Pro Max (A17 Pro)
iPhone 15 Pro (A17 Pro)
iPad Pro (M1 or later)
iPad Air (M1 or later)
iPad mini (A17 Pro or later)
MacBook Air (M1 or later)
MacBook Pro (M1 or later)
iMac (M1 or later)
Mac mini (M1 or later)
Mac Studio (M1 Max or later)
Mac Pro (M2 Ultra)
It is worth noting that only the Pro versions of the iPhone 15 get access, owing to the shortcomings of the standard model's chipset. The entire iPhone 16 line, however, is able to run Apple Intelligence.
Private Cloud Compute
Image credit: Apple
Apple takes a small-model, bespoke approach to training. Rather than relying on the kitchen-sink approach that powers platforms like GPT and Gemini, the company has compiled datasets in-house for specific tasks like composing an email. The biggest advantage of this approach is that many of these tasks become far less resource intensive and can be performed on-device.
However, this doesn't apply to everything. More complex queries tap into the new Private Cloud Compute offering. The company operates remote servers running on Apple Silicon, which it claims can provide the same level of privacy as its consumer devices. Whether an action is performed locally or via the cloud is invisible to the user, unless the device is offline, at which point remote queries will fail.
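Apple hasn't published the routing logic, so purely as a conceptual illustration of the behavior described above (none of these types, names, or heuristics are real Apple APIs), it amounts to something like this:

```swift
// Conceptual sketch only: the names and the "requires large model" flag
// are invented for illustration, not drawn from any Apple framework.
enum ExecutionTarget {
    case onDevice            // small, task-specific models on the device
    case privateCloudCompute // Apple Silicon servers for heavier queries
}

enum RoutingError: Error {
    case offline // remote queries fail when the device has no connection
}

func route(requiresLargeModel: Bool, isOnline: Bool) throws -> ExecutionTarget {
    // Lighter tasks (summaries, proofreading, Genmoji) stay local.
    guard requiresLargeModel else { return .onDevice }
    // Heavier queries are escalated to Private Cloud Compute,
    // but only if the device can reach the network.
    guard isOnline else { throw RoutingError.offline }
    return .privateCloudCompute
}
```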
Apple Intelligence and third-party apps
Image credit: Didem Mente/Anadolu Agency / Getty Images
Ahead of WWDC, there was a lot of talk about Apple's pending partnership with OpenAI. In the end, however, it turned out that the deal was less about powering Apple Intelligence and more about offering an alternative platform for the things it isn't really built for. It's a tacit acknowledgment that building a small-model system has its limitations.
Apple Intelligence is free. So, too, is access to ChatGPT, though users with paid accounts for the latter get premium features that free users don't, such as unlimited queries.
The ChatGPT integration, introduced in iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: supplementing Siri's knowledge base and adding to the existing Writing Tools options.
With the service enabled, certain questions will prompt the new Siri to ask the user to approve accessing ChatGPT. Recipes and travel plans are examples of questions that may surface the option. Users can also directly prompt Siri to “Ask ChatGPT.”
Compose is the other major ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt, joining existing Writing Tools options such as Style and Summary.
We do know that Apple plans to partner with additional generative AI services. The company has all but said that Google Gemini is next on that list.