At WWDC 2024 on Monday, Apple unveiled its long-awaited, ecosystem-wide generative AI initiative. As rumors had suggested, it's called Apple Intelligence (AI, get it?). The company promised that the feature will deliver a highly personalized experience with privacy at its core.
The company is positioning Apple Intelligence as an essential capability across its operating systems, including iOS, macOS, and the latest version of visionOS.
Image credit: Apple
“It needs to be powerful enough to help you do the things that matter most to you,” Cook said. “It needs to be intuitive and easy to use. It needs to be deeply integrated into your product experience. Most importantly, it needs to understand you and be rooted in your personal context: your daily life, your relationships and your communications. And of course, it needs to be built with privacy in mind from the start. All of this goes beyond artificial intelligence. This is personal intelligence, and it's the next big step for Apple.”
SVP Craig Federighi added that “Apple Intelligence is based on your personal data and context,” explaining that the feature draws on the personal data you feed into apps like Calendar and Maps.
The system is built on large language models, and the company says much of the processing happens locally, on the latest generation of Apple silicon. “Many of these models run entirely on-device,” Federighi said during the event.
But on-device systems still have their limits, which is why some of the heavy lifting has to happen in the cloud, away from the device. For that, Apple is introducing Private Cloud Compute, a service that runs on Apple silicon servers on the back end, designed to preserve the privacy of this very personal data.
Apple Intelligence also includes perhaps the biggest update to Siri since its introduction more than a decade ago, with the company saying the assistant is now “more deeply integrated” into the operating system. On iOS, that means replacing the familiar Siri orb with a glowing border that wraps around the edge of the screen while the assistant is active.
Siri is no longer just a voice interface: Apple is also adding the ability to type queries directly to the assistant, an acknowledgment that voice often isn't the best way to interact with these systems.
Image credit: Apple
Meanwhile, App Intents will let developers integrate the assistant more directly into their apps. This will start with first-party applications, but the company plans to open up access to third parties as well, significantly expanding the range of actions Siri can perform directly.
The service also greatly enhances multitasking by enabling cross-app actions, meaning users won't have to keep switching between Calendar, Mail, and Maps to schedule a meeting, for example.
Apple Intelligence will be integrated into most of the company's apps. It can help you compose messages in Mail (and in third-party apps) and suggest responses via Smart Reply, a feature Google has offered in Gmail for some time and has continued to build on with its own generative AI model, Gemini.
The company is also bringing generative AI to emoji with Genmoji (yes, that's really the name), which creates custom emoji from a text prompt, while Image Playground is an on-device image generator built into apps like Messages, Keynote, Pages, and Freeform. Apple is also shipping a standalone Image Playground app on iOS and opening up access to the feature via APIs.
Image credit: Apple
Meanwhile, Image Wand is a new Apple Pencil tool that lets you generate an image by circling content on screen: essentially Apple's take on Google's Circle to Search, but focused on image creation.
Apple is also upgrading search for content like photos and videos, promising richer natural language queries within those apps. The generative model likewise makes it easy to create slideshows in Photos from a natural language prompt.

Apple Intelligence will roll out with the latest versions of the company's operating systems, including iOS 18, iPadOS 18, macOS Sequoia, and visionOS 2, and will be available for free with those updates.
The feature is expected to be limited to the iPhone 15 Pro and to Macs and iPads with M1 or later chips. The standard iPhone 15 misses out, likely due to the limitations of its chip.
As expected, Apple also announced a partnership with OpenAI that brings ChatGPT to services like Siri. Powered by GPT-4o, the integration draws on OpenAI's image- and text-generation capabilities. Users will be able to access it without signing up for an account or paying a fee (though it is possible to upgrade to premium features).
The ChatGPT integration is due to arrive later this year on iOS, iPadOS, and macOS. The company said it also plans to integrate other third-party LLMs, but didn't provide details; Google's Gemini is likely near the top of that list.