Apple at WWDC introduced Apple Intelligence, its long-expected approach to generative AI.
The system promises to handle a wide range of tasks while preserving context, privacy and speed.
It uses large language models (LLMs) to summarize and prioritize notifications and messages, create personalized images (including “Genmoji” icons, Image Playground originals, and pictures of your friends) and even carry out tasks on your behalf. You can ask it to move a file, or to play the podcast you shared the other day.
Personal context plays a crucial role. Apple Intelligence can understand who you’re talking to, or what your daily commute looks like.
However, Apple claims it won’t compromise on privacy. The A17 Pro and M-series chips handle many tasks on-device and keep that information local. You control who accesses your data, and Private Cloud Compute lets the system tap larger server-based LLMs while sending only the data a request needs and preventing it from being stored elsewhere.
Outside researchers can even inspect the code running on Private Cloud Compute servers to verify that data is handled as promised.
Siri, meanwhile, uses Apple Intelligence to become more capable. It can keep up even if you stumble over your words, follow the context of a conversation (so it knows what a word like “there” refers to) and accept typed commands. It can offer device help even if you don’t know a feature’s exact name.
Siri can even perform in-app actions, such as jumping to a specific photo, editing it, and sharing it. The assistant can also find information from across your device; it can locate your driver’s license in Photos, or tell you when a flight lands.
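Siri’s new in-app powers build on hooks developers already provide. As a rough illustration (not Apple’s actual code), here’s how a photo app might expose an “apply filter” action through the existing App Intents framework in Swift; the intent name, parameters and dialog are hypothetical:

```swift
import AppIntents

// A minimal sketch of an in-app action exposed to Siri via App Intents.
// The intent, its parameters and the dialog are illustrative only.
struct ApplyFilterIntent: AppIntent {
    // How the action is described to Siri and in Shortcuts
    static var title: LocalizedStringResource = "Apply Filter to Photo"

    @Parameter(title: "Photo Name")
    var photoName: String

    @Parameter(title: "Filter")
    var filterName: String

    // Runs when Siri (or Shortcuts) invokes the action
    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would locate the photo and apply the edit here.
        return .result(dialog: "Applied \(filterName) to \(photoName).")
    }
}
```

Presumably, the smarter Siri draws on intents like this to chain steps together, such as finding a photo, editing it and sharing it in one request.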
Searches in Photos are more in-depth, too. You can now search within videos, and automatically create Memory Movies (complete with Apple Music soundtracks) from natural language descriptions.
The platform can even recognize when it needs help from another AI model, such as OpenAI’s ChatGPT. You can have ChatGPT write a bedtime story in Pages, or generate images in styles beyond what Apple’s own tools offer. Apple says privacy still applies here, as you won’t need to create an account and your requests won’t be stored.
Apple Intelligence is built into iOS 18, iPadOS 18, and macOS Sequoia, and will be free to use. It’s also coming to Xcode to help developers produce Swift-based apps. You’ll be able to try it this summer, but you’ll need at least an iPhone 15 Pro or an M1-based iPad or Mac to use it.
The release is a direct counter to Google’s increasing use of AI in Android, as well as Microsoft’s Copilot. It’s also a not-so-subtle response to Samsung’s hints that it may eventually charge for Galaxy AI features. Apple is making generative AI a system-wide element included at no extra charge, and potentially with deeper integration than Gemini currently has on Android or ChromeOS.