While many tuned into last week’s WWDC expecting positive announcements about Apple’s on-device AI agent (Siri), they were met with “maybe next year.” Despite this disappointment, and the usual parade of incremental improvements, I found two key takeaways about Apple’s future Apple Intelligence strategy.
The first key revelation was the new Liquid Glass interface. The demo looked oddly over-the-top to me but, in practice, it’s really beautiful and usable. I do think they’ll have to tweak the contrast, though: depending on the background, controls can blend in so well that I have trouble finding them. On the whole, I love the way it looks, and it’s highly usable on my iPad.
But it’s not just about sporting a new look to divert attention from Siri’s failures. The strategic significance of Liquid Glass is this: Apple is creating a completely new design for the current generation of devices, one that will just as easily accommodate its XR devices (Apple Vision Pro and the upcoming AR glasses). If you’ve ever seen the environment inside Apple Vision Pro, you know how the controls and apps float in space with a translucent look. Liquid Glass takes that basic concept and unifies it across all of Apple’s devices. And the fact that Apple is making everything else look more like an XR environment, rather than the other way around, tells me they’re preparing for a day when XR devices are more prominent than phones as the front end for Apple Intelligence. By giving iOS and macOS elements the same depth and translucency already standard on Vision Pro, Apple is back-porting spatial UI cues to legacy hardware.
They’re in good company here: Meta is putting everything behind its AI-powered Ray-Bans and upcoming Orion glasses, Google is working hard on its Project Astra glasses, and Apple executive Eddy Cue (in testimony during the recent Google antitrust trial) said that in 10 years there might not even be iPhones.
The second revelation came as Apple doubled down, at multiple points in the keynote, on its continuing dedication to privacy, which it enforces through on-device processing for its own foundation models as well as through its Private Cloud Compute service. That means keeping personal data on the device wherever possible and improving AI models without collecting individual user data. Given that they’ve failed so miserably to live up to what they said they’d deliver last year (particularly the promised on-device large language model and advanced Siri capabilities), 2025 gave them an opportunity to reset their aspirations based on what’s transpired since then. But they didn’t: they reinforced their commitment to privacy and to making Apple Intelligence work for their customers rather than for advertisers. This is important, because it highlights the differences among the big players vying to be the top-level agent on your device. Let’s do a brief recap of the current players in that space.
- **Apple**: Apple Intelligence (Siri) is the top-level agent, with an emphasis on privacy and user control over what the agent does and how it works. Many queries will be resolved either on-device or in Apple’s private cloud. The user has maximum agency in this scenario—the agent is there to support only them. So when you ask for recommendations on where to go for dinner, or what kind of running shoe to buy, or where to stay during a 4-day weekend, you’ll get answers that are personalized to your needs, not those of advertisers.
- **Meta**: MetaAI is the top-level agent, built to resolve user requests in ways heavily influenced by the needs of advertisers. Meta designed MetaAI to be “steerable,” so that it can persuade users as well as inform them. Meta could, and I believe likely would, steer its agent to make recommendations that favor its advertising business rather than its users, thereby reducing the agency of its users. Remember this post? Agents can be very persuasive.
- **Google**: Gemini is Google’s agent, and given its advertising business, Google will likely do something similar to what Meta is doing, but potentially with a twist. Google’s story has been fueled by advertising so far, but that business is under pressure from GenAI and is beginning to falter, so Google has to find a new business model, or at least augment the old one. Subscriptions to the rescue! Google has found that some people will pay for GenAI models (though at a much lower level right now), and it may split its business and follow the lead of streaming services like Netflix, where paying customers get a different version of the experience from free customers. In that scenario, Google could offer a Gemini agent with an advertising focus (like Meta) to free customers and an agent with a user focus (like Apple) to paying customers.
- **OpenAI**: Until recently, it looked as though OpenAI might follow in Apple’s footsteps. But they recently hired Fidji Simo, a former Facebook senior executive, to run the OpenAI product line, which foreshadows a fully-fledged advertising business in the near term. I put them in the same camp as Google: they could offer two options, one for paid and one for free subscribers. That two-tier approach sounds plausible to me because Sam Altman has repeatedly expressed distaste for an ad-driven model; offering both would let them placate their investors while retaining the option of a user-focused agent.
- **Perplexity**: While Perplexity doesn’t have a frontier model (they connect to multiple models on the back end), they are definitely vying to be your top-level agent. Advertising is part of their model, but it remains to be seen how they’ll try to differentiate themselves from the rest of the pack.
The Road Ahead
These two strategic moves, Liquid Glass and the doubled-down privacy commitments, point to the direction of Apple’s AI strategy. While competitors race to build the most capable AI agents, Apple is playing a different game. They’re building an ecosystem where AI isn’t just a feature but an integral part of how we interact with technology across all of our devices, with user privacy as a core differentiator. The rollout of Liquid Glass across their device spectrum suggests they see XR as the next major computing platform, with GenAI as its backbone.
The question now is whether their privacy-first approach will resonate strongly enough with users to overcome the earlier-to-market, but less private, offerings from their competitors. Given the increasing concerns about AI and data privacy, especially as we turn to foundation models for everything from purchasing decisions to personal therapy, Apple’s bet on privacy might just prove to be a winning hand.