There is a version of this story where Apple quietly became the most thoughtful AI company in the room. Where its obsession with privacy, its custom silicon, and its tightly controlled ecosystem gave it advantages that cloud-first competitors simply couldn't replicate. Where its patience paid off and Siri 2.0 arrived polished, capable, and worth the wait.
That version of the story is still being written. And right now, it is not going well.
The Promise: WWDC 2024 and the Hype Machine
Cast your mind back to June 2024. Apple took the stage at WWDC and unveiled Apple Intelligence with the kind of confidence the company usually reserves for products that are actually ready to ship. The presentation was slick, the demos were compelling, and the message was clear: Apple had not just caught up to the AI moment, it had found a better way to do it.
The headline feature was a reimagined Siri. Not the bumbling, forgetful assistant that had been the industry's longest-running punchline since 2011, but something genuinely new. A Siri with on-screen awareness, meaning it could see what you were looking at and act on it. Personal context, meaning it could understand your life, your relationships, your calendar, your emails, and respond accordingly. In-app actions, meaning it could reach into third-party applications and get things done without you having to navigate there yourself. Apple also dangled the prospect of Google Gemini as a future integration partner, with executives hinting broadly at it across multiple events throughout the year.
The marketing followed. iPhone 16 was sold, in no small part, on Apple Intelligence. Adverts saturated television, social media, and every surface Apple could buy. The message to consumers was direct: this phone is the AI phone. Buy it now, the features are coming.
What followed was one of the most embarrassing product rollouts in Apple's recent history. In March 2025, Apple publicly admitted it had missed its own quality bar. The smarter Siri, the one with personal context and on-screen awareness, was pulled. The features that had defined the entire iPhone 16 marketing campaign were quietly kicked down the road. A federal lawsuit was filed in California alleging false advertising. Apple eventually settled.
What We Actually Got
To be fair to Apple, not everything about Apple Intelligence was vaporware. iOS 18.1 and 18.2 did deliver some features. Notification summaries arrived, though they quickly became notorious for getting things embarrassingly wrong, including one instance the BBC formally complained about after it misrepresented a breaking news story. Image Playground and Genmoji shipped. Writing tools that clean up grammar and adjust tone landed in Notes and Mail. ChatGPT was integrated for complex queries, though the implementation was so conservative that it barely registered.
What never arrived was the part that actually mattered: the three flagship capabilities Siri was supposed to gain, Personal Context, On-Screen Awareness, and In-App Actions. These were not peripheral features. They were the entire point. Without them, Apple Intelligence was a set of useful but unremarkable additions to an OS that already had most of what people needed.
The Siri Timeline
Siri debuted in 2011 as a genuine novelty. Between 2014 and 2020, it received incremental updates focused on Apple's own apps. When ChatGPT exploded onto the scene in late 2022, it reportedly blindsided Apple executives entirely. The company scrambled, began work on a new foundation model internally codenamed "Ajax," and by 2024 felt enough pressure to announce a full overhaul before it was ready. By October 2025, iOS 26 had shipped and the major Apple Intelligence features were still absent with no confirmed release date.
In September 2025, Apple shipped five incremental Siri improvements in iOS 26: faster follow-up queries, richer answers, tighter Shortcuts integration, a refreshed calling interface, and instant language switching. Helpful, yes. But these were never a substitute for what had been promised. Internal testers on early iOS 26.4 builds were reportedly warning that the new Siri still did not compete with current chatbots.
Why Did This Happen?
The honest answer is more complicated than "Apple was lazy" or "Apple doesn't care about AI." Neither of those is true, and accepting them as explanations would be doing a disservice to what is actually an interesting and structural problem.
Apple's approach to AI was always different from its competitors. While Google, Microsoft, and Meta raced to cloud-based large language models that could dazzle in demos, Apple planted its flag firmly in on-device intelligence. The idea was that your phone's neural engine does the heavy lifting, your data never leaves your device, and you get AI without the privacy tradeoff. It is, philosophically, an admirable position. It is also, technically, an enormously difficult one.
Building a model capable enough to do what Apple promised, while keeping it small enough to run locally on an iPhone, is a genuinely hard engineering problem. The rest of the industry solved the capability question by throwing cloud compute at it. Apple's self-imposed constraints ruled that solution out, at least without compromising the privacy narrative that underpins its entire brand identity.
There is also the ecosystem problem. Apple promised that the new Siri would reach into third-party apps via App Intents. For that to work, developers need to build support for it. And developers, watching Apple delay and delay and delay, have not exactly been rushing to invest engineering time in a feature that keeps not shipping.
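To make the ecosystem problem concrete: an App Intent is a small piece of Swift that a developer writes to expose one action from their app to Siri and Shortcuts. A minimal sketch looks something like the following, where the intent name and behaviour are hypothetical examples, not anything Apple has shipped:

```swift
import AppIntents

// Hypothetical intent a notes app might expose so Siri can act inside it.
struct CreateNoteIntent: AppIntent {
    // The name Siri and Shortcuts surface to the user.
    static var title: LocalizedStringResource = "Create Note"

    // A value Siri can fill in from the user's request.
    @Parameter(title: "Text")
    var text: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would write into its own data store here.
        return .result(dialog: "Created a note: \(text)")
    }
}
```

Multiply that by every action in every app on your phone and the scale of the developer investment Apple was asking for becomes clear, which is exactly why a feature that keeps not shipping is so corrosive to the whole plan.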
The result was a company that announced something it genuinely intended to build, encountered the reality of building it, and discovered the gap between the two was larger than it had anticipated. That is not a scandal. But announcing it to the world before closing that gap, and building an entire product marketing cycle around it, is where Apple deserves the criticism it has received.
The Gemini Admission
On January 12, 2026, Apple and Google issued a joint statement that, depending on how you read it, was either a pragmatic partnership or a quiet admission of defeat.
The two companies announced a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology, with those models set to power future Apple Intelligence features including a more personalized Siri.
Let that sit for a moment. Apple, the company that has spent years positioning itself as the privacy-first alternative to Google's data-harvesting model, has agreed to pay Google reportedly around $1 billion a year to use Gemini as the foundation of its AI ambitions. Apple evaluated technologies from OpenAI and Anthropic before selecting Google, citing its models as the most capable foundation.
The company is framing this as a technology choice rather than a retreat, emphasising that Apple Intelligence will continue to run through Private Cloud Compute and that privacy standards are maintained. That framing is not entirely wrong. But it does not change what the deal represents at a strategic level: Apple building toward its biggest announced product feature by licensing the core capability from its oldest and most complex rival.
That reported $1 billion a year buys access to a custom Gemini model with 1.2 trillion parameters. Google, for its part, now has Gemini running inside Apple's ecosystem, powering Siri on two billion devices. That is a remarkable outcome for a company that Apple has been quietly trying to reduce its dependence on.
What Is Actually Coming in 2026
To Apple's credit, there is genuine momentum now. WWDC 2026, scheduled for June 8, is expected to unveil a new Siri interface integrated into the Dynamic Island, with a standalone Siri app that supports back-and-forth conversation and conversation history. The Gemini-powered foundation is reportedly being distilled into a smaller on-device model, which would preserve Apple's privacy architecture while delivering significantly better capability.
Apple is reportedly planning to turn Siri into a full chatbot experience, capable of competing with OpenAI's ChatGPT, with a dedicated interface tested internally under the codename "Campos." The three long-promised flagship features (Personal Context, On-Screen Awareness, and In-App Actions) are reportedly still on the roadmap and expected to be more capable than what was originally shown at WWDC 2024.
Apple has also stated publicly, to CNBC, that the upgraded Siri remains on track for 2026. Given the company's track record over the past 18 months, that assurance deserves some scrutiny. But the Gemini partnership, the new AI leadership under Mike Rockwell (previously of Apple Vision Pro), and the visible pressure from investors and the legal settlement all point to an organisation that knows it cannot miss again.
The Bigger Question: What Does This Mean for AI?
Here is where this story connects to something larger.
We have spent the last few weeks on this blog talking about focused tools versus distraction machines. The E-Ink phone piece. The Windows Phone legacy. The idea that the best technology respects your time and attention rather than monetising them. AI, done well, fits naturally into that philosophy. A genuinely intelligent assistant that knows your context, understands what you need, and helps you get it done is the ultimate focused tool. It is the promise that every voice assistant has made since 2011 and that none of them has fully delivered.
The trouble is that the AI race, as it is currently being run, is not primarily about building focused tools. It is about building platforms. Microsoft wants Copilot embedded in everything you do at work. Google wants Gemini to be the layer through which you interact with all of Google's services. OpenAI wants ChatGPT to become a default operating layer for a generation of users. These are not neutral productivity tools. They are ecosystems competing for attention and data, dressed up in the language of helpfulness.
Apple's stated alternative was different. On-device. Private. Yours. If it had worked, it would have been the most interesting AI proposition in the industry: intelligence that genuinely serves the user rather than the platform.
The Third Platform Question
We asked in the Windows Phone piece whether the current duopoly leaves space for a third platform. The AI layer makes that question more urgent. If Apple's privacy-first approach fails, or gets quietly absorbed into the Google ecosystem through deals like this one, then the AI experience on every major platform converges toward the same model: cloud-dependent, data-hungry, and optimised for engagement over utility. That convergence could be exactly the space a third platform steps into. Not with a better spec sheet, but with a fundamentally different answer to the question of what your AI is actually for.
Android has Google baked into its foundations. iOS is now paying Google to power its AI. The two dominant mobile platforms are, at the AI layer, increasingly the same thing. If a genuine alternative to that model emerges, whether from a new OS, a privacy-first hardware maker, or something we haven't seen yet, it will likely find its audience among exactly the people who read pieces like this one.
Final Thoughts
Apple's failure with Apple Intelligence is not a story about incompetence. The engineering problems are real and difficult. The privacy constraints are genuine, even if they also served as convenient cover for delays. The company still has the best consumer hardware in the industry, custom silicon that its competitors cannot match, and a developer ecosystem that remains the most valuable in the world.
But it made a promise it could not keep, marketed a product it had not built, and spent 18 months watching the gap between its announcements and its shipping reality become a running joke. That matters, not just for Apple's reputation, but because the promise itself was worth making. A private, on-device, user-first AI would have been genuinely good for consumers. The fact that Apple could not deliver it without turning to Google suggests that the version of AI that respects your privacy and serves only you is harder to build than anyone wants to admit.
WWDC 2026 is in June. The Gemini-powered Siri is coming. The conversation history, the Dynamic Island interface, the chatbot experience: all reportedly on track. Maybe this is finally the year Apple closes the gap between what it promised and what it ships.
But I'd suggest watching what it promises next before deciding whether to believe it.
What do you think? Has Apple permanently damaged its credibility on AI, or is this a stumble the company can recover from? And if Apple's privacy-first approach ultimately gets swallowed by the Google partnership, does that change what you think about the future of AI on mobile? Let me know in the comments.