I would bet significant money that, within two years, it will become Generally Obvious that Apple has the best consumer AI story of any tech company.
I can explain more in-depth reasoning, but the most critical point: Apple builds the only platform where developers can construct a single distributable that works on mobile and desktop with standardized, easy access to a local LLM, and a quarter billion people buy into this platform every year. The degree to which no one else on the planet is even close to this cannot be overstated.
The thing that people seem to have forgotten is that the companies that previously attempted to monetize data center based voice assistants lost massive amounts of money.
> Amazon Alexa is a “colossal failure,” on pace to lose $10 billion this year... “Alexa was getting a billion interactions a week, but most of those conversations were trivial commands to play music or ask about the weather.” Those questions aren’t monetizable.
Google expressed basically identical problems with the Google Assistant business model last month. There’s an inability to monetize the simple voice commands most consumers actually want to make, and all of Google’s attempts to monetize assistants with display ads and company partnerships haven’t worked. With the product sucking up server time and being a big money loser, Google responded just like Amazon by cutting resources to the division.
It doesn't help that Google also keeps breaking everything with the home voice assistants, and this has been true for ages and ages.
I only have a single internet-enabled light in my house (that I got for free), and 90% of the time when I ask the Assistant to turn on the light, it says "Which one?". Then I tell it "the only one that exists in my house", and it says "OK" and turns it on.
Getting it to actually play the right song on the right set of speakers is also nearly impossible, but I can do it no problem with the UI on my phone.
I don't fear a future where computers can do every task better than us: I fear a future where brain-damaged robots annoy the hell out of us because someone was too lazy to do anything besides throw an LLM at things.
I feel like you're getting at something different here, but my conclusion is that maybe the problem is the approach of wanting to monetize each interaction.
Almost every company today wants their primary business model to be as a service provider selling you some monthly or yearly subscription when most consumers just want to buy something and have it work. That has always been Apple's model. Sure, they'll sell you services if need be, iCloud, AppleCare, or the various pieces of Apple One, but those all serve as complements to their devices. There's no big push to get Android users to sign up for Apple Music for example.
Apple isn't in the market of collecting your data and selling it. They aren't in the market of pushing you to pick brand X toilet paper over brand Y. They are in the market of selling you devices and so they build AI systems to make the devices they sell more attractive products. It isn't that Apple has some ideologically or technically better approach, they just have a business model that happens to align more with the typical consumers' wants and needs.
That is exactly why Apple's on-device strategy is the only economically viable one. If every Siri request cost $0.01 for cloud inference, Apple would go bankrupt in a month. But if inference happens on the Neural Engine on the user's phone, the cost to Apple is zero (well, aside from R&D). This solves the problem of unmonetizable requests like "set a timer," which killed Alexa's economics.
The assistant thing really shows the lie behind most of the "big data" economy.
1) They thought an assistant would be able to operate as an "agent" (heh) that would make purchasing decisions to benefit the company. You'd say "Alexa, buy toilet paper" and it would buy it from Amazon. Except it turns out people don't want their computer buying things for them.
2) They thought that an assistant listening to everything would make for better targeted ads. But this doesn't seem to be the case, or the increased targeting doesn't result in enough value to justify the expense. A customer with the agent doesn't seem to be particularly more valuable than one without.
I think that this AI stuff, and LLMs in particular, is an excuse, to some extent, to justify the massive investment already made in big data architecture. At least they can say "we needed all this data to train an LLM!" I've noticed a similar pivot towards military/policing: if this data isn't sufficiently valuable for advertising, maybe it's valuable to the police state.
> Those questions aren’t monetizable. ... There’s an inability to monetize the simple voice commands most consumers actually want to make.
Therein lies the problem. Worse, someone may solve it in the wrong way:
"I'll turn on the light in a minute, but first, a word from our sponsor..."
Technically, this will eventually be solved by some hierarchical system. The main problem is developing systems with enough "I don't know" capability to decide when to pass a question to a bigger system. LLMs still aren't good at that, and the ones that are good at it require substantial resources.
What the world needs is a good $5 LLM that knows when to ask for help.
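That tiered setup is easy to sketch; what's hard is everything inside the confidence check. Here's a minimal Swift sketch of the idea (all of the names and the 0.8 threshold are hypothetical; the stub functions stand in for real model calls):

```swift
// Hypothetical two-tier assistant: try the cheap on-device model first,
// escalate to the expensive cloud model only when confidence is low.
struct LocalReply {
    let text: String
    let confidence: Double  // 0...1; reliably estimating this is the open problem
}

func askSmallLocalModel(_ query: String) async -> LocalReply {
    // Stand-in for an on-device model call.
    LocalReply(text: "It's 72°F and sunny.", confidence: 0.93)
}

func askBigCloudModel(_ query: String) async -> String {
    // Stand-in for a server-side model call.
    "A detailed answer from the large model."
}

func answer(_ query: String) async -> String {
    let local = await askSmallLocalModel(query)
    // The whole scheme lives or dies on this check:
    // the small model has to know when it doesn't know.
    if local.confidence >= 0.8 {
        return local.text
    }
    return await askBigCloudModel(query)
}
```

The routing logic itself is five lines; a trustworthy `confidence` value is the "$5 LLM that knows when to ask for help."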
I think of my Alexa often when I think about AI and how Amazon, of all people, couldn't monetize it. What hope do LLM providers have? Alexa is in rooms all around my house and has gotten amazing at answering questions, setting timers, telling me the weather, etc., but would I ever pay a subscription for it? Absolutely not. I wouldn't even have bought the hardware except that it was a loss leader and was like $20. I wouldn't have even paid $100 for it. Our whole economy is mortgaged on this?
Some features are not meant to be revenue sources. I'd lump assistive technology and AI assistants into the category of things that elevate the usefulness of one's ecosystem, even when not directly monetizable.
Edit: IMO Apple is under-investing in Siri for that role.
Voice assistants that were at the level of a fairly mediocre internet-connected human assistant might be vaguely useful. But they're not. So even if many of us have one or two in our houses or sometimes lean on them for navigation in our cars we mostly don't use them much.
Amazon, as I recall, was at one point going to have a big facility in Boston focused on Alexa. It's just an uninteresting product that, if it were to go away tomorrow, I wouldn't much notice. And I certainly wouldn't pay an incremental subscription for it.
This is the part that hasn't made much sense to me. Maybe just... have a better product?
As you quoted above, "most of those conversations were trivial commands to play music or ask about the weather." Why does any of this need to consume provider resources? Could a weather or music command not just be... a direct API call from the device to a weather service / Spotify / whatever? Why does everything need to be shipped to Google/Amazon HQ?
My mother always enjoyed playing Jeopardy! on Alexa; it was a novel format and everybody could participate while sitting around and chatting. She happily would have paid for it, even the dreaded monthly subscription, but it was neglected. The service started being buggy (lagging, repeatedly restarting the day's question series) and now they've moved on.
If anyone knows of an open-source alternative I could stitch together, I am all ears!
The difference is that previous versions of Alexa weren't good enough to pay for. Now it is good enough that millions of users are paying $10-100 for these services.
Much of the cost of Alexa wasn't the data center costs, as Alexa was not, until recently, an AI. Amazon lost tons of money selling cheap Echo speakers at below cost expecting people would use Alexa on those to buy things. Turns out, people don't like to buy things by yelling at a speaker.
As a sibling poster has said, I don't know how much on-device AI is going to matter.
I have pretty strong views on privacy, and I've generally thrown them all out in light of using AIs, because the value I get out of them is just so huge.
If Apple had actually executed on their strategy (of running models in privacy-friendly sandboxes), I feel they would've hit it out of the park. But as it stands, these are all bleeding-edge technologies and you have to have your best and brightest on them. And even with seemingly infinite money, Apple doesn't seem to have delivered yet.
I hope the "yet" is important here. But judging by the various executives leaving (especially rumors of Johny Srouji leaving), that's a huge red flag that their problem is bleeding talent, not a lack of money.
I’m much more optimistic on device-side matmul. There’s just so much of it in aggregate and the marginal cost is so low especially since you need to drive fancy graphics to the screen anyway.
Somebody will figure out how to use it—complementing Cloud-side matmul, of course—and Apple will be one of the biggest suppliers.
You don't have to abandon privacy when using an AI: use a service that accesses enterprise APIs, which have good privacy policies. I use the service from the guys who create the This Day in AI podcast, called simtheory.ai. We have access to all of the SOTA models, so we can flip between any model, including lots of open-source ones, within one chat or across multiple chats and compare the same query, using various MCPs and lots of other features. If you're interested, have a look at the simtheory.ai Discord. (I have no connection to the service or to the creators.)
On-device moves all compute cost (incl. electricity) to the consumer. I.e., as of 2025 that means much less battery life, a much warmer device, and much higher electricity costs. Unless the M-series can do substantially more with less this is a dead end.
I don't think the throughput of a general purpose device will make a competitive offering; so being local is a joke. All the fun stuff is running on servers at the moment.
From there, AI integration is enough of a different paradigm that the existing apple ecosystem is not a meaningful advantage.
Best case, Apple is among the fast followers of whoever is actually innovative, but I don't see anything interesting coming from Apple or Apple devs anytime soon.
People said the same things about mobile gaming [1] and mainframes. Technology keeps pushing forward. Neural coprocessors will get more efficient. Small LLMs will get smarter. New use-cases will emerge that don't need 160-IQ super-intellects (most use-cases even today do not).
The problem for other companies is not necessarily that data-center-borne GPUs aren't technically better; it's that the financials might never make sense, much like how the financials behind Stadia never did, or at least need Google-levels of scale to bring in advertising and ultra-enterprise revenue.
> All the fun stuff is running on servers at the moment.
With "Apple Intelligence" it looks like Apple is setting themselves up (again) to be the gatekeeper for these kinds of services: "allow" their users to participate and earn a revenue share for this, all while collecting data on what types of tasks are actually in high demand, ready to in-source something whenever it makes economic sense for them...
Outside of fun stuff there is potential to just make chat another UI technology that is coupled with a specific API. Surely smaller models could do that, particularly as improvements happen. If that was good enough what would be the benefit of an app developer using an extra API? Particularly if Apple can offer an experience that can be familiar across apps.
Also, why would you want it sucking your battery or heating your room when a data center is only 20 milliseconds away and the payload is nothing more than a few kilobytes of text? It makes no sense for the large majority of users, whose preferences downweight privacy and the ability to tinker.
An LLM on your phone can know everything else that is on your phone. Even Signal chat plaintexts are visible on the phone itself.
People definitely will care that such private data stays safely on the phone. But it’s kind of a moot point since there is no way to share that kind of data with ChatGPT anyway.
I think Apple is not trying to compete with the big central “answer machine” LLMs like Google or ChatGPT. Apple is aiming at something more personal. Their AI goal may not be to know everything, but rather to know you better than any other piece of tech in the world.
And monetization is easy: just keep selling devices that are more capable than the last one.
I don't know; I feel like Apple shot themselves in the foot selling 8GB consumer laptops up until around 2024 while packing them with advanced AI inference hardware, and their phones and iPads usually had even less RAM.
On the other hand all devs having to optimize for lower RAM will help with freeing it up for AI on newer devices with more.
I'd love to see a strong on-device multi-modal Siri + flexibility with Shortcuts.
Besides the "best consumer AI story" they could additionally create a strong offering to SMBs with FileMaker + strong foundation models support baked in. Actually rooting for both!
I agree with the assessment that Apple has by far the best platform to ship features.
That being said, if people spend all their time interacting with LLMs for nearly everything, which is the direction we seem to be going in, what locks them in the Apple ecosystem?
I'd have a lot more respect for Apple's "cautious" approach to AI if they didn't keep promising and then failing to deliver Siri upgrades (while still calling out to cloud backends, despite all the talk about local LLMs), or if they hadn't shipped the absolute trash that is notification summaries.
I think at this point it's pretty clear that their AI products aren't bad because of some clever strategy; they're bad because Apple is bad at it. I agree that their platform puts them in a good place to provide a local LLM experience to developers, but I remain skeptical that they will be able to execute on it.
> it will become Generally Obvious that Apple has the best consumer AI story of any tech company.
I love my MacBooks and think they can be great for local LLMs in the future. But the vast majority do not care, and they do not want to set up complicated local LLMs. They want something that just works on their computers, tablets, and phones, ideally all synced together.
Local LLMs will never be better than cloud LLMs. They can close the gap if/when cloud LLM progress stalls.
Let's not conflate Apple's failure in cutting edge transformer models with good strategy.
I said "Consumer AI". Even Apple is likely beating Google in consumer AI DAUs, today. Google has the Pixel and gemini.google.com, and that's it; practically zero strategy.
Unfortunately, Apple will never ditch its luxury branding, so, like its memory pricing, even if the tech is good, its business model will never leverage widespread adoption.
Local AI sounds nice, but most of Apple's PCs and other devices don't come with enough RAM at a decent price for good model performance, and macOS itself is incredibly bloated.
That's true for current LLMs, but Apple is playing the long game.
First, they are masters of quantization optimization (their 3-4 bit models perform surprisingly well).
Second, Unified Memory is a cheat code. Even 8GB on M1/M2 allows for things impossible on a discrete GPU with 8GB of VRAM due to data transfer overhead. And for serious tasks, there's the Mac Studio with 192GB of RAM, which is actually the cheapest way to run Llama-400B locally.
Depends what you are actually doing. It's not enough to run a chatbot that can answer complex questions. But it's more than enough to index your data for easy searching, to prioritise notifications and hide spam ones, to create home automations from natural language, etc.
Apple has the ability and hardware to deeply integrate this stuff behind the scenes without buying into the hype of a shiny glowing button that promises to do literally everything.
FWIW, AI is not entirely locked down in the Apple ecosystem. Sure, they control it but they've already built the foundation of a major opportunity for developers.
There's an on-device LLM that is packaged in iOS, iPadOS and macOS 26 (Tahoe) [1]. They even have a HIG on the use of generative AI [2].
Something like half of all Macs are already running macOS 26 [3], so this could be the most widely distributed on-device LLM on the planet.
I think people are sleeping on this, partly because the model is seen as under powered. But I think we can presume it won't always be so.
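For anyone who hasn't tried it, here's roughly what calling that packaged model looks like via the Foundation Models framework. This is a sketch based on Apple's WWDC25 material, so treat the exact signatures as approximate:

```swift
import FoundationModels

// Ask the OS-bundled on-device model for a one-sentence summary.
// No API key, no network call, no per-request cost to the developer.
func summarize(_ text: String) async throws -> String? {
    // The system model ships with the OS; check it's usable on this device
    // (older hardware, disabled Apple Intelligence, etc. make it unavailable).
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

The notable part is what's absent: no account setup and no metering, which is exactly the distribution story the parent is describing.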
I've just posted a Show HN of an app for macOS 26 I created that uses Apple's local LLM to summarize conversations you've had with Claude Code and Codex. [3]
I've been somewhat surprised at the quality and reliability of Apple's built-in LLM and have only been limited by the logic I've built around it.
I think Apple's packaging of an LLM in its core operating systems is actually a fast move with AI and even has potential to act as an existential threat to Windows.
I can second this. I am nearing launch on an app that uses both the new SpeechAnalyzer and the on-device LLM, and it has met or exceeded my expectations. A longer context would always be nice, but then I remember it's running on a phone.
Don’t a lot of Android devices come with Gemini Nano on the device?
Probably not as many out there as there are Apple devices because it is only the high end ones at the moment. I don’t think they are that far behind in numbers though.
I'd be curious to see an estimate on the google side.
Here are some real rough estimates in Apple's ecosystem:
For macOS alone the install base is something like 110-130 million, and only Apple Silicon Macs can run the new model, so maybe 45 million active Macs are updated to macOS 26 and can run their model.
There are a bunch of details, but of the iPhones out there that are new enough to run Apple Intelligence and are on iOS 26, something like 220 million qualify.
For iPad, same conditions, but for iPadOS it's something like 60 million.
So, something like 325 million active devices are out there ready to run LLM completion requests.
I've tested almost every LLM that will work on a modern iPhone, and Apple's models are universally terrible in comparison to almost every open-weights model; they're so bad it's a joke among devs who work in this space.
The only thing it's useful for is super basic tasks like sentiment classification, summarization (sort of), or stuff like, "Does this message contain toxic/bad language, answer yes or no only".
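Even that narrow niche maps neatly onto the framework's guided generation, where the model can only fill in a fixed schema instead of free-form text. A sketch assuming the WWDC25 API shape (the `ToxicityVerdict` type and its wording are made up for illustration):

```swift
import FoundationModels

// Guided generation constrains the model's output to a typed schema:
// the "answer yes or no only" pattern, enforced by the framework rather
// than by prompt begging.
@Generable
struct ToxicityVerdict {
    @Guide(description: "Whether the message contains toxic or abusive language")
    var isToxic: Bool
}

func isMessageToxic(_ message: String) async throws -> Bool {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Classify this message: \(message)",
        generating: ToxicityVerdict.self
    )
    return response.content.isToxic
}
```

For a weak model, forcing a `Bool` out of it is a lot more robust than parsing whatever prose it felt like emitting.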
It might as well be the visualization of the two strategies:
- Everyone else: "We mainly build huge AI compute clusters to process large amount of data and create value, at high cost for ramp-up and operation."
- Apple: "We mainly build small closed-down AI compute-chips we can control, sell them for-profit to individual consumers and then orchestrate data-processing on those chips, with setup and operational cost all paid by the consumer."
I can't think of any company which has comparable know-how and, most of all, a comparable sell-out scale to even consider Apple's strategy.
No matter what they do, they will sell hundreds of millions of compute devices for the foreseeable future. They use this to build out AI infrastructure they control, pre-paid by the future consumers.
> We mainly build small closed-down AI compute-chips we can control, sell them for-profit to individual consumers and then orchestrate data-processing on those chips, with setup and operational cost all paid by the consumer
I wish they did, but they don't. They have been stingy on RAM for iPhone and iPad for a decade. At this point only a small percentage of their userbase has an iPhone or iPad with 8GB of RAM, which can somehow run any AI models at all, even open-source ones, and be of any use. And those don't compare to the big models.
They don't even provide an option to buy an iPhone with more RAM. The iPad can have at most 16GB of RAM, and the mainstream MacBook Air at most 32GB.
And at the current price of cheap online AI, where e.g. Perplexity runs so many promos for the Pro version at under $10 per year, and all AI providers give away good free models with rate limits generous enough for many users, I don't see Apple hardware being bought specifically for its AI compute chips, at least not by non-pro users.
If they lose on AI, though, and because of that don't have good AI integrations, they will eventually lose on hardware too. E.g., Polish is still not supported in Siri, so my mum cannot use it. OSS Whisper v3 turbo has been available for ages, but Apple still supports only a few languages. Third-party keyboards cannot integrate well with audio input, and the whole experience suffers because of platform limitations.
The existential hope that all the other players have is that AI will drive adoption of a form factor that replaces the phone. Because if in 5 years the dominant device is still the phone, Apple wins.
Consumer hardware chips will be plenty powerful to run “good enough” models.
If I’m an application dev, do I want to develop something on top of OpenAI, or Apple’s on-device model that I can use as much as I want for free? On-device is the future.
In 5 years, the dominant form-factor will still be a phone. This is not the risk.
The existential FEAR of the smartphone ecosystem players (Apple, Google) is that another ecosystem (!) may come along, one that is more tightly integrated into daily life, is more predictive of the users' needs, requires less interaction, and is not under THEIR control.
Because this is not about devices, it's about owning the total userbase of that OS-ecosystem.
Replacing the smartphone has been attempted numerous times in the past decade, but no device was able to replace it as a consumption device. Now technology has reached a level of maturity where smart glasses may have a shot at this. AND they come along with their own ecosystem as well.
Whatever happens, they won't replace all phones within 5 years. But it's possible that such a device would become a companion to an iOS/Android phone and within 5 years gradually eases off users of their phones into that other ecosystem.
And that's scary for Apple and Google.
Because this is not a device-war, this is an ecosystem-war.
Yes, as I said in another thread a few days ago: Apple's strength is in making personal computing endpoint devices for consumers. That's what's in their DNA. They have not done well at anything else.
While that’s definitely true, I think it’s maybe more fair to say that their actual strength has always been to take a personal computing technology that’s just about “ready-for-prime-time” and make it as accessible and fashionable as possible. Almost all of their failed products have been errors in judging how close a tech is to being ready for mass adoption.
I'm not sure how Apple is enabling anything interesting around AI right now.
That's what this bland article is not even touching on. Yes, having missed the boat is great if the boat ends up sinking. That doesn't make missing boats a great strategy.
Building huge models and huge data centers is not the only thing they could have done.
They had some interesting early ideas on letting AI tap app functionality client-side. But that has gone nowhere, and now everything of relevance is happening on servers.
Apple's devices are not even remotely the best dumb terminals to tap into that. Even that crown goes to Android.
> They use this to build out AI infrastructure they control, pre-paid by the future consumers.
I'm not following. What infrastructure? Pre-paid how?
Apple pays for materials and chips before it sells the finished product to consumers. Nothing is pre-paid.
And what infrastructure? The inference chips on iPhones aren't part of any Apple AI infrastructure. Apple's not using them as distributed computing for LLM training or anything, or for relaying web queries to a complete stranger's device -- nor would they.
> Apple pays for materials and chips before it sells the finished product to consumers. Nothing is pre-paid.
The AI capabilities of the devices will be pre-paid, as they will come with the product without delivering any significant value yet.
The end user will bear the cost for that before they get anything meaningful in return, because Apple's production volume is at such a scale that they can offset those investments without risking any meaningful loss of sales volume.
Other players can't do that because they don't sell 200mn units per year. If they added on-device inference chips, they would have to significantly increase the device price, risking not selling the product at all.
> Magic Cue - Magic Cue proactively surfaces relevant info and suggests actions, similar to how Apple's personalized Siri features were supposed to work. It can display flight information when you call an airline, or cue up a photo if a friend asks for an image.
Likewise Daily Hub didn't work but was shipped anyway.
> In our testing, Daily Hub rarely showed anything beyond the weather, suggested videos, and AI search prompts. When it did integrate calendar data, it seemed unable to differentiate between the user’s own calendar and data from shared calendars. This largely useless report was pushed to the At a Glance widget multiple times per day, making it more of a nuisance than helpful.
They roll out hardware to consumers they can use for AI once their service is ready, with users paying for that rollout until then.
Meanwhile they have started to deploy a marketplace ecosystem for AI tasks on iOS, where Apple has the right of first refusal, allowing the user to select a (revenue-share-vetted) 3rd-party provider to complete the task.
So until Apple is ready, the user can select OpenAI (or soon other providers) to fulfill an AI-task, and Apple will collect metrics on the demand of each type of task.
This will help them prioritize for development of own models, to finally make use of their own marketplace rules to direct the business away from third parties to themselves.
My guess is that they will offer a mixed on-device/cloud AI-service that will use the end-users hardware where possible, offloading compute from their clouds to the end-users hardware and energy-bill, with a "cheap" subscription price undercutting others on that AI-marketplace.
> I can't think of any company which has comparable know-how and, most of all, a comparable sell-out scale to even consider Apple's strategy.
I'm not sure where you position Samsung or Xiaomi, Oppo etc. They're competitive on price with chipsets that can handle AI loads in the same ballpark, as attested by Google's features running on them.
They're not vertically integrated and don't have the same business structure, but does that matter regarding on-device AI?
Vertical integration matters for sure, but people often underestimate the scale in which this market is already skewed.
- Apple owns more than 50% of this market segment; annual sales of iPhones are roughly 200 million units. In comparison, the Samsung Galaxy S-series sits at roughly 20-25 million.
- Apple is alone in the iOS ecosystem, while Samsung, Xiaomi and Oppo have to compete within the Android space every year. iOS is extremely sticky, which makes a certain volume of iPhones almost guaranteed to sell every year, at a lofty profit margin.
In comparison, Samsung always has to consider that the next BAD Galaxy-S might only sell a fraction of the previous one, because users might move horizontally to another Android brand (even to Pixel, a first-party product of their ecosystem provider). So Samsung cannot even make bets based on the sale of 20 million units, they are already at risk to make bets on the initial shipment-volume (~5 millions) because if the device doesn't sell they will have to PAY money to the carriers to get them into the market.
Apple has a much lower risk here. If the next iPhone is not catching on, Apple will likely still sell 200mn iPhones in that year, because the ecosystem lock-in is so strong that there is little risk of losing customers to anything else than ANOTHER (then more-profitable) iPhone.
So even when assuming a MASSIVE annual drop of 25% in Sales, Apple can still make development bets based on a production forecast of 150 MILLION units.
For their supply-chain that's still an average production output of ~400k units per DAY for each component. With that volume you can get entire factories to only produce for you.
That's why I can't think of any company in a comparable position. Apple can add hardware to their device and sell the resulting product to the consumer for profit before delivering any actual value with it.
If any competitor in the Android space attempts that, the component costs alone risk making the device dead on arrival, just because "some other Android device" delivers the same experience at lower cost.
I agree that this is a reasonable perspective, but from my cursory understanding of the “shakeup” at Apple, I am not sure it is seen that way by the Board and Cook.
I don't want to imply that this is their only play or that it will even work out.
The EU (and others) already identified this general scheme of stifling competition by "brokering" between the consumer and the free market, so outside of the US I'm not even sure how much Apple will be able to rely on such a strategy (again)...
I recently tried to figure out what their offerings currently are. I've been hoping for `efficient but performant AI compute-chips` from Apple ever since they kicked out Nvidia in 2015 (for the ML Models / Exploration parts below). It will be interesting to see how good their products will feel in this fast-paced environment and how much headroom (RAM + compute) will be left for non-platform offerings.
To my understanding, they market their ML stack as four layers [1]:
- Platform Intelligence: ready-made OS features (e.g., Writing Tools, Genmoji, Image Playground) that apps can adopt with minimal customization.
- ML-powered APIs: higher-level frameworks for common tasks—on-device Foundation Models (LLM), plus Vision, Natural Language, Translation, Sound Analysis, and Speech; with optional customization via Create ML.
- ML Models (Core ML): ship your own models on-device in Core ML format; convert/optimize from PyTorch/TF via coremltools, and run efficiently across CPU/GPU/Neural Engine (optionally paired with Metal/Accelerate for more control).
- Exploration/Training: Metal-backed PyTorch/JAX for experimentation, plus Apple’s MLX for training/fine-tuning on Apple Silicon using unified memory, with multi-language bindings and models commonly sourced from Hugging Face.
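For a concrete taste of the "ML-powered APIs" layer: the long-standing NaturalLanguage framework already runs small inference tasks like sentiment scoring on-device, with no model to ship at all. A minimal sketch:

```swift
import NaturalLanguage

// On-device sentiment via the ML-powered API layer: no model to bundle,
// no server round-trip. The score ranges from -1.0 (negative) to 1.0 (positive),
// returned as the raw value of the sentiment tag.
let text = "The keynote was a pleasant surprise."
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text

let (tag, _) = tagger.tag(at: text.startIndex,
                          unit: .paragraph,
                          scheme: .sentimentScore)
let score = Double(tag?.rawValue ?? "0") ?? 0
print(score > 0 ? "positive" : "negative")
```

This is the trade-off the four layers encode: the higher the layer, the less control you have, but the less RAM and compute headroom you burn.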
I think one of Apple's strengths since Tim Cook took over is their ability to avoid "gimmicks". As much criticism as people have of Apple for not innovating on the iPhone, I appreciate their ability to not screw products up.
I'm not saying AI is a gimmick, but the caution they show is a good quality, I think.
Apple could have avoided that by releasing it half-arsed like all the AI stuff, claiming that it does all those things, and writing somewhere "AI may make mistakes".
I work in UI in enterprise, where slight color shade differences between releases can cause uproar. I cannot imagine the thought process behind liquid glass in any sense.
OS X's Aqua was also an insanely bold UI with a lot of gimmicks, but it was still usable for the most part. I'm so very curious about the internal discussions around this.
They haven't really updated Siri though? That's still in the pipeline. So not a very fair comparison. The article states that they are behind and I think everyone knows that
AI isn't a gimmick, but a huge portion of the way it's presented to consumers is, especially given the fact that it never really was meant for consumers. As an Apple user, I'm thrilled at how "behind" they are.
But also, their tendency to not fall for gimmicks sometimes meant we didn't get a second mouse button for decades. Ultimately, the way they implemented it was super cool, but still.
The balancing act of figuring out what you can reasonably rely on from an LLM and what you need to be skeptical or dismissive of is not the type of experience an iPhone user should be expected to navigate.
I was going to link you the Apple Vision Pro as a counterpoint, but after clicking the link and being reminded of what that product actually looks like, I really don't know what to say any more. I'm literally dumbfounded anyone could make your comment at all
To their credit, they specifically decided not to make a big deal out of AR like Meta did and keep production small and expensive. They realized the tech wasn't ready for a mass adoption campaign. I'd say Apple, overall, has been pretty cautious with AR. I wouldn't be surprised if they even have the guts to cancel that project entirely like they did with self-driving cars
I ran into an AVP recently, and it actually is a great piece of hardware. It has only two issues: price and software. The former is forgivable, because the hardware really is amazing and the price is justified. The latter is not, and it's the original sin that has killed the product.
There's an unfulfilled promise of spatial computing. I wish I could load up my preferred CAD program and have wide and deep menus quickly traversable with hand gestures. Barring that, the least it could do is support games. Maybe if some combination of miracle shims (fex emu, asahi, w/e) were able to get onto the platform, it might be salvageable. The input drivers alone would be a herculean task.
Seems this has been Apple's modus operandi since the App Store, the last real "thing" they've made.
Hype about self-driving cars -> Apple chases it with the Apple Car -> investors pleased they kept up with the Joneses -> the Apple Car is behind, or not good enough, or whatever -> quietly cancelled -> investors pleased they culled the dead weight.
You can replace the Apple Car with the Vision Pro, or soon Apple Intelligence, and it plays out to the same formula. Luckily it allows investors to profit.
Google's headline new AI feature for this year's Pixel phone, Magic Cue, shipped despite not working.
> “The right info, right when you need it.” That’s how Google describes Magic Cue, one of the most prominent new AI features on the Pixel 10 series. Using the power of artificial intelligence, Magic Cue is supposed to automatically suggest helpful info in phone calls, text messages, and other apps without you having to lift a finger.
However, the keyword there is “supposed” to... even when going out of my way to prompt Magic Cue, it either doesn’t work or does so little that I’m amazed Google made as big a deal about the feature as it did.
It actually popped up and was useful for me yesterday when I was calling a hotel I had booked. I was kind of surprised because I had forgotten about the feature, but it is there and does occasionally offer helpful info.
Because 8 people worldwide own one, and it will stop receiving support shortly, if it hasn't already.
OP doesn't literally mean they haven't made anything; he means they've made nothing of real substance, which holds true when their biggest recent release is already completely forgotten by the public at large.
They’ve made plenty of things. I liken them to the Lexus of consumer electronics; expensive for what they are, thoughtfully designed, and conservative in their approach to adopting new trends.
They completely revolutionized laptop processors, were the first to put meaningful health data in watches, and created the first good bluetooth earbuds, but I guess they don't do things anymore.
You know I would be happy to offer this service to investors for a mere tens of millions of dollars. I'll send you photos of our weekly money bonfire, built with your money, and when you're tired of pictures of your money on fire, I'll simply... stop.
Heck, in accordance with the several zeitgeists of our age, I'll even do you the solid of fraudulently generating the money-on-fire pictures with AI, so when you get tired of seeing your money on fire I'll even hand, say, 25% of it back to you, as the result of my tireless efforts to bring value to my shareholders. That's a better return than you'll get from most of these investments!
This is the thing I've found amazing about people's complaints about Apple and AI.
Historically the strength of Apple was that they didn't ship things until they actually worked. Meaning that the technology was there and ready to make an experience that was truly excellent.
People have been complaining for years that Apple isn't shipping fast enough in this area. But if anything I think that they have been shipping (or trying to ship) too fast. There are a lot of scenarios that AI is actually great at but the ones that move the needle for Apple just aren't there yet in terms of quality.
The stuff that is at a scale that matters to them is integrations that just magically do what you want with iMessage/calendars/photos/etc. There are potentially interesting scenarios there, but the fact is that any time you touch my intimate personal (and work) data and do something meaningful, I want it to work pretty much all the time. And current models aren't really there yet, in my view. There are lots of scenarios that do work incredibly well right now (coding, most obviously), but I don't think Apple's mainline ones do yet.
>> Historically the strength of Apple was that they didn't ship things until they actually worked. Meaning that the technology was there and ready to make an experience that was truly excellent.
They dragged their feet on a host of technologies that other handset makers adopted, released and subsequently improved.
- USB C charging
- 90hz, 120Hz refresh rates
- wireless charging
- larger batteries (the iPhone 17 still lags behind Samsung and Google)
I'm not sure what happened, but the iPhone used to have the most fluid, responsive experience compared to Android. Now both Google and Samsung have surpassed it in that regard.
I've used Android and have owned several iPhones, and it doesn't seem like an issue of not releasing something before it's ready; it's more that they aren't capable of releasing phones that compete with the ones regularly beating them in the specs race.
This isn't necessarily a counterargument. Apple's always been conservative with their specs but their tight link between software and hardware has meant they've been able to do more with less. Batteries are a good example of that. Apple has always had a much smaller battery than flagship competitors but has had similar or better battery life than, say, Samsung
Last night I accidentally got the update to the latest iOS with this Liquid Glass stuff, and it's shockingly bad in every dimension: keyboard input lags, many things need MORE clicks/touches than before, weird context-menu popovers that don't even register taps 50% of the time, and general lag, sluggishness, and UI artifacts everywhere. It's really a degradation of UI/UX, even though I'm personally a fan of the glass-style design itself.
I feel like the only people who still say that are people who don't actively or daily use Apple products, because macOS Tahoe is a joke. Jelly scrolling on the iPad mini was a noticeable issue that should never have shipped. Antennagate on the iPhone 4. iOS 7... etc., etc.
> Historically the strength of Apple was that they didn't ship things until they actually worked. Meaning that the technology was there and ready to make an experience that was truly excellent.
Tell that to almost anything they've shipped in the last 5-10 years. It's gotten so bad that I wait until halfway through an entire major OS version cycle before upgrading. Every new thing they ship is almost guaranteed to be broken in some way, ranging from minor annoyance to fully unusable.
I buy Apple-everything, but I sure wish there were better options.
They had to say something and show they were working on something, even if it didn't work, to appease the market spirits so they didn't lose their best people (stock compensation, right?).
Now the tides are turning, so they can go back to scheming behind closed doors without risking their top people leaving for Meta for a bazillion dollars.
What people hate about Apple is that they ship things other people couldn't get to capital-W Work, and they're seen as 'stealing' the idea instead of perfecting them.
> Historically the strength of Apple was that they didn't ship things until they actually worked. Meaning that the technology was there and ready to make an experience that was truly excellent.
iOS 26 is a shit show. Glass looks terrible on my old 12 Pro Max, and just recently it has started trying to connect phone calls to my child's iPad Pro. That is, the speaker button, which I previously pushed to enable the speaker, now pops up a menu with other nearby devices listed in an annoyingly small font. My wife finally asked me for an Android because all her friends get far better pictures. Something isn't right over there, and a lot of people are leaving.
The core of Apple's problem boils down to apathy towards their product quality. I just recently switched from using Siri to Google Gemini in my car. The experience is dramatically better.
And this is the case across the board.
My friend's Fitbit works way better than my Apple watch.
Third and final example is how bad Apple's native dictation engine is. I can run OpenAI Whisper models on my Mac and get dramatically better output.
As a long time Apple fan who's had everything since before the first iPhone, I feel this apathy towards product quality cannot be disguised as some strategic decision to fast follow with AI.
> My friend's Fitbit works way better than my Apple watch.
My husband has a Fitbit, and it's so buggy he leaves it sitting on the shelf most of the time; the only time he wears it is for exercise.
Siri is bad, though I have found that Google Voice Assistant and Alexa have both really degraded over time, to the point where we just gave up on them completely. My husband is on Android, and I'm really surprised how bad the voice assistant is despite all the Gemini launches! (Mind you, he has an Australian accent.)
>> My friend's Fitbit works way better than my Apple watch.
That's odd because I've used both, along with a bunch other wearables (e.g. Whoop), and I wouldn't give up my Apple Watch for anything. Massively useful, can take calls, make payments, stream music from my Apple playlists, read and reply to messages, and a ton of other things.
The wearos devices can do all that stuff too, and fitbit is kind of getting blended into those devices piece by piece -- so after years of Fitbit use I can say that the best fitbit device i've had is ... a Pixel Watch 4.
I mention this because, at least for the functionality you mention, I think the Pixel watches are catching up nicely.
... but they still haven't been able to make me feel less stupid talking into a watch for phone calls like some off-brand James Bond wannabe, even if it works great.
But for everything else: you literally just said the handful of AI features are better on Google products... That seldom makes the product as a whole better.
You're arguing about product quality by using product availability examples.
Siri isn't competing with Gemini yet. Siri is old tech; Gemini is the new tech.
Same with dictation.
Siri hasn't been updated generationally with SOTA tech to compete with Gemini; it simply hasn't been updated. This is part of the "slow pace" the post is talking about (part of it, not the entirety of the slowness).
For example, Amazon updated my old Echo dots with Alexa+ beta, and it's pretty good. I have Grok in my Tesla, and though I don't like Grok or xAI, it's there and I use it occasionally.
Apple hasn't done their release of these things yet.
How so? Their brand-new Siri _is_ available. I am using Apple Intelligence on my new iPhone. They even have half-baked ChatGPT integrations everywhere. They got into a lot of trouble last year for running ads overselling what their new Siri can do.
Overselling abilities is for sure a lack of quality.
No. When I bought my first iPhone, Siri could start a stopwatch. Then it couldn't for 5 years, and today it can again. That's a big flaw for a product which can barely do anything else.
I only buy Apple products because of the good build quality. But they're quite bad products otherwise.
I think Apple secretly doesn’t want more market share, to avoid anticompetitive accusations.
> Shares of Apple Inc. were battered earlier this year as the iPhone maker faced repeated complaints about its lack of an artificial intelligence strategy.
Everyone’s shares were battered earlier this year, and it had nothing to do with AI, and everything to do with tariffs.
I think it was mostly Buffett's dumping. He's a smart guy and the world's best investor, but I think this was a mistake. The winning play is long on Apple, short on Microsoft.
> Amazon Alexa is a “colossal failure,” on pace to lose $10 billion this year... “Alexa was getting a billion interactions a week, but most of those conversations were trivial commands to play music or ask about the weather.” Those questions aren’t monetizable.
Google expressed basically identical problems with the Google Assistant business model last month. There’s an inability to monetize the simple voice commands most consumers actually want to make, and all of Google’s attempts to monetize assistants with display ads and company partnerships haven’t worked. With the product sucking up server time and being a big money loser, Google responded just like Amazon by cutting resources to the division.
https://arstechnica.com/gadgets/2022/11/amazon-alexa-is-a-co...
Moving to using much more resource intensive models is only going to jack up the datacenter costs.
Almost every company today wants their primary business model to be as a service provider selling you some monthly or yearly subscription when most consumers just want to buy something and have it work. That has always been Apple's model. Sure, they'll sell you services if need be, iCloud, AppleCare, or the various pieces of Apple One, but those all serve as complements to their devices. There's no big push to get Android users to sign up for Apple Music for example.
Apple isn't in the market of collecting your data and selling it. They aren't in the market of pushing you to pick brand X toilet paper over brand Y. They are in the market of selling you devices and so they build AI systems to make the devices they sell more attractive products. It isn't that Apple has some ideologically or technically better approach, they just have a business model that happens to align more with the typical consumers' wants and needs.
1) They thought an assistant would be able to operate as an "agent" (heh) that would make purchasing decisions to benefit the company. You'd say "Alexa, buy toilet paper" and it would buy it from Amazon. Except it turns out people don't want their computer buying things for them.
2) They thought that an assistant listening to everything would make for better targeted ads. But this doesn't seem to be the case, or the increased targeting doesn't result in enough value to justify the expense. A customer with the agent doesn't seem to be particularly more valuable than one without.
I think that this AI stuff and LLMs in particular is an excuse, to some extent, to justify the massive investment already made in big data architecture. At least they can say we needed all this data to train an LLM! I've noticed a similar pivot towards military/policing: if this data isn't sufficiently valuable for advertising maybe it's valuable to the police state.
There lies the problem. Worse, someone may solve it in the wrong way:
I'll turn on the light in a minute, but first, a word from our sponsor...
Technically, this will eventually be solved by some hierarchical system. The main problem is developing systems with enough "I don't know" capability to decide when to pass a question to a bigger system. LLMs still aren't good at that, and the ones that are require substantial resources.
What the world needs is a good $5 LLM that knows when to ask for help.
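As a sketch of the idea (every name and the confidence threshold here are hypothetical, not any real assistant API), a tiered setup could look like:

```python
# Hypothetical tiered assistant: answer locally when confident,
# escalate to a bigger (more expensive) model only when not.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # the small model's self-estimate, 0.0-1.0

def small_local_model(query: str) -> Answer:
    # Stub: a real system would run an on-device model here.
    known = {"turn on the light": Answer("Done.", 0.95)}
    return known.get(query, Answer("I'm not sure.", 0.2))

def big_remote_model(query: str) -> Answer:
    # Stub: a real system would call a datacenter-hosted model here.
    return Answer(f"(cloud answer to: {query})", 0.99)

def respond(query: str, threshold: float = 0.7) -> str:
    local = small_local_model(query)
    if local.confidence >= threshold:
        return local.text                  # cheap path: no server involved
    return big_remote_model(query).text    # escalate only when unsure
```

The routing itself is trivial; the hard engineering problem named above is making that `confidence` number trustworthy in the first place.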
Useful Douglas Adams reference: [1]
[1] http://technovelgy.com/ct/content.asp?Bnum=135
Edit: IMO Apple is under-investing in Siri for that role.
As I recall, Amazon at one point was going to have a big facility in Boston focused on Alexa. It's just an uninteresting product; if it were to go away tomorrow, I wouldn't much notice, and I certainly wouldn't pay an incremental subscription for it.
This is the part that hasn't made much sense to me. Maybe just.. have a better product?
As you quoted above, "most of those conversations were trivial commands to play music or ask about the weather." Why does any of this need to consume provider resources? Could a weather or music command not just be.. a direct API call from the device to a weather service / Spotify / whatever? Why does everything need to be shipped to Google/Amazon HQ?
If anyone knows of an open-source alternative I could stitch together, I am all ears!
In my experience, none of these voice assistants are accurate enough to trust with my money.
https://en.wikipedia.org/wiki/Nuance_Communications#Acquisit...
I have pretty strong views on privacy, and I've generally thrown them all out in light of using AIs, because the value I get out of them is just so huge.
If Apple actually had executed on their strategy (of running models in privacy-friendly sandboxes) I feel they would've hit it out of the park. But as it stands, these are all bleeding edge technologies and you have to have your best and brightest on them. And even with seemingly infinite money, Apple doesn't seem to have delivered yet.
I hope the "yet" is important here. But judging by the various executives leaving (especially rumors of Johny Srouji leaving), that's a huge red flag that their problem is bleeding talent, not a lack of money.
Somebody will figure out how to use it—complementing Cloud-side matmul, of course—and Apple will be one of the biggest suppliers.
From there, AI integration is enough of a different paradigm that the existing Apple ecosystem is not a meaningful advantage.
Best case, Apple is among the fast copiers of whoever is actually innovative, but I don't see anything interesting coming from Apple or Apple devs anytime soon.
The problem for other companies is not necessarily that data-center-borne GPUs aren't technically better; it's that the financials might never make sense, much like how the financials behind Stadia never did, or at least need Google levels of scale to bring in advertising and ultra-enterprise revenue.
Until the first Cambridge Analytica-sized privacy story hits a major cloud LLM provider, maybe.
With "Apple Intelligence" it looks like Apple is setting themselves up (again) to be the gatekeeper for these kind of services, "allow" their users to participate and earn a revenue share for this, all while collecting data on what types of tasks are actually in high-demand, ready to in-source something whenever it makes economic sense for them...
Consumers don't care about whether an LLM is local, and one that runs on your phone is always going to be vastly worse than ChatGPT.
I see zero indication that Apple is going to replace people going to chatgpt.com or using its app.
All I see Apple doing is eventually building a better new generation of Siri, not much different from Google/Alexa.
People definitely will care that such private data stays safely on the phone. But it’s kind of a moot point since there is no way to share that kind of data with ChatGPT anyway.
I think Apple is not trying to compete with the big central “answer machine” LLMs like Google or ChatGPT. Apple is aiming at something more personal. Their AI goal may not be to know everything, but rather to know you better than any other piece of tech in the world.
And monetization is easy: just keep selling devices that are more capable than the last one.
On the other hand, all devs having had to optimize for lower RAM will help free it up for AI on newer devices with more.
That being said, if people spend all their time interacting with LLMs for nearly everything, which is the direction we seem to be going in, what locks them in the Apple ecosystem?
I think at this point it's pretty clear that their AI products aren't bad because of some clever strategy; they're bad because Apple is bad at this. I agree that their platform puts them in a good place to provide a local LLM experience to developers, but I remain skeptical that they will be able to execute on it.
Local LLMs will never be better than cloud LLMs. They can close the gap if/when cloud LLM progress stalls.
Let's not conflate Apple's failure in cutting edge transformer models with good strategy.
I think you have Steve Jobs-colored glasses.
Apple has the ability and hardware to deeply integrate this stuff behind the scenes without buying in to the hype of a shiny glowing button that promises to do literally everything.
You can do that right now, on the stock market. Sometimes it's good to put your money where your mouth is, that forces you to correct your world view.
There's an on device LLM that is packaged in iOS, iPadOS and macOS 26 (Tahoe) [1]. They even have a HIG on use of generative AI [2]
Something like half of all macs are running macOS 26 [3] already, so this could be the most widely distributed on-device LLM on the planet.
I think people are sleeping on this, partly because the model is seen as under powered. But I think we can presume it won't always be so.
I've just posted a Show HN of app for macOS 26 I created that uses Apple's local LLM to summarize conversations you've had with Claude Code and Codex. [3]
I've been somewhat surprised at the quality and reliability of Apple's built-in LLM and have only been limited by the logic I've built around it.
I think Apple's packaging of an LLM in its core operating systems is actually a fast move with AI and even has potential to act as an existential threat to Windows.
[1] https://developer.apple.com/videos/play/wwdc2025/286/
[2] https://developer.apple.com/design/human-interface-guideline...
[3] https://news.ycombinator.com/item?id=46209081
Probably not as many out there as there are Apple devices because it is only the high end ones at the moment. I don’t think they are that far behind in numbers though.
Here are some real rough estimates in Apple's ecosystem:
For macOS alone, the install base is something like 110-130 million, and only Apple Silicon Macs can run the new model, so maybe 45 million active Macs are updated to macOS 26 and can run it.
There are a bunch of details, but of the iPhones out there that are new enough to run Apple Intelligence and have iOS 26, something like 220 million qualify.
For iPad, same conditions, but for iPadOS it's something like 60 million.
So something like 325 million active devices are out there ready to run LLM completion requests.
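Treating the comment's rough per-platform estimates as given (they are the commenter's guesses, not Apple-published figures), the arithmetic checks out:

```python
# Rough device-count estimates from the comment above, in millions.
macs = 45      # Apple Silicon Macs updated to macOS 26
iphones = 220  # iPhones on iOS 26 that can run Apple Intelligence
ipads = 60     # iPads meeting the same conditions

total = macs + iphones + ipads
print(f"~{total} million active devices can serve on-device LLM requests")
```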
The only thing it's useful for is super-basic tasks like sentiment classification, summarization (sort of), or stuff like "Does this message contain toxic/bad language? Answer yes or no only."
- Everyone else: "We mainly build huge AI compute clusters to process large amount of data and create value, at high cost for ramp-up and operation."
- Apple: "We mainly build small closed-down AI compute-chips we can control, sell them for-profit to individual consumers and then orchestrate data-processing on those chips, with setup and operational cost all paid by the consumer."
I can't think of any company with comparable know-how and, most of all, a comparable sales scale to even consider Apple's strategy.
No matter what they do, they will sell hundreds of millions of compute devices for the foreseeable future. They use this to build out AI infrastructure they control, pre-paid by future consumers.
THIS is their unique strength.
I wish they did, but they don't. They have been stingy on RAM for the iPhone and iPad for a decade. At this point only a small percentage of their user base has an iPhone or iPad with the 8 GB of RAM that can run any AI models at all, even open-source ones, and those don't compare to the big models.
They don't even offer the option of an iPhone with more RAM. The iPad maxes out at 16 GB of RAM; even the mainstream MacBook Air maxes out at 32 GB.
And at the current price of cheap online AI (e.g., Perplexity runs so many promos for the Pro version, at something like under $10 per year, and all the AI providers give away good free models with generous enough rate limits), I don't see Apple hardware being bought specifically for its AI compute chips, at least not by non-pro users.
If they lose on AI, and because of that don't have good AI integrations, they will eventually lose on hardware too. E.g., Siri still doesn't support Polish, so my mum can't use it. OSS Whisper v3 Turbo was available ages ago, but Apple still supports only a few languages. Third-party keyboards can't integrate well with audio input, and everything suffers here because of platform limitations.
That's a selective list. High RAM Macs are available. MBPro goes up to 128GB. Mac Studio goes up to 512GB. Not cheap, but available.
Consumer hardware chips will be plenty powerful to run “good enough” models.
If I'm an application dev, do I want to develop on top of OpenAI, or on Apple's on-device model that I can use as much as I want for free? On-device is the future.
The existential FEAR of the smartphone ecosystem players (Apple, Google) is that another ecosystem (!) may come along, one that is more tightly integrated into daily life, is more predictive of users' needs, requires less interaction, and is not under THEIR control.
Because this is not about devices, it's about owning the total userbase of that OS-ecosystem.
Replacing the smartphone has been attempted numerous times in the past decade, but no device was able to displace it as a consumption device. Now technology has reached a level of maturity where smart glasses may have a shot at this. AND they come along with their own ecosystem as well.
Whatever happens, they won't replace all phones within 5 years. But it's possible that such a device would become a companion to an iOS/Android phone and within 5 years gradually eases off users of their phones into that other ecosystem.
And that's scary for Apple and Google.
Because this is not a device-war, this is an ecosystem-war.
I'm not sure how Apple is enabling anything interesting around AI right now.
That's what this bland article is not even touching on. Yes, having missed the boat is great if the boat ends up sinking. That doesn't make missing boats a great strategy.
Building huge models and huge data centers is not the only thing they could have done.
They had some interesting early ideas on letting AI tap app functionality client-side. But that has gone nowhere, and now everything of relevance is happening on servers.
Apple's devices are not even remotely the best dumb terminals to tap into that. Even that crown goes to Android.
I'm not following. What infrastructure? Pre-paid how?
Apple pays for materials and chips before it sells the finished product to consumers. Nothing is pre-paid.
And what infrastructure? The inference chips on iPhones aren't part of any Apple AI infrastructure. Apple's not using them as distributed computing for LLM training or anything, or for relaying web queries to a complete stranger's device -- nor would they.
The AI capabilities of the devices will be pre-paid, as they will come with the product before delivering any significant value. The end user bears the cost before getting anything meaningful in return, because Apple's production volume is at such a scale that they can offset those investments without risking any meaningful loss of sales volume.
Other players can't do that, because they don't sell 200 million units per year. If they added on-device inference chips, they would have to significantly increase the device price, at the risk of not selling any product at all.
> Magic Cue - Magic Cue proactively surfaces relevant info and suggests actions, similar to how Apple's personalized Siri features were supposed to work. It can display flight information when you call an airline, or cue up a photo if a friend asks for an image.
https://www.macrumors.com/2025/08/20/google-pixel-10-ai-feat...
Google shipped it, despite it not working.
> I spent a month with the Pixel 10's most hyped AI feature, and it hasn't gone well
https://www.androidauthority.com/google-pixel-10-magic-cue-o...
Likewise Daily Hub didn't work but was shipped anyway.
> In our testing, Daily Hub rarely showed anything beyond the weather, suggested videos, and AI search prompts. When it did integrate calendar data, it seemed unable to differentiate between the user’s own calendar and data from shared calendars. This largely useless report was pushed to the At a Glance widget multiple times per day, making it more of a nuisance than helpful.
https://arstechnica.com/google/2025/09/google-pulls-daily-hu...
Apple announced that the Siri update didn't work well enough to ship, and didn't ship it.
They roll out hardware that consumers will be able to use for AI once the service is ready, with users paying for that rollout until then.
Meanwhile they have started to deploy a marketplace ecosystem for AI tasks on iOS, where Apple has the first right-to-refuse, allowing the user to select a (revenue-share-vetted) 3rd party provider to complete the task.
So until Apple is ready, the user can select OpenAI (or soon other providers) to fulfill an AI-task, and Apple will collect metrics on the demand of each type of task.
This will help them prioritize for development of own models, to finally make use of their own marketplace rules to direct the business away from third parties to themselves.
My guess is that they will offer a mixed on-device/cloud AI service that uses the end user's hardware where possible, offloading compute from their cloud to the end user's hardware and energy bill, with a "cheap" subscription price undercutting others on that AI marketplace.
I'm not sure where you position Samsung or Xiaomi, Oppo etc. They're competitive on price with chipsets that can handle AI loads in the same ballpark, as attested by Google's features running on them.
They're not vertically integrated and don't have the same business structure, but does that matter for on-device AI?
- Apple owns more than 50% of this market-segment, the annual sales of iPhones is roughly 200 Million units. In comparison, Samsung Galaxy S-series sits at roughly 20-25 Millions.
- Apple is alone in the iOS ecosystem, while Samsung, Xiaomi, and Oppo have to compete within the Android space every year. iOS is extremely sticky, which makes a certain volume of iPhones almost guaranteed to sell every year, at a lofty profit margin.
By comparison, Samsung always has to consider that the next BAD Galaxy S might sell only a fraction of the previous one, because users can move horizontally to another Android brand (even to Pixel, a first-party product of their ecosystem provider). So Samsung cannot even make bets based on the sale of 20 million units; they are already at risk making bets on the initial shipment volume (~5 million), because if the device doesn't sell they will have to PAY the carriers to get it into the market.
Apple faces much lower risk here. If the next iPhone doesn't catch on, Apple will likely still sell 200 million iPhones that year, because the ecosystem lock-in is so strong that there is little risk of losing customers to anything other than ANOTHER (then more profitable) iPhone.
So even assuming a MASSIVE annual drop in sales of 25%, Apple can still make development bets based on a production forecast of 150 MILLION units.
For their supply chain, that's still an average production output of ~400k units per DAY for each component. With that volume you can get entire factories to produce only for you.
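The per-day figure holds up; a quick sanity check (a throwaway calculation, not part of the original comment):

```python
# Pessimistic annual forecast cited above: 150 million units
annual_units = 150_000_000
per_day = annual_units / 365
print(f"{per_day:,.0f} units/day")  # roughly 411,000, i.e. ~400k per day
```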
That's why I can't think of any company in a comparable position. Apple can add hardware to their device and sell the resulting product to the consumer for profit before delivering any actual value with it.
If any competitor in the Android space attempts that, the component costs alone risk making the device dead on arrival, because "some other Android device" delivers the same experience at lower cost.
I don't want to imply that this is their only play or that it will even work out.
The EU (and others) have already identified this general scheme of stifling competition by "brokering" between the consumer and the free market, so outside of the US I'm not even sure how much Apple will be able to rely on such a strategy (again)...
To my understanding, they market their ML stack as four layers [1]:
- Platform Intelligence: ready-made OS features (e.g., Writing Tools, Genmoji, Image Playground) that apps can adopt with minimal customization.
- ML-powered APIs: higher-level frameworks for common tasks—on-device Foundation Models (LLM), plus Vision, Natural Language, Translation, Sound Analysis, and Speech; with optional customization via Create ML.
- ML Models (Core ML): ship your own models on-device in Core ML format; convert/optimize from PyTorch/TF via coremltools, and run efficiently across CPU/GPU/Neural Engine (optionally paired with Metal/Accelerate for more control).
- Exploration/Training: Metal-backed PyTorch/JAX for experimentation, plus Apple’s MLX for training/fine-tuning on Apple Silicon using unified memory, with multi-language bindings and models commonly sourced from Hugging Face.
[1] https://developer.apple.com/videos/play/wwdc2025/360/
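To give a sense of scale for the second layer, the on-device Foundation Models API is designed to be very small. A minimal Swift sketch, assuming an OS release that ships the FoundationModels framework; the instruction and prompt strings are illustrative:

```swift
import FoundationModels

func summarize(_ text: String) async throws -> String {
    // Check that the on-device model is available (device, OS version,
    // and Apple Intelligence settings permitting) before starting.
    guard SystemLanguageModel.default.availability == .available else {
        return "on-device model unavailable"
    }

    // A session wraps the on-device foundation model; no network involved.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```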
I'm not saying AI is a gimmick, but the caution they show is a good quality, I think.
https://www.axios.com/2025/03/20/apple-suit-false-advertisin...
OS X's Aqua was also an insanely bold UI with a lot of gimmicks, but it was still usable for the most part. I'm so very curious about the internal discussions around this.
I’m an hour from Cambridge, MA. Ask for the weather? I always get Cambridge, UK. Siri is terrible.
They can’t even make a functional keyboard anymore. The text prediction and autocorrect is worse now than it was in 2010!
These are all solved problems in 2025.
Cook might be less susceptible to gimmickry than some of his peers. But he's definitely got an imperfect batting average here.
But also, their tendency to "not fall for gimmicks" sometimes meant we didn't get a second mouse button for decades. Ultimately, the way they implemented it was super cool, but still.
Having to license Gemini from Google and Qwen from Alibaba for Siri isn’t Apple falling severely behind?
There's an unfulfilled promise of spatial computing. I wish I could load up my preferred CAD program and have wide, deep menus quickly traversable with hand gestures. Barring that, the least it could do is support games. Maybe if some combination of miracle shims (fex emu, asahi, w/e) were able to get onto the platform it might be salvageable. The input drivers alone would be a Herculean task.
Quality sacrificed for speed, resulting in mediocre, buggy software.
The classic AI business manager formula.
Hype about self-driving cars -> Apple chases it with the Apple Car -> investors pleased they kept up with the Joneses -> the Apple Car falls behind or isn't good enough or whatever -> quietly cancelled -> investors pleased they culled the deadweight.
You can replace the Apple Car with the Vision Pro, or soon Apple Intelligence, and it will play out to the same formula. Luckily it allows investors to profit.
> “The right info, right when you need it.” That’s how Google describes Magic Cue, one of the most prominent new AI features on the Pixel 10 series. Using the power of artificial intelligence, Magic Cue is supposed to automatically suggest helpful info in phone calls, text messages, and other apps without you having to lift a finger.
However, the keyword there is “supposed” to... even when going out of my way to prompt Magic Cue, it either doesn’t work or does so little that I’m amazed Google made as big a deal about the feature as it did.
https://www.androidauthority.com/google-pixel-10-magic-cue-o...
I'd rather see companies admit that a promised feature isn't ready for prime time than hype it up only to ship it broken.
OP doesn't literally mean they haven't made anything; he means they've made nothing of real substance, which holds true when their biggest recent release is already completely forgotten by the public at large.
They’ve made plenty of things. I liken them to the Lexus of consumer electronics; expensive for what they are, thoughtfully designed, and conservative in their approach to adopting new trends.
Heck, in accordance with the several zeitgeists of our age, I'll even do you the solid of fraudulently generating the money-on-fire pictures with AI, so when you get tired of seeing your money on fire I'll even hand, say, 25% of it back to you, as the result of my tireless efforts to bring value to my shareholders. That's a better return than you'll get from most of these investments!
That's not lucky. That's sad. They never ask the question "could we have earned _more_ profits with a better strategy?"
The market is not rational.
Historically the strength of Apple was that they didn't ship things until they actually worked. Meaning that the technology was there and ready to make an experience that was truly excellent.
People have been complaining for years that Apple isn't shipping fast enough in this area. But if anything I think that they have been shipping (or trying to ship) too fast. There are a lot of scenarios that AI is actually great at but the ones that move the needle for Apple just aren't there yet in terms of quality.
The stuff at a scale that matters to them is integrations that just magically do what you want with iMessage/calendars/photos/etc. There are potentially interesting scenarios there, but the fact is that any time you touch my intimate personal (and work) data and do something meaningful, I want it to work pretty much all the time. And current models aren't really there yet in my view. There are lots of scenarios that do work incredibly well right now (coding most obviously). But I don't think the Apple mainline ones do yet.
They dragged their feet on a host of technologies that other handset makers adopted, released and subsequently improved.
- USB C charging
- 90hz, 120Hz refresh rates
- wireless charging
- larger batteries (the iPhone 17 still lags behind Samsung and Google)
I'm not sure what happened, but the iPhone used to have the most fluid, responsive experience compared to Android. Now, both Google and Samsung have surpassed them in that regard.
I've used Android and have owned several iPhones, and it seems like it's not an issue of releasing something that isn't ready, but more that they aren't capable of releasing phones that can compete with the ones regularly beating them in the specs race.
In general I would agree, but Siri is honestly still so bad.
Tell that to almost anything they've shipped in the last 5-10 years. It's gotten so bad that I wait until halfway through a major OS version cycle before upgrading. Every new thing they ship is almost guaranteed to be broken in some way, ranging from minor annoyance to fully unusable.
I buy Apple-everything, but I sure wish there were better options.
Now the tides are turning, so they can go back to scheming behind closed doors without risking their top people leaving for Meta for a bazillion dollars.
Great artists steal.
And this is the case across the board.
My friend's Fitbit works way better than my Apple watch.
Third and final example: how bad Apple's native dictation engine is. I can run OpenAI's Whisper models on my Mac and get dramatically better output.
As a long-time Apple fan who's had everything since before the first iPhone, I feel this apathy toward product quality cannot be disguised as some strategic decision to fast-follow with AI.
My husband has a Fitbit and it's so buggy he leaves it sitting on the shelf most of the time - the only times he'd wear it are for exercise.
Siri is bad, though I have found Google Voice Assistant and Alexa have both really degraded over time, to the point of us just giving up on them completely. My husband is on Android and I'm really surprised how bad the voice assistant is despite all the Gemini launches! (Mind you, he has an Australian accent.)
I went through three FitBits. After the third failed just outside warranty I got an Apple watch, which has outlasted all three FitBits.
That's odd because I've used both, along with a bunch other wearables (e.g. Whoop), and I wouldn't give up my Apple Watch for anything. Massively useful, can take calls, make payments, stream music from my Apple playlists, read and reply to messages, and a ton of other things.
I mention this because, at least for the functionalities that you mention, I think the Pixel watches are catching up nicely.
... but they still haven't been able to make me feel less stupid talking into a watch for phone calls like some off-brand James Bond wannabe, even if it works great.
But for everything else? You literally just said only a handful of AI features are better on Google products. That seldom makes the product as a whole better.
Siri isn't competing with Gemini, yet. Siri is old tech; Gemini is the new tech.
Same with dictation.
Siri hasn't been updated to a SOTA generation that can compete with Gemini yet; it simply hasn't been updated. This is part of the "slow pace" the post is talking about (part of it, not the entire slowness).
For example, Amazon updated my old Echo Dots with the Alexa+ beta, and it's pretty good. I have Grok in my Tesla, and though I don't like Grok or xAI, it's there and I use it occasionally.
Apple hasn't done their release of these things yet.
Overselling abilities is for sure a lack of quality.
I only buy Apple products because of the good build quality. But they're quite bad products otherwise.
I think Apple secretly doesn’t want more market share, to avoid anticompetitive accusations.
Everyone’s shares were battered earlier this year, and it had nothing to do with AI, and everything to do with tariffs.