Wow, this is... underwhelming. Some text summaries for apps nobody uses, and minor Siri improvements that bring it up to par with where Google Assistant was 5-10 years ago? Even the "do you want to change it, or send it?" prompt is straight outta Android. It also seems like they copied Google Photos and Gmail features.
And the place a better Siri would be really useful, Apple TV, isn't included at all :(
All that marketing for this...? None of these things require a dramatic new AI chip or months of announcements. They're minor improvements at best.
I know this is an honest response, but it's a bit funny that most (if not all) of those features are not useful at all in daily use. And they will be added in the future™!
It's obvious they just shoehorned this stuff in after missing the bus. Now they made the promises, and are working feverishly to deliver in order to protect the stock price.
It's sad really. They could have had the "courage" to say, you know what, there's not much to LLMs yet. We've been shipping NPUs since 2017 and already use ML across our ecosystem to improve the user experience, but we're not going to fall for the AI hype. We'll continue shipping useful features as they become ready. And hey, you already have lots of ways to do gen stuff with apps!
But no, instead we get THE FIRST IPHONE BUILT FROM THE GROUND UP FOR AI!!!1!1! which... doesn't even ship with it and when they finally roll it out it's... standard gen stuff. And not even all that great compared to what you can already do, even local.
Which apps did you suppose no one is using? I can't say the summaries in Mail, Messages, and random notifications on my Mac have been super useful yet, although I just started using those features. The summaries I have looked at, out of novelty, have been satisfactorily accurate.
I welcome the writing tools. I'd been using Grammarly as a glorified text and grammar checker. While I have zero interest in using it or any other AI tool to write text, it's nice for finding minor mistakes like using the the same word twice in a sentence. And now I have something free and built-in that's as good as Grammarly at the things I want to use it for.
It's nice that it's built in but I would have preferred they wrote an API for it and let apps provide the service. There's no particular reason we should have to use Apple's models. I already do all the same things locally with ollama with my choice of models.
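For reference, here's a minimal sketch of the kind of local setup I mean, using ollama's HTTP API (assuming the daemon is running on its default port and a model such as `llama3` has already been pulled; the prompt wording is just my own):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_request(text: str, model: str = "llama3") -> str:
    """Build the JSON body for a one-shot (non-streaming) generate call."""
    return json.dumps({
        "model": model,
        "prompt": f"Point out any grammar or spelling mistakes in:\n{text}",
        "stream": False,  # ask for a single JSON response instead of a stream
    })

def proofread(text: str, model: str = "llama3") -> str:
    """Send the prompt to the local ollama daemon and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(text, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swap in whatever model you like; nothing here is tied to Apple's stack, which is exactly the point of wanting an API instead of a baked-in provider.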
Does anyone else want to talk to Siri like a normal human? Like an actual assistant?
It drives me nuts that Siri can't interact correctly when spoken to like this:
'Siri, could you text my wife that I will be home in 20 minutes'
Converts to:
Text Wife:
That I will be home in 20 minutes
Should be:
I will be home in 20 minutes
Drives me nuts. This is what I actually want. It's just so much more natural, and it's my biggest grievance with virtual assistants: I want to talk to it like a real assistant. Hopefully the LLM refactor of Siri will fix this, but on 18.2 it still doesn't work with the redesigned Siri. I don't know if they've added the LLM integration yet, but I thought they had in 18.2.
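For what it's worth, the fix seems almost mechanical: treat a leading "that" after the contact as a complementizer, not message content. A toy sketch of that rule (my own guess at a heuristic, not how Siri actually parses anything):

```python
import re

def extract_message(utterance):
    """Pull contact and message body out of a 'text <contact> that ...' utterance.

    Drops the complementizer 'that' (or 'to say') and keeps the rest verbatim.
    Returns (contact, body), or None if the utterance doesn't match.
    """
    m = re.search(
        r"\btext (?:my )?(\w+) (?:that |to say )?(.*)",
        utterance,
        re.IGNORECASE,
    )
    if not m:
        return None
    contact, body = m.group(1), m.group(2)
    return contact, body.strip()

# extract_message("Siri, could you text my wife that I will be home in 20 minutes")
# -> ("wife", "I will be home in 20 minutes")
```

A real assistant would also need to flip pronouns and tense in reported speech ("tell her I said..."), which is where an LLM genuinely helps over regexes.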
This sort of parsing is actually an area where Siri historically does quite well. I can't replicate this issue on my phone, and I almost suspect you pulled it out of thin air.
I have zero interest in AI helping me write or rewrite email or other text. Siri, maybe; let's hope it finally becomes somewhat useful, considering how useless and stupid the current version is.
For me, AI helping with writing email has been wonderful. I do a lot of email that is fairly generic and really just needs a basic template for replying or informing people. For those tasks it works great. I also had a more serious email that I wasn't sure how to respond to and needed to make sure the tone was appropriate and AI helped me get a good draft to work from.
Email is one of those things that I would put off as I just found it hard to get started and would worry about grammatical errors and trivial mistakes that people seem to focus on more than they should. AI has helped me just be better at email.
I imagine the value of this feature varies depending on how much email you send. If you only send one or two emails a day the value may not be obvious - if most of your job is email communication this could be a whole lot more impactful.
I’ll take ‘not relying on my computer to do tasks that are inherently and inextricably human, like actually reading a text message from my mother or daughter, or replying to them,’ for $5, Alex.
You can also upgrade and just ... not enable it. To use the feature, not only do you have to join a waitlist, you then have to explicitly opt in after getting it. Even after opting in, you can opt out.
Or, you can use some features and not others. You can disable summaries, for instance.
Blanket, knee-jerk reactions like this are silly, and this is coming from someone who, after playing with it a bit, is underwhelmed.
I find the whole "summarize" use-case bizarre. The core problem with too many emails and messages isn't that I want to read all of it but it's too much. It's that most of it is stuff I fundamentally don't care about, from people who just invited themselves into my comms. I don't want a summary of random email blasts. I want my email inbox to only contain messages from people I actually want to hear from in full, and that's simple enough to do with filters, stars, and "high priority" tags.
I did upgrade to it after the first week, and it was the worst macOS upgrade I've ever done (since Leopard), so I went back to Sonoma.
Firewall was broken, Wi-Fi was broken, Contact Key Verification was broken. It was the biggest pile of shit I've ever seen, worse than Windows Vista or Windows 8 ever were for me. The second-worst thing, after the bugginess, was that there was literally no benefit to using it - Sequoia is basically all AI nonsense, which I don't intend to use anyway.
Yes, it's my fault for upgrading, especially on a dot-0 release, but I was curious and had a free weekend day, so I figured I'd take the plunge. I ended up spending more time investigating just how long I can stay on Sonoma, especially once it reaches end of life, because I have absolutely no faith in Apple's software going forward and zero interest in having my OS do any kind of AI processing, on device or off.
A few more random fuck-ups I noticed: Bluetooth codec quality can no longer be manually controlled, or even seen by holding Option and clicking the BT icon in the menu bar, nor can you use the Bluetooth Explorer tool to edit the codec and bitrate. Just gone. Sequoia also completely broke my Homebrew and Python 3 installs, along with doing something to pip independently of the other two such that I couldn't run any programs with pip dependencies. That was all on a freshly formatted USB install of Sequoia, too.
That OS is genuinely an unbelievable pile of shit.
Not sure I've spent more than 10 minutes combined over the past 20 years 'reading' a spam or scam email. When those things do manage to get through spam filters, it's usually pretty obvious what they are. As for 'marketing messages', I can only assume you mean spam and added this to make the list look longer.
Get back to me in 3-5 years and let me know how getting AI to condense your work emails is going for you - my guess is that the first time ChatGPT manages to fuck up a distillation for you, either by garbling the meaning of something important or just missing a crucial point altogether, you'll swear off it for good. If you still have a job by then...
Point is, if you've been so genuinely bothered by spam and Nigerian princes that you're happy to outsource judgment and critical thinking to a probabilistic bot that hallucinates once in every 15 to 20 tries, then - aside from your skill issue getting your spam filter to work - you and I have very divergent views on what makes man's brain valuable and unique and indeed what parts of our cognition are worth preserving.
It's pretty impressive. I read it back in June when this came out, but the basic gist of it is that everything that _can_ be done on device _is_ done on device, and everything else is run in a _provably_ secure and private cloud environment.
> Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises.
How is this possible if the software runs on Apple hardware? Do the security researchers get access to the VLSI designs?
Honestly, this was the biggest thing that pushed me from Android to iOS.
I don't trust Google to be (a) incentivized (because ad revenue) or (b) organizationally-capable (because product fiefdoms) to ship privacy arch to that level in Android.
And if I'm going to adopt LLMs on mobile as part of my workflow, I want very strong guarantees about where that data is ending up.
Answered in the linked article. That Apple Intelligence only works on hardware with Apple's Neural Engine should give a clue. It mostly happens on device. In December they will offer anonymous ChatGPT integration, free and opt-in.
To give you a preliminary answer: IIRC, these models run locally, and therefore aren't supported on older hardware. But what "... it’s all built on a foundation of privacy with on-device processing and Private Cloud Compute" exactly entails, I'm not sure.
Edit: from what I gather, "Private Cloud Compute" is indeed phoning home, but (supposedly) secure/private.
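As I understand the verifiability claim, it boils down to: the device only releases data to servers whose attested software measurement appears in a publicly auditable log, so researchers can inspect exactly what code could have handled the data. A toy illustration of that gate (the log and release names here are made up, not Apple's actual protocol):

```python
import hashlib

# Hypothetical published transparency log of known-good software measurements.
# In the real design this would be an append-only, publicly auditable log
# that researchers can inspect, not a hardcoded set.
TRANSPARENCY_LOG = {
    hashlib.sha256(b"pcc-release-1.0").hexdigest(),
    hashlib.sha256(b"pcc-release-1.1").hexdigest(),
}

def server_is_trusted(attested_measurement: str) -> bool:
    """Refuse to talk to any server whose measured image isn't publicly logged."""
    return attested_measurement in TRANSPARENCY_LOG

# A server attesting a logged build passes; a tampered build does not.
good = hashlib.sha256(b"pcc-release-1.1").hexdigest()
bad = hashlib.sha256(b"tampered-build").hexdigest()
```

The hard part, of course, is trusting the attestation itself, which is where the question about Apple-controlled hardware comes in.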
I updated my macOS and iOS device as soon as I could because I was curious to finally see how these features will work.
Turns out it's not even available today! The Apple Intelligence settings just showed a "Join waitlist" button, I clicked it and it says "You'll be notified when Apple Intelligence is available for [you]".
A not-so-fun footnote for those who looked forward to this:
> The first set of Apple Intelligence features is available now as a free software update with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, and can be accessed in most regions around the world when the device and Siri language are set to U.S. English.
More to come later:
> Apple Intelligence is quickly adding support for more languages. In December, Apple Intelligence will be available for localized English in Australia, Canada, Ireland, New Zealand, South Africa, and the U.K., and in April, a software update will deliver expanded language support, with more coming throughout the year. Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Vietnamese, and other languages will be supported.
Have you actually used Apple Intelligence? It sounds like you don’t even use iOS, so I'm not sure how you’re able to judge “minor improvements at best.”
Also privacy. That’s substantially different from Android or anything from Google.
My impression of LLM-generated emails is that they tend toward verbosity, at least relative to the minimal style some higher-level folks prefer.
I haven't spent much time trying to get them to edit and format information into exec-memo form.
I enjoy seeing entire text threads and email chains summarized in a notification. It really helps me choose what's worth reading now vs. later.
I’ll stay on Sonoma for as long as I safely can.
If we could "prove" security, we would. Proving security in a networked environment? Hahaha - there have been successful attacks on airgapped envs.
it's "everyday" people doing everyday tasks.