That is actually a reasonable prediction. By 2045, even tablets and high-end phones will be able to run models locally that are large enough for real-time chat.
But don't forget, "the hype is real".
Apple fanboys are something else.
At this point it's not even clear that LLMs will have much use outside of chatbots spitting out questionable fakes of reality, but it's pretty clear that the small local models are largely useless.
Beyond the poor user experience, the lack of a large dataset makes them an exercise in technical feasibility more than anything else.
Apple feeling like they have to partner with Google to replace their own "Apple Intelligence" should tell you everything you need to know about local AI, but I guess believers gotta believe.
Note this is the M5, not even the M5 Pro and definitely not the M5 Max or M5 Ultra. If they are getting these improvements on the low-end M series, I'm sort of interested in what happens with the M5 Max when it's ready (I'm not holding out hope that the M5 Ultra will be done anytime soon).
Like prior iPad GPU upgrades, most customers will never notice until it reaches the Mac lineup. Another victim of the iOS/iPadOS capability bottleneck.
18 seconds on the M5; 4.4× faster than the previous M4 running one of Qwen's 8-billion-parameter local models.
That’s quite impressive for a tablet and faster than most laptops available today.
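For context, here is what that speedup figure implies about the older chip. The 18 seconds and the 4.4× factor are the numbers quoted above; the derived M4 time is just arithmetic, not a measured result:

```python
# Back-of-the-envelope: implied M4 time from the reported M5 numbers.
m5_time_s = 18.0   # reported time on the M5 (from the comment above)
speedup = 4.4      # reported M5-vs-M4 speedup (from the comment above)

# If the M5 is 4.4x faster, the M4 would have taken roughly this long:
m4_time_s = m5_time_s * speedup
print(f"Implied M4 time: {m4_time_s:.1f} s")  # → Implied M4 time: 79.2 s
```

So the same task that takes 18 seconds on the M5 would have taken roughly 79 seconds on the M4, which is the gap that makes local 8B models start to feel interactive rather than batch-like.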
They’re not replacing Apple Intelligence; the partnership with Google is for the backend of Siri.
https://machinelearning.apple.com/research/exploring-llms-ml...
"Exploring LLMs with MLX and the Neural Accelerators in the M5 GPU"