(It is plausible they added some new DRM but it's not going to be anything too crazy)
It might be nearer to EOL, but it's not actually EOL and should be fine for 5+ years after any EOL is announced.
Others seem to have delayed their announcements after realising they can still make a load of money off DDR4. Also not a great situation for a Raspberry Pi chip. https://www.trendforce.com/news/2025/09/02/news-samsung-sk-h...
They want to own something, but it's always going to be a drop in the ocean. They have a small new music label called RADAR, but I imagine the failure rate on that is very high. They need to buy a label if they want to meaningfully change this, just like Amazon now owns MGM and Netflix may end up getting Warner Bros. Presumably they can't afford to do this, and I don't think that kind of integration would work as well in the music industry.
Besides, China's RAM manufacturing is reasonably new, and only makes DDR4 and LPDDR4, not the older LPDDR2 which the RP3A0 uses.
But yes, they would have known LPDDR2 was EOL. It was EOLed six years ago, before they even launched the Zero 2 (which they only introduced because the BCM2835 chip used by the original Zero was EOL), so it's not exactly clear why they are launching the CM0 now.
What makes the most sense to me is that they are currently developing a new chip that will be a more-or-less drop-in replacement for the RP3A0. If it's drop-in, then the design work on the CM0 won't be wasted.
Which would give us some clues about the RP4x chip and its current status: close enough that they know it will arrive before they run out of RP3A0 chips for the Pi Zero 2, but far enough away that it's still worth launching the CM0 now, as long as supply is limited.
This RP4x chip presumably needs to have low enough power and cost to fit the Pi Zero 3 budget (so quad Cortex-A725 cores?), while also using modern memory, LPDDR4 if not LPDDR5, to push the EOL out as far as possible. Since the Raspberry Pi 3 depends on the same EOL LPDDR2 memory, this theoretical RP4x chip would probably be used for a product refresh there too (lowering their costs as a bonus).
The story with Intel in eras like this was usually that AMD or Cyrix or ARM or Apple or someone else would come along with a new architecture that was a clear generational jump past Intel's and, most importantly, seemed to break the thermal and power ceilings of the current Intel generation (at which point Intel would typically fire their chip design group, hire everyone from AMD or whoever, and come out with Core or whatever). Nvidia effectively has no competition, or hasn't had any: nobody has actually broken the CUDA moat, so neither Intel nor AMD nor anyone else is really competing for the datacenter space, and Nvidia hasn't faced any real competitive pressure against things like the multi-kilowatt power draws of the Blackwells.
The reason this matters is that LLMs are incredibly nifty, often useful tools that are not AGI and also seem to be hitting a scaling wall, and the only way to make the economics of, e.g., a Blackwell-powered datacenter work is to assume that the entire economy is going to run on it, as opposed to it powering some useful tools and some improved interfaces. Otherwise, the investment numbers just don't make sense: the gap between the real but limited value LLMs add as we actually see them used on the ground, and the full cost of providing that service with a brand-new single-purpose "AI datacenter", is just too great.
So this is a press release, but any time I see something that looks like an actual new hardware architecture for inference, especially one that doesn't require constructing a new building or solving nuclear fusion, I'll take it as a good sign. I like LLMs, and I've gotten a lot of value out of them, but nothing about the industry's finances adds up right now.