> Even on a low-quality image, GPT‑5.2 identifies the main regions and places boxes that roughly match the true locations of each component
I would not consider it to have "identified the main regions" or to have "roughly matched the true locations" when ~1/3 of the boxes have incorrect labels. The remark "even on a low-quality image" is not helping either.
Edit: credit where credit is due, the recently-added disclaimer is nice:
> Both models make clear mistakes, but GPT‑5.2 shows better comprehension of the image.
As I got older, not only did computers stop doing that, but my hearing also got worse (entirely normal for my age, but still), so that's mostly a thing of the past.
Again, sorry if this seemed antagonistic or something, I really am just unsure of what you were saying.
A relatively major (but not M8.8) quake has now hit in December 2025. It is reasonable to expect aftershocks in the days after a significant earthquake actually happens, and those aftershocks can sometimes be larger than the initial quake. This is a well-accepted scientific fact borne out by large amounts of data and statistical patterns, not whimsical doomsdayism.
Fukushima's M9.0-9.1 was roughly a 1-in-1000-year event. The last time Japan saw such a powerful earthquake was in 869 AD. It would be reasonable to expect one of that scale not to happen again for another 1000 years.
[0]: For me this is a really important part of working with Claude: the model improves over time but stays consistent. Its "personality", or whatever you want to call it, has been really stable over the past versions, which allows a very smooth transition from version N to N+1.
That being said, the only SSD I’ve ever had fail on me was from Crucial.
In recent builds I have been using less expensive memory from other companies with varying degrees of brand recognizability, and never had a problem. And the days of being able to easily swap memory modules seem numbered, anyway.
Hopefully these high quality CT scans show the battery makers that people are going to notice when too many corners have been cut, even if there isn't a flood of reports of their product causing fires (yet).
In general, neural nets do not have insight into what they are doing, because they can't. Can you tell me what neurons fired in the process of reading this text? No. You don't have access to that information. We can recursively model our own network and say something about which regions of the brain are probably involved due to other knowledge, but that's all a higher-level model. We have no access to our own inner workings, because that turns into an infinite regress problem of understanding our understanding of our understanding of ourselves that can't be solved.
The terminology of this next statement is a bit sloppy, since this isn't a mathematics or computer science dissertation but rather a comment on HN, but: A finite system cannot understand itself. You can put some decent mathematical meat on those bones if you try, and there may be some degenerate cases where you can construct a system that understands itself for some definition of "understand", but in the absence of such deliberation, and when building systems for "normal tasks", you can count on the system not being able to fully understand itself by any reasonably normal definition of "understand".
I've tried to find the link for this before, but I know it was on HN: someone asked an LLM to do some simple arithmetic, like adding some numbers, and asked the LLM to explain how it was doing it. They also dug into the neural net activations themselves and traced which neurons were doing what. While the LLM's explanation was a perfectly correct explanation of how to do elementary school arithmetic, what the neural net actually did was something else entirely, based on how neurons actually work: basically it just "felt" its way to the correct answer, having been trained on so many instances already. In much the same way, any human with modest experience adding two-digit numbers doesn't necessarily sit there and run the full elementary school addition algorithm, but jumps to the correct answer in fewer steps by virtue of just having a very trained neural net.
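To make that concrete, here's a toy sketch in Python (my own illustration, not the actual study I'm half-remembering; the network size, training setup, and digits task are all just picked for the example): a tiny network trained on examples learns the ones digit of a + b. It ends up answering correctly, yet nowhere in its weights is there anything you could point to as the schoolbook "add the digits, carry the one" procedure; there are only tuned activations, which is the sense in which a net "feels" its way to the answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(d):
    v = np.zeros(10)
    v[d] = 1.0
    return v

# Training data: every pair of ones digits; the target is (a + b) % 10.
pairs = [(a, b) for a in range(10) for b in range(10)]
X = np.array([np.concatenate([one_hot(a), one_hot(b)]) for a, b in pairs])
T = np.array([(a + b) % 10 for a, b in pairs])

# One hidden layer, softmax output, trained by plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (20, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.5, (64, 10)); b2 = np.zeros(10)
lr = 1.0

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations: no digits, no carries
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy()
    grad[np.arange(len(T)), T] -= 1.0        # d(cross-entropy) / d(logits)
    grad /= len(T)
    dW2 = h.T @ grad; db2 = grad.sum(axis=0)
    dh = (grad @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# After training it "just knows" single-digit sums, the way a practiced human does.
h = np.tanh(X @ W1 + b1)
pred = (h @ W2 + b2).argmax(axis=1)
print((pred == T).mean())          # should be at or near 1.0
print(pred[pairs.index((7, 8))])   # expected 5, the ones digit of 15
```

Obviously this is many orders of magnitude simpler than what an LLM does, but the gap is the same one the traced-LLM example shows: what the weights actually do has no resemblance to how a person (or the model itself, if asked) would describe doing the task.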
In the spirit of science ultimately being about "these preconditions have this outcome" rather than necessarily about "why": if having a model narrate to itself about how to do a task, or "confess", improves performance, then performance is improved and that is simply a brute fact. But that doesn't mean the naive human understanding of why such a thing works is correct.