The more complex an eye gets, the more the brain evolves alongside it: not just the physics and chemistry of optics, but also rich feature sets for predator/prey labeling, tracking, movement, self-localization, distance estimation, etc.
These might not be separate things. These things might just come "for free".
If you train a system to memorize A-B pairs and then normally use it to find B given A, it's not surprising that finding A given B also works: you trained it almost symmetrically on A-B pairs, which are, obviously, also B-A pairs.
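The point can be made concrete with a toy sketch (purely illustrative, not modeling any real training setup): if each stored pair is itself the training example, the stored association is symmetric, so reverse lookup comes for free.

```python
# Toy associative memory "trained" on A-B pairs.
# Because the pair itself is the example, storing it records
# the association in both directions at once.
pairs = [("Paris", "France"), ("Tokyo", "Japan"), ("Cairo", "Egypt")]

memory = {}
for a, b in pairs:
    memory[a] = b  # forward: A -> B
    memory[b] = a  # reverse: B -> A, at no extra cost

def recall(query):
    return memory.get(query)

print(recall("Paris"))  # forward lookup -> "France"
print(recall("Japan"))  # reverse lookup -> "Tokyo"
```

The symmetry here is baked in by construction; the comment's claim is that pair-memorization training does something analogous implicitly.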
"The webpage credits another author: Native binary debugging for OCaml (written by Claude!) @joelreymont, could you please explain where you obtained the code in this PR?"
That pretty much sums up the experience of coding with LLMs. They are really damn good at regurgitating someone else's source code, and they have memorized all of GitHub. But just as you can get sued for using Mickey Mouse in your advertisements (yes, even if AI drew it), you can get sued for stealing someone else's source code (yes, even if AI wrote it).
If you want to predict future text, you use an LLM. If you want to predict future frames in a video, you use a diffusion model. But what both lack is object permanence. If a car isn't visible in the input frame, it won't be visible in the output. Yet in the real world, there are A LOT of things that are invisible (image) or not mentioned but only implied (text) that still strongly affect the future. Every kid knows that when you roll a marble behind your hand, it'll come out on the other side. But LLMs and diffusion models routinely fail to predict that, because for them the object ceases to exist when it stops being visible.
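What a world model adds, at its simplest, is latent state that survives occlusion. Here's a minimal hypothetical sketch (a constant-velocity tracker, not any real model's architecture) of the marble example: the tracker keeps predicting the marble's position while it's hidden behind the hand.

```python
# Minimal sketch of "object permanence" via latent state:
# a constant-velocity tracker whose state is (position, velocity).
def step(state, observed_pos):
    pos, vel = state
    if observed_pos is None:        # occluded: dead-reckon from latent state
        pos = pos + vel
    else:                           # visible: update state from observation
        vel = observed_pos - pos
        pos = observed_pos
    return (pos, vel)

# Marble rolls at +1 per frame; frames 3-5 are hidden behind a hand.
observations = [0, 1, 2, None, None, None, 6]
state = (0, 0)
trace = []
for obs in observations[1:]:
    state = step(state, obs)
    trace.append(state[0])

print(trace)  # [1, 2, 3, 4, 5, 6] -- positions 3-5 predicted while occluded
```

A purely frame-to-frame predictor has no equivalent of that `(pos, vel)` state, which is why the marble simply vanishes.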
Based on what I've heard from others, world models are considered the missing ingredient for useful robots and self-driving cars. If that's even halfway accurate, it would make sense to pour A LOT of money into world models, because they would unlock high-value products.
Edit: Found a link to the article content, I gather that's basically the point you're making?
They tried that this year and called it iPhone Air
What people like me wanted was an iPhone 13 mini that's a bit thicker so it can have a bit more battery capacity. And with the 120 Hz PWM nausea fixed.
The iPhone Air has worse battery life. And it has a larger screen. And it's worse to handle one-handed. Coming from the 13 mini, it's not an improvement.