Paint is ready at the hardware store. Table is ready at the restaurant. Construction is done on a bridge.
All kinds of things that we need a one-time notification for.
1. Difference in focal length/position.
2. Difference in color processing.
But…the article is fairly weak on both points?
1. It’s unclear why the author is comparing different focal lengths without clarifying what they used. If I use a 24mm-equivalent lens on either my full-frame camera or my iPhone, the perspective will be largely the same, modulo some lens correction. The same goes for 70mm or whatever other focal length.
2. Color processing is highly subjective, and it’s also something you can disable on both the phone and the other camera. Again, it’s no different between the two.
It’s a poor article because it doesn’t focus on the actual material differences.
The phone will have a smaller sensor. It will have more noise and need to do more to combat it. It won’t have as shallow a depth of field.
The phone will also of course have different ergonomics.
But the things the post focuses on reflect a poor understanding of what they’re shooting and how their cameras work.
He has some good points, maybe, but in general it’s a pretty naive comparison.
> The criticism assumes they're redesigning everything when they explicitly documented the opposite.
Does Control Center fit those guidelines for applying Liquid Glass?
It doesn't look like Apple has as much restraint as you're giving them credit for.
Jotai, mentioned briefly in the article, may not be built in but is as intuitive as signals get and isn’t even tied to React as of later versions.
I’ve very rarely met a client-side state management problem where neither TanStack Query (for IO-related state) nor Jotai (for everything else) is the best answer, technically speaking. The rare exceptions are usually best served by XState if you want to model things with FSMs, or by Zustand if you actually need a reducer pattern. There’s a tiny niche where Redux makes sense (you want to log all state transitions, use rewind, or lean heavily on its devtools), but it was the first to get popular and retains relevance because everyone has used it.
You can go a long way with useContext and useReducer/useState, but few would opt for alternatives if Jotai came batteries-included with React.
Sadly the whole idea of composable query builders seems to have fallen out of fashion.
> Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera.
Seems like an LLM should be able to judge a prompt, and collaboratively work with the user to improve it if necessary.