So-called "budget" phones these days have OLED screens (some even come with 120Hz displays, though it's beyond me why someone would want that) and plenty of compute and memory.
You want a camera, buy a camera. You want gaming, buy a console or a gaming machine.
"Buy a camera" doesn't work because (a) I don't want to pocket two devices, (b) most point-and-shoot dedicated cameras that are actually better are more bulky, too, (c) even entry-level good digital cameras are >$500 (e.g. a ZV-1F or something), so even the combo with a midrange phone often comes out more expensive and (d) a seperate camera makes it really annoying to send photos anywhere on the go.
That said: I came away fairly unhappy with the Pixel 8 Pro camera, which in my book has an overly editorialized post-processing look that I simply don't like. In retrospect, I think I should have gone for the Xperia in that generation, which appears to have been the last phone with high-end smartphone camera gear that took neutral-looking shots. My S21, despite having a worse sensor and optics, took subjectively nicer photos.
I've now updated my definition of "maximum phone camera" to be more choosy ...
That the term "sideloading" has normalized treating this as a special case is a problem.
It's unlikely the terminology can be rolled back at this point, but occasionally reflecting on this is useful.
A feature like that definitely wouldn't help everyone, but it might help some. If it were sold as a blanket solution, that would indeed be absurd, though.
That said: Let me be clear that I'm very happy to be the father of a two-year-old, so we still have some time to figure out our "AI policy" and for the tech/services to improve. I don't envy parents of the 8+ crowd right now.
But a big part of the issue is that OpenAI wants user engagement - and "not being sycophantic" goes against that.
They knew feeding raw user feedback into the training process invites disaster. They knew damn well that it encourages sycophancy - even if they somehow didn't before the GPT-4o debacle, they sure knew afterwards. They even knew their initial GPT-5 mitigations were imperfect and in part just made the residual sycophancy more selective and subtle. They still caved to the pressure of "users don't like our update" and rolled back a lot of those mitigations.
It'd be a good start if services let you enter emergency contact info, making escalation opt-in.
It doesn't necessarily even need to call anyone (particularly in the case of false positives), but there absolutely should be detection and a cutoff switch, where the chatbot simply refuses to continue the conversation and prints out the hotline numbers (much like Reddit Cares messages).
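To make that concrete, here's a minimal sketch of the control flow I mean, not any vendor's actual implementation: score each incoming message, and above a threshold, stop generating, surface the hotline text, and only ping an emergency contact if the user opted in. The `self_harm_risk`, `notify`, and `generate_reply` functions are placeholder stubs standing in for whatever moderation model, escalation channel, and LLM backend a real service already has.

```python
from dataclasses import dataclass, field

HOTLINE_MESSAGE = (
    "I can't continue this conversation. If you're in crisis, please reach out:\n"
    "  988 Suicide & Crisis Lifeline (US): call or text 988\n"
    "  or your local emergency services."
)

RISK_THRESHOLD = 0.85  # placeholder value; a real service would tune this


def self_harm_risk(message: str) -> float:
    """Placeholder classifier. A real service would call its existing
    moderation model here and return a self-harm risk score in [0, 1]."""
    keywords = ("kill myself", "end it all", "no reason to live")
    return 1.0 if any(k in message.lower() for k in keywords) else 0.0


def notify(contact: str) -> None:
    """Stub for the opt-in escalation channel (SMS, email, etc.)."""
    print(f"[stub] would notify opt-in emergency contact: {contact}")


def generate_reply(history: list[str]) -> str:
    """Stub for the normal LLM call."""
    return "[stub] model reply"


@dataclass
class ChatSession:
    emergency_contact: str | None = None  # opt-in, entered by the user
    locked: bool = False
    history: list[str] = field(default_factory=list)

    def handle(self, user_message: str) -> str:
        if self.locked:
            # Cutoff switch: once tripped, the session stays locked and
            # keeps returning the hotline text instead of resuming chat.
            return HOTLINE_MESSAGE

        if self_harm_risk(user_message) >= RISK_THRESHOLD:
            self.locked = True
            if self.emergency_contact:
                notify(self.emergency_contact)  # only if the user opted in
            return HOTLINE_MESSAGE

        self.history.append(user_message)
        return generate_reply(self.history)
```

The point of the sketch is just that the refusal is sticky (the session doesn't quietly resume) and that any outbound escalation happens only when the user has entered a contact themselves.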
I'm generally not in favor of censorship or overly protective safeguards on LLMs, but maybe it's needed for hosted models/services that are available to the masses.
But before they get locked down more, we should try some legislation to limit how they can be marketed and sold. Stop letting OpenAI, etc. call the models "intelligent" for one. Make the disclaimers larger, not just small print in the chat window but an obvious modal that requires user agreement to dismiss - disclaim that it's a predictive engine, it is not intelligent, it WILL make mistakes, do not trust its output. Make it clear during the chat session over and over again, and then have a killswitch for certain paths.
The moderation tech is already there, and if there's even a small number of mentally ill people who would fill this in on a good day and be saved by it on a bad day / during an episode, it'd be worth it.