Google doesn't even expose a per-app toggle for Internet access, why am I surprised?
This is disgusting.
Freedom died a little bit more today.
Why are end-user choice and consent not considered?
It's really disturbing that the EU and Google would do this.
I can't recommend Android or iPhone because of this nonsense.
The only credible explanation I can come up with is that they need the keys in order to produce indistinguishably backdoored versions of applications, handy for tools like Signal.
Otherwise one would never think of requesting the private keys. If Google wants to rebuild apps themselves, they could sign with their own keys; possessing anyone else's private key is pure liability, because if any abuse is discovered, they can't show that they weren't the vector.
Doesn't this make it prohibitively difficult to do local builds of open source projects? It's been a long time since I've done this, but my recollection is that the process was essentially: you build someone else's (the project's) package/namespace up through signing, but sign it locally with your own dev keys. A glance at the docs they've shared makes it sound like the package name essentially gets bound to an identity, and you then can't sign it with another key. Am I misremembering, and/or has something changed in this process? Am I missing something?
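For anyone who hasn't done this in a while, the traditional flow was roughly the following (a sketch only, with illustrative paths and key names, assuming a standard Gradle project and the Android SDK build-tools on PATH):

```shell
# 1. Build an unsigned release APK of the upstream open source project.
./gradlew assembleRelease

# 2. Generate your own local signing key (one-time setup).
keytool -genkeypair -keystore my-local.keystore -alias localdev \
    -keyalg RSA -keysize 2048 -validity 10000

# 3. Sign the APK with YOUR key, not the upstream developer's.
apksigner sign --ks my-local.keystore --ks-key-alias localdev \
    app/build/outputs/apk/release/app-release-unsigned.apk
```

The concern above is that if the package name is now bound to a registered identity and its key, step 3 with a different key would no longer produce an installable build.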
Jokes aside, teenagers can be smart at evading censorship. Should a democratic society play a cat and mouse game with a more motivated and more nimble adversary, who will often be smarter too?
Are they not aware that it's something they would have to force onto everyone's device? GDPR cookie banners are a good example of poorly executed government meddling, the Online Safety Bill could yet be even more annoying, and we're still learning about the lies and ineptitude seen in the Post Office scandal. Why entertain playing a cat-and-mouse game like that? I don't think we want to see how invasive government computer meddling can get. People coming up with these ideas need to think really carefully about what the actual results could be once they've gone through committee and started bothering their constituents.
Perhaps more importantly, they should remember 16 year olds are going to be able to vote soon.
As an aside, ChatGPT has always been "overconfident" in the capabilities of its associated image model. It'll frequently offer to generate images which exceed its ability to execute, or which would need to be based on information which it doesn't know. Perhaps OpenAI developers need to place more emphasis on knowing when to refuse unrealistic image generation requests?
Another helpful intervention point is after gpt-image-1 has produced an image: the model can self-review the output to detect problems, but it's still not very thorough. Then again, OpenAI keeps its teams small and really focused, and everything is always changing fast; they'll probably have gpt-image-2 or something else soon anyway.
In a way, reliable prediction is the main job OpenAI has to solve, and always has been. Some researchers say the way models are usually trained causes "entangled representations", which makes them unreliable. They also suffer from the "Reversal Curse". Maybe once they fix these issues, it'll be real AGI and ASI all in one go?
The same issue exists with a bunch of other types of image output from ChatGPT - graphs, schematics, organizational charts, etc. It's been getting better at generating images which look like the type of image you requested, but the accuracy of the contents hasn't kept up.
ChatGPT's image generation was not introduced as part of the GPT-5 model release (except SVG generation).
The article leads with "The latest ChatGPT [...] can’t even label a map".
Yes, ChatGPT's image gen has uncanny valley issues, but OpenAI's GPT-5 product release post says nothing about image generation, it only mentions analysis [1].
As far as I can tell, GPT-Image-1 [2], which was released around March, is what powers image generation used by ChatGPT, which they introduced as "4o Image Generation" [3], which suggests to me that GPT-Image-1 is a version of the old GPT-4o.
The GPT-5 System card also only mentions image analysis, not generation. [4]
In the OpenAI live stream they said as much. CNN could have checked and made it clear the features are from a much earlier release, but instead they led with a misleading headline.
It's very true that OpenAI doesn't make it obvious how the image generation works though.
[1] https://openai.com/index/introducing-gpt-5/
[2] https://platform.openai.com/docs/models/gpt-image-1
[3] https://openai.com/index/introducing-4o-image-generation/
[4] https://cdn.openai.com/pdf/8124a3ce-ab78-4f06-96eb-49ea29ffb...