Do we allow for a matter of degree, rather than a binary of "zero" vs. "complete" understanding?
That said, what if OpenAI shut down Codex because it had dangerous possibilities and amoral "researchers" were starting to figure out how to exploit them? What if it was fundamentally buggy or encouraged misleading research? What if Codex was accidentally leaking or distributing export-controlled or otherwise illegal (copyrighted, etc.) information? I'm explicitly speculating about possibilities, while you're making unstated assumptions, so entertain the question of whether OpenAI is already doing a public service by shutting it down.
You can't handwave and say go do your research on some micro-niche open source project that's way behind the SOTA and has nowhere near the same reach. That's not what "best practice" means here.
I get that it's difficult to define the line where that gets crossed. But the idea of a publicly funded trust that manages legacy versions of things like this is not a bad one.
It would be great if we all kept the tone respectful and avoided snark, to maintain a constructive discussion.
By not highlighting what you found "snarky", your response is by definition a "shallow dismissal". I see you just "picked the most provocative thing to complain about". Not a lot of being "kind" there either.
So you know what would also be great? If you held yourself to the standards you're so keen to police around here.