It's a whitepaper release to share SOTA research. This doesn't seem like an economically viable model, nor does it look polished enough to be practically usable.
We know how James Webb works, and it was developed by an international consortium of researchers: one of our most trusted international institutions, and very verifiable.
We do not know how Genie works, it is unverifiable to non-Google researchers, and there are not enough technical details to meaningfully move external teams forward. Worst case, this page could be a total fabrication intended to derail competition by lying about what Google is _actually_ spending its time on.
We really don't know.
I don't say this to defend the other comment or to say you're wrong; I empathize with both points. But I do think treating Google with total credulity would be a mistake, and the James Webb comparison is a disservice to the JW team.
The data never fits the graph. Real-world tables are messy and full of hidden junk, so you either spend weeks arguing over structure or give up the nice causal story.
DL stole the mind-share. A transformer is a one-liner with a mature tooling stack; hard to argue with that when deadlines loom.
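To make the "one-liner" concrete, here's a minimal sketch assuming PyTorch's `torch.nn.Transformer`; the shapes and sizes are just illustrative:

```python
# Minimal sketch: a full encoder-decoder transformer from a mature library,
# versus hand-specifying graph structure, CPDs, and an inference scheme.
import torch

model = torch.nn.Transformer(d_model=512, nhead=8)  # the "one-liner"
src = torch.rand(10, 32, 512)   # (seq_len, batch, d_model)
tgt = torch.rand(20, 32, 512)
out = model(src, tgt)           # ready for a standard training loop
```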
That said, they’re not completely dead - reportedly Microsoft’s TrueSkill (Xbox ranking), a bunch of Google ops/diagnosis pipelines, and some IBM Watson healthcare diagnosis tools built on Infer.NET.
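For a sense of what such a model looks like in a modern PPL, here's a minimal sketch of a TrueSkill-flavoured two-player skill model in PyMC. This is illustrative only: the priors, the logistic link, and the toy data are made up, and the real TrueSkill reportedly runs approximate message passing on Gaussian factor graphs rather than MCMC.

```python
# Toy PGM in the spirit of TrueSkill: latent per-player skills and an
# observed sequence of win/loss outcomes between two players.
import numpy as np
import pymc as pm

wins = np.array([1, 1, 0, 1])  # did player A beat player B in each match?

with pm.Model() as skill_model:
    skill_a = pm.Normal("skill_a", mu=25.0, sigma=8.0)
    skill_b = pm.Normal("skill_b", mu=25.0, sigma=8.0)
    # Probability A wins grows with the skill gap (logistic performance noise).
    p_a_wins = pm.math.sigmoid(skill_a - skill_b)
    pm.Bernoulli("outcome", p=p_a_wins, observed=wins)
    trace = pm.sample(1000, tune=1000)  # posterior over both skills
```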
Anyone here actually shipped a PGM that beat a neural baseline? Would really love to hear your war stories.
Kind of like flow-based programming. I don't think there's any fundamental reason why it can't work; it just hasn't yet.
Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.
It sounds like the problem is that nobody in the org ever writes down what the system actually does in its real implementation, so the RFC becomes the default reference? That does sound frustrating, but it's also not the problem/solution pairing the article tries to tackle. Also, that is explicitly what generated docs solve.
"Documents should be unix-y (do one thing well)" is maybe how I would rephrase this. If they're overloaded, that is genuinely a bad thing, but RFCs do have a time and place!
That said, I'd rephrase your perception slightly: "perceive it as not being _uniquely_ anti-AI" is more how I view it. I see similar sentiment on other social media too.