The other suggestions seemed to be "if this is about security, then fund the OSS project, or swap to a newer, safer library, or pull it into the JS sandbox and ensure support is maintained." All of which were mostly ignored.
And "if this is about adoption then listen to the constant community request to update the the newer XSLT 3.0 which has been out for years and world have much higher adoption due to tons of QoL improvements including handling JSON."
And the argument presented, which I can't verify (but seems reasonable to me), is that XSLT supports the open web. Google tried to kill it a decade ago; the community pushed back and stopped it. So Google's plan was to refuse to do anything to support it, ignore community requests for simple improvements, try to make it wither, and then use that as justification for killing it at a later point.
Forcing this through when almost all feedback is against it seems to support that to me. Especially with XSLT suddenly/recently gaining a lot of popularity, it seems like they are trying to kill it before they have an open competitor on the web.
I am so floored that at least half of this community, usually skeptical to a fault, evangelizes LLMs so ardently. Truly blows my mind.
I’m open to them becoming more than a statistical token predictor, and I think it would be really neat to see that happen.
They're nowhere close to being anything other than next-token predictors.
I'm more shocked that so many people seem unable to come to grips with the fact that something can be a next-token predictor and demonstrate intelligence. That's what blows my mind: people unable to see that something can be more than the sum of its parts. To them, if something is a token predictor, clearly it can't be doing anything impressive, even while they watch it do impressive things.
How about "behaving in a way that increases the probability of your particular adversaries making incorrect inferences about your situation"?
Is that better or worse than calling it deception?
I had no idea this many people were so attached to an LLM. This sounds absolutely terrible.
Even though every NES in existence is a physical system, you don't need physics-level simulation to create a playable NES via emulation.
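To make that concrete, here's a toy sketch of what emulation means: you model the machine at the instruction-set level (registers, memory, opcodes), not the transistor or physics level. This is not a real NES emulator; it handles four 6502 opcodes and ignores flags, carry, cycles, and the PPU entirely.

    # Toy instruction-level "emulator" sketch (not a real NES emulator).
    # The point: emulation reproduces behavior at the ISA abstraction,
    # with no physical simulation involved.

    memory = [0] * 0x10000          # 64 KiB address space
    a = 0                           # accumulator
    pc = 0x8000                     # program counter

    # Tiny program: LDA #$05; ADC #$03; STA $0200; BRK
    memory[0x8000:0x8008] = [0xA9, 0x05, 0x69, 0x03, 0x8D, 0x00, 0x02, 0x00]

    running = True
    while running:
        opcode = memory[pc]; pc += 1
        if opcode == 0xA9:          # LDA #imm
            a = memory[pc]; pc += 1
        elif opcode == 0x69:        # ADC #imm (carry flag ignored here)
            a = (a + memory[pc]) & 0xFF; pc += 1
        elif opcode == 0x8D:        # STA abs (little-endian address)
            addr = memory[pc] | (memory[pc + 1] << 8); pc += 2
            memory[addr] = a
        elif opcode == 0x00:        # BRK: stop this toy loop
            running = False

    print(hex(memory[0x0200]))      # 0x8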
It does sort of give me the vibe that pure scaling maximalism really is dying off, though. If the approach is now better routers, tooling, and combining specialized submodels on tasks, then it feels like there's a search for new ways to improve performance (and lower cost), suggesting the established approaches weren't working. I could totally be wrong, but I feel like if just throwing more compute at the problem were working, OpenAI probably wouldn't be spending much time optimizing user routing across currently existing strategies to get marginal improvements on average user interactions.
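Roughly the kind of thing I mean by "routing" is sketched below. Everything here is hypothetical (the model names and keyword heuristic are made up); real routers presumably use a learned classifier, but the shape of the idea is the same: classify the request, then dispatch to a cheaper or more specialized model.

    # Hypothetical model-routing sketch; names and heuristics are illustrative only.

    MODELS = {
        "code":      "specialist-code-model",
        "math":      "specialist-reasoning-model",
        "smalltalk": "small-cheap-model",
    }

    def classify(prompt: str) -> str:
        # Stand-in for a learned router; just a keyword heuristic here.
        text = prompt.lower()
        if any(k in text for k in ("def ", "error", "stack trace", "compile")):
            return "code"
        if any(k in text for k in ("prove", "integral", "probability")):
            return "math"
        return "smalltalk"

    def route(prompt: str) -> str:
        model = MODELS[classify(prompt)]
        # A real system would call the chosen model; here we just report the choice.
        return f"[{model}] would handle: {prompt!r}"

    print(route("Why does this code throw a KeyError?"))
    print(route("What's a good name for a cat?"))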
I've been pretty negative on the thesis that more data/compute with current techniques is all we need to achieve AGI, so perhaps I'm overly biased against it. If there's one thing that bothers me about the situation in general, it's that we really have no clue what the actual status of these models is, given how closed off all the industry labs have become and that we can't expect anything other than marketing language from the presentations. I suppose that's inevitable with the massive investments, though. Maybe they've got some earth-shattering model release coming next, who knows.
If your expectations were any higher than that, then it seems like you were caught up in hype. Doubling 2-3 times per year (i.e. 4-8x growth per year) isn't leveling off by any means.
He created it intending it to be a "+1" of APL (each letter incremented by one). He accidentally came up with BQN instead of BQM, sat with that for an hour, really liked the name, then realized it should have been BQM, which he hated, so he stuck with BQN.
That said, it's an incredibly well-designed language. I honestly have never read about any language (especially not one designed by a single person) with the level of creative thought he put into BQN. Some really incredible insights and deep understanding. It's amazing reading his posts / documentation about it. The focus on ergonomics, the brand-new constructs, and the consistency/coherence of how all of his decisions fit together is really impressive.