I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?
This idea has been tried before and it failed not because the core concept is bad (it isn't), but because implementation details were wrong, and now we have better tools to execute it.
None of that means they’re getting worse though. They’re getting better; they’re just not as good as you want them to be.
When I give them the same task I gave them the day before, and the output is noticeably worse than their last model version produced, is that better? When day-to-day performance feels like it's degrading?
They are definitely not as good as I would like them to be, but that's to be expected when the people hyping them up are professionals begging for money.
I get to tell myself that it's worth it because at least I'm "keeping up with the industry", but I honestly just don't get the hype train one bit. Maybe I'm too senior? Maybe the frameworks I use, despite being completely open source and available as training data for every model on the planet, are too esoteric?
And then the top post on the front page today is telling me that my problem is that I'm bothering to supervise, and that I should be writing an agent framework so that it can spew out the crap in record time... But I need to know what is absolute garbage and what needs to be reverted. I will admit that my usual pattern has been to prompt it toward better test coverage, specific feature additions, etc. on nights and weekends, and then focus my daytime working hours on reviewing what was produced. About half the time I review it and have to heavily clean it up to make it usable, but more often than not, I revert the whole thing and just start on it myself from scratch. I don't see how this counts as "better".
Crypto's over, gaming isn't a large enough market to fill the hole, the only customers that could fill the demand would be military projects. Considering the arms race with China, and the many military applications of AI, that seems the most likely to me. That's not a pleasant thought, of course.
The alternative is a massive crash of the stock price, and considering that NVIDIA makes up 8% of everyone's favorite index, that's not a very pleasant alternative either.
It seems to me that an ultra-financialized economy has trouble with controlled deceleration: once the hype train is going, it's full throttle until you hit a wall.
Have you been living under a rock?
You can start getting up to speed by reading how Amazon's CEO has already laid out the company's plan.
https://www.thecooldown.com/green-business/amazon-generative...
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralistic scapegoat, one you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomical productivity gains have no impact on demand.