By that measure every law is a bad law.
It’s also absolutely true that “agricultural usage dominating data center usage” is a dirty little secret that a lot of people are very, very incentivized to keep secret. Amazon can’t outright say that, because uh whutabuht mah poor farmers.
> “do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI, or discriminatory equity ideology in violation of Federal anti-discrimination laws.”
Is it even legal to attach such an arbitrary and opinionated condition to a government grant?
I applaud them for taking a stand; that seems to be more and more rare these days.
Sure, it's good compared to, like... C++. Is Go actually competing with C++? From where I'm standing, no.
But compared to what you might actually use Go for... The tooling is bad. PHP has better tooling, dotnet has better tooling, Java has better tooling.
What we have seen over the last few years is a conscious marketing effort to rebrand everything ML as AI, and to use terms like "Reasoning", "Extended Thinking" and others that give many non-technical people the impression it is doing far more than it actually is.
Many of us here can see his research and go... well, yeah, we already knew this. But there is a very well funded effort to oversell what these systems can actually do, and it is reaching the people who ultimately make the decisions at companies.
So the question is no longer whether AI agents will be able to do most white-collar work. They can probably fake it well enough to accomplish a few tasks, and management will see that. The question is whether the output will actually be valuable long term, or just deliver short-term gains.
Yes, which makes sense, because if there's a landscape of states that the model is traversing, and there are probabilistically likely pathways between an initial state and the desired output, but there isn't a direct pathway, then training the model to generate intermediate text in order to move across that landscape so it can reach the desired output state is a good idea.
Presumably LLM companies are aware that there is, in general, no relationship between the generated intermediate text and the output. The point of the article is that by calling it a "chain of thought" rather than "essentially meaningless intermediate text which increases the number of potential states the model can reach", users are misled into thinking that the model is reasoning. They may then make unwarranted assumptions, such as that the model could apply the same reasoning to similar problems, which is in general not true.
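A toy sketch of the state-landscape framing above (the graph and names here are hypothetical illustrations, not how any actual LLM works): if there is no single step from the prompt state to the answer state, emitting intermediate text can still open a multi-step path to it.

```python
from collections import deque

# Hypothetical state graph: no direct edge from "prompt" to "answer",
# but a path exists through intermediate emitted text.
edges = {
    "prompt": ["intermediate_1"],
    "intermediate_1": ["intermediate_2"],
    "intermediate_2": ["answer"],
    "answer": [],
}

def reachable(start, goal, max_steps):
    """Breadth-first search, bounded by the number of steps the model may emit."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return True
        if depth == max_steps:
            continue  # out of budget along this path
        for nxt in edges[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return False

print(reachable("prompt", "answer", max_steps=1))  # False: no direct pathway
print(reachable("prompt", "answer", max_steps=3))  # True: via intermediate states
```

The point of the sketch is only that allowing intermediate states enlarges the set of reachable outputs; it says nothing about whether the intermediate text is meaningful, which is exactly the distinction being made above.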
And Gemini has a note at the bottom about mistakes, and many people discuss this. Caveat emptor, as usual.
Most humans are unsophisticated simulators of reasoning-like text.
I just wholly disagree with the conclusion that this is a common situation brought on by AI. AI coding simply isn't there yet to start replacing people with 20 years of experience, unless your experience is obsolete or irrelevant in today's market.
I’m about 10 years into my career and I constantly have to learn new technology to stay relevant. I’d be really curious what this person has spent the majority of their career working on, because something tells me it’d provide insight into whatever is going on here.
Again, not trying to be dismissive, but even with my fairly unimpressive resume I can get at least first-round calls fairly easily, and my colleagues who write actual software all report similar. Companies are definitely being more picky, but if your issue is that you’re not even being contacted, I’d seriously question your approach. They kind of get at the problem a little by stating they “won't use a ton of AI buzzwords.” Like, ok? But you can also be smart about knowing how these screeners work and play the game a little. Or you can do DoorDash. Personally, I’d prefer the former to the latter.
I also find it odd that 20 years of experience hasn't led to a bunch of connections that would assist in a job search; my meager network has been where I’ve found most of my work so far.
What fraction of positions require that ongoing learning, or at least to that degree?
Also, consider many other jobs: does doing the job itself provide the experience that makes you a more valuable worker? Or is doing the job basically a necessary distraction from the actual task of preparing yourself for a future job? What fraction of humanity actually takes on two jobs, the paying job and the preparing-for-the-next-job? Might doing the latter get you fired from the former? Most importantly, is that latter job getting more important over time, that is, are our jobs getting less secure? If so, is this what an improving economy looks like, rising, as it were, with GDP?