Not that these two things are necessarily bad. We all need money to live, and it is nice to have confidence in one's ability, especially when that confidence is backed up by actual ability. The problem arises when we reduce learning to only its byproducts, when it is much more than that.
Learning, the process of understanding the world and, through that, of oneself, is a beautiful and rewarding pursuit on its own. What we get from it is useful, but it shouldn't be the ultimate reason we pursue it. It can be a motivator; many times I have learned things due to external reasons (for school or work), but that was never the ultimate reason for my learning, simply a catalyst for something I already had a deep desire for. Not for the thing itself, the object of learning, but for the actual process of understanding it. It's akin to child's play. Children can acquire abilities while playing which can then be applied in other areas of their lives to gain things. But that is not their ultimate reason for it: they play because it is fun. I learn, and believe we all should learn, because it is fun.
And to the poster's other reason for sadness, i.e. that LLMs are getting smarter, remember that these robots don't learn by any means, unless one considers consuming information only to spit it back out partly digested to be learning. Learning is much more than that; it requires life and, perhaps, even soul. A machine can't feel its soul soar upon realizing the limitless bounds of knowledge: it has no soul and it has no feelings.
While I won't knock the utility of LLMs (they obviously have a wide range of applications), this "ride or die" attitude seems to verge on religious zealotry. You don't need AI to code and, without reasonable proof of a real productivity increase (which the author only hints at, citing some internal studies that show a minimum increase of 30%, but without actually presenting the studies themselves), I will continue to doubt their ability to generate programs, even in the simplified, glue-it-together manner the author describes.
One must remember what these LLMs truly are: statistical machines with no real contextual knowledge. They can't, therefore, be aware of the particulars of each implementation. What they generate needs to be analyzed and likely mangled by developers. Which raises the question: why not just code it directly? Perhaps it could lower the barrier of entry; after all, if I cannot code, at least the AI can give me a head start. But the novice developer won't even be able to discern between good and bad code, being forced to simply trust (and hope) that the generated program is not only correct, but that it fits well in the context of her application.
Further, what the machine can generate is wholly dependent on what is available for consumption (most of it stolen). Novelty, even in small amounts, will inevitably stump it. It already performs nothing more than educated guesswork; without that basis, it's only guesswork minus the education.
This is not to say that AI is useless, far from it, but I frequently see it given an almost divine status. As if it were this inevitable, unstoppable thing that will forever grow until it achieves perfection, to the woe of the skeptics and the joy of the believers.
In my humble view, it remains a useful tool in a variety of situations, an occasionally better alternative to Stack Overflow for one, but I would never trust it to write code, at least not good code.