Consider that you're coding a Linux device driver. If you ask for help from an LLM that has never seen the Linux kernel code, never seen a Linux device driver, and never seen any of the Linux kernel's documentation, you're going to need to provide all of that as context. That's onerous for you, and it might not even be feasible. But if the LLM has already seen all of it during training, you don't need to provide it as context. Your context may be as simple as saying "I am coding a Linux device driver" and showing it some of your code.
Of course, because I am not new to the problem, whereas an LLM is new to it with every new prompt. I'm not really trying to find a fair comparison, because I believe humans have an unfair advantage in this instance; I'm trying to make that point rather than compare like-for-like abilities. I think we'll find that even with all the context clues from MCPs, history, etc., they might still fail to have the insight to recall the right data into the context, but that's just a feeling I have from working with Claude Code for a while. I instruct it to do those things, like looking through the git log and checking the documentation, and it sometimes finds a path through to an insight, but it's just as likely to get lost.
I alluded to it somewhere else, but my experience with massive context windows so far has been that they just distract the LLM. We are usually guiding it down a path with each new prompt and have a specific subset of information to give it, so pumping the context full of unrelated code at the start seems to derail it from that path. That's anecdotal, though; I encourage you to try messing around with it.
As always, there's a good chance I will eat my hat some day.
That is true for the LLMs you have access to now. Now imagine if the LLM had been trained on your entire code base. And not just the code, but the entire commit history, commit messages, and all of your external design docs. And code and docs from all relevant projects. That LLM would not be new to the problem with every prompt. Basically, imagine that you fine-tuned an LLM for your specific project. You will eventually have access to such an LLM.
A thing being great doesn’t mean it’s going to generate outsized levels of hype forever. Nobody gets hyped about “The Internet” anymore, because novel use cases aren’t being discovered at a rapid clip, and it has well and thoroughly integrated into the general milieu of society. Same with GPS, vaccines, docker containers, Rust, etc., but I mentioned the Internet first since it’s probably on a similar level of societal shift as AI is in the maximalist version of AI hype.
Once a thing becomes widespread and standardized, it becomes just another part of the world we live in, regardless of how incredible it is. It’s only exciting to be a hype man when you’ve got the weight of broad non-adoption to rail against.
Which brings me to the point I was originally trying to make, with a more well-defined set of terms: who cares if someone waits until the tooling is more widely adopted, easy to use, and somewhat standardized prior to jumping on the bandwagon? Not everyone needs to undergo the pain of being an early adopter, and if the tools become as good as everyone says they will, they will succeed on their merits, and not due to strident hype pieces.
I think some of the frustration the AI camp is dealing with right now is because y’all are the new Rust Evangelism Strike Force, just instead of “you’re a bad software engineer if you use memory-unsafe languages,” it’s “you’re a bad software engineer if you don’t use AI.”
It has the best performance I've ever seen by Kevin Bacon, and a solid performance from Christian Slater. Gary Oldman is a solid villain. R. Lee Ermey does his usual thing, but he's really good at that usual thing. I think about lines and ideas from it frequently. Granted, this is partly because the movie came out when I was 15 and I watched it at a formative age with friends. But I've also watched it recently, and I think it holds up.
It's like GOTOs, but worse, because it's not as visible.
C++'s destructors feel like a better/more explicit way to handle these sorts of problems.
C++ destructors are great for this, but are not possible in C. Destructors require an object model that C does not have.
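For illustration, here's a minimal sketch of that idea (assuming the use case being discussed is cleanup on scope exit; the class and file name are made up):

```cpp
#include <cstdio>
#include <stdexcept>

// Minimal RAII sketch: the destructor guarantees fclose() runs on every exit
// path (early return, exception, normal fall-through) without goto-style
// cleanup labels. C can't express this because it lacks the object model.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("open failed");
    }
    ~File() { std::fclose(f_); }            // cleanup is tied to scope, not control flow
    File(const File&) = delete;             // one owner per handle
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

int main() {
    try {
        File cfg("example.conf");           // hypothetical file name
        // ... use cfg.get(); any early return or throw still closes the file ...
    } catch (const std::exception&) {
        // constructor failed, so there is nothing to clean up
    }
    return 0;
}
```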
I think that is the biggest advantage of using a teaching language. You can give them curated, high-quality examples and make sure they're not led astray by crap code they find on the internet.
And in the ChatGPT era it forces them to actually learn how to code instead of relying on AI.
The main problem is that student motivation is what drives learning success. Just because something is good for them and theoretically the best way to learn does not mean they will like it. It is difficult to sell young people on long-term benefits that you can only see after many years of experience.
It seems to me that the majority of students prefer "real world" languages to more elegant teaching languages. I guess it is a combination of suspected career benefits and also not wanting to be patronized with a teaching language. I wish people were more open to learning a language for the fun of it, but it is what it is. Teach them the languages they want to learn.
I'm not sure how I feel about languages made for teaching and whatnot yet, but I would be remiss if I didn't encourage my kids to use https://scratch.mit.edu/ for their early programming. I remember early computers would boot into a BASIC prompt, and I could transcribe some programs to make screensavers and games. LOGO was not an uncommon way to explore fractals and general path-finding ideas.
Even beyond games and screensavers, MS Access (or any similar offering, FoxPro for example) was easily more valuable for learning to program interfaces to data than what I'm used to seeing from many lower-level offerings. Our industry's shunning of interface builders has done more to make it difficult to get kids programming than I think we admit.
Edit to add: Honestly, I think my kids learned more about game programming from Mario Builder at early ages than makes sense.