But I couldn't get JRuby to package reliably. I'd fix the issues, it would work for a while, and then something would change.
Oh... because I wasn't doing it right. I have to rework a bunch of dependencies. And after a while, it breaks again. Why? Oh... I wasn't doing it right, I should be using this middleware instead...
So I said I'm done mucking around with JRuby. When I said this at RailsConf, I was told I was doing it wrong, and by implication being irresponsible with my clients' applications — that I was setting everything up for failure. Yet those same applications were working just fine on C Ruby. (I don't really hear much about JRuby any more, but I haven't been part of that world since George "strategery" Bush was president.)
And this was the shtick for conference speakers and YouTubers. You're doing it wrong. Do it this way to do it right. You're using Controllers wrong. They should be fat. They should be thin. They should be big boned. You should never use models. You should only use models. You should sit on two chairs and pair program with yourself when you develop. Only drink water when writing tests.... etc. etc. etc.
This left a bad taste in my mouth in what was otherwise a great community. I felt like a lot of the community wanted to build great applications quickly, cost-effectively, and with high quality. But that same impetus could be manipulated by folks in a way that's unhelpful. THAT part of Ruby I don't miss. RailsConf in Portland, eating VooDoo doughnuts, talking shop with other folks? That I miss.
With English, the meaning of a sentence is mostly self-contained. The words have inherent meaning, and if they’re not enough on their own, usually the surrounding sentences give enough context to infer the meaning.
Usually you don’t have to go looking back 4 chapters or look in another book to figure out the implications of the words you’re reading. When you DO need to do that (maybe reading a research paper for instance), the connected knowledge is all at the same level of abstraction.
But with code, despite it being very explicit at the token level, the "meaning" is all over the map, and depends a lot on the unwritten mental models the person was envisioning when they wrote it. Function names might be incorrect in subtle or not-so-subtle ways, and side effects and order of execution in one area could affect something in a whole other part of the system (not to mention across the network, but that seems like a separate case to worry about). There are implicit assumptions about timing and such. I don't know how we'd represent all this other than having extensive and accurate comments everywhere, or maybe some kind of execution graph, but it seems like an important challenge to tackle if we want LLMs to get better at reasoning about larger code bases.
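As a contrived sketch of what I mean (all names here are hypothetical, just for illustration): a function whose name suggests a pure read can silently mutate shared state, so two call sites that look independent on the page are actually coupled through an ordering assumption no local reading reveals.

```python
# Hypothetical example: get_user_name reads like a pure lookup,
# but it quietly warms a module-level cache as a side effect.

_cache = {}

def get_user_name(user_id):
    """Name implies a read-only lookup; actually writes to _cache."""
    if user_id not in _cache:
        _cache[user_id] = f"user-{user_id}"  # hidden write to global state
    return _cache[user_id]

def purge_stale_entries():
    """Looks unrelated to get_user_name, but clearing the cache
    after a lookup silently discards the entry it just created."""
    _cache.clear()

name = get_user_name(42)   # also populates _cache as a side effect
purge_stale_entries()      # reordering these two lines changes system state
```

Neither function signature tells you about the coupling; you'd only learn it by reading both bodies, or from a comment — which is exactly the unwritten context an LLM (or a new teammate) has to reconstruct.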
You can have a book whose last chapter contains the phrase "She was not his kid."
Knowing nothing else, you can only infer the self-contained details. But in the context of the book, this could be the phrase that turns everything upside down, and it could draw on a great deal of earlier context.
AI will deduplicate all of this