Complex queries I write myself anyway, so Claude fills the 'ORM' gap for me, leaving an easily understood project.
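For illustration, here's a minimal sketch of the kind of query I mean - hand-written SQL behind a thin Rust/sqlx wrapper. The stack and the customers/orders schema are my own assumptions for the example, not anything from a real project:

    use sqlx::PgPool;

    // Hypothetical report row; table and column names are illustrative only.
    #[derive(sqlx::FromRow)]
    struct CustomerSpend {
        customer_id: i64,
        total_spend_cents: Option<i64>,
    }

    // The kind of query I'd rather write by hand than coax out of an ORM.
    async fn top_customers_last_30_days(pool: &PgPool) -> Result<Vec<CustomerSpend>, sqlx::Error> {
        sqlx::query_as::<_, CustomerSpend>(
            r#"
            SELECT c.id                AS customer_id,
                   SUM(o.amount_cents) AS total_spend_cents
            FROM customers c
            JOIN orders o ON o.customer_id = c.id
            WHERE o.created_at > now() - interval '30 days'
            GROUP BY c.id
            ORDER BY total_spend_cents DESC
            LIMIT 10
            "#,
        )
        .fetch_all(pool)
        .await
    }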
I have aphantasia - I can't visualise/picture things in my mind, so I use pen and paper or whiteboards A LOT!
I create various ERDs, mind maps, sequence diagrams, etc. I use a ReMarkable, which makes it a bit easier to move stuff around and makes the whole process more effective.
I get that some people might think it is 'pure romanticism', but pen and paper has been crucial for my success.
Empirically we observe that an LLM trained purely to predict the next token can do things like solve complex logic puzzles it has never seen before. Skeptics claim that the network has actually seen at least analogous puzzles before and all it is doing is translating between them. However, the novelty of what can be solved is very surprising.
Intuitively it makes sense that, at some level, intelligence itself becomes a compression algorithm. For example, you could learn separately how to solve every puzzle ever presented to mankind, but that would take a lot of space. At some point it's more efficient to just learn "intelligence" itself and then apply that to the problem of predicting the next token. Once you do that, you can stop trying to store an infinite database of parallel heuristics and instead focus the parameter space on learning "common heuristics" that apply broadly across the problem space, then apply those to every problem.
The question is, at what parameter count and volume of training data does the situation flip to favoring "learning intelligence" rather than storing redundant, domain-specialised heuristics? And is it really happening? I would have thought just looking at the activation patterns could tell you a lot, because if common activations show up for entirely different problem spaces then you can argue that the network has to be learning common abstractions. If not, maybe it's just doing really large-scale redundant storage of heuristics.
I've read that the 'surprise' factor is much reduced when you actually see just how much data these things are trained on - far more than a human mind can possibly hold and (almost) endlessly varied. I.e. there is 'probably' something in the training set close to what 'surprised' you.
I can only think it began as a philosophy for making better decisions, solving problems, and improving society - particularly in the allocation of its resources.
And so, I think a huge part of the problem with economics, including most economists, is that people equate value with wealth. Value certainly is subjective - I mean, do you value having $80k, or a nice car? How about a pool, or a boat? Do you value your time less than the cost of a flight? All very subjective. But I think it's flawed to conflate value with wealth. Wealth is something real: the things you are evaluating in trade. Time is wealth. Money is wealth. Oil is wealth. Land is wealth. Why don't we simply measure these things, and how they're allocated? IMO, it seems more practical, and a more noble pursuit, to figure out whether someone in society actually needs food, rather than what people with plenty of resources ascribe their values to.
Instead, use scaffolding tools that give you a head start on creating a new project using smaller, specialized libs.
Also, don’t use an ORM. Just write that SQL. Your future self will be thankful.
GitHub Copilot is so good at writing CRUD db queries that it feels as easy as an ORM, but without the baggage, complexity, and the N+1 performance issues.
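To make the N+1 point concrete: where lazy-loading ORM code often issues one query per parent row, the Copilot-assisted version is just an explicit join. A minimal sketch, again assuming Rust with sqlx and a made-up foo/foo_items schema:

    use sqlx::PgPool;

    // Flattened parent+child row; the foo/foo_items schema is an assumption for illustration.
    #[derive(sqlx::FromRow)]
    struct FooWithItem {
        foo_id: i64,
        foo_name: String,
        item_id: Option<i64>,
        item_label: Option<String>,
    }

    // One round trip with an explicit LEFT JOIN, instead of
    // "SELECT * FROM foo" followed by one items query per foo (the classic N+1).
    async fn list_foos_with_items(pool: &PgPool) -> Result<Vec<FooWithItem>, sqlx::Error> {
        sqlx::query_as::<_, FooWithItem>(
            r#"
            SELECT f.id    AS foo_id,
                   f.name  AS foo_name,
                   i.id    AS item_id,
                   i.label AS item_label
            FROM foo f
            LEFT JOIN foo_items i ON i.foo_id = f.id
            ORDER BY f.id
            "#,
        )
        .fetch_all(pool)
        .await
    }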
The degree turned out to have a lot of transferable skills - especially in researching and solving problems.
Just 25 years later I am a Principal Engineer in the Oz Telco industry writing Rust!
I don't regret the degree for a moment - although when I went through, the degree was free, even at a top-tier Australian university.
'I have a database table Foo, here is the DDL: <sql>. Create CRUD endpoints at /v0/foo, and use the same coding conventions used for Bar.'
I find it copies existing code style pretty well.
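What comes back obviously depends on the language and conventions already in the repo. In a Rust service - my assumption here, along with axum 0.7 and sqlx, none of which is implied by the prompt above - the read endpoint it generates tends to look something like this sketch:

    use axum::{extract::{Path, State}, http::StatusCode, routing::get, Json, Router};
    use serde::Serialize;
    use sqlx::PgPool;

    // Hypothetical Foo row; in practice the fields come from the DDL pasted into the prompt.
    #[derive(Serialize, sqlx::FromRow)]
    struct Foo {
        id: i64,
        name: String,
    }

    // GET /v0/foo/:id - a single hand-checkable query, no ORM in the middle.
    async fn get_foo(
        State(pool): State<PgPool>,
        Path(id): Path<i64>,
    ) -> Result<Json<Foo>, StatusCode> {
        sqlx::query_as::<_, Foo>("SELECT id, name FROM foo WHERE id = $1")
            .bind(id)
            .fetch_optional(&pool)
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
            .map(Json)
            .ok_or(StatusCode::NOT_FOUND)
    }

    // Router wiring for the /v0/foo resource (list/create/update handlers omitted for brevity).
    fn foo_routes(pool: PgPool) -> Router {
        Router::new()
            .route("/v0/foo/:id", get(get_foo))
            .with_state(pool)
    }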