[dancing] https://www.youtube.com/watch?v=fn3KWM1kuAw [cleaning clutter] https://www.youtube.com/watch?v=C8-w9eF24gU
1) ./make can be more portable: with Make you might use some GNU-specific syntax that does not run on BSD make, and then have to install gmake just to run it
2) Make is not designed to be a command runner, and you have to manually add .PHONY to everything
3) case looks minimal and flexible enough (although I'm sure sh's confusing syntax can cause a lot of pain for non-sh-experts — esac, wtf)
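For anyone unfamiliar with the pattern, the case-based runner really is only a few lines of sh (a minimal sketch; the task names here are made up):

```shell
#!/bin/sh
# Minimal command runner using a case statement (hypothetical tasks).
run_task() {
  case "$1" in
    build) echo "building" ;;        # replace with the real build command
    test)  echo "running tests" ;;   # replace with the real test command
    *)     echo "usage: $0 {build|test}" ;;
  esac
}
run_task "${1:-help}"
```

No .PHONY, no dependency tracking — it just dispatches on the first argument, which is all a command runner does.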
1) I suspect GNU Make will be installed on any system where node code will run.
2) You don’t have to .PHONY anything that will never be a real file.
3) My bet is the Makefile will be shorter and clearer, but I am of course biased since I’m used to the syntax.
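Both sides of the .PHONY point fit in a tiny Makefile (a sketch; the targets and commands are made up). If no file named `build` ever exists, the recipe always runs even without the declaration; .PHONY just protects against a stray file of that name and skips the implicit-rule search:

```make
.PHONY: build test

build:
	go build ./...

test:
	go test ./...
```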
"just" is a utility designed to execute programs: https://github.com/casey/just#just
For me, I've found that the results are best when you use a build system and you use it exactly the way the author intends.

For example, Go's built-in build system is so good that you don't even notice it's there. It automatically updates its configuration as you write code. It does a perfect incremental build every time, including tests. The developer experience is basically perfect, because it has a complete understanding of the mapping between source files, dependencies, and outputs.

But it is not extensible, so you're screwed when your Go project also needs to webpack some frontend code, or you need to generate protocol buffers (which involves a C++ binary that runs a Go binary), etc. So people bolt those features on, and the build system becomes more general, but not quite as good. (Then there's make, which is as good or as bad as you want it to be.)
I think super small projects often do get their Makefiles right. But you can manually build small projects with something like "gcc foo.c bar.c -o foobar -lbaz", and so you don't really benefit from any improvement over the bare essentials. (Nothing wrong with keeping your projects tiny in scope, of course!)
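For comparison, the one-liner above as a Makefile that also gets you incremental rebuilds (a sketch; assumes the same foo.c, bar.c, and libbaz, and note that `$^` is GNU Make syntax):

```make
foobar: foo.o bar.o
	$(CC) -o $@ $^ -lbaz
```

Make's built-in rules compile each .o from the matching .c, so only changed files get recompiled — which, as the comment says, barely matters at this size.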
But sometimes you don't have the luxury of a super small project, and the Makefiles quickly become unfixable. Like I said, I am most scarred by a buildroot project I worked on (that's embedded Linux, basically). It never built what I expected it to build, and to test anything reliably I either had to yolo my own incremental build or wait a while for a full build. My productivity was minimal. I could switch between client-side and server-side tasks on that project, and so I really only touched the client if it was absolutely necessary. I would never be productive enough to undertake a major project that truly added value with that kind of build system, so I let others who didn't have the server-side experience write the client-side stuff. In that case, the poor build system silently cost our team productivity by artificially splitting the team between people who could tolerate a shitty developer experience and those who couldn't.
I don't think anyone has fixed the buildroot problem, either. If you want to build a Linux image today, you are stuck with these problems. Nothing else is general enough to build Linux and the associated random C binaries that you're going to want to run.
It kind of feels like the best trajectory is to start small with a general build system, and upgrade as needed? And if you are confident the project will grow, starting with the specific build system is fine too.
The idea of build systems like Bazel is that the bundled rules are already correct, so you don't have to worry about writing correct rules yourself, and you have a high probability of an incremental build producing a binary that's bit-for-bit identical to one from a full build. The result is that you don't do full builds anymore, and you save the latency of waiting for things to build. (That latency shows up in the edit-build-test cycle, in how long it takes to deploy software to production to fix an emergency bug, etc. So it's important!)
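Concretely, a Bazel BUILD file for a Go binary only declares sources and dependencies; the bundled rules handle correct, cached incremental rebuilds (a sketch in Starlark; the load path matches older rules_go releases, and the target and dependency names are made up):

```starlark
# BUILD.bazel -- hypothetical targets
load("@io_bazel_rules_go//go:def.bzl", "go_binary")

go_binary(
    name = "server",
    srcs = ["main.go"],
    deps = ["//lib:core"],  # hypothetical internal library
)
```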
On that 30-minute note, though: how big does the project need to be in order for Make not to be enough? And at that size, why wouldn't the project invest the extra week it takes to get the Makefile correct?
Often a long word captures a nuance the short version can't. Its presence, by itself, calls the careful reader's attention to the distinction between it and the shorter word it displaced, without belaboring it.
Metaphors, similes, and figures of speech are the furniture of language. Most words, standing alone, embody one. Orwell certainly did not obey this stricture, or he would have been mute.
A word that could have been cut, but wasn't, calls attention to the choice made not to cut it, inviting curiosity why it wasn't, which you may then answer.
Foreign, technical, and jargon words tell the reader about your context. Substituting a word unfamiliar in that context generates confusion, and questions about what distinction you are trying to make by avoiding the usual word. Sometimes you are, in fact, making such a distinction.
Careful readers learn to recognize when writers are making their choices judiciously, and draw extra meaning from them.
So, better advice would tell you to put each such choice to work on the hard job of communicating.
I think having one symbol exist in only one domain (e.g. "user_request" showing up only in the database-handling code, where it's used 3 times, and not in the UI code, where it might've been used 30 times) removes more cognitive load than is added by searching for 2 symbols instead of 1 common one.