In case you want to run Justfiles in places where you can't install the Just binary (for whatever reason), I wrote a compiler that transforms Justfiles into portable shell scripts that have byte-for-byte identical output in most cases.
Fantastic! This solves my big fear around getting used to such a tool.
My work primarily involves *nix boxes that have to be very locked down and will be left basically untouched for 20 years after I finish setting them up. Getting a reliable new binary of any sort onto them is quite difficult, not least because we need to plan for what far-future people might be able to discover for troubleshooting down the line.
We love just and are using it in all projects now. So great. Our typical justfile has around 20 rules. Here is an example rule (and helper) to illustrate how we use it in CI:
This example is a bit contrived; more typically we would have a rule like `just lint` that you might call from `just ci`.
One of the best features is that just always runs from the project root directory. Little things like that add up after you've spent years wrestling with bash scripts.
> Little things like that add up after you've spent years wrestling with bash scripts.
Can you please explain what you mean here? I looked at the GitHub examples and wondered why this would be preferable to Bash aliases and functions. I must be missing something.
Bash has a thousand pitfalls, and as you accumulate layers of scripting they start compounding. Little things like “what the hell directory is this command actually running from”, parsing input parameters, quoting rules, exit statuses, pipelining, etc.
Tools like just provide a very consistent and simple base to start with, and you can always still call a separate script, or drop directly into inline shell scripting.
For me, the niceties are in the built-in functions[0]: commands to manipulate paths (!!), get CPU counts, mess with environment variables, string processing, hashing, etc. All the gyrations a more sophisticated script is going to eventually require. Instead of having to hack it together in shell, you get cross-platform utilities which are not going to blow up because of something as wild as a space or a quote mark.
I love the look of `just` and have been meaning to try it out, but this feels like one of those examples where Make's dependency management shines—it lets you specify that many of these commands only need to run when particular files change:
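A hypothetical sketch of what that might look like (source glob, tools, and stamp files are all invented; since lint and test leave no artifact of their own, stamp files stand in for one):

```make
SRCS := $(wildcard src/*.py)

ci: .lint.stamp .test.stamp

# re-runs only when something under src/ has changed since the last run
.lint.stamp: $(SRCS)
	ruff check src/
	touch $@

.test.stamp: $(SRCS)
	pytest
	touch $@
```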
And as time goes on, I always end up wanting to parallelize the commands that can be parallelized (citest, lint, eslint), so I'll turn `make ci` (or `just ci`) into its own little script.
I've been using just at work and in personal projects for almost a year, and I like it a lot. In particular, its self documentation with `just --list` makes onboarding new folks easy. It's also just a nicer syntax than make.
Agreed. Is it that different than Make with `.PHONY` targets? Yes — it is Designed To Do Exactly What It Does, And It Does It Well. That counts for something in my book.
All my Justfiles start with this prelude to enable positional arguments, and a "default" target to print all the possible commands when you run `just` with no target name:
```just
# this setting allows passing arguments through to tasks; see the docs:
# https://just.systems/man/en/chapter_24.html#positional-arguments
set positional-arguments

# print all available commands by default
default:
    @just --list
```
In mise you wouldn't need that preamble: `set positional-arguments` is just how it behaves normally, and `mise run` doesn't just show available commands—it's also a selector UI.
I don't have my work laptop to hand to compare, but I usually run "just" to get a list of commands and what they do, rather than "just --list". Hope that saves you 7 key presses going forwards.
This is one of the most important pieces of software in my development stack: it "just" gets out of the way and does what it's supposed to do. Also has excellent Windows[1] support, so I can take it everywhere!
> I get that your project is Windows-only, but many projects aren't.
Nit: At this point you're better off starting a separate comment thread since you yourself already know that what you are about to talk about is not what my comment is talking about.
> Wait, by "has excellent Windows support" you mean you have to set it to use Powershell or hope `sh` is installed on
I don't get what the problem is here? Do you protest against shebangs too? Why does a build script for a Windows-only app need to use sh instead of PowerShell? I think you're interpreting "excellent Windows support" to mean cross-platform, and that's not what it means.
> So not only do you need just installed, which is yet another dependency,
Yeah, if you want to use some software, your computer needs that software. That's not a dependency. So we're talking zero dependencies, or one if you absolutely need sh.
You can use the usual cmd (I do); you're not limited to PowerShell. Also, you do understand that if a tool has first-class support for Windows, that does mean it prioritizes Windows tools, right? Imagine I made a command runner and said it has "excellent Linux support", and then someone comes along and complains that you have to install PowerShell on Linux to use Windows recipes.
You can have Windows only recipes and Linux only recipes.
Furthermore, if you have bash installed on Windows (e.g. via git bash), you can put a shebang in your recipes to use bash.
We develop on Windows and deploy on Linux. Most of our recipes work on both OSes, using either bash or Python for the recipe. The few that don't, we just mark as Windows-only or Linux-only so they're not available on the wrong OS.
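A sketch of how those two points look in a justfile (recipe bodies and script names are invented; `[windows]`/`[linux]` are just's OS attributes):

```just
# OS-gated recipes: only the matching one is available on each platform
[linux]
deploy:
    ./deploy.sh

[windows]
deploy:
    powershell -File deploy.ps1

# With bash on PATH (e.g. via git bash), a shebang recipe runs under
# bash even on Windows
backup:
    #!/usr/bin/env bash
    tar czf backup.tar.gz src/
```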
> So not only do you need just installed, which is yet another dependency,
You do realize that Windows by default comes with almost no development tools, right? So yes, you do actually need to install things to get work done. The horror.
I'll also note that while you complain about just, you provide no alternative.
You can keep your commands simple enough so that they can be executed by both `sh` and `cmd.exe`. If you need anything more complex than invoking other programs, `&&`, `|` and `>`, it's time to rewrite your build script in a real programming language anyway.
I'm not a fan. It works well for what it is, but what it is is an additional language to know in a place where you probably already have one lying around.
Also, like make, it encourages an imperative mode for project tooling and I think we should distance ourselves from that a bit further. It's nice that everybody is on the same page about which verbs are available, but those verbs likely change filesystem state among your .gitignored files. And since they're starting from an unknown state you end up with each Just command prefixed by other commands which prepare to run the actual command, so now you're sort of freestyling a package manager around each command in an ad-hoc way when maybe it's automation that deserves to be handled without depending on unspecified state in the project dir.
None of this is Just's fault. This is people using Just poorly. But I do think it (and make) sort of place you on a slippery slope. Wherever possible I'd prefer to reframe whatever needs doing as a build and use something like nix which is less friendly up front, but less surprising later on because you know you're not depending on the outputs of some command that was run once and forgotten about--suddenly a problem because the new guy can't get it to work and nobody else remembers why it works on theirs.
I find declarative build systems end up pretty frustrating in practice. What I want from a build often isn't the artifacts but the side effects of producing the artifacts, like build output or compilation time. You get this "for free" from an imperative tool, but it represents a significant feature in a declarative system, one that's usually implemented badly if it's implemented at all. The problem gets worse the smarter your tool is.
Logs emitted during the build, or test results, or metrics captured during the build (such as how long it took)... these can all themselves be build outputs.
I've got one where "deploying" means updating a few version strings and image references in a different repo. The "build" clones that repo and makes the changes in the necessary spots and makes a commit. Yes, the side effect I want is that the commit gets pushed--which requires my ssh key which is not a build input--but I sort of prefer doing that bit by hand.
I agree, but `Just` as an incremental improvement is a much easier sell to teams than asking them to think about their builds completely differently and rewrite everything to fit that.
Offering a cave man a flashlight is probably more helpful than offering them a lightbulb and asking them to wire up the cave to power it :D
I did that for 10+ years and got fed up with having to remember which names I gave to my scripts that month. I gradually evolved my views, and that got reflected in the names of the scripts.
`just` helped me finally move away from that. Now I have e.g. `just check` in projects in different languages that all do the same thing -- check types and/or run various linters. I can go into a directory and run `just check` and know I have taken care to have all the checks that I want in place. Similarly I can run `just test` and know I'll have the test suite run, again regardless of the programming language or framework.
Absolutely nothing wrong with a directory full of scripts but I lost patience for having to scan what each does and moved away from them.
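The convention described above, sketched for a hypothetical Python project (tool choices are illustrative; a Rust project's justfile would expose the same verbs backed by cargo instead):

```just
# same verbs in every repo, whatever the language underneath
check:
    mypy .
    ruff check .

test:
    pytest
```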
(For those who haven't used it, fzf is a fuzzy-searchable menu for the command line. You pipe lines of input to it, and it shows them in a menu. You start typing and it fuzzy searches the menu and selects the best match. Then you press Enter to pipe that out, or Tab for multi-select. It's fantastic.)
I have convenience functions in my profile script that pipe different things to fzf...scripts, paths in the current directory to copy to the clipboard, etc. It's indispensable.
Bonus: progressive enhancement. If someone doesn't have fzf/those convenience functions, it's just a directory with shell scripts, so they don't have to do anything special to use them.
That works too. I've done both and I currently use Just because it collects the entrypoints to the project into a single file. This can provide an advantage where there's a bit of interdependence across your entrypoints.
E.g.: you have a docker container; you might be `run`ning it, `exec`ing it, etc. from the same compose file. So Just gives you the ability to link those shared commands within the same file. Once the entrypoints get too numerous you can either break them into scripts (I do this partially, depending on the level of behavioral complexity in the script) or partition your justfiles and import them into a single master.
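A sketch of that sharing (service and file names are made up), using just variables to keep the compose invocation in one place:

```just
compose := "docker compose -f docker-compose.yml"
svc := "app"

run:
    {{compose}} up -d {{svc}}

shell:
    {{compose}} exec {{svc}} /bin/sh

logs:
    {{compose}} logs -f {{svc}}
```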
Well, for one, your recipes can be in another language (e.g. Python).
You can build complex recipes out of simpler ones. Sure, you could do that by creating a new shell script that calls other shell scripts, but then you're reinventing just.
You don't need to be in the directory to run those scripts.
I think a better question for you: what's the benefit of putting .PHONY recipes in Makefiles, when you could just have a directory full of shell scripts? If you find yourself using .PHONY recipes, then you already have a reason to use just.
It's a different approach; neither is better or worse, people simply have preferences.
And all other features aside, it seems to be able to call commands from any subdirectory in a project, which is actually nice compared with a normal shell. I mean, you can replicate this with some lines of shell scripting, but not everyone seems to maintain an elaborate $BIN of personal tools.
1. The language is extremely simple and is consistent.
2. I agree on having to move away from imperative and go for declarative (if the latter was what you had in mind) -- any ideas for a better tool that does that and is just as easy to learn?
3. RE: cobbling together stuff with and around `just`: that's relatively trivial to fix. For example, I have my own `just` recipes to bring up the entire set of dev dependencies for the project at hand, and then to tear them down. It's a very small investment and you get a lot of ROI.
4. RE: Nix, nah, if that's your sales pitch for it and against `just` then I'll just strongly disagree. Nix is a mess, has confusing cutesy naming terminology, has a big learning curve and a terrible language. All of that would be fine, mind you, and I could muscle through it easily but the moment I received several cryptic error messages that absolutely did not tell me what I did wrong and I had to go to forums and get yelled at, is the moment I gave up. `just` is simply much easier and I am not worried about not having Nix-like environments for my projects. Docker + compose work very well for this.
Finally, your point about an obscure single command that people forget about in the future applies to literally any and all task runners and dependency managers, Nix included. That's not a valid criticism towards `just` IMO.
1. It's a fine language, but I have all kinds of "works on my machine" problems with it because it has no associated dependency manager. Other languages solve this with lockfiles and such, and it's likely that you're already doing that with one of those same languages in the same project. So just... use the main language for whatever it is.
2. No, nothing's so easy, but you can get more if you're willing to work for it, and I think the juice is worth the squeeze.
3. For runtime state, I find that using just as a wrapper around Tilt or docker-compose or k3d or whatever just hides the perfectly adequate interfaces that those tools have. The wrapper discourages deeper tinkering with those tools. It's not a particularly difficult layer of abstraction to pierce, but it doesn't buy you enough to justify having an additional layer at all.
4. In the case I'm thinking of, the whole team was working happily because they had used a Just recipe to download a file from a different repo, and then somebody removed the recipe, but everyone (except the new guy) had the file from months ago, which worked. Nix wouldn't have let us accidentally get into a broken state and not know it. It would have broken as soon as we removed the derivation for the necessary file. I sent him the file through Slack and then he was able to work, and he only discovered later how it got there on my machine. That kind of uncertainty leads to expensive problems eventually.
I never get this criticism. Nix is a pretty nice, small, functional programming language and lazy evaluation makes it really powerful (see e.g. using fixed-points for overlays). I wonder if this criticism comes from people who have never done any functional programming?
When I first started with Nix six years ago, the language was one of the things I immediately liked a lot. What I didn't like was the lack of documentation for all the functions, hooks, etc. in nixpkgs, though it certainly got better with time.
Maybe because it's been many years since I used C or C++ for anything serious, but I don't get that impression from using make in the first place. I haven't seen it used for setting up a build environment per se, so there aren't any "packages" for it to manage. When I've written a Makefile, I saw it as describing the structure of cache files used by the project's build process. And it felt much more declarative than the actual code. At the leaves, you don't tell it to check file timestamps; you tell it which files' timestamps need to be up to date, and let it infer which timestamps need to be compared and what the results of those comparisons need to be in order to trigger a rule. Similarly, a rule feels composed of other rules, more than it feels implemented by invoking them.
> like make, it encourages an imperative mode for project tooling and I think we should distance ourselves from that a bit further.
Um, what? `make` is arguably the most common declarative tool in existence ...
Whenever people complain about Make in detail, it's almost always either because they're violating Paul's Rules of Makefiles or because they're actually complaining about autotools (or occasionally cmake).
It's quite easy to accidentally write makefiles that build something different when you run them a second time, or when some server that used to be reliable suddenly goes down. Or when the user upgrades something that you wouldn't think is related.
It does no validation of inputs. So suppose you're bisecting your way towards the cause of a failure related to the compiler version. Ideally there would be a commit which changed the compiler version, so your bisect would find a nice neat boundary in version history where the problem began. Make, by contrast, is just picking up whatever it finds on the PATH and hoping for the best. So the best you can do is exclude the code as the source of the bug and start scratching your head about the environment.
That willingness to just pick up whatever it finds and make changes wherever it wants, with no regard to whether the dependencies created by these state changes are made explicit and transparent to the user, is what I mean by "imperative".
Make isn't, at all, declarative. It's almost entirely based on you writing out what to invoke, as opposed to what should exist and having the build system "figure that out".
That is, in make you say `$(CC) -c foo.c -o foo.o`, which spells out, ultimately, how to compile the thing, while in declarative build systems (bazel/nix/etc.) you say "this is a cc_binary" or "this is a cc_library" and let the system figure the rest out for you.
If you don't want to buy into the whole Nix philosophy, you can also use something like 'shake' (https://shakebuild.com/) to build your own buildsystem-like command line tooling.
I love just. The main benefit for me at work is that it's much easier to convince others to use, unlike make.
I like make just fine, and it's useful to learn, but it's also a very opaque language to someone who may not even have very much shell experience. I've frequently found Makefiles scattered around a repo – which do still work, to be clear – with no known ownership, the knowledge of their creation lost with the person who wrote them and subsequently left.
I'm hoping for this effect, as more and more I work with people who don't consider `make` the default (or, more often, have never heard of it).
But I think the hard part -- for any build system -- is achieving the ubiquity `make` had back in the day. You could "just" type "make" and you'd either build the project, or get fast feedback on how much that project cared about developers.
I've used Just at a workplace on a project I didn't start. It seemed slightly simpler than make when putting together task dependencies. But I couldn't figure out what justifies using it over make.
For me, it's a fit-for-purpose issue. Make is great when you're creating artifacts and want to rebuild based on changes. Just is a task runner, so while there's a notion of dependent tasks, there's no notion of dependent artifacts. If you're using a lot of .PHONY targets in a Makefile, you're mostly using it as a task runner -- it works, but it's not ergonomic.
I like that just will search upward for the nearest justfile, and run the command with its directory as the working directory (optional -- https://just.systems/man/en/attributes.html -- with fallback available -- https://just.systems/man/en/fallback-to-parent-justfiles.htm...). For example, I might use something like `just devserver` or `just testfe` to trigger commands, or `just upload` to push some assets -- these commands work from anywhere within the project.
My life wouldn't be that different if I just had to use Make (and I still use Make for some tasks), but I like having a language-agnostic, more ergonomic task runner.
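For reference, a sketch of those two options (recipe names are invented): `set fallback` lets just defer to a justfile in a parent directory when a recipe isn't found, and the `[no-cd]` attribute keeps a recipe in the invocation directory instead of changing to the justfile's:

```just
set fallback

# runs in whatever directory you invoked just from, not the project root
[no-cd]
format:
    prettier --write .

devserver:
    npm run dev
```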
Just a quick note for interested readers: you don't need to explicitly mark things as .PHONY in make, unless your Makefile lives next to files/folders with the same name as your targets. So unless you had some file called "install" in the same folder, you wouldn't need to have something like ".PHONY: install".
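For instance (hypothetical target and paths), the guard only matters when a file of the same name exists:

```make
# Only needed if a file or directory named "install" sits next to the
# Makefile; without .PHONY, make would see it and report "'install' is
# up to date" instead of running the commands.
.PHONY: install
install:
	cp mytool /usr/local/bin/
```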
make is a build system and has a lot of complexity in it to make it optimal (or at least attempt to) for that use case.
just is a "command runner" and functionally the equivalent of packing up a folder full of short scripts into a single file with a little bit of sugar on top. (E.g., by default every script is executed with the CWD being the folder the justfile is in so you don't need to go search for that stackoverflow answer about getting the script's folder and paste that in the top of every script.)
If you use just as a build system, you're going to end up reimplementing half of make. If you try and use make as a command runner, you end up fighting it in many ways because you're not "building" things.
I've generally found the most value in just in situations where shell is a good way to implement whatever I'm doing, but it's grown large enough that it could benefit from some greater organization.
Being able to write your recipes in another language.
Not having to be in the directory where the Makefile resides.
Being able to call a recipe after the current recipe with && syntax.
Overall lower mental burden than make. make is very complex. just is very simple. If you know neither of the two, you'll get going much faster with just.
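A sketch of the `&&` point above (recipe names are made up): dependencies listed before `&&` run first, while those after it run once the recipe body has succeeded:

```just
build:
    ./compile.sh

report:
    echo "all checks passed"

# `build` runs before the body; `report` runs after it succeeds
test: build && report
    ./run-tests.sh
```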
> You can disable this behavior for specific targets using make’s built-in .PHONY target name, but the syntax is verbose and can be hard to remember.
I think this is overstating things a bit. I first read `.PHONY` in a Makefile while I was a teenager, and I figured out what it does just by looking at it in practice.
Makefiles do have some weirdness (e.g. tab being part of the syntax) but `.PHONY` is not one of them.
The manual states that "just is a command runner, not a build system," and mentions "no need for .PHONY recipes!" This seems to suggest that there's no way to prevent Just from rebuilding targets, even if they are up-to-date. For me, one of the key advantages of using Make is its support for incremental builds, which is a major distinction from using a plain shell script to run some commands.
For me, it's not needing to chain a lot of commands with && to ensure the script fails at the first command that fails. With just, if one of the commands in a recipe fails, it stops.
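A small illustration of that (commands are made up): each recipe line is a separate step, and just aborts the recipe at the first nonzero exit, where a plain shell script would need `set -e` or explicit `&&` chains for the same behavior:

```just
deploy:
    cargo build --release
    ./push-artifacts.sh   # never runs if the build above fails
```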
I saw many projects like this a while ago, and although they all seemed great, I kept wondering why I need such a complex thing just to save and run a bunch of scripts.
I ended up building my own script runner, fj.sh [1]. It's dead simple, you write your scripts using regular shell functions that accept arguments, and add your script files to your repos. Run with "fj myfunc myarg ...". Installation is basically downloading an executable shell script (fj.sh) and adding it to your PATH. Uninstall by removing it. That's all.
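The pattern described can be sketched in a few lines of portable shell (function names are invented; this is not the actual fj.sh source):

```shell
#!/bin/sh
# Each "script file" is just ordinary shell functions taking arguments.
greet() {
    echo "hello, $1"
}

clean() {
    echo "removing build artifacts"
}

# Dispatch: the first CLI argument names the function to run;
# the remaining arguments are passed through to it unchanged.
"$@"
```

Saved as e.g. `tasks.sh`, running `sh tasks.sh greet world` prints `hello, world`.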
I'm not saying 'just' is bad—it is an awesome, very powerful tool, but you don't always need that much power, so keep an eye on your use case, as always.
'Just' too was simple at the beginning [1], but with time and usage things always become more complex than some script you write for your own specific use case.
https://github.com/jstrieb/just.sh
Previous HN discussion: https://news.ycombinator.com/item?id=38772039
[0] https://just.systems/man/en/functions.html
The self-documenting aspect is what puts just above a folder of shell scripts for me.
[1]: https://github.com/LGUG2Z/komorebi/blob/master/justfile, an example justfile on my biggest and most active Windows project. It might not seem like a lot, but this has probably cumulatively saved me months of time.
From the docs
Few things work seamlessly across platforms, and that does not seem like a huge burden.
Weirdest rant ever.
You frequently build things not to get binaries but to spend time compiling?
Offering a cave man a flashlight is probably more helpful than offering them a lightbulb and asking them to wire up the cave to power it :D
I did that for 10+ years and got fed up with having to remember which names I gave to my scripts that month. I gradually evolved my views and that got reflected with the names of the scripts.
`just` helped me finally move away from that. Now I have i.e. `just check` in projects in different languages that all do the same thing -- check types and/or run various linters. I can go in a directory and run `just check` and I know I have taken care to have all checks that I want in place. Similarly I can run `just test` and I know I'll have the test suite ran, again regardless of the programming language or framework.
Absolutely nothing wrong with a directory full of scripts but I lost patience for having to scan what each does and moved away from them.
(For those who haven't used it, fzf is a fuzzy-searchable menu for the command line. You pipe lines of input to it, and it shows them in a menu. You start typing and it fuzzy searches the menu and selects the best match. Then you press Enter to pipe that out, or Tab for multi-select. It's fantastic.)
I have convenience functions in my profile script that pipe different things to fzf...scripts, paths in the current directory to copy to the clipboard, etc. It's indispensable.
Bonus: progressive enhancement. If someone doesn't have fzf/those convenience functions, it's just a directory with shell scripts, so they don't have to do anything special to use them.
E.g: You have a docker container, you might be `run`ning it, `exec`ing it etc. from the same compose-file. So Just gives you the ability to link those shared commands within the same file. Once the entrypoints get too numerous you can either break them into scripts (I do this partially depending on the level of behavioral complexity in the script) or partition your justfiles and import them into a single master.
You can build complex recipes out of simpler ones. Sure, you could do that by creating a new shell script that calls other shell scripts, but then you're reinventing just.
You don't need to be in the directory to run those scripts.
I think a better question for you: What's the benefit of putting .PHONY recipes in Makefiles, when you could just have a directory full of shell scripts. If you find yourself using .PHONY recipes, then you already have a reason to use just.
And all other features aside, it seems to be able to call commands from any subdirectory in a project, which is actually nice compared with a normal shell. I mean, you can replicate this with some lines of shellscripting, but not everyone seems to maintain an elaborated $BIN of personal tools.
2. I agree on having to move away from imperative and go for declarative (if the latter was what you had in mind) -- any ideas for a better tool that does that and is just as easy to learn?
3. RE: cobbling together stuff with and around `just` is relatively trivial to fix f.ex. I have my own `just` recipes to bring up the entire set of dev dependencies for the project at hand, and then to tear them down. It's a very small investment and you get a lot of ROI.
4. RE: Nix, nah, if that's your sales pitch for it and against `just` then I'll just strongly disagree. Nix is a mess, has confusing cutesy naming terminology, has a big learning curve and a terrible language. All of that would be fine, mind you, and I could muscle through it easily but the moment I received several cryptic error messages that absolutely did not tell me what I did wrong and I had to go to forums and get yelled at, is the moment I gave up. `just` is simply much easier and I am not worried about not having Nix-like environments for my projects. Docker + compose work very well for this.
Finally, your point about an obscure single command that people forget about in the future applies to literally any and all task runners and dependency managers, Nix included. That's not a valid criticism towards `just` IMO.
2. No, nothing's so easy, but you can get more if you're willing to work for it, and I think the juice is worth the squeeze.
3. For runtime state, I find that using just as a wrapper around Tilt or docker-compose or k3d or whatever just hides the perfectly adequate interfaces that those tools have. The wrapper discourages deeper tinkering with those tools. It's not a particularly difficult layer of abstraction to pierce, but it doesn't buy you enough to justify having an additional layer at all.
4. In the case I'm thinking of, the whole team was working happily because they had used a Just recipe to download a file from a different repo. Then somebody removed the recipe, but everyone (except the new guy) still had the file from months ago, which worked. Nix wouldn't have let us accidentally get into a broken state without knowing it; it would have broken as soon as we removed the derivation for the necessary file. I sent him the file through Slack and then he was able to work, and only later discovered how it had gotten onto my machine. That kind of uncertainty leads to expensive problems eventually.
I never get this criticism. Nix is a pretty nice, small, functional programming language and lazy evaluation makes it really powerful (see e.g. using fixed-points for overlays). I wonder if this criticism comes from people who have never done any functional programming?
When I first started with Nix six years ago, the language was one of the things I immediately liked a lot. What I didn't like was the lack of documentation for all the functions, hooks, etc. in nixpkgs, though it certainly got better with time.
Um, what? `make` is arguably the most common declarative tool in existence ...
Whenever people complain about Make in detail, it's almost always either because they're violating Paul's Rules of Makefiles or because they're actually complaining about autotools (or occasionally cmake).
It does no validation of inputs. So suppose you're bisecting your way towards the cause of a failure related to the compiler version. Ideally there would be a commit which changed the compiler version, so your bisect would find a nice neat boundary in version history where the problem began. Make, by contrast, is just picking up whatever it finds on the PATH and hoping for the best. So the best you can do is exclude the code as the source of the bug and start scratching your head about the environment.
That willingness to just pick up whatever it finds and make changes wherever it wants, with no regard to whether the dependencies created by these state changes are made explicit and transparent to the user, is what I mean by "imperative".
That is, in make you say `$(CC) -c foo.c -o foo.o`, which is ultimately telling it how to compile the thing, while in declarative build systems (Bazel/Nix/etc.) you say "this is a cc_binary" or "this is a cc_library" and you let the tool figure the rest out for you.
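For instance, a hypothetical Bazel equivalent declares *what* the targets are and lets the tool derive the compiler invocations (target names invented for illustration):

```starlark
# BUILD.bazel -- Bazel infers the compile and link commands from these
cc_library(
    name = "util",
    srcs = ["util.c"],
    hdrs = ["util.h"],
)

cc_binary(
    name = "app",
    srcs = ["main.c"],
    deps = [":util"],
)
```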
make is a build system: it has targets, it has file deps, a dag resolver, etc.
But a task runner is basically a fancy aliaser with task deps and arg parsing/proxying.
And just is good at being that. Although I agree: I'm not a fan of adding yet another DSL.
I like make just fine, and it's useful to learn, but it's also a very opaque language to someone who may not have much shell experience. I've frequently found Makefiles scattered around a repo – which do still work, to be clear – with no known ownership, the knowledge of their creation lost with the person who wrote them and subsequently left.
But I think the hard part -- for any build system -- is achieving the ubiquity `make` had back in the day. You could "just" type "make" and you'd either build the project, or get fast feedback on how much that project cared about developers.
I like that just will search upward for the nearest justfile, and run the command with its directory as the working directory (optional -- https://just.systems/man/en/attributes.html -- with fallback available -- https://just.systems/man/en/fallback-to-parent-justfiles.htm...). For example, I might use something like `just devserver` or `just testfe` to trigger commands, or `just upload` to push some assets -- these commands work from anywhere within the project.
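For illustration, a hedged sketch of those two features together (recipe names and bodies are invented):

```just
# opt into falling back to a parent justfile for recipes not defined here
set fallback := true

# by default, recipes run with the justfile's directory as the working directory
devserver:
    npm run dev

# the [no-cd] attribute opts a recipe out, running it from wherever `just` was invoked
[no-cd]
upload:
    aws s3 sync ./assets s3://example-bucket
```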
My life wouldn't be that different if I just had to use Make (and I still use Make for some tasks), but I like having a language-agnostic, more ergonomic task runner.
just is a "command runner" and functionally the equivalent of packing up a folder full of short scripts into a single file with a little bit of sugar on top. (E.g., by default every script is executed with the CWD being the folder the justfile is in, so you don't need to go search for that stackoverflow answer about getting the script's folder and paste it at the top of every script.)
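The boilerplate being alluded to is something like this sketch: force a script to run from its own directory, no matter where it was invoked from:

```shell
#!/bin/sh
# resolve the directory containing this script, then cd into it --
# the snippet just makes unnecessary by doing this for every recipe
script_dir=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
cd "$script_dir" || exit 1
echo "running from: $script_dir"
```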
If you use just as a build system, you're going to end up reimplementing half of make. If you try and use make as a command runner, you end up fighting it in many ways because you're not "building" things.
I've generally found the most value in just in situations where shell is a good way to implement whatever I'm doing, but it's grown large enough that it could benefit from some greater organization.
Ah, a fellow Person of Culture.
Being able to write your recipes in another language.
Not having to be in the directory where the Makefile resides.
Being able to call a recipe after the current recipe with && syntax.
Overall lower mental burden than make. make is very complex. just is very simple. If you know neither of the two, you'll get going much faster with just.
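To make the first and third points concrete, a hedged justfile sketch (recipe names and scripts are invented):

```just
# a shebang recipe runs its body in another language entirely
stats:
    #!/usr/bin/env python3
    import os
    print(len(os.listdir(".")), "entries in the project root")

# `&&` separates prior dependencies from recipes that run afterwards:
# build runs first, then deploy's own body, then notify
deploy: build && notify
    ./scripts/deploy.sh

build:
    ./scripts/build.sh

notify:
    echo "deploy finished"
```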
The question is if those reasons are convincing to someone. The big advantage of Make is that it is probably already installed.
I think this is overstating things a bit. I first read `.PHONY` in a Makefile while I was a teenager, and I figured out what it did just by seeing it used in practice.
Makefiles do have some weirdness (e.g. tab being part of the syntax) but `.PHONY` is not one of them.
...unless you're on Windows, like me!
I ended up building my own script runner, fj.sh [1]. It's dead simple: you write your scripts as regular shell functions that accept arguments, and add your script files to your repos. Run with "fj myfunc myarg ...". Installation is basically downloading an executable shell script (fj.sh) and adding it to your PATH. Uninstall by removing it. That's all.
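The pattern described (not fj.sh's actual source) might be sketched like this, with made-up task names:

```shell
#!/bin/sh
# tasks are ordinary shell functions taking ordinary arguments
greet() {
  printf 'hello, %s\n' "${1:-world}"
}

clean() {
  echo "removing build artifacts (sketch only)"
}

# dispatcher: the first argument names the function, the rest pass through
run_task() {
  task="$1"; shift
  case "$task" in
    greet|clean) "$task" "$@" ;;
    *) echo "unknown task: $task" >&2; return 1 ;;
  esac
}

run_task greet "Alice"   # prints: hello, Alice
```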
I'm not saying 'just' is bad—it is an awesome, very powerful tool, but you don't always need that much power, so keep an eye on your use case, as always.
[1] github.com/gutomotta/fj.sh