Imagine you get a lengthy help description, pipe it to less, and all you see in your terminal is (END). Turns out the author decided to print the help message to stderr instead of stdout. I assume newcomers will be as confused as I was the first time it happened to me. GNU utilities use stdout for help text, and so should you.
(Of course, an alternative argument is that commands should fail silently but emit a nonzero return value.)
When help is requested directly, as with '-h', '--help', etc., the help output should go to stdout, not stderr.
Stack Overflow has tackled this question; the 2nd answer follows the course I suggest:
<https://stackoverflow.com/questions/1068020/app-help-should-...>
And in this case, the first response:
<https://stackoverflow.com/questions/2199624/should-the-comma...>
I'm looking for any specific guidance from, e.g., GNU but am not finding any.
More than justifiable, I'd say it's the correct thing to do in that case. Otherwise, the caller (which can be another script) may end up working with the help message thinking it was the output it expected.
The whole rule should be something like "Print to stdout if it's part of what's asked by the caller. Print to stderr if it wasn't asked but the user should know about it." So outputting it to stdout should happen when it's asked via --help, and outputting it to stderr should happen when it's part of an error.
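That rule can be sketched in a few lines of shell. This is a toy command (`greet` is an invented name) that follows the split: requested help goes to stdout with success, error-triggered usage goes to stderr with failure:

```shell
#!/usr/bin/env bash
# Hypothetical tool "greet", illustrating the rule above.
greet() {
  case "$1" in
    -h|--help)
      echo "usage: greet [-h] NAME"        # asked for: stdout, success
      return 0 ;;
    "")
      echo "greet: missing NAME" >&2       # not asked for: stderr
      echo "usage: greet [-h] NAME" >&2
      return 64 ;;                         # EX_USAGE, from sysexits
    *)
      echo "Hello, $1" ;;
  esac
}

greet --help             # usage arrives on stdout
greet Alice              # prints: Hello, Alice
```

With this shape, `greet --help | less` pages the help as expected, while a script calling `greet` with bad arguments sees an empty stdout and a nonzero status.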
Chatter on success reads like a cheesy sci fi script.
A message like "incorrect arguments, use --help" can itself go to stderr. Not --help itself, though.

Some GNU guidance is in the GNU Coding Standards:
https://www.gnu.org/prep/standards/standards.html#Command_00...
That does say that --help and --version should go to standard output.
The document also gives a list of common options; i.e., don't invent your own name for an option if something in that list matches.
ESR's a somewhat less reliable narrator on many topics these days, but his TAOUP remains useful, and indeed suggests "Rule of Repair: Repair what you can — but when you must fail, fail noisily and as soon as possible."
<https://www.catb.org/~esr/writings/taoup/html/ch01s06.html>
Not part of it, but not against it. It's useful to stay quiet when the program is meant for conditions and failure is normal. For example: `test`/`[`, `false`, `grep` (when no matches are found), etc. Also when the program is meant as a sort of wrapper to other programs, like `ssh localhost false`, `script -qec false /dev/null`, `true | xargs false`, etc.
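For instance, `grep` stays quiet on "no match" and communicates purely through its exit status, which is exactly what makes it usable as a condition:

```shell
# grep signals "no match" via exit status, printing nothing on stdout:
printf 'apple\nbanana\n' | grep cherry
echo "grep exit status: $?"          # prints: grep exit status: 1

# ...so it composes directly into conditionals:
if printf 'apple\n' | grep -q apple; then
  echo "found"
fi
```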
> A message like "incorrect arguments, use --help" can itself go to stderr. Not --help itself though.
I don't agree that it's incorrect to save the user the step of calling --help when it's obvious from an incorrect call that they need to see that info. But once you've decided that including the --help message in an error is right, it shouldn't go to stdout, since it isn't expected output there.
This isn't odd behavior either: including the --help message (or at least the synopsis) on stderr for incorrect options is what I see in utilities like GNU's `bash` and `grep`, and OpenBSD's `netcat`, for example.
https://www.gnu.org/prep/standards/html_node/_002d_002dhelp....
> The standard --help option should output brief documentation for how to invoke the program, on standard output, then exit successfully.
Since you can nest functions in Bash (did you know?), I usually have a help function within the main function that is called from both logic branches and just outputs to the right file descriptor.
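A minimal sketch of that pattern, with invented names; the nested help function takes the target file descriptor as an argument so both branches share one definition:

```shell
#!/usr/bin/env bash
main() {
  # Nested function: it gets defined the first time main() runs.
  usage() {
    # $1 names the file descriptor: 1 for requested help, 2 for errors.
    echo "usage: mytool [-h] FILE" >&"$1"
  }

  case "$1" in
    -h|--help) usage 1; return 0 ;;   # requested: stdout, success
    "")        usage 2; return 64 ;;  # misuse: stderr, failure
    *)         echo "processing $1" ;;
  esac
}

main --help          # usage on stdout
main somefile.txt    # prints: processing somefile.txt
```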
Yes, but they are not scoped to the parent:
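To illustrate what "not scoped to the parent" means: in Bash, a nested function only comes into existence when the enclosing function runs, and then it is global, not local to the parent:

```shell
#!/usr/bin/env bash
outer() {
  inner() { echo "from inner"; }
}

type inner >/dev/null 2>&1 || echo "inner not defined yet"
outer        # running outer executes the definition of inner
inner        # now callable from anywhere: prints "from inner"
```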
Probably the most profitable use for this is for individual functions to override some callback. But without even dynamic scope, you have no nice way of restoring the previous one, which could be one that some intermediate caller installed for itself.
It could be used to delay the definition of functions. Say that for whatever reason, we put a large number of functions into a file and don't need them all to be defined at once. There can be functions which, when invoked, define sets of functions.
A module could be written in which certain functions are intended to be clobbered by the user with its own implementation. A function which defines those functions to their original state would be useful to recover from a bad redefinition, without reloading that module.
There’s a “Share” link under each answer that you can use to link directly to it. In this case it’s impossible to know which answer you mean, because we don’t know what your “Sorted by” option is. And even then, the order changes over time.
The GNU Project has published tools of varying quality, based on who was around to write the tool, debug it, give feedback, etc. It is not the exemplar of high quality software. (But it's far from crap.) The important bit about GNU (and any other software) is that it was written to adhere to their uses. Other people have different requirements. Telling people to "write your software like GNU writes their software" is to misunderstand personal agency and one of the major points of open source software.
Your comments sound like you're saying "Software freedom means you're free to write software the way I want you to write software."
No thank you.
The word you’re looking for is “condescending”.
Regardless, there is no argument about software freedom to be made here.
You’re allowed, be it open or close source, to write and publish software that defies common, well-established conventions.
You can pretend that it’s some sort of first-amendment right to do so if you like, and attempt to pass off your unwillingness to write software that behaves properly as incompetence on the users’ end.
But whether anyone will be convinced by that is a separate question, and those who aren’t convinced certainly have the right to tell you, in turn, that your software sucks. This does not infringe on your right to write broken software.
"Exceptional event" is not a useful or well defined concept. A better concept is "error" or "unexpected result".
Asking for help is a request for information. The normal, non-error, expected result is that a bunch of text will show up on the output. It is entirely reasonable that the "next command in the pipe" might want to do something with that expected output.
I shouldn't have to guess whether or not you think the output I specifically requested is "exceptional", so it's entirely reasonable to expect that programs in general consistently put user-requested help on stdout.
You are of course free to write your software any way you want. And I'm free to think it's stupid, and to not use your software.
Yes. You're unlikely to like my software. I don't recommend you use it.
You really do not need to be such a grumpy elitist. People are not born with Unix knowledge already in their heads. Asking questions, raising doubts, and getting answers from more knowledgeable users is a very effective way of learning new things!
With that said, can you make an example of a legitimate use of `command1 --help | command2`, where `command2` does something useful and is not `less`?
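One mundane but real example: grepping an option-heavy tool's help text for the flag you half-remember (GNU `sort` here, but any such tool works):

```shell
# Which flag reverses the order again?
sort --help | grep -- --reverse

# How big is the help text, without paging it?
sort --help | wc -l
```

If the help went to stderr, both pipelines would silently see empty input.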
The problem isn't that you get a usage message when you ask for it. It's that you get a usage message (written to STDOUT) when you don't. Many commands will print the usage message when the command-line options specify a condition that can't be met. I find this frequently when ssh'ing into busybox-based systems. Busybox's find command is much less "refined" than comparable desktop OS finds (BSD & GNU/Linux).
So if I do something like:
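The command itself didn't survive the formatting here; a plausible reconstruction of the kind of pipeline meant (paths and filenames are illustrative):

```shell
# Hash some files and sort the result. On a desktop GNU/BSD system this
# works; if busybox find rejects an option and prints usage to *stdout*,
# the usage text flows straight into sort instead of hashes.
dir=$(mktemp -d)
printf 'a\n' > "$dir/one"
printf 'b\n' > "$dir/two"
find "$dir" -type f -exec md5sum {} + | sort
rm -r "$dir"
```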
And expect output that looks something like sorted lines of hashes, I will be sorely disappointed.

Yup, sounds pretty snotty.
I use the shell almost daily, and have shipped industry-leading products.
2>&1 is a vague memory because I'm not sure if I've ever done that; and I certainly shouldn't have to know some arcane shell trick to read the manual.
I use much more complicated pipe tricks than that interactively on a daily basis, and I definitely don't think of them as "arcane". As somebody who does that, it's useful to me to know which channel the data I want to pipe are going to come out on. Which is why help, which is normal requested output, should obviously go to stdout.
Usage messages issued in response to actual user errors are different, of course.
Also, if you needed to use it every day, I suspect it would be more familiar than a vague memory.
On the other hand, a system that has been designed coherently is much nicer to use.
But the flip side of this is, yes, cruft.
Uh-huh. And then you get developers that do this (from inside a f#@<ing library, of all things):
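The code sample was lost here; the shape of the offense, reconstructed as a sourced shell "library" with invented names:

```shell
#!/usr/bin/env bash
# mylib.sh -- imagine sourcing this from your script.
mylib_connect() {
  echo "[mylib] connecting..."      # progress chatter on *stdout*
  echo "[mylib] connected."         # more chatter on stdout
  echo "connection-id-42"           # the actual result, now buried in noise
}

# The caller can no longer capture just the result:
id=$(mylib_connect)
echo "captured: $id"                # three lines captured, not one
```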
If it's not an error, it is not exceptional, so it should go to stdout, right? Yay for personal agency!

Is there no value in following a convention?
This is sort of my hot button issue. After years of working on BSAFE, OpenSSL, firefox and libnss, I hate that people say "Hey. Great software. Here's a list of things you must add to it. Of course I'm not going to pay you."
Why should I change my code to adhere to someone else's conventions when they're in opposition to existing conventions?
Many tools, for consistency or out of laziness, always print usage to stderr. But that is better than always printing it to stdout. Errors should never go to stdout, and paging stderr can easily be done with 2>&1.
Edit: and maybe, if your --help output is several pages long, consider leaving the details to a manpage.
But it's also really useful to be able to get full synopsis of all the options even if all you have available is the binary. Some programs have a lot of options. The "--help" output for rsync on my machine is 184 lines, and is actually pretty terse.
... and there truly is no agreed-upon idea of what constitutes a "page". Even the VT100 screen size was never dominant enough to always count. And nowadays people's windows may be of almost any size.
But yeah, agree. It's way more preferable to have surprise output on stderr than surprises mingling with stdout, and it's good to be prepared for that.
Frankly, I nearly quit piping to a pager when GUI terminal backscroll became easy and infinite.
80x25
> I set all my terminals to 22x23.
You are a very silly man and silliness should not be catered for.
> Will --help run a few quick ioctls to calculate screen size?
Your terminal can wrap text around just fine. If it can't, ask for refund.
--help, when used correctly, is almost always interactive, where stdout/stderr and exit status don't matter at all. The few noninteractive uses, like help2man or zsh auto-parsing, can trivially handle a redirect. Sure, a noob piping --help to less may be confused the first time, but that's rare, and it's a good chance for them to learn about streams and redirection.
That leaves accidental noninteractive usage. Sooner or later someone will call your program with dynamic arguments from another program, and if your command accepts filenames/IDs there's always a chance to encounter one that starts with '-' and contains an 'h' (a practical example: YouTube video IDs). It's very easy to forget to add -- before the unsafe argument(s), so that it's accidentally interpreted as flags. Nonempty stdout, empty stderr and zero exit status makes it way too easy to accidentally accept the output as valid, only to discover much later.
This is not a theoretical concern; I've made this mistake myself and had it masked by -h behavior. A noob only needs to learn redirection once, in a totally harmless setting. Meanwhile, even the most seasoned expert can forget -- in a posix_spawn.
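A sketch of that failure mode with a toy stand-in (`tool` is an invented name; the ID shape mimics a YouTube-style ID starting with '-'):

```shell
#!/usr/bin/env bash
tool() {
  if [ "$1" = "--" ]; then
    shift
    echo "data-for:$1"              # everything after -- is data
    return 0
  fi
  case "$1" in
    -*) echo "usage: tool [--] ID"; return 0 ;;  # flag-like arg: help on stdout, exit 0
    *)  echo "data-for:$1" ;;
  esac
}

id="-hXw9q"          # dynamic ID that happens to start with '-'
tool "$id"           # forgot --: prints "usage: tool [--] ID", exits 0
tool -- "$id"        # with --: prints "data-for:-hXw9q"
```

Because the forgotten `--` yields nonempty stdout, empty stderr, and a zero exit status, the caller has nothing to trip over until much later.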
At the end of the day, this is not a big deal, but as I said, if you have to make a conscious choice, make the one that makes accidental mistakes more obvious, because humans do make mistakes. This principle applies everywhere.
However, if you used the tool incorrectly (passed the wrong args) and you expected the usage information to go to stdout rather than stderr, I would disagree vehemently. stdout is (generally) for parseable information, whereas stderr is kind of a garbage bin of everything else.
I think you meant stdout here, not stderr.
For example, many programs will print usage/help when used incorrectly. Imagine you upgrade the "read_reactor" tool, and your usage in your "control_reactor" becomes invalid - suddenly you're piping help message data to the control rods. By sending it to stderr instead, no bogus data would be piped and, as a bonus, you would see the help message after invoking your script because (as you have experienced) stderr is not piped by default.
If you want to send it to less: read_reactor -h 2>&1 | less
If you're following some standard that says it should go to stdout, then sure, it should go to stdout. But I don't take that standard as a given.
I agree with this remaining open after all of these years: https://github.com/commandlineparser/commandline/issues/399

OP should add 2>&1 before the pipe, or replace the pipe with |& (bash) or &| (fish).
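Concretely, in bash (`err` is a stand-in for any command that writes to stderr):

```shell
err() { echo "to stderr" >&2; }

err | cat            # pipe carries only stdout; the message escapes to the terminal
err 2>&1 | cat       # merge stderr into stdout first; cat now sees the message
# bash shorthand for the same merge:  err |& cat
```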
As others have noted, that's an error which shouldn't go to stdout.
But help text is not an error. It's arguably the expected and primary output of the help function.
These sound useful, in any event.
I'd also like to be able to pipe both STDOUT and STDERR to the next in a sequence of pipes, but eh
But stderr was designed to be seen on terminal regardless of piping or logging. It’s its purpose, so that a pipe user could see what’s wrong or what’s up. There may be a programmatic need to read stderr, but mixing it with stdout is only needed with programs that use these descriptors incorrectly.
The output of --help is not an error message, it's the legitimate and expected output of the program when invoked with that argument.