jacob2161 · 2 days ago
Stylistically, I much prefer

  #!/bin/bash
  set -o errexit   # same as set -e: exit on any command failure
  set -o nounset   # same as set -u: error on expanding unset variables
  set -o pipefail  # a pipeline fails if any command in it fails
It reminds me of when I wrote a lot of Perl:

  #!/usr/bin/perl
  use strict;
  use warnings;
I also prefer --long --args wherever possible, and a comment on any short flag unless it's incredibly common.
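
A quick illustration of the style, using common GNU tools:

  # long flags document themselves
  rsync --archive --compress --verbose src/ dest/

  # comment short flags that aren't universally known
  tar -C /tmp -xzf backup.tar.gz   # -C: cd to /tmp first, -x: extract, -z: gzip, -f: archive file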

I've been writing all my Bash scripts like this for ~15 years and it's definitely the way to go.

Waterluvian · 2 days ago
Yeah! I feel like long args very strongly fits with the overall idea of this blog post: when you are running a script you care about different things and want different behaviours than when you’re working interactively.

The short args are great for when you’re typing into a terminal. But when you’re writing and saving a script, be descriptive! One of the most irritating things about bash (and Linux, I guess?) is how magical all the -FEU -lh 3 -B 1 incantations are. They give off a vibe I call “nerd snobbish” where it’s a sort of “oh you don’t know what that all means? How fretful!”

Intralexical · 2 days ago
Long args are less portable, unfortunately. IIRC they're not POSIX at all, and they're also more likely to differ across GNU, BusyBox, BSD, and Mac tools.
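
A commonly cited example is sed's in-place flag:

    sed --in-place 's/foo/bar/' file.txt   # GNU sed only; BSD sed has no long options
    sed -i '' 's/foo/bar/' file.txt        # BSD/macOS: -i takes a separate (possibly empty) backup suffix
    sed -i 's/foo/bar/' file.txt           # GNU again; BSD would treat the script as the suffix
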
burnt-resistor · 2 days ago
Too damn verbose, and you're assuming bash is at /bin. This will cause problems on NixOS and in other environments where bash should be found on the PATH instead.

Always, always unless the script is guaranteed to not be portable to other platforms:

    #!/usr/bin/env bash
    ...

jacob2161 · 2 days ago
You're assuming env is in /usr/bin?

There are real environments where it's at /bin/env.

Running a script with

  bash script.sh
works on any system with bash in the PATH, doesn't require that the script be executable, and is usually the ideal way to execute it.

I don't really believe in trying to write portable Bash scripts. There's a lot more to it than just getting the shebang right. Systems and commands tend to have subtle differences and flags are often incompatible. Lots of branching logic makes them hideous quickly.

I prefer to write a script for each platform or just write a small portable Go program and compile it for each platform.

I'm almost always writing Bash scripts for Linux that run in Debian-based containers, where using /bin/bash is entirely correct and safe.

Regarding verbosity: with more experience, most people come to appreciate a little verbosity in exchange for clarity. Clever and compact code is almost never what you want in production.

kayson · 2 days ago
Or use shellcheck: https://www.shellcheck.net/
burnt-resistor · 2 days ago
Tl;dr: Use both, because they aren't mutually exclusive.

ShellCheck isn't a complete solution, and running in -e mode is essential for smaller bash files. ShellCheck even knows whether a script is in -e mode or not.
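
For example, ShellCheck flags unquoted expansions whether or not strict mode is on (a small sketch; SC2086 is the real warning code):

    #!/bin/bash
    set -euo pipefail
    target=$1
    ls $target   # ShellCheck: SC2086 -- Double quote to prevent globbing and word splitting.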

heybrendan · 2 days ago
From 2014 [1].

This seems to get posted at least once a year; hardly a complaint, though, as the discussion tends to be high quality.

My personal mantra: if it's over 10-20 lines, I should arguably be using another language, like Python (reaching for subprocess [2] if I'm in a hurry).

[1] https://web.archive.org/web/20140523002853/http://redsymbol....

[2] https://docs.python.org/3/library/subprocess.html

esafak · 2 days ago
It's time to stop using these archaic shells, and make sure the newer ones are packaged for and included with popular operating systems. Better scripting languages have been offered in shells for a long time: https://en.wikipedia.org/wiki/Scsh

These days I've settled on https://www.nushell.sh/

tux1968 · 2 days ago
I think https://oils.pub/ has a good shot at being the eventual replacement because it has a very strong transition story. Being backward compatible, while allowing you to progressively embrace a modern replacement, is pretty powerful.
ZYbCRq22HbJ2y7 · 2 days ago
I don't think Nushell is better than bash for day-to-day things. It is nice when you need it, though, and then you can just run it, like with any shell.
vivzkestrel · 2 days ago
Lots of arguments against using set -euo pipefail here: https://www.reddit.com/r/commandline/comments/g1vsxk/comment... Anything you want to say about this?
jacob2161 · 2 days ago
That post is quite nitpicky, pointing to edge cases and old version behavior. The essence of it is that writing reliable Bash scripts requires significant domain knowledge, which is true.

Bash scripts are great if you've been doing it for a very long time or you're doing something simple and small (<50 LOC). If it's complicated or large, you should just write it in a proper programming language anyway.
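
One classic edge case of the kind that post collects: `local` masks a failing command substitution, so set -e never sees it (sketch):

    set -e
    f() {
      local out=$(false)   # local's own exit status (0) wins; the failure is lost
      echo "still running"
    }
    f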

heresie-dabord · 2 days ago
> something simple and small (<50 LOC). If it's complicated or large, you should just write it in a proper programming language anyway.

Regardless of LOC, eject from bash/awk as soon as you need a data structure, and choose a better language.
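
Bash's ceiling here is low: associative arrays (bash 4+) are flat, can't nest, and don't serialize, e.g.:

    declare -A ports=([http]=80 [https]=443)   # bash 4+ only; macOS still ships bash 3.2
    echo "${ports[https]}"                      # 443
    # there's no array-of-arrays; the moment you need one, switch languages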

awestroke · 2 days ago
set -euo pipefail has been the one good thing about bash. I'll start looking at alternatives now.
gorgoiler · 2 days ago
The IFS part is misguided. If the author used double quotes around the array reference then words are kept intact:

  vs=("a b" "c d")
  for v in "${vs[@]}"
  do      #^♥      ♥^
    echo "$v"
  done
  #= a b
  #= c d
Whereas in their (counter-)example, with missing quotes:

  vs=("a b" "c d")
  for v in  ${vs[@]}
  do      #^!      !^
    echo "$v"
  done
  #= a
  #= b
  #= c
  #= d
To paraphrase the manual: Any element of an array may be referenced using ${name[subscript]}. If subscript is @ the word expands to all members of name. If the word is double-quoted, "${name[@]}" expands each element of name to a separate word.

degamad · 2 days ago
The author addresses that in footnote 2:

    > [2] Another approach: instead of altering IFS, begin the loop with for arg in "$@" - double quoting the iteration variable. This changes loop semantics to produce the nicer behavior, and even handles a few edge cases better. The big problem is maintainability. It's easy for even experienced developers to forget to put in the double quotes. Even if the original author has managed to impeccably ingrain the habit, it's foolish to expect that all future maintainers will. In short, relying on quotes has a high risk of introducing subtle time-bomb bugs. Setting IFS renders this impossible.
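
For reference, the two approaches side by side (the IFS value is the article's $'\n\t'):

    # the article's approach: unquoted expansions split only on newline/tab
    IFS=$'\n\t'
    for arg in $@; do echo "$arg"; done

    # the footnote's alternative: quote the expansion, leave IFS alone
    for arg in "$@"; do echo "$arg"; done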

gorgoiler · 2 days ago
Thanks, I didn’t see that.

I still think the advice is misguided. Double-quote semantics are a fundamental and important part of getting shell scripting right. Trying to bend the default settings so that they are more forgiving of mistakes feels worse than simply fixing those mistakes.

In terms of maintainability, fiddling with IFS feels awkward. It’s definitely something you’ll need to teach to anyone unfamiliar with your code. Teach them how "" and @ work, instead!

(I agree about maintenance being hard. sh and execve() are a core part of the UNIX API but, as another comment here suggests, for anything complex and long-lived it’s important to get up to a higher level language as soon as you can.)

chubot · 2 days ago
I use essentially this, but I think this post is over 10 years old (needs a date), and it's now INCOMPLETE.

bash introduced an option, years ago, to respect rather than ignore errors within command substitutions. So if you want to be safer, do something like:

    #!/bin/bash
    set -euo pipefail
    shopt -s inherit_errexit 
That works as-is in OSH, which is part of https://oils.pub/

(edit: the last time this came up was a year ago, and here's a more concrete example - https://lobste.rs/s/1wohaz/posix_2024_changes#c_9oo1av )
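
A quick demonstration of what inherit_errexit changes (it arrived in bash 4.4):

    set -e
    x=$(false; echo survived)
    echo "x=$x"
    # plain set -e:         prints x=survived (errexit is off inside $(...))
    # with inherit_errexit: the substitution stops at `false`, the assignment
    #                       returns 1, and set -e aborts the script here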

---

But that's STILL incomplete because POSIX mandates that errors be LOST. That is, it mandates broken error handling.

For example, there's what I call the "if myfunc" pitfall:

    set -e

    my-deploy-func  # errors respected

    if ! my-deploy-func; then   # errors lost
      echo failed
    fi
    my-deploy-func || echo fail  # errors lost
But even if you fix that, it's still not enough.

---

I describe all the problems in this doc, e.g. waiting for process subs:

YSH Fixes Shell's Error Handling (errexit) - https://oils.pub/release/latest/doc/error-handling.html

Summary: YSH fixes all shell error handling issues. This was surprisingly hard and required many iterations, but it has stood up to scrutiny.

For contrast, here is a recent attempt at fixing bash, which is also incomplete, and I argue is a horrible language design: https://lobste.rs/s/kidktn/bash_patch_add_shopt_for_implicit...

xelxebar · 2 days ago
I kind of feel like set -o errexit (i.e. set -e) provides enough unexpected semantics that explicit error handling makes more sense. One thing that often trips people up is this:

    set -e
    [ -f nonexistent ] && do_something
    echo 'this line runs'
but

    set -e
    f(){ [ -f nonexistent ] && do_something; }
    f
    echo 'This line does not run'
modulo some version differences. (The failing [ inside the && list is exempt from errexit in both cases; the difference is that the function call f is a plain simple command, so its nonzero return status does trigger set -e.)

chubot · 2 days ago
Yup, that is pitfall 8 here - https://oils.pub/release/latest/doc/error-handling.html#list...

I think I got that from the Wooledge Wiki

Explicit error handling seems fine in theory, and of course you can use that style with OSH if you want

But in practice, set -e seems more common. For example, Alpine Linux abuild is a big production shell script, and they gradually switched to set -e

(really, you're damned if you do, and damned if you don't, so that is a big reason YSH exists)

aidenn0 · 2 days ago
What are your thoughts on pipefail in bash? I know YSH (and maybe OSH too?) is smart about a -13 result from a pipe, but I stopped using pipefail since the return value can be based on a race, causing things to randomly not work at some point in the future.

[edit]

Per the link you posted, OSH treats a -13 result as success with sigpipe_status_ok, but I'm still interested in your thoughts on whether pipefail is better or worse to recommend as "always on" for bash.

chubot · 2 days ago
All my scripts use pipefail, and I haven't run into problems

I think `head` tends to cause SIGPIPE in practice, and that is what sigpipe_status_ok is for

> but I stopped using pipefail since the return value can be based on a race, causing things to randomly not work at some point in the future.

Hm I'd be interested in an example of this

The order in which processes in a pipeline finish is non-deterministic, but that should NOT affect the status, with or without pipefail.

That is, the status should be deterministic ... it's possible that SIGPIPE is an exception to that, though that wouldn't affect YSH.
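
The usual concrete case is `head` closing the pipe early:

    set -o pipefail
    seq 1000000 | head -n 1
    echo "status: $?"
    # head exits after one line; seq gets SIGPIPE and dies with status 141,
    # so pipefail reports the pipeline as failed. Whether the writer dies this
    # way can depend on timing when its output fits in the pipe buffer.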

cendyne · 2 days ago
If you ever try something like this with another bash-hackery tool like NVM, you will have a very bad time.
vivzkestrel · 2 days ago
Another great read on why you should NOT use this: https://mywiki.wooledge.org/BashFAQ/105