pedrovhb · 2 years ago
For an actually intentional, non-cursed version of this, see the nix-shell shebang [0]:

> #! /usr/bin/env nix-shell > #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor > > # scale image by 50% > import sys, PIL.Image, ansicolor > path = sys.argv[1] > image = PIL.Image.open(path) > factor = 0.5 > image = image.resize((round(image.width * factor), round(image.height * factor))) > path = path + ".s50.jpg" > image.save(path) > print(ansicolor.green(f"done {path}"))

Just `chmod +x` and you have an executable with all dependencies you specify!

[0] https://nixos.wiki/wiki/Nix-shell_shebang

nmz · 2 years ago
There's a 256-byte limit for #! lines, so this shouldn't work at all.

EDIT: Now I see it's badly formatted. Either way, be careful with #! size limits.

pronoiac · 2 years ago
Ah. I think two leading spaces fix this? I'll try:

  #! /usr/bin/env nix-shell
  #! nix-shell -i python3 -p python3Packages.pillow python3Packages.ansicolor
  
  # scale image by 50%
  import sys, PIL.Image, ansicolor
  path = sys.argv[1]
  image = PIL.Image.open(path)
  factor = 0.5
  image = image.resize((round(image.width * factor), round(image.height * factor)))
  path = path + ".s50.jpg"
  image.save(path)
  print(ansicolor.green(f"done {path}"))

zopa · 2 years ago
I use nix-shell, and mostly I love it. But it’s important to be aware that the above means “Go get the latest(*) versions of python, pillow and ansicolor and run this code in an environment where they’re available.” It doesn’t do any version-pinning of your dependencies. That might be what you want, but maybe not: it’s frustrating when a script that worked yesterday won’t work today, or will only work after some big download.

My own rule of thumb is that nix-shell is great for quick one-offs and for sharing environments. For local tools and anything else I’m sharing with my future self, it’s usually better to write a nix expression and install it, which gives me access to Nix’s (excellent) rollback system, and lets me upgrade on my schedule, not upstream’s.

* - ‘Latest’ according to whatever Nix channel checkout currently applies. Which you can change, of course, but the point is it’s external to the script.

YoshiRulz · 2 years ago
You can "pin" Nixpkgs with this style of invocation as well, see https://nixos.wiki/wiki/Nix-shell_shebang#Pinning_nixpkgs. But I agree that if you're writing a shell script (or small Python/Ruby scripts) that you'll be running often, it's better to package it (e.g. with writeShellScriptBin) and install to profile.
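For reference, the pinned variant looks roughly like this (a sketch based on the wiki page linked above; the channel name is a placeholder, and the wiki also shows pinning to an exact Nixpkgs tarball URL):

```
#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p python3Packages.pillow
#! nix-shell -I nixpkgs=channel:nixos-23.11
```

With `-I nixpkgs=...` in the shebang, the script resolves its packages against that checkout instead of whatever channel the host happens to be on.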
d0mine · 2 years ago
There are pip-run, pipx run, etc. for Python-specific use cases.
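For illustration, a pipx-style equivalent might look like this (a sketch; assumes a recent pipx with PEP 723 inline-metadata support, and `rich` is just a stand-in dependency):

```shell
# Write a self-contained Python script whose dependencies pipx resolves
# into a throwaway venv at run time (PEP 723 inline metadata).
cat > greet.py <<'EOF'
#!/usr/bin/env -S pipx run
# /// script
# dependencies = ["rich"]
# ///
from rich import print
print("[green]dependencies resolved by pipx[/green]")
EOF
chmod +x greet.py
# ./greet.py   # pipx builds the venv on first run, then executes the script
```

As with the nix-shell version, `chmod +x` is all it takes; the dependency list travels with the script.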
adastra22 · 2 years ago
I'm pretty sure TFA is “intentional” too. Isn't this the whole point of shebang?
miduil · 2 years ago
Totally, some practical use of that here as well:

https://dpc.pw/posts/nix-users-you-can-start-using-rust-scri...

jdxcode · 2 years ago
I documented how to do it with mise-en-place: https://mise.jdx.dev/tips-and-tricks.html#shebang
tomberek · 2 years ago
I'm a huge fan of the Nix shebang, and we now have a variant of it for the new CLI. Feedback on it would be appreciated.
pushedx · 2 years ago
The readme compares it to the cross-architecture Cosmopolitan libc, but Docker is anything but cross-platform. On any other platform besides Linux it requires a Linux VM.

Linux containers are great (and I run Linux as my desktop OS), just pointing out the not-so-efficient nature of considering this cross-platform.

doctorpangloss · 2 years ago
OCI image manifests can specify platforms and architectures. From the end user’s point of view it can be all the same invocation.

Docker natively supports Windows, and it is low lift to make native Windows images for many common programming environments.

Does anyone use it? No not really. It makes a lot of sense if you need Windows stack stuff that is superior to Linux, like DirectX, but maybe not so much for regular applications.

There is also the macOS containers project, which has a decent proof of concept: a containerd fork that runs macOS container images. In principle there is a shorter path of work for so-called host-process containers, but fully isolated containers for macOS do exist; they could work with e.g. Kubernetes, people want them, it makes sense, and it sort of already exists.

The difference between cross-platform and "cross-platform" in the sense you mean really comes down to having some absolutely gigantic company (Amazon or Google, literally top 10 in the world) push the thing into the social-media zeitgeist.

cowboyscott · 2 years ago
I really like what this script is doing - it's specifying system-level dependencies, a database schema, an interpreter, the code that runs on that interpreter, the data (on disk!) required by that code, and an invocation to execute the code, all in one script. That's amazing, and this is an excellent model for sharing a standalone application with non-trivial dependencies!

However, Docker is an OS-level virtualization. Docker natively supports Windows in the sense that there is a native app. That native app spins up Linux virtual machines, so the container is "native" to my Intel CPU with their virtualization extensions, but it is not native to Windows. I use it, which I say with no animus toward your original message.

edit: I was ignorant of native windows containers. I'm old and my brain still maps docker to lxc I guess. Apologies to OP - the DirectX line should have caught my attention.

duped · 2 years ago
chroot requires disabling SIP on macOS, so any kind of "container" that shares the kernel but has a mostly isolated userspace is never going to happen on macOS. If you want an isolated host environment on macOS, the bespoke approach is to use VZVirtualMachine. But the whole point of containerization is to not require virtualization, so that kind of defeats the purpose.

I really think people who "want" containers on macOS don't understand containers or what problem they solve, and if they think they need them, they should consider why they aren't already running their dev environment in Linux.

tomjen3 · 2 years ago
The main problem, I think, with Windows containers is that they are only really supported on Windows Server - which most developers don't have access to.

You can run them through Docker Desktop, but then why not just run the same containers you will be deploying on your server (which is most likely going to be Linux-based)?

I would love for MS to make containers the way to deploy programs to Windows, but that requires them to make the runtime part of the default install and to make it available on all the OSs.

pjmlp · 2 years ago
Plenty of Windows shops use Windows containers; on my side alone I can count five projects delivered into production using Windows containers.

Many App deployments in Azure also use Windows containers.

bionhoward · 2 years ago
Is directx superior to vulkan? Serious question from a graphics noob (who dislikes windows development)

Deleted Comment

Dead Comment

8organicbits · 2 years ago
I explored the idea of using the scratch image with a cosmopolitan binary to get something more cross-architecture, but you need a shell to run those binaries. I'd love to see cross architecture Docker images, if someone else can figure out a trick to make it work.
alganet · 2 years ago
Just use redbean and provide an init Lua file. Or use an interpreter provided by http://cosmo.zip (like Python, maybe even bash).

Each ape file is also a valid zip file. Add your dependencies as if the ape was an archive:

    zip -ur myape.com mydependency.anything
Also add a `.args` file:

    zip -ur myape.com .args
For this .args file, put one argument per line. This will run on start. You can use `/zip/mydependency.anything` to read from files, but if you have an executable dependency you'll need to extract it first (I use the host shell or host PowerShell for this).

You can do this with any software you can compile with cosmocc, by adding a call to LoadZipArgs[1] in the main function.

It's easy to get started, your ideas will branch out as soon as you start playing with it.

[1]: https://github.com/jart/cosmopolitan/blob/master/tool/args/a...

t0astbread · 2 years ago
I think parent was pointing out that you need Linux to run Docker (since it doesn't run natively on any other OS) which is different from what Cosmopolitan provides.

Edit: Ok, apparently it natively supports Windows for Windows containers and for everything else there's a Hyper-V integration. Not sure if you can write a portable Dockerfile script like that though.

Deleted Comment

willio58 · 2 years ago
Makes me wonder if containerization is even possible without a VM for non-Linux machines.
edgyquant · 2 years ago
I believe so, but only for the host OS. E.g., Mac containers work on a Mac, etc.
erik_seaberg · 2 years ago
Doesn’t Cosmopolitan rely on QEMU to emulate an x86_64 CPU when running on any other platform?
leonheld · 2 years ago
No, it doesn't. You're probably thinking of binfmt https://docs.kernel.org/admin-guide/binfmt-misc.html.
HumanOstrich · 2 years ago
No
Arch-TK · 2 years ago
Not to mention the non-standard -S flag to env which makes the shebang work.
pjmlp · 2 years ago
Not on Windows when using Windows containers.

Deleted Comment

adastra22 · 2 years ago
Doesn’t windows use WSL?
chx · 2 years ago
Docker Desktop runs either with Hyper-V or with WSL. https://docs.docker.com/desktop/install/windows-install/
voxic11 · 2 years ago
Not for windows containers. But no one really uses those anyways.
ric2b · 2 years ago
WSL is a Linux VM
riffic · 2 years ago
that's not necessarily true
noname120 · 2 years ago
The -S / --split-string option[1] of /usr/bin/env is a relatively recent addition to GNU Coreutils. It's available starting from GNU Coreutils 8.30[2], released on 2018-07-01.

Beware of portability: it relies on non-standard behavior in some operating systems. It only works on OSs that treat all the text after the first space as argument(s) to the shebanged executable, rather than treating the whole string as a single executable path (one that may happen to contain spaces).

Fortunately this non-standard behavior is more the norm than the exception: it works at least on modern GNU/Linux, BSDs, and macOS.

[1] https://www.gnu.org/software/coreutils/manual/html_node/env-...

[2] https://github.com/coreutils/coreutils/blob/b09dc6306e7affaf...
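A quick way to see the splitting in action (a sketch; assumes a GNU env >= 8.30, or a BSD env that supports -S):

```shell
# Without -S, Linux passes "bash -eu" to env as one single argument and env
# fails to find a program by that name; -S tells env to split the string.
cat > demo.sh <<'EOF'
#!/usr/bin/env -S bash -eu
echo "interpreter and flags were split correctly"
EOF
chmod +x demo.sh
./demo.sh
```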

riedel · 2 years ago
There are some ways of doing this more portably on Unix-like systems [0]

[0] https://unix.stackexchange.com/questions/399690/multiple-arg...

habitue · 2 years ago
Not to be negative, but is this warning of non-standardness for like, AT&T unix or something? Beyond Linux, macos, and BSDs, I'm assuming you're running an ancient mainframe or something and are not worried about trying a cool docker shebang hack (probably because docker doesn't exist on your platform anyway)
bionhoward · 2 years ago
This is genius and I love how this is a whole app meta-seed in a single file! I think I have docker trauma, why did we reach a point where we need computers inside our computers just for normal stuff to work?

Container packing is cool, but is it just a security thing preventing us from using our normal hardware? Or versioning (NixOS)? Is wasm capable of doing this and is wasm still alive? I just feel like needing to run tests inception style inside and outside docker gets complicated and annoying and always try to just use Linux directly these days.

petercooper · 2 years ago
There are many reasons, but the simple idea of "containing" is a big part of it. You could run several versions of Python, database systems, etc. on a single machine, but it rapidly becomes confusing in most cases with dependency clashes, losing track of where everything is, etc. Anyone who worked on multiple projects ~20 years ago and didn't use VMs might remember how it felt.

It's like if you have a workshop and you diligently organize all of the different parts into different trays in different units so it's easier to do all the types of work you need to do. You could just have a giant box in the corner where you chuck absolutely everything.. far less complex, but it'd make your day to day work a nightmare.

zarzavat · 2 years ago
I'd argue that running all of those things inside docker containers also rapidly becomes confusing. The confusion is inherent to the complexity of the things you are running.

I don't hate docker, but I find that it's just not that useful until you reach a certain scale. I stopped using it for personal projects and am much happier for it.

layer8 · 2 years ago
Executable files (and OS processes) used to be that. Then came shared libraries, configuration files, multi-executable applications, and whatnot. It would have been nicer to extend the executable formats and OS process sandboxing, IMO.

Next thing we’ll define a new format and runtime to package and run a collection of docker images with associated configuration.

d0mine · 2 years ago
Docker containers (in practice) can be considered an extreme form of distributing static binaries (snaps, flatpaks, nix, fat Go binaries, pyinstaller, etc).

It is less about security and more about having several applications on the same hardware without full blown VMs.

throwaway290 · 2 years ago
I know people who use Nix for this... May or may not be another level of confusing though. Also, I heard it's a bad choice for JS ecosystem.
photonthug · 2 years ago
The single-file aspect is cool for distribution but of course not for editing. A similar thing that is still maniacal/clever but somewhat easier to scale could use e.g. makeself
teknopaul · 2 years ago
Many people share your concern; hence users' dislike of snap
bornfreddy · 2 years ago
Well snap has other problems too. For me a big one is that it is pushed heavily by a single company which may or may not still exist in 10 years. Or which might decide to capitalize on its investment once enough people are locked into its ecosystem.
throwaway892238 · 2 years ago
Cute trick, but it's not actually what the title claims.

Since this is actually env calling bash first, not docker, this should just be a Bash script. You can still feed the Dockerfile to docker build via STDIN. But you'd gain the ability to shellcheck the Bash, the code would be easier to read, write, maintain, add comments to, etc. You could keep the filename the same, run it the same way, etc. The way they've done it here is just unnecessarily difficult.
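A minimal sketch of that alternative (image and file names here are hypothetical, and the docker invocations are shown commented out so the script itself has no dependencies):

```shell
#!/usr/bin/env bash
# Plain, shellcheck-able bash: keep the Dockerfile in a heredoc and feed it
# to `docker build` on stdin instead of abusing the shebang line.
set -eu
cat > Dockerfile.inline <<'EOF'
FROM alpine:3.19
CMD ["echo", "hello from the container"]
EOF
cat Dockerfile.inline    # the Dockerfile stays inspectable as a plain file
# docker build -t hello-inline - < Dockerfile.inline
# docker run --rm hello-inline
```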

chii · 2 years ago
> You can still feed the Dockerfile to docker build via STDIN.

but you'd then have to work out how to "filter out" the bash commands inside this bash script to make it a valid docker file.

Unless of course, you entirely store the docker file contents inside heredocs. That works fine, but it's not as "cool" as "executing" dockerfiles as a script.

notso411 · 2 years ago
You can say it is wrong without being insufferably condescending
chubot · 2 years ago
Something like this should definitely exist, just not with Docker!

Podman is better but it's also a bit coupled to a distro - https://news.ycombinator.com/item?id=38981844

The problem is the Linux kernel container primitives are a bit of a mess

bubblewrap is a lot closer, although last I heard it's not in some distros for security reasons - https://news.ycombinator.com/item?id=30823164

a-dub · 2 years ago
i think the kernel primitives are fine, unshare and namespaces make perfect sense to me. docker, podman, buildah, buildx, whatever... all these things with cutesy names and fatal flaws seem like the mess to me.
ksjskskskkk · 2 years ago
the feature IS the fatal flaw. after unsharing namespaces you still want your network to "just work". the "quality" of the solution is directly proportional to how bad the security is.

the scale runs from non-virtualized qemu all the way to docker, which will even screw with your iptables rules for your convenience, with the hn crowd falling in the middle as the Goldilocks we all are.

nickstinemates · 2 years ago
another docker post filled with podman propaganda. despite it all, still no one uses it.
65a · 2 years ago
I haven't used docker since ~2017. My clusters run on cri-o, builds are with kaniko, and some of my systems just call runc with OCI container definitions. Docker (especially its API) is a giant mess, and the sooner it's replaced by smaller tools and clear standards the better.
c0balt · 2 years ago
Can attest from $Job that there are podman users. Podman is awesome for some of our RHEL-based systems and we will continue to use it. You're just not going to hear about it a lot, because it's just a runtime.
supriyo-biswas · 2 years ago
https://news.ycombinator.com/newsguidelines.html

> Please don't fulminate. Please don't sneer, including at the rest of the community.

ekianjo · 2 years ago
Still a useful alternative to docker and can be packaged in distros
rtpg · 2 years ago
I mean I know several people who run their infra with podman. But it's for personal things, I don't know if there is any level of usage at the enterprise level.
christophilus · 2 years ago
I use it and love it. YMMV.
orhmeh09 · 2 years ago
No love for Apptainer/Singularity?
chubot · 2 years ago
What's that? What's good about it? :)
kevincox · 2 years ago
This is cool hacking, but I really don't get this obsession with "single file". Directories exist and can contain self-contained applications without the need to pack everything into some ugly script. They aren't the slightest bit more difficult to ship around to different machines.
da39a3ee · 2 years ago
I think maybe it helps to think from the point of view of a developer for whom these single-file things are tools in their workshop.

- Easier to grep a collection of single files

- Easier to see what you've got in your collection in a directory listing (whether via a shell or in a web UI such as GitHub)

- Easier to view the contents quickly (`cat`)

- General philosophy that flat is better than nested

lwneal · 2 years ago
You can create this type of thing (a self-contained single-file project) for any language or infrastructure, with or without a clever shebang. All you need are heredocs.

For example, here's the same app but packaged as a regular bash script:

https://gist.github.com/lwneal/a24ba363d9cc9f7a02282c3621afa...

adtac · 2 years ago
Of course! Bash script is Turing complete so it should be possible to implement everything in it :)

The only upside to having an executable Dockerfile is that it's still a valid Dockerfile that you can use with docker build, docker-compose, etc. in addition to being able to execute it.

bcjordan · 2 years ago
Yes, I love this approach. I use this exact format as a way to get ChatGPT to work with an entire multi-file programming project in a single idempotent bootstrapping script, then ask for changes to be given as the entire file again.
richdougherty · 2 years ago
Agree, nesting files with

    cat >Dockerfile <<'EOF'
and having a basic bash script seems way nicer than putting all the shell logic on the #! line.