neilv · 2 months ago
One reason to use CGI is legacy systems. A large, complex, and important system that I inherited was still using CGI (and it worked, because a rare "10x genuinely more productive" developer built it). Many years later, to reduce peak resource usage, and speed up a few things, I made an almost drop-in replacement library, to permit it to also run with SCGI (and back out easily to CGI if there was a problem in production). https://docs.racket-lang.org/scgi/

Another reason to use CGI is if you have a very small and simple system. Say, a Web UI on a small home router or appliance. You're not going to want the 200 NPM packages, transpilers and build tools and 'environment' managers, Linux containers, Kubernetes, and 4 different observability platforms. (Resume-driven-development aside.)

A disheartening thing about my most recent full-stack Web project was that I'd put a lot of work into wrangling it the way Svelte and SvelteKit wanted, but upon finishing, I wasn't happy with the complicated and surprisingly inefficient runtime execution. I realized that I could've done it in a fraction of the time and complexity -- in any language with convenient HTML generation, a SQL DB library, and an HTTP/CGI/SCGI-ish library, plus a little client-side JS.
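For a sense of scale, the whole "convenient HTML generation plus CGI" approach fits in a page of Python; a minimal sketch (illustrative names, not code from the project above):

```python
#!/usr/bin/env python3
# Minimal CGI sketch: the request arrives as environment variables,
# and the HTTP response is whatever we write to stdout.
import html
import os
import sys
from urllib.parse import parse_qs

def render_page(params):
    # Always escape user input before interpolating it into HTML.
    name = html.escape(params.get("name", ["world"])[0])
    return f"<html><body><h1>Hello, {name}!</h1></body></html>"

def main():
    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    body = render_page(params)
    sys.stdout.write("Content-Type: text/html\r\n")
    sys.stdout.write(f"Content-Length: {len(body.encode())}\r\n\r\n")
    sys.stdout.write(body)

if __name__ == "__main__":
    main()
```

Drop that in a cgi-bin directory, mark it executable, and you have a working endpoint -- no build step, no app server to keep alive.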

ptsneves · 2 months ago
I found that ChatGPT revived vanilla javascript and jquery for me.

Most of the chore work is done by ChatGPT, and the mental model needed to understand what it wrote is light and often fits in a single file. It's also easily embedded in static site generators.

By contrast, Vue/React require a lot of context to understand and mentally parse. In React, useCallback/useEffect/useMemo make me manage dependencies by hand, which reminds me of manual memory management in C, with perhaps even more pitfalls. In Vue, it's the distinction between computed, props, and vanilla variables. I'm amazed that the supposedly more approachable end of tech is actually more complex than regular library/script programming.

KronisLV · 2 months ago
> I found that ChatGPT revived vanilla javascript and jquery for me.

I used jQuery in a project recently where I just needed some interactivity for an internal dashboard/testing solution. I didn't have a bunch of time to set up a whole toolchain for Vue (and Pinia, Vue Router, PrimeVue, PrimeIcons, PrimeFlex, and the automated component imports), because while I like using all of them and the developer experience is quite nice, the setup still takes a bit of time unless you have an up-to-date boilerplate project that's ready to go.

Not having a build step was also really pleasant: I didn't need to do complex multi-stage builds or worry that copying assets would somehow slow down a Maven build for the back end (relevant when you package your front end and back end together in the same container and use the back end to serve the front-end assets, vs. two separate containers where one is just a web server).

The only problem was that jQuery doesn't compose as nicely; I missed the ability to nest a bunch of components. I might just have to look at Lit or something.

maxwell · 2 months ago
I've had a similar experience. Generating Vue/React scaffolding is nice, but yeah debugging and refactoring require the additional context you described. I've been using web components lately on personal projects, nice to jump into comprehensible vanilla JS/HTML/CSS when needed.

simonw · 2 months ago
I really like the code that accompanies this as an example of how to build the same SQLite powered guestbook across Bash, Python, Perl, Rust, Go, JavaScript and C: https://github.com/Jacob2161/cgi-bin
masklinn · 2 months ago
Checked the rust version, it has a toctou error right at the start, which likely would not happen in a non-cgi system because you’d do your db setup on load and only then would accept requests. I assume the others are similar.

This neatly demonstrates one of the issues with CGI: they add synchronisation issues while removing synchronisation tooling.

simonw · 2 months ago
Had to look that up: Time-Of-Check-to-Time-Of-Use

Here's that code:

  let new = !Path::new(DB_PATH).exists();
  let conn = Connection::open(DB_PATH).expect("open db");

  // ...
  if new {
      conn.execute_batch(
          r#"
          CREATE TABLE guestbook(
              ...
So the bug here would occur only the very first time the script is executed, IF two processes run it at the same time such that one of them creates the file while the other one assumes the file did not exist yet and then tries to create the tables.

That's pretty unlikely. In this case the losing script would return a 500 error to that single user when the CREATE TABLE fails.

Honestly if this was my code I wouldn't even bother fixing that.

(If I did fix it I'd switch to "CREATE TABLE IF NOT EXISTS...")

... but yeah, it's a good illustration of the point you're making about CGI introducing synchronization errors that wouldn't exist in app servers.
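For comparison, a minimal sketch of the race-free pattern in Python's sqlite3 -- skip the exists() check entirely and let the database handle the race (illustrative code, not from the repo):

```python
import sqlite3

DB_PATH = "guestbook.db"  # illustrative path

def get_conn(db_path=DB_PATH):
    # No exists() check, so there is no check-to-use window: if two
    # CGI processes race here, IF NOT EXISTS makes the loser a no-op
    # instead of a failed CREATE TABLE.
    conn = sqlite3.connect(db_path)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS guestbook(
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            message TEXT NOT NULL
        )
        """
    )
    return conn
```

Calling get_conn() from every request is safe and idempotent, which is exactly the property the fork-per-request model needs.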

Bluestein · 2 months ago
This is a veritable Rosetta stone of a repo. Wow.-
pkal · 2 months ago
I have recently been writing CGI scripts in Go for the web server of our university's computer lab, and it has been a nice experience. In my case the guestbook doesn't use SQLite; I just encode the list of entries using Go's native https://pkg.go.dev/encoding/gob format, and it has worked out well -- critically, it frees me from needing CGO for SQLite!

But in the end efficiency isn't my concern, as I have almost no visitors. What turns out to be more important is that Go has a lot of useful stuff in the standard library, especially the HTML templates, which let me write safe code easily. To test that claim, I'll even provide the link and invite anyone to try to break it: https://wwwcip.cs.fau.de/~oj14ozun/guestbook.cgi (the worst I anticipate happening is that someone uses up my storage quota, but even that should take a while).

kragen · 2 months ago
How do you protect against concurrency bugs when two visitors make guestbook entries at the same time? With a lockfile? Are you sure you won't write an empty guestbook if the machine gets unexpectedly powered down during a write? To me, that's one of the biggest benefits of using something like SQLite.
pkal · 2 months ago
That is exactly what I do, and it works well enough because if the power-loss were to happen, I wouldn't have lost anything of crucial value. But that is admittedly a very instance-specific advantage I have.
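If I did want the writes to survive a power loss, the standard trick is write-to-temp-then-rename; a rough Python sketch of the pattern (my real code is Go, and the names here are illustrative):

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    # Write to a temp file in the same directory, flush it to disk,
    # then rename over the target. rename() is atomic on POSIX, so a
    # reader sees either the old file or the new one, never a torn write.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        # Clean up the temp file if anything failed before the rename.
        os.unlink(tmp)
        raise
```

Note this only buys crash-safety; two concurrent writers still need the lockfile to serialize the read-modify-write of the guestbook.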
kragen · 2 months ago
This is a followup to Gold's previous post that served 200 million requests per day with CGI, which Simon Willison wrote a post about, which we had a thread about three days ago at https://news.ycombinator.com/item?id=44476716. It addresses some of the misconceptions that were common in that thread.

Summary:

- 60 virtual AMD Genoa CPUs with 240 GB (!!!) of RAM

- bash guestbook CGI: 40 requests per second (and a warning not to do such a thing)

- Perl guestbook CGI: 500 requests per second

- JS (Node) guestbook CGI: 600 requests per second

- Python guestbook CGI: 700 requests per second

- Golang guestbook CGI: 3400 requests per second

- Rust guestbook CGI: 5700 requests per second

- C guestbook CGI: 5800 requests per second

https://github.com/Jacob2161/cgi-bin

I wonder if the gohttpd web server he was using was actually the bottleneck for the Rust and C versions?

antoineleclair · 2 months ago
We used CGI to add support for extensions in Disco (https://disco.cloud/).

It's so simple and it can run anything, and it was also relatively easy to have the CGI script run inside a Docker container provided by the extension.

In other words, it's so flexible that it means the extension developers would be able to use any language they want and wouldn't have to learn much about Disco.

I would probably not push to use it to serve big production sites, but I definitely think there's still a place for CGI.

In case anyone is curious, it's happening mostly here: https://github.com/letsdiscodev/disco-daemon/blob/main/disco...

kragen · 2 months ago
This is an interesting idea!
shrubble · 2 months ago
In a corporate environment, for internal use, I often see egregiously overspecced VMs or machines for sites with very low requests per second. There's a commercial monitoring app that runs on K8s, 3 VMs with 128GB RAM each, to monitor 600 systems; that's basically 500MB per system just to poll it every 5 minutes, draw some pretty graphs, etc. Of course it has a complex app server integrated into the web server and so forth.
RedShift1 · 2 months ago
Yep. ERP vendors are the worst offenders. The last deployment for 40-ish users "needed" 22 CPU cores and 44 GB of RAM. After long back-and-forths I negotiated it down to 8 CPU cores and 32 GB. Looking at the usage statistics, it's 10% max... And it's cloud infra, so they're paying a lot for RAM and CPU sitting unused.
ted537 · 2 months ago
Haha yes -- like what do you mean this CRUD app needs 20 GB of RAM and half an hour to start up?
0xbadcafebee · 2 months ago
> No one should ever run a Bash script under CGI. It’s almost impossible to do so securely, and performance is terrible.

Actually shell scripting is the perfect language for CGI on embedded devices. Bash is ~500k and other shells are 10x smaller. It can output headers and html just fine, you can call other programs to do complex stuff. Obviously the source compresses down to a tiny size too, and since it's a script you can edit it or upload new versions on the fly. Performance is good enough for basic work. Just don't let the internet or unauthenticated requests at it (use an embedded web server with basic http auth).

kragen · 2 months ago
Easy uploading of new versions is a good point, and I agree that the likely security holes in the bash script are less of a concern if only trusted users have access to it. However, about 99% of embedded devices lack an MMU, much less 50K of storage, which makes it hard to run Unix shells on them.
0xbadcafebee · 2 months ago
Busybox runs MMU-less and has ash built in. It also has a web server! It can be a little chonky, but you can remove unneeded components. Things like wireless routers and other devices with a decent amount of storage are a good platform for it.
jchw · 2 months ago
Honestly, I'm just trying to understand why people want to return to CGI. It's cool that you can fork+exec 5000 times per second, but if you don't have to, isn't that significantly better? Plus, with FastCGI, it's trivial to have separate privileges for the application server and the webserver. The CGI model may still work fine, but it is an outdated execution model that we left behind for more than one reason, not just security or performance. I can absolutely see the appeal in a world where a lot of people are using cPanel shared hosting and stuff like that, but in the modern era when many are using unmanaged Linux VPSes you may as well just set up another service for your application server.

Plus, honestly, even if you are relatively careful and configure everything perfectly correct, having the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.

zokier · 2 months ago
Having completely isolated ephemeral request handlers with no shared state and no persistent runtime makes for a very clean and nice programming model. It also makes deployments simple because there is no graceful shutdown or service management to worry about; in the simplest case you can just drop in new executables and they'll be picked up automatically without any service interruption. Fundamentally, the CGI model lets you leverage a lot of what Linux/UNIX already offers.
inetknght · 2 months ago
> there is no ... service management to worry about

Service management:

    systemctl start service.app
or

    docker run --restart=unless-stopped --name=myservice myservice:version
If it isn't written as a service, then it doesn't need management. If it is written as a service, then service management tools make managing it easy.

> there is no graceful shutdown ... to worry about

Graceful shutdown:

    kill -9
or

    docker kill -9 myservice
If your app/service can't handle that, then it's designed poorly.

0x000xca0xfe · 2 months ago
I guess multiprocessing got a bad reputation because it used to be slow and simple, so it got looked down on as a primitive tool for less capable developers.

But the world has changed. Modern systems are excellent for multiprocessing, CPUs are fast, cores are plentiful and memory bandwidth just continues getting better and better. Single thread performance has stalled.

It really is time to reconsider the old mantras. Setting up highly complicated containerized environments to manage a fleet of anemic VMs because NodeJS' single threaded event loop chokes on real traffic is not the future.

ben-schaaf · 2 months ago
That really has nothing to do with the choice to use CGI. You can just as well use rust with Axum or Actix and get a fully threaded web server without having to fork for every request.
jchw · 2 months ago
I feel it necessary to clarify that I am not suggesting we should use single-threaded servers. My go-to approach for one-offs is Go HTTP servers and reverse proxying. This will do quite well to utilize multiple CPU cores, although admittedly Go is still far from optimal.

Still, even when people run single-thread event loop servers, you can run an instance per CPU core; I recall this being common for WSGI/Python.

taeric · 2 months ago
I thought the general view was that leaving the CGI model was not necessarily better for most people? In particular, I know I was at a bigger company that tried and failed many times to replace essentially a CGI model with a JVM based solution. Most of the benefits that they were supposed to see from not having the outdated execution model, as you call it, typically turned into liabilities and actually kept them from hitting the performance they claimed they would get to.

And, sadly, there is no getting around the "configure everything perfectly" problem. :(

kragen · 2 months ago
Serverless is a marketing term for CGI, and you can observe that serverless is very popular.

A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...

A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.

It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.

You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.

jchw · 2 months ago
> Serverless is a marketing term for CGI, and you can observe that serverless is very popular.

No, it's not.

CGI is Common Gateway Interface, a specific technology and protocol implemented by web servers and applications/scripts. The fact that you do a fork+exec for each request is part of the implementation.

"Serverless" is a marketing term for a fully managed offering where you give a PaaS some executable code and it executes it per-request for you in isolation. What it does per request is not defined since there is no standard and everything is fully managed. Usually, rather than processes, serverless platforms usually operate on the level of containers or micro VMs, and can "pre-warm" them to try to eliminate latency, but obviously in case of serverless the user gets a programming model and not a protocol. (It could obviously be CGI under the hood, but when none of the major platforms actually do that, how fair is it to call serverless a "marketing term for CGI"?)

CGI and serverless are only similar in exactly one way: your application is written "as-if" the process is spawned each time there is a request. Beyond that, they are entirely unrelated.

> A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...

> A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.

> It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.

> You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.

To be completely honest with you, I actually agree with your conclusion in this case. CGI would've been better than Django/FastCGI/etc.

Hell, I'd go as far as to say that in that specific case a simple PHP-FPM setup seems like it would've been more than sufficient. Of course, that's FastCGI, but it has the programming model that you get with CGI for the most part.

But that's kind of the thing. I'm saying "why would you want to fork+exec 5000 times per second" and you're saying "why do I care about fork+exec'ing 1000 times in the total lifespan of my application". I don't think we're disagreeing in the way that you think we are disagreeing...

rajaravivarma_r · 2 months ago
I'm wondering the same, but honestly I have a soft spot for the old way of doing things, and I think that's where this comes from.

The performance numbers seem to show how bad it is in the real world.

For testing, I converted the CGI script into a FastAPI app and benchmarked it on my MacBook Pro M3. I'm getting super impressive performance numbers:

Read:

    Statistics        Avg       Stdev        Max
      Reqs/sec      2019.54    1021.75   10578.27
      Latency      123.45ms   173.88ms      1.95s
      HTTP codes:
        1xx - 0, 2xx - 30488, 3xx - 0, 4xx - 0, 5xx - 0
        others - 0
      Throughput:  30.29MB/s

Write (shown in the graph of the OP):

    Statistics        Avg       Stdev        Max
      Reqs/sec       931.72     340.79    3654.80
      Latency      267.53ms   443.02ms      2.02s
      HTTP codes:
        1xx - 0, 2xx - 0, 3xx - 13441, 4xx - 0, 5xx - 215
        others - 572
      Errors: timeout - 572
      Throughput:  270.54KB/s

At this point the bottleneck is probably the single SQLite database. Throwing a beefy server at it, as in the original post, would increase the read numbers pretty significantly but wouldn't do much for the write path.

I'm also thinking that these days you need to go out of your way to do something with CGI. All the macro and micro web frameworks come with an HTTP server, and there are plenty of options. I wouldn't do this for anything apart from fun.

FastAPI-guestbook.py https://gist.github.com/rajaravivarma-r/afc81344873791cb52f3...

Nzen · 2 months ago
My personal interest in CGI stems from my website host offering it as a means of responding to requests [0] in addition to static assets.

[0] https://www.nearlyfreespeech.net/help/faq#CGISupport

p2detar · 2 months ago
For smaller things, and I mean single-script stuff, I pretty much always use php-fpm. It’s fast, it scales, it’s low effort to run on a VPS. Shipped a side-project with a couple of PHP scripts a couple of years ago. It works to this day.
jchw · 2 months ago
php-fpm does work surprisingly well. Though, on the other hand, traditional PHP using php-fpm kinda does follow the CGI model of executing stuff in the document root.
UK-Al05 · 2 months ago
It's very Unix: a single-process executable that handles a request and then shuts down.
9rx · 2 months ago
I suppose because they can. While there were other good reasons to leave CGI behind, performance was really the only reason it got left behind. Now that performance isn't the concern it once was...
monkeyelite · 2 months ago
Think about all the problems associated with process life cycle - is a process stalled? How often should I restart a crashed process? Why is that process using so much memory? How should my process count change with demand? All of those go away when the lifecycle is tied to the request.

It’s also more secure because each request is isolated at the process level. Long lived processes leak information to other requests.

I would turn it around and say it’s the ideal model for many applications. The only concern is performance. So it makes sense that we revisit this question given that we make all kinds of other performance tradeoffs and have better hardware.

Or you know not every site is about scaling requests. It’s another way you can simplify.

> but it is an outdated execution model

Not an argument.

The opposite trend of ignoring OS level security and hoping your language lib does it right seems like the wrong direction.

jchw · 2 months ago
> Think about all the problems associated with process life cycle - is a process stalled? Should I restart it? Why is that process using so much memory? How should my process count change with demand? All of those go away when the lifecycle is tied to the request.

So the upshot of writing CGI scripts is that you can... ship broken, buggy code that leaks memory to your webserver and have it work mostly alright. I mean look, everyone makes mistakes, but if you are routinely running into problems shipping basic FastCGI or HTTP servers in the modern era you really need to introspect what's going wrong. I am no stranger to writing one-off Go servers for things and this is not a serious concern.

Plus, realistically, this only gives a little bit of insulation anyway. You can definitely still write CGI scripts that explode violently if you want to. The only way you can really prevent that is by having complete isolation between processes, which is not something you traditionally do with CGI.

> It’s also more secure because each request is isolated at the process level. Long lived processes leak information to other requests.

What information does this leak, and why should I be concerned?

> Or you know not every site is about scaling requests. It’s another way you can simplify.

> > but it is an outdated execution model

> Not an argument.

Correct. That's not the argument, it's the conclusion.

For some reason you ignored the imperative parts,

> It's cool that you can fork+exec 5000 times per second, but if you don't have to, isn't that significantly better?

> Plus, with FastCGI, it's trivial to have separate privileges for the application server and the webserver.

> [Having] the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.

Those are the primary reasons why I believe the CGI model of execution is outdated.

> The opposite trend of ignoring OS level security and hoping your language lib does it right seems like the wrong direction.

CGI is in the opposite direction, though. With CGI, the default behavior is that your CGI process is going to run with similar privileges to the web server itself, under the same user. On a modern Linux server it's relatively easy to set up a separate user with more specifically-tuned privileges and with various isolation options and resource limits (e.g. cgroups.)
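
For illustration, a sketch of what that setup can look like as a systemd unit -- the unit name, paths, and limits here are all made up, not from any project in this thread:

```ini
# myapp.service -- illustrative only
[Service]
ExecStart=/srv/myapp/bin/server
# Ephemeral unprivileged user, separate from the web server's user
DynamicUser=yes
# cgroup resource limits
MemoryMax=512M
CPUQuota=50%
# Filesystem isolation: read-only system dirs, private /tmp
ProtectSystem=strict
PrivateTmp=yes
NoNewPrivileges=yes
```

The web server then only needs permission to reach the app's socket, not to execute anything in the document root.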

g-mork · 2 months ago
processless is the new serverless, it lets you fit infinite jobs in RAM thus enabling impressive economies of scale. only dinosaurs run their own processes
0xbadcafebee · 2 months ago
It's the same reason people are using SQLite for their startup's production database, or why they self-host their own e-mail server. They're tech hipsters. Old stuff is cool, man. Now if you'll excuse me, I need to typewrite a letter and OCR it into Markdown so I can save it in CVS and e-mail my editor an ar'd version of the repo so they can edit the new pages of my upcoming book ("Antique Tech: Escaping Techno-Feudalism with Old Solutions to New Problems")