jameshart · 2 years ago
People who decry graphical admin interfaces in favor of command line are missing the wood for the trees.

Sure, clickops is no way to run a server - but neither, if we’re honest, is ssh.

For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone. If you’re going in with ssh or cockpit you’re just going to screw something up.

So the only reason you should be working on a server directly is because you’re doing something exploratory. And in that case gui vs command line isn’t as clearcut as people want to make it. GUIs emphasize discoverability and visibility which can be helpful in that experimental phase when you’re trying to figure out how to get something set up right.

tremon · 2 years ago
clickops is no way to run a server

server state should be reproducible from scratch

Why? I'm not necessarily disagreeing, but too often are these kinds of statements thrown about without any qualification, as if they are self-evident truths. But they're not -- there are engineering trade-offs behind any choice, and it's no different here. So, in order to guide this discussion away from dogmatic platitudes: why should server state be reproducible from scratch? What does "from scratch" mean? Why is clickops no way to run a server?

Install an OS, add software, apply configuration

Do you think this captures "server state" completely? Software patch levels are not part of server state? What about application data? User data?

So here's my counterstatement: for any working machine, I can reproduce the server state exactly by performing a restore from backup. Backup/restore is perfectly compatible with clickops, and it's faster and more reliable than reinstalling an OS, adding software and applying configuration -- even when the software and configuration are scripted. And if your server stores non-volatile data, as is often the case in clickops environments, you will need to have a backup system anyway to restore the user data after deploying a new server.

mbreese · 2 years ago
> too often are these kinds of statements thrown about without any qualification, as if they are self-evident truths

It's because different people think at different levels of abstraction. One admin might be thinking about a handful of servers and another about an entire fleet of VMs. The way you manage each is very different. Clickops can work well for a small number of servers, and a full orchestration setup can be over-engineering.

But the real issue is that blanket statements never work in such scenarios. However, I think it's pretty well established that reproducible server state is a best practice. How you get there is up to you.

But as an argument against backup/restore -- you can't use backup/restore to generate new servers from an existing template without some kind of extra scripting (if for no other reason than to avoid address/naming conflicts). And if you're already scripting that...
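The "extra scripting" is small but real. A sketch of the identity fixups a cloned or restored image typically needs (the hostname is a placeholder, and this is illustrative, not something to run on a live box):

```shell
# A restored image still carries the original server's identity.
hostnamectl set-hostname web-02        # placeholder name
rm -f /etc/machine-id
systemd-machine-id-setup               # regenerate the unique machine ID
rm -f /etc/ssh/ssh_host_*              # don't share host keys between clones
ssh-keygen -A                          # regenerate them
# ...plus DHCP leases, static IPs, monitoring registration, etc.
```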

fnordpiglet · 2 years ago
There are a lot of reasons we arrived here over the decades of struggling to keep servers in good working order in a sea of change. One is that backup and restore is inherently fragile, and we have many instances where restorability degrades for many reasons over a long life. Backup/restore verification is not a regular part of hygiene because it's intrusive, tedious, and slow; if it's ever done, it's usually done once. Reproducible builds allow for automated verification and testing offline.

Changes are only captured at snapshot intervals and are not coherent or atomic, so you can easily miss changes that are crucial but capture destructive changes in between deltas. Worse are flaws that are introduced but not observed for a long time and are now hopelessly intermixed with other changes. Reproducible build systems allow you to use a revision control system to manage change and cherry-pick changesets to resolve intermixed flaws, and even if they're deeply intermixed you can work through them on an offline server until it's healthy enough to rebuild your online server.

The issue with reproducible build systems isn't that they aren't superior to backup and restore in every way. It's that the interfaces we provide today are overly complex compared to the simple interface of "backup and restore," which, despite its promised simplicity, always works for the backup part but often fails at the restore. These ideas of hermetic server builds are relatively new and the tooling hasn't matured.

I would say click ops is actually an ideal way to solve that issue. Click ops that serializes resiliently to a revision-controlled metadata store that drives the build solves the usability problem. If the metadata store is text configs that can be modified directly without breaking the user interface, that handles the tedium of making complex changes in a UI, while still providing a nice rendering of state for simple exploratory changes. Backup and restore would only be necessary for stateful data, and since the stateful changes aren't at the OS layer, you won't end up with a bricked server.

belthesar · 2 years ago
This assumes that you're running in an environment where your servers are cattle and not pets, and in all fairness, not everyone is running large-scale web platforms on some orchestration platform. I don't disagree that, even in a pets world, one should know how to restore/rebuild a system, because without that you don't have a sound BDR strategy.
marginalia_nu · 2 years ago
Arguably, about 80% of those running their app on a cattle farm should really have gone with a pet cafe instead. Resumes would certainly be a lot less impressive, but they'd also have a lot fewer fires to put out and a significantly smaller infra bill.

But regarding the topic at hand, I don't think being able to manage these things with a graphical interface is necessarily a bad thing. It's basically user-space iDRAC/IPMI.

xupybd · 2 years ago
I maintain 3 servers. It's not worth automating the deployment.

I'll spend less time just setting them up by hand.

The company will survive a few hours of downtime.

berkes · 2 years ago
Are there any tools that allow you to manage a server like a pet, yet ensure it can be restored/rebuild?

And, while with the analogy of pets, when you are on holiday, allow your neighbors to look after your pets?

HankB99 · 2 years ago
> For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone.

I'm curious if you have a specific tool or tools in mind. I've been using Ansible in my home lab, particularly for configuring Raspberry Pis. The OS install part works (only?) because it involves a bitwise copy of the image to the boot media (and some optional configuration).
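For what it's worth, the "add software, apply configuration" part of that workflow looks roughly like this in Ansible (host group, packages, and paths here are illustrative, not a real playbook):

```yaml
# site.yml -- minimal sketch; names are placeholders
- hosts: raspberry_pis
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.apt:
        name: [prometheus-node-exporter, unattended-upgrades]
        state: present
    - name: Deploy app config from a template
      ansible.builtin.template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: restart app
  handlers:
    - name: restart app
      ansible.builtin.systemd:
        name: app.service
        state: restarted
```

The bitwise image copy still covers the "install an OS" step; Ansible takes over from first boot.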

jameshart · 2 years ago
Ansible is a good choice.

When I say ‘working server’ though, I typically mean one that is doing a job - providing a critical business service.

A ‘home lab’ of raspberry pis is a different beast.

jefurii · 2 years ago
I'd like to see a tool, maybe something Cockpit-like or a wrapper around SSH, that would build Ansible playbooks for you as you clicked around or typed commands.
tiffanyh · 2 years ago
> “For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone.”

I presume you only run NixOS then?

2OEH8eoCRo0 · 2 years ago
Both are good for different reasons. I prefer working in a terminal but I didn't think it was controversial that a GUI is better for visualization.
barosl · 2 years ago
The cool thing about this project is that as it uses systemd's socket activation, it requires no server processes at all. There is no waste of resources when Cockpit is not being used. Accessing a page is literally the same as invoking a command-line tool (and quitting it). No more, no less. What a beautiful design.
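For anyone unfamiliar with the mechanism: it's a pair of units, where systemd holds the listening socket and only starts the service when a connection arrives. A simplified sketch (not Cockpit's actual unit files):

```ini
# example.socket -- systemd owns the listening socket; no process runs yet
[Socket]
ListenStream=9090

[Install]
WantedBy=sockets.target
```

```ini
# example.service -- started on the first connection, inherits the socket
[Service]
ExecStart=/usr/libexec/example-daemon
```

Enable with `systemctl enable --now example.socket`; the .service stays stopped until traffic arrives.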
arghwhat · 2 years ago
To be fair, we've had this since BSD4.3 (1986) through inetd - which worked slightly differently, but same overall idea. Once popular, it fell out of fashion because... Well, there isn't really any reason for it.

A good server process is idle when nothing is happening, and should use minuscule real memory that is easy to swap out. If the server in question uses significant memory for your use-case, you also don't want it starting on demand and triggering sporadic memory pressure.

It does make it easier to avoid blocking on service start in early boot though, which is a common cause of poor boot performance.

dale_glass · 2 years ago
There's good reasons for it though!

One is boot performance. Another is zero cost for a rarely used tool, which may be particularly important on a VPS or a small computer like a Raspberry Pi where you don't want to add costs for something that may only rarely be needed.

I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version. You don't need the tool to have its own "re-exec myself" code that's rarely used, and that could fail at an inconvenient time.

The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up. Plus there was the inetd/xinetd disagreement.

Tying in init, inetd and monit into a single system that can do all those things IMO made things much nicer.
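For comparison, inetd's version of the same idea was one line per service in /etc/inetd.conf, and that was the whole setup (service and binary path illustrative):

```
# service  type    proto  wait    user  program         args
ftp        stream  tcp    nowait  root  /usr/sbin/ftpd  ftpd
```

inetd accepts the connection, forks, and hands the program the socket on stdin/stdout, which is why it had to be set up so intentionally alongside SysV init rather than being part of it.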

bityard · 2 years ago
Around the time I was first learning Linux, I recall reading that there were two ways to run a service:

1. Start the daemon on boot and have it running all the time, like some undereducated neanderthal.

2. Configure your system to run a daemon which monitors a port/socket and starts up only when there is traffic, like a civilized person.

I believe which one of these to use is highly dependent on your resources, usage, and deployment model. For services that are fast and cheap to start but rarely used, #2 makes more sense. If you have a server or VM which only does one thing (very much the norm, these days), then just keeping that service running all the time is easier and better for performance.

whartung · 2 years ago
Actually I think what killed inetd is, partially, http. At the time, http was connectionless. Open socket, send packet, read response, close. Out of the box inetd would support that, for sure, but it would be constantly forking new http processes to do it.

FTP, SMTP were all stateful, so living under inetd worked OK. One process per overall session rather than individual messages within a session.

Obviously, inetd could have been hammered on to basically consume the pre-forking model then dominant in something like Apache, caching server processes, etc.

But it wasn't. Then databases became the other dominant server process, and they didn't run behind inetd either.

Apache + CGI was the "inetd" of the web age.

tanelpoder · 2 years ago
I ended up reading more about this and looks like SSHD in Ubuntu 22.10 and later also uses systemd socket activation. So there should be no sshd process(es) started until someone SSHs in!

https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-ac...

talent_deprived · 2 years ago
This is messed up, totally messed up:

"On upgrades from Ubuntu 22.04 LTS, users who had configured Port settings or a ListenAddress setting in /etc/ssh/sshd_config will find these settings migrated to /etc/systemd/system/ssh.socket.d/addresses.conf."

It's like Canonical is doing 1960's quality acid.

At least the garbage can be disabled:

"it is still possible to revert to the previous non-socket-activated behavior"

Between having to remove snapd and then mark it to not be reinstalled, and now in the next Ubuntu having to fix ssh back to the current behavior, it might be easier to migrate my servers back to Debian, or look for a solid non-systemd OS.
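For reference, the revert described in that discourse post boils down to swapping which unit is enabled (a sketch; check the post for the exact supported steps):

```shell
systemctl disable --now ssh.socket
systemctl enable --now ssh.service
```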

smetj · 2 years ago
Certainly for SSH I find this a bad idea. If you need to ssh into a troubled machine, it may very well be that sshd cannot be started.
itsTyrion · 2 years ago
Ew
mrweasel · 2 years ago
I should really spend more time learning systemd. The more I look into it, the more cool and useful features I discover.
bityard · 2 years ago
If you have anything at all to do with OS administration, management, or software packaging, it's worth it.

If I could offer a little advice: The systemd man pages are useful as a reference, but are terrible to learn from. Part of this is because there are parts of systemd that everyone uses, and there are parts that almost nobody uses and it's hard to guess which these are at first. Also, the man pages are dry and long and quite often fail to describe things in a way that would make any sense whatsoever to someone who isn't already intimately familiar with systemd.

Most of my systemd learning came from random blog articles and of course the excellent Arch wiki.

ramses0 · 2 years ago
Also, it's 99% "not different than doing it via command line", and also comes with a little js terminal gui, uses native users + passwords, has some lightweight monitoring history, lets you browse a bunch of configuration that you usually would have to remember byzantine systemd command lines for... it's awesome for what it is!

I'm happy to run it (aka: have it installed) on all my little raspberry pi's, because sometimes I'm not at a terminal when I want to scope them out, and/or if I'm at "just a web browser", being able to "natively ssh into them" via a web server (and then run `curl ...etc...` from a "real" command prompt) is super helpful!

winter_blue · 2 years ago
Just want to clarify: there's still a server process running to serve the Cockpit web app's static HTML/JS assets, right?

Do you essentially mean that systemd socket activation is used basically only if/when the Cockpit web app end-user/client sends a REST/GraphQL/etc. request, for example to fetch logs?

sleepybrett · 2 years ago
I thought the cool thing was all the rookies who install this thing in a way that it's publicly accessible. How many stories have I heard about people who accidentally configure phpMyAdmin to be publicly accessible... Now you might not JUST leak your whole customer DB!
severino · 2 years ago
Interesting, I always thought socket activation meant defer launching a process until somebody tries to access it through the network, but... does it also finish the web server process (or whatever is used here) as well after the request is serviced?
diggan · 2 years ago
No, it doesn't automatically close the process. Two options I can think of: the application exits when it's done with its thing, or RuntimeMaxSec to make it close after a while.

systemd passes the socket on to the application, so I don't think it keeps any reference to it; it wouldn't be able to know when the socket closes.
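The second option is a single directive in the activated service's unit file (sketch; the name and path are placeholders):

```ini
[Service]
ExecStart=/usr/libexec/example-daemon
# Hard-stop after 10 minutes of runtime; the matching .socket unit keeps
# listening, so the next connection starts a fresh instance.
RuntimeMaxSec=600
```

Note that RuntimeMaxSec counts total runtime, not idle time, so true exit-on-idle still has to live in the application itself.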

notpushkin · 2 years ago
systemd-cgi :^)
wongarsu · 2 years ago
Everything old is new again.

The next big thing will be a web server where you don't need to use the command line to deploy your project, just sync your workspace folder and it will automatically execute the file matching the URL.

darkwater · 2 years ago
It was/is inetd[1] actually

[1] https://en.wikipedia.org/wiki/Inetd

codedokode · 2 years ago
Socket activation means that every application must be modified so that it can run both with activation and without. So you need to patch every application for compatibility with systemd. And if tomorrow there's an alternative system daemon, will you have to patch everything again?
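In practice the patch is usually just a few lines: check the environment variables systemd sets, adopt the passed file descriptor, and otherwise bind as before. A sketch of the sd_listen_fds(3) convention in Python (the function name is mine, not a real API):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd systemd passes, per sd_listen_fds(3)

def get_listen_socket(port: int = 8080) -> socket.socket:
    """Adopt a systemd-activated socket if one was passed, else bind our own."""
    if (os.environ.get("LISTEN_PID") == str(os.getpid())
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
        # Socket activation: systemd already bound and listened; inherit fd 3.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # Standalone: classic bind/listen (port 0 picks an ephemeral port).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s
```

libsystemd's sd_listen_fds() (or the python-systemd binding) implements the same check for you, but the protocol itself is just these two environment variables, so a hypothetical alternative daemon with a different handoff convention would indeed need another branch.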
leetrout · 2 years ago
There is value in "porcelain"[0]

I have watched startups fold for not pushing product development further into UI/UX with off-the-shelf backends. At one company I worked at, I showed how our backend (a completely custom container orchestrator) could be replaced in a weekend with AWS Lambda and ECS. But our UI/UX and workflow tools would take much, much longer. Yet we continued to waste money and time on "building a new raft-based cluster". In the meantime I was handed "add batch processing", and since we already used Go I just used Nomad under the hood and moved on.

I like working on teams that ship features not JUST tech for tech's sake.

[0] https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Po...

aitchnyu · 2 years ago
I hope all tools in this space have a giant banner warning when your disk space is running out. Checking for a full disk is somehow not common knowledge among those debugging servers.
snoman · 2 years ago
What’s up with that btw? Noticed the same myself.
Semaphor · 2 years ago
starfallg · 2 years ago
That's kinda expected as the project matures and more people know about it.
Semaphor · 2 years ago
I don’t post these as some kind of statement, but for people to check older discussions about a project or article.
fs0c13ty00 · 2 years ago
I can't imagine myself using this. One more port open, one more attack vector for those restless bots to scan for vulnerabilities, one more service I need to keep up to date. But I understand it would help Linux servers become more approachable, especially for people who are switching away from PHP-based shared hosting to a full-featured VPS, don't have much knowledge about servers, and want something similar to cPanel or DirectAdmin.
Cthulhu_ · 2 years ago
You don't have to open up a port, you can use a VPN or SSH tunnel (I don't know what the difference is) instead.
jelly1 · 2 years ago
With Cockpit Client this is not even required it will do the SSH magic for you.

https://flathub.org/apps/org.cockpit_project.CockpitClient

TwoNineFive · 2 years ago
I'm an actual RHCE. This thread has to be some big Red Hatter click farm or something. The artificial positivity is striking. Is Red Hat threatening to pull funding for this project or something? Just weird.

Cockpit is okay, but it's basically Red Hat's equivalent to the Windows Server Manager tool, and I have no doubt it was directly inspired by Server Manager. Its development and improvement over the years have been painfully slow.

Nobody who is comfortable with an ssh session uses Cockpit, except maybe to create new VMs, and even then all of these comments comparing it to Proxmox are just whack because it doesn't have a quarter of the features the Proxmox UI offers. The utility for managing VMs is a recent development and even then I still prefer the Virtual Machine Manager tool because I don't want to deal with the latency increase and other limitations of working through a browser.

But anyway, there's a ton of things you can't do with Cockpit, and never will be able to do. It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

Like happyweasel said, it's basically webmin for Red Hat.

It's kinda cool, but it's so old now and development has been so slow and it's been so over-hyped that I don't pay attention to it at all and I've never used it except what was required to get certified.

oli-g · 2 years ago
> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim

"Instagram filters are for people who don't know how to work with Photoshop layers, don't understand basic color blending operations, and who just want to swipe."

I mean, yes.

distcs · 2 years ago
I have no problem viewing pictures shared by those who don't understand basic color blending and just want to swipe.

But I may have a problem with people who can't do a bash for/while loop or understand pipe chaining commands be responsible for administrating my company servers.

I don't see how the comparison between adminstrating servers and sharing pictures on social media is a useful comparison.

rafaelmn · 2 years ago
Your comparison implies that the web UI is faster than SSH once you know these tools?

You could have godlike Photoshop skills and it will take orders of magnitude more effort to get results. With SSH and shell scripts you'll likely be faster than the web UI once you're skilled enough. And it's easy to automate.

Kiro · 2 years ago
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html

Karrot_Kream · 2 years ago
> Nobody who is comfortable with an ssh session uses Cockpit

> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

Lol! Are you ready to deploy an ssh-capable terminal emulator at all times? What's wrong with making simple tasks simple?

I run multiple Raspberry Pi cameras (with nicer camera modules) to watch the pets when the family travels. The RTSP camera streams run in a systemd unit on their boxes. I have some healthchecks, as other systemd units, to make sure packets are being streamed. Each camera gets its own private IP on a ZeroTier network I manage. Since Cockpit is only run on demand, it's a no-brainer to have around for administration.

Sometimes one of the cameras just starts streaming blank frames. I'd much rather manage this through the Cockpit web interface on my phone when I'm on vacation than find a keyboard to use SSH with and restart the camera stream unit. I mean sure, I could write a healthcheck which checks whether blank frames are being emitted, but it's just so much easier to restart it via Cockpit than it is to write that healthcheck, and it only ever happens a few times a year. Shrug.

arghwhat · 2 years ago
> Lol! Are you ready to deploy an ssh-capable terminal emulator at all times?

What terminal emulator isn't ssh-capable? Where would you not be able to open a terminal emulator? I am so confused.

> What's wrong with making simple tasks simple?

The limited tasks exposed by cockpit are also simple (or depending on the individual, simpler) in a terminal, but if you want a point-and-click UI for just a few things, go ahead.

That cockpit is very limited and seemingly has no future does not mean you can't like what it does now. Just might be worth considering if there are better-supported alternatives.

minimaul · 2 years ago
It's a very useful tool to manage libvirt + KVM remotely without trawling through poorly documented XML, it's accessible from any platform - even an iPad, and it requires next to no setup (basically install the package and add a cert and you're done).

I consider these big pluses, I use Cockpit on Debian on my servers that run VMs rather than something like Proxmox, because 1. it's much less invasive, 2. the machines tend to run other things too, like docker containers.

Have been using it for this since ~2019.

The stats views are useful too, but I wouldn't install it for that on its own.

edit: and honestly, there's not another good (maintained!) option that fits the niche of 'let me create libvirt VMs from a web browser on a single machine without taking over my whole system'.

dig1 · 2 years ago
I'd say it's a half-baked webmin. You can only use it with NetworkManager, and if you have an even remotely complex network setup for VMs, NetworkManager usually must be turned off, which makes Cockpit practically unusable. virt-manager [1] is way more powerful for those who like managing VMs with a GUI.

[1] https://virt-manager.org/

minimaul · 2 years ago
I've not noticed any dependencies on NetworkManager when using Cockpit for VMs on Debian? My servers configure their networking using Debian's usual ifupdown, and NetworkManager isn't even installed!
talent_deprived · 2 years ago
Agree, all of it, like the term Red Hatter. This cockpit-project thing came up on Reddit yesterday as well. It feels like the podman astroturfing that was so strong last year. It also feels like Red Hat hired some of Jetbrains' hyper PR astroturfers who troll the Java and webdev forums on various sites, extolling the extreme virtues of all Jetbrains' products.
cuddlyogre · 2 years ago
Your post implies there is something obviously better.

Genuine question.

What would that be? I'm always on the lookout for better tools.

notabee · 2 years ago
I tend to not ever interact with /r/linux for this reason. It always seems overrun with corporate mouthpieces. I would really love to see a platform take this problem seriously, but I think for most of them (even this one) that would threaten the money supply either directly or indirectly. I'm tired of the "just don't talk about it" decorum when it's such a huge problem.
KronisLV · 2 years ago
> Like happyweasel said, it's basically webmin for Red Hat.

Seems pretty cool to me, "meet your users where they are" and all that.

I actually wonder what other options for this sort of web based management panel there are out there, maybe more DEB oriented ones.

brancz · 2 years ago
I was the architect leading all things Observability at Red Hat until 3 years ago. There was an absurd amount of support for this project internally, and I never understood it either. There were huge numbers of customer support, sales, and engineering people who adored this thing, and I genuinely don't understand the appeal when we had next-level cluster-wide Observability supported on and off OpenShift.

Even being in a leadership position and basically competing within Red Hat against this, I found no answer to your question.

saynay · 2 years ago
I've been using it because we deliver application servers based on Red Hat / CentOS to customers that are unfamiliar with Linux. 99% of the time, they do not need to log in to the command-line for anything. When they do, Cockpit has been a lot easier for them to understand than navigating ssh and bash.
INTPenis · 2 years ago
A friend of mine actually found it very useful, and I think it helped him get into selfhosting on Fedora and RHEL. Now he mostly uses Ansible, but I remember hearing a lot about Cockpit during the start of his journey. He would use it to manage VMs and containers, and to get a better grasp of SELinux denials.
worksonmine · 2 years ago
> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

I hate the mouse so much that I've got a script to move it off-screen (it follows my focus), and I usually live in my terminal. But trying Fedora on one of my Pis I tried cockpit since it was installed by default and I'm surprised how much I like it.

I think the lack of features is a good thing: there's not much bloat, and it recommends extra packages I might like. While I love my terminal, Cockpit has been nice for a quick glance. So far the only thing I'm missing is support for doas.

Every tool does not have to be able to do all the things.

axus · 2 years ago
This is a good analogy. Maybe once every few years I'll set up Windows Servers. Being able to use the GUI and not keep a bunch of PowerShell on-hand means I can do it without help, and get on with my day.

Meanwhile I'm happier doing everything on command line on Linux, understanding and learning all its features has been worthwhile. But I can imagine some people just want a server set up and to get on with their day.

bonney_io · 2 years ago
> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

This sounds like the dream.

JAlexoid · 2 years ago
It's "meh" level of quality, though. Useful for a very small subset of tasks; I would avoid it if you're running a home server. (Cockpit's file server interface plugin is old and bad.)

I don't really know what you'd use it for? Maybe to do minor monitoring, but it's not great to admin.

mekster · 2 years ago
Exactly. No idea why RH is endorsing the project. There's no practical use. Listing a bunch of systemd services isn't going to be any more helpful than CLI output listing everything.