What irritates me with regard to Wayland is the assumption that all rendering is local. The great thing about X was that on a LAN you could bang up an ssh session somewhere, carry across your $DISPLAY, and have the exact same application running somewhere else but rendered to your screen. It doesn't sound like much, but there were times when it was incredibly useful.
Thin terminals brought this to the logical conclusion of not needing a desktop at all, but unfortunately X's protocol is too chatty and sensitive to latency to really shine across a WAN. These days you can work around all of that by opening a session via Guacamole or the like but that's still one tab in a browser per graphical session. There's no just using SSH in a for loop and opening up a raft of xterms to a bunch of machines in Wayland, AFAIK.
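The pattern I mean is the classic one-liner, roughly like this (hostnames are made up):

    for host in build1 build2 build3; do ssh -X "$host" xterm & done

With X11 forwarding enabled on the remote sshd, each xterm runs on its own host but pops up on your local display.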
From the Wayland FAQ:
> Is Wayland network transparent / does it support remote rendering?
> No, that is outside the scope of Wayland. To support remote rendering you need to define a rendering API, which is something I've been very careful to avoid doing. The reason Wayland is so simple and feasible at all is that I'm sidestepping this big task and pushing it to the clients. It's an interesting challenge, a very big task and it's hard to get right, but essentially orthogonal to what Wayland tries to achieve.
To be honest, I think remote rendering is unnecessary and is mismatched for actual application workloads. The amount of data and IO needed by the renderer in many applications outstrips that needed to send the rendered result to a display, i.e. the textures, models, or font curves loaded into GPU-local RAM and then resampled or reinterpreted to draw frames. This stuff is the heart of a graphical application, and doesn't really make sense to split across a network.
When the application host is really resource-starved and does not need to supply much content to a renderer, we are better off using more abstract network interfaces. On one hand, HTML+CSS+js are the remote rendering protocol to the browser which becomes the remote renderer. On the other hand, sensor/data interfaces become the remote data source for a GUI application that can be hosted somewhere more appropriate than on the resource-starved embedded device.
What I will miss from SSH X-forwarding is not remote rendering. It is per-window remote display. I would be perfectly happy with remote applications doing their rendering within their own host, and just sending their window contents to my display server, where my local compositor puts the window content onto my screen. Protocols more like VNC or RDP could be used for each of these windows, adaptively switching between damage-based blitting, lossless and lossy image codecs, or even streaming video codecs. What is needed is the remote window-manager and display protocol to securely negotiate and multiplex multiple windows opening and closing rather than one whole desktop.
What I won't miss from SSH X-forwarding is window and application lifecycle tied to the network connection. Let me disconnect and reconnect to my remote application and its windows. Let me share such a connection to have the same windows displayed on my desktop and my laptop, or to window-share with a coworker down the hall...
That's pretty typical. "I don't need it, so therefore nobody should need it, so therefore I will remove it".
Unix is different things to different people, and removing the network capability from X is a step back, not a step forward. There are a whole pile of neat things that you can do with networked graphics; so much, in fact, that Plan 9, in many ways a successor of Unix, used that theme as the core of many of the system services. All of them work locally as well as remotely.
> I think remote rendering is unnecessary and is mismatched for actual application workloads
Nowadays, perhaps, but that's software bloat in action: software getting slower faster than computers gain in speed. In the last millennium, if one had a login one could connect to the computer in Daresbury, UK that had a copy of the Cambridge Structural Database and search crystal structures. Many people at British universities did that. It would render graphics on an X server. Once I tried it on holiday on the East Coast of the US, and it wasn't a painful experience at all. Then again, it was the old version that still had PLUTO, which was a program from the computing stone ages.
What do you think of products like Google Stadia? https://store.google.com/us/product/stadia_founders_edition?...
If X was maintainable then this would be an extremely reasonable complaint. However, the reason that Wayland is about to get rammed down our throats is that the only developers willing to work on X.Org are those who are paid to do so, and they are all working as hard as they can to deprecate it for something that fits their needs.
The idea that a desktop compositor should be network-aware by default is not really a defensible design decision. Why should the compositor be responsible for sending a compressed stream of OpenGL instructions over a network? This is an activity that is Someone Else's responsibility; making a graphics developer responsible for decisions about latency and compression is just going to reduce the number of people who can understand the codebase.
The reason for X was that in 2008 the driver manufacturers wouldn't support anything else. AMD and Intel have open source drivers now, so the reason isn't good enough to justify X as a near universal standard.
At 18m30, he mentions that everyone uses SHM and DRI2, which don't work over the network. He mentions X isn't "network transparent" anymore, it's "network capable". I'm not 100% sure what the difference is tbh.
At 40m10, he mentions that rendering on the "server" then compressing it to transfer over the network to display locally is basically VNC.
That's interesting. I didn't realize the SHM extension was used that widely. Do most GUI toolkits use it under the hood somehow, in order to render more efficiently?
When I first realized you could do this I was blown away. Just set `DISPLAY` in your environment to an X server (any X server, anywhere) and any applications you run will appear there.
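For anyone who hasn't tried it, a minimal example (the hostname is hypothetical), assuming the X server on `mybox` accepts your connection:

    DISPLAY=mybox:0 xclock &

The clock process runs on your machine, but the window appears on mybox's screen.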
Of course, there are immense problems with this. For one thing, it's usually crippled by latency. Even if the client X application is in a Docker container on the same machine, applications are only halfway usable. Use a local Ethernet network, and things start to fall apart. Going over the Internet isn't really worth trying.
The other major problem is that there's no security. If you have an open X server, anyone can connect to it and run random applications on your display. But in addition to opening you up to annoyance from cyber-hooligans, allowing an X application to run gives it access to all your keystrokes, which may not be something you want. Beyond that, all the X data is going out over the network unencrypted. There's just no security model here.
That said, it would be cool if there were some kind of plugin for Wayland that would let you use this feature (or even something better, if someone wanted to implement that). I've wondered---can you set up XWayland to be an open X server and just let random X applications connect to it? It would be interesting to hear from Wayland experts on this point.
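For what it's worth, the old-school (and wildly insecure) way to run a plain X server open to the network looked roughly like this; whether XWayland accepts the same switches, I don't know, so treat it as a sketch:

    X :1 -listen tcp &    # modern servers default to -nolisten tcp
    DISPLAY=:1 xhost +    # disable access control entirely

After that, anything on the network could run `DISPLAY=yourhost:1 xterm` against it.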
That said:
- X is burdened by this abstraction layer for networking
- Since people are using ssh as an app to do it, a case could be made that the networking is not really necessary and a dedicated graphics specific app might work just as well.
It irks me too, but I can't really argue with the decision. X11's model of remote rendering is flawed; try starting up Firefox over a tunneled SSH connection sometime.
In order to justify network rendering at the windowing system level, I think you need a model that could reasonably give good performance to both local and remote applications without application developers having to care.
Sun's early windowing system was NeWS, which had a PostScript interpreter running on the display terminal. I think it was the better idea in the long run, but at the time X was easier to work with and wasn't patent encumbered...
That's what is generally bothering me about this whole Wayland thing: it does a (small) fraction of the things that Xorg does and calls itself a better alternative.
It doesn't work the same. On Ubuntu I have found that you have to log in first on the machine so that the VNC server starts before you can use a VNC client to connect. It kind of sucks to have to go to the machine to log in each time it is restarted and leave the machine logged in.
We have a better remote terminal client today than anything X ever achieved: the web browser.
It's not as good as a local display for resource intensive stuff, e.g. games, but X never was either (it was abysmal, in fact). It's a hard problem, but given that price/performance of computational power has mostly run ahead of network speed for most of computing history, and latency is a (mostly) insoluble problem for long distance networking, Web Assembly is often a better solution for pushing even resource intensive applications out to the edges.
I have a (IMO) neat little thing that runs a docker image of my "ideal" workstation and then X-forwards the GUI to the host machine - this way I can have the very same immutable workstation on different machines.
https://github.com/mikadosoftware/workstation
It totally depends on X. I don't even know how to run wayland.
I am surprised that X was / is so under funded and under supported.
FYI, there are two ways of using X across the network. The X forwarding via an ssh tunnel that you refer to is technically rendered on your machine, the one you forward the DISPLAY from. The X client on the remote side is a 'client' and connects to your forwarded (local) display. The rendering happens in that (local) display server, which dumps the result into your graphics device framebuffer/onto your screen. The instructions, though, come from your client on that remote machine, incl. any drawing primitives like drawing a line etc. But still, the actual resources that client uses are local to your X server.
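A minimal sketch of that first mode (names generic):

    local$ ssh -X user@remote
    remote$ xeyes &    # the client runs on 'remote' but draws via your local X server

All the drawing requests travel back through the ssh tunnel to the X server sitting on your machine.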
The other approach is (and that was like in the older days), when you avoid any tunneling but let X clients connect to open X servers across the network, like if your X display server listens on all NICs for clients. This setup is still technically possible but pretty much discouraged for security concerns. But with this setup you could also run a local xterm or whatever X client against a remote X server (like your neighbours ;), assumed it would accept your connection.
Like: DISPLAY=yourneighbour.host:0 xterm
BR, garbeam
Uggh. RedHat has a long history now of replacing working stuff with half baked, regressed-by-design replacements.
I’d really like to have a working Unix environment, and Linux stopped being that for me years ago.
More conservative alternative distros keep bit rotting due to upstream (usually RedHat induced) breakage. (SystemD, Wayland, DBUS, PulseAudio, Gnome 3, kernel API brain damage, etc)
Does anyone know of any alternatives? Is there a distro/foundation that will actually support open source Unix moving forward (BSD, maybe?)
Alpine Linux
Void Linux (+ Nix package manager)
9front (militant esoterica)
I use OpenBSD on my Thinkpad T410i and Void on an older AMD box. You can use vmm to run Alpine images to run docker apps or Linux stuff. My only gripe is Firefox runs slowish on OpenBSD, though Chrome doesn't suffer as such. I use both, and only Chrome if FF can't handle the site I'm on.
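Roughly like this, though I'm going from memory on the vmctl flags (and the image name is made up), so treat it as a sketch:

    vmctl start -c -m 1G -L -i 1 -d alpine.qcow2 alpine    # -c attaches the serial console

From there you install and run your Linux tooling inside the guest.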
9front's vmx is mature enough to run an Alpine VM whose Linux applications connect as X clients to the 9front side, so the VM runs without an X server of its own and the Linux applications run in individual windows, giving you a seamless 9front/Linux desktop that can run a full browser. This route is only for the truly enlightened ones.
Unix philosophy isn't about keeping the cruft around but building useful systems using simple tools. The idea is: do not duplicate functionality if it already exists. I see so many programs attempt to sidestep dependency hell by baking in functionality. This is the overall community's fault, and the fault of modern software development, which somehow lost the art of pragmatism. The simple part has been long lost in a sea of complexity driven by ignorance and fueled by greed.
OpenBSD is a modern, innovative Operating System that has retained classic Unix sensibilities. I also think that, even as a lover of Linux, a Linux monoculture is bad for the world.
The simplicity of OpenBSD makes it much easier to learn deeply than Linux. Where Linux has tended to create magical veneers over complexity, OpenBSD has tended to simplify the underlying systems.
Also, the OpenBSD project is shouldering the weight of the world and deserves more credit for this. So much of OpenBSD has made it into other parts of the Internet ecosystem.
OpenBSD isn't performant enough to be a daily driver on a laptop, unless your laptop spends all its time plugged in.
The battery life on my X220 was a solid hour less under OpenBSD than under Debian, and it ran some 10 degrees hotter. Yes, this is post-apmd improvements.
I want to like OpenBSD but code correctness just isn't enough for me.
No. There are the xBSD derivatives which continue to aspire to being "unix" like rather than "windows" like.
This is probably controversial, but my guess here is this: the bulk of the developer cohort doing most of the work in Linux userland these days cut their teeth on Windows; it is their internal standard of a "good developer OS experience", so when they build new things, they have that model as "good" in their mind.
The challenge continues to be software. If you are using the packages in FreeBSD (or other xBSD derivatives) it is not uncommon to see "package xyz does not have an active maintainer so it may break, if you're interested in becoming the maintainer go here ..." Not enough people to cover all the things that are pumped into the Linux ecosystem every day.
And it is a bit too much work to re-create the Sun Microsystems of old, where the kernel and the core UNIX userland tools from BSD were combined with a bespoke window system, compiler suite, and hardware-specific system libraries to make a product. Granted, it was fewer than 1200 software people all told when SunOS 4.x came out, but they were all working for market salaries on the project full time. That's like 10 - 20 million dollars a year, not something you're going to do "for free in your spare time."
I'm not really sure how it resolves over time.
Not so much controversial as I don't understand what it means. Maybe you could give an example of what that would be?
Sure, FreeBSD. The whole "braindamage" is still available, of course, but you can use anything else if you wish. Everything is conveniently available as both ports and packages, so you don't need to mess with installing things by hand.
I've been experimenting with OpenBSD, and it's been pretty nice. The code is beautiful, and a lot of stuff just works. Not sure how it will behave on a laptop though.
I bought a ThinkPad specifically to install OpenBSD. It's a great combination. I find OpenBSD much more straight forward and coherent than Linux. There are some bells and whistles that I wish it had, but in terms of the foundation it is wonderful.
The only reason hybrid kernels and Linux are popular is the stable driver API; otherwise recompilation is necessary (which was new to me when I discovered it).
OpenBSD is suited for simple workstations or as a tablet substitute, but I don't think the OpenBSD desktop will quite take off, unless driver developers return to releasing obfuscated code.
I think it would be great on the right laptop. Most of the OpenBSD developers use thinkpads, from what I can tell.
It wouldn’t fly at work for me at the moment, and I don’t do much computing at home, so I played around with it a bit on a laptop, but didn’t switch. The results were encouraging, though.
My biggest beef with it right now is the upgrade story, but that’s because I need to update my router (through a serial console!) and am afraid of what happens if I brick it.
OBSD works fine on a laptop
I enjoy using arch linux.
The thing is, it takes a lot of resources to do this well and there is not a lot of financial incentive to invest in this area. Canonical gave it a really good shot with Unity and Mir, and received lots of criticism for doing so. If we're going to criticize anyone who does something differently, then we need to accept that the project sponsored by Red Hat is the one that everyone will end up using.
If you want the most BSD-like distro, it's probably gotta be Slackware. It changes little between releases (just newer versions of the software it ships with mostly).
As someone who's used OpenBSD for more than a decade.. no, I do not find Slackware to be BSD-like at all.
I’m worried that, with core dependencies like X11 being abandoned, the writing is on the wall for the hold-out distributions (I run Devuan and OpenBSD mostly).
I get that it doesn’t make business sense for RedHat to maintain parts of the open source ecosystem that will never drive consulting or support revenue, but that doesn’t change the fact that I want a computing environment that just works (including software that’s been stable for a decade, and isn’t constantly ported to the new shiny).
I agree things are getting way less modular, and lots of the components you mention are becoming hard dependencies, which is not good.
Slackware's release cycle is way too slow and it gets slower with each release. The current stable version (14.2) was released three years ago and it doesn't support modern hardware. Additionally, due to a bug in the installer, it's impossible to install it on NVMe drives.
Also, I don't understand why it ships with KDE4 instead of Plasma 5. Even in -current it's still the case.
I applaud the design and ideas in GuixSD, but the lack of support for non-free software is a deal-breaker for me. I wish they'd relax this requirement and added an optional official non-free repository with a limited selection of software. A fork with support for non-free repositories would also be great.
So far I've seen individual projects that have done this, but seem abandoned[1] or overwhelming[2] for a Lisp newcomer.
[1]: https://github.com/guix-users/guix-nonfree
[2]: https://github.com/alezost/guix-config
Just to clarify, these complaints are primarily about Fedora specifically -- at least in my experience RHEL (and CentOS) have been very solid. I just wish that Fedora had a model similar to Ubuntu, where periodic releases are targeted for longer term support (i.e., get patches for 2 - 3 years), and all the potentially breaking items go into the inbetween releases.
> I’d really like to have a working Unix environment, and Linux stopped being that for me years ago.
That ship sailed very, very long ago. The moment the GNU/Linux community decided to implement their own desktops and not copy the standard Unix desktop, CDE, the die was cast. From then on we lost any possibility of having a unified standard desktop, and instead we have two competing GUI toolkits, neither of which can truly be called native.
I am still optimistic that perhaps Wayland will raise the bar for making and maintaining your own DE so high that due to practical considerations we will end up with just one de-facto desktop API.
It's about time. A 10+ year transition from X.org to Wayland is long enough.
X.org is open source; it does not belong to Red Hat. If it's a truly desirable technology, anyone else is welcome to contribute to its maintenance or to fork it.
When no one else wants to maintain it in favor of newer technologies, that's a good sign it's ready for retirement.
Two sort of non-obvious humble things have eliminated my need for X:
- ssh
- emacs + tramp
and there's always been VNC, but all I actually ever used that for was to get to PCs anyway.
In those ten years, clusters in remote datacenters went mainstream, but Wayland has never delivered network transparency except by embedding X. Where are these users who are still running everything on one computer under their desk?
I think the demand for remote GUI apps went down, for several reasons:
- Web apps got better and more common
- Windows (and MS SQL Server etc), which are some of the big OS-level reasons for using remote GUI management tools, has improved its command-line management story and added UI-less server editions
- Network improvements (more bandwidth, more clouds and POPs) made existing 'remote desktop' protocols more tolerable on the Internet, all the way back to the venerable VNC and RDP
What kind of remote UI app/workload do you see as common today?
Do you commonly encounter people who use networked X for scenarios that would not be better solved by other protocols? Cause I have basically never seen that in the last decade.
Not true. Chrome OS uses Wayland and has about 40% of the Linux desktop market (source: https://www.statista.com/statistics/218089/global-market-sha... ; 1.1% Chrome OS vs 1.6% Linux, so 1.1 / (1.1 + 1.6) ≈ 40% of the combined share). Given trends, Chrome OS alone could make Wayland the dominant Linux display server in a couple of years.
Sad. It's probably unpopular, but Wayland is not ready yet IMHO and is lacking on a conceptual level. Yet another few years/months until most bugs are fixed and the still-broken functionality is restored...
This just means that development of _new_ features and research should take place on Wayland, where it belongs. From what I've read, X has been a long evolution into a hodgepodge of historical but effectively dead interfaces with newer ones crammed alongside... it doesn't make sense to continue the tradition of cramming more features into X, but that doesn't stop people developing new things on top of it while Wayland matures.
You mean _parity_ features? SSHing into a server and launching a small GUI tool is still dead. It was killed by Wayland intentionally and willfully as a fundamental design idea. In fact, the feature of doing that is considered "not part of wayland" by Wayland designers.
The root problem is that wayland replaced half of x.org and left the rest for someone else to figure out.
It's sad that maintenance mode != dead. They should have put X-Windows out of its misery decades ago!
What's actually sad is that its replacement, Wayland, didn't learn any of the lessons of NeWS (or what we now call AJAX) and Emacs.
They could at least throw a JavaScript engine in as an afterthought. But it's WAY too late to actually design the entire thing AROUND an extension language, like NeWS and Emacs and TCL/Tk, which is how it should have been in the first place.
https://en.wikipedia.org/wiki/NeWS
NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:
+ used PostScript code instead of JavaScript for programming.
+ used PostScript graphics instead of DHTML and CSS for rendering.
+ used PostScript data instead of XML and JSON for data representation.
Designing a system around an extension language from day one is a WHOLE lot better than nailing an extension language onto the side of something that was designed without one (and thus suffers from Greenspun's tenth rule). This isn't rocket surgery, people.
https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
>Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
https://medium.com/@donhopkins/the-x-windows-disaster-128d39...
>If the designers of X-Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles — but you’d be able to shift gears with your car stereo. Useful feature, that. - Marcus J. Ranum, Digital Equipment Corporation
Wayland is literally never going to be ready unless it’s forced to be. How long has Nvidia promised to support it now, and still unless you’re using Nouveau it’s basically not usable on Nvidia cards.
Is that still the case? I'm running FC29 with the nVidia closed-source drivers and I haven't had issues. (I have had issues on a laptop with switchable graphics, though, and Nouveau was the solution there.)
Last time I checked screen sharing applications couldn't work with Wayland (meet, slack, Skype, etc). All of them work only with X11. I need them to work with my customers so I can't use Wayland until this problem is solved.
I'm wondering whether, if all the resources that went into Wayland for the past TEN years had been directed at slowly evolving X11 in the same direction, we would be in a worse or better situation.
It seems to me very similar to the Python2/Python3 debacle.
What pisses me off more than anything else is having to replace a lot of tools that are working perfectly with Xorg but are not compatible with Wayland.
A few examples:
* Window manager (of course)
* taking screenshots
* setting desktop background
* intercepting multimedia keys (e.g. volume handling)
* using redshift to set the screen temperature
* ...
Yes, I know that I'm using an unconventional setup, but the freedom to do that was one of the reasons that attracted me to a unix environment. Redhat is working very hard to change all of that for a "year of the linux Desktop" that (in my opinion) will never come.
>I'm wondering whether, if all the resources that went into Wayland for the past TEN years had been directed at slowly evolving X11 in the same direction, we would be in a worse or better situation. It seems to me very similar to the Python2/Python3 debacle.
Are you arguing that a decade of constant breaking changes to a huge number, if not the majority, of desktop linux programs, and the accompanying maintenance challenges (as versions of window managers, Gtk and the like would all be tightly coupled to specific Xorg versions) would have been a better approach than a rewrite + compatibility shim (XWayland) that allows for a fair amount of backwards compatibility?
Needless to say, I disagree.
Pretty annoying IMO. XMonad works exactly how I want it to.
With the coming IBM transition, I'm expecting Red Hat itself to go into "Hard Maintenance Mode" fairly quickly. Another comment here estimates 10 years of RH support, but I'm confident that X.org will outlast RH.
If Wayland gets abandoned it will have to be re-invented. It solves real problems with X11. That doesn't mean Wayland itself doesn't have problems, though. Over time they will get solved. If people or companies need them solved sooner, I'm sure the Wayland devs will welcome any contributions.
Oh yeah! Look at IBM's previous acquisitions especially over the last decade. They're one of the worst tech employers too, for anyone remotely senior. I give it a year max before the 1st RH layoff wave (in IBMese, a "Resource Action" or RA).
The Wayland developers have a security model [-1] that is hostile to "power-users" (those who like to use the Unix (or other OS) programming environment to its full potential) and the visually impaired (e.g., blind). See [0] [1] [2] to see what features I am talking about.
It is possible to implement some of those features on a per-compositor basis, but the result of that will be graphical API fragmentation, as programs that interact with GUIs will need to have separate code for each compositor. And the work is not done even for Gnome (more precisely Gnome's Wayland compositor and the Gnome applications that use it) yet.
On the other hand one could say, e.g., "Why not make a compositor accessibility protocol on top of Wayland?" The end result of that, it is easy to guess, would be something worse than X Windows (because of even more layers of abstraction, and possibly even more incompatible standards/APIs/protocols), which the Wayland people were supposedly trying to escape from.
Edit: Another thing that makes Wayland (at least without an extension ...) unsuitable to replace X Windows is forced compositing. This means unavoidable double buffering and thus worse video performance (especially noticeable for interactive stuff like video games).
[-1] I prefer calling it security theater, because it does not bring any real security improvement in practice.
[0] https://news.ycombinator.com/item?id=20308011
[1] https://wiki.gnome.org/Accessibility/Wayland
[2] https://www.freedesktop.org/wiki/Accessibility/Wayland/
The X Window System has had a great run; a whole generation. And it's still going, even if it's trailing edge tech now. Thanks Bob Scheifler, Jim Gettys, and the whole crew.