MarcusE1W · 2 years ago
Is this something where it would be helpful if the Linux (environment) developers worked together? Like the (graphics) kernel, GTK, KDE, Wayland, … people all in one room (or video conference) to discuss requirements and iron out one graphics architecture that is efficient and transparent?

I think it’s good that different graphics systems exist, but it feels unnecessary that every team has to make its own discoveries about how to handle the existing pieces.

If work were coordinated at least at the requirements and architecture level, then I think a lot of synergies could be achieved. After that everyone can implement the architecture the way that works best for their use case, but some common elements could be relied on.

BearOso · 2 years ago
That's exactly what happened. This is the original intent for subsurfaces. A bunch of Wayland developers got together and wrote the spec a long time ago. The only thing happening now is Gtk making use of them transparently in the toolkit.

Subsurfaces didn't have bug-free implementations for a while, so maybe some people avoided them. But I know some of us emulator programmers have been using them for output (especially because they can update asynchronously from the parent surface), and I think a couple media players do, too. It's not something that most applications really need.

jdub · 2 years ago
They do, it's just not hugely visible. Two great conferences where some of that work happened were linux.conf.au and the Linux Plumbers Conference.
dontlaugh · 2 years ago
That’s exactly how Wayland came to be.
neurostimulant · 2 years ago
Rounded corners seems like a feature that has unexpectedly high performance penalty but the ui designers refused to let it go.
DonHopkins · 2 years ago
It's not like crazy, out-of-control, avant-garde, different-thinking UI designers have ever gone and totally ruined the user interface of a simple video player before!

Interface Hall of Shame - QuickTime 4.0 Player (1999):

http://hallofshame.gp.co.at/qtime.htm

anthk · 2 years ago
And then you have smplayer (Multiplatform) and mpc-hc as a good balance between usability and features.

I might write some UI for mplayer/mpv on Motif some day; it's not rocket science, you basically talk to both over sockets by sending commands. Kinda like mpc/mpd.
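For reference, mpv's JSON IPC is pretty approachable. A rough Python sketch (the socket path is an arbitrary choice, and mpv has to be started with `--input-ipc-server` pointing at it):

```python
import json
import socket

def mpv_command(sock_path, *args):
    """Send one JSON IPC command to mpv and return the parsed reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        # mpv's IPC protocol is one JSON object per newline-terminated line.
        s.sendall(json.dumps({"command": list(args)}).encode() + b"\n")
        return json.loads(s.makefile().readline())

# e.g., after starting: mpv --input-ipc-server=/tmp/mpvsocket video.mkv
# mpv_command("/tmp/mpvsocket", "cycle", "pause")
# mpv_command("/tmp/mpvsocket", "get_property", "time-pos")
```

A Motif (or any other) frontend would just wrap buttons around calls like these.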

solarkraft · 2 years ago
It's something I as a user would also refuse to let go, given that the performance penalty is reasonably small (I think it is).
torginus · 2 years ago
I think the point is that it's not - rather than just copying a rectangular area to the screen, you have to go through the intermediate step of rendering everything to a temporary buffer, and compositing the results via a shader.
bsder · 2 years ago
Professional designers mostly cut their teeth on physical objects and physical objects almost never have sharp corners.

This then got driven into the ground with the "Fisher-Price GUI" that is the norm on mobile because you can't do anything with precision since you don't have a mouse.

I would actually really like to see a UI with just rectangles. Really. It's okay, designers. Take a deep breath and say: "GUIs aren't bound by the physical". BeOS and MacOS used to be very rectangular. Give us a nice nostalgia wave of fad design with rectangles, please.

Animations and drop shadows are another thing I'd like to see disappear.

twoodfin · 2 years ago
Rounded corners for windows have been in the Macintosh operating system since the beginning.

https://www.folklore.org/StoryView.py?story=Round_Rects_Are_...

anthk · 2 years ago
My main setup is cwm+uxterm+tmux+sacc+mpv/mocp/mupdf... and such.

But, from time to time, I use emwm with xfile and a bunch of light Motif apps, with some Solaris9-themed GTK2/3 ones when there's no Motif alternative. Usable, with contrast, and every menu is sticky, so good enough for a netbook.

bee_rider · 2 years ago
Is it possible that they are just the well-known representative example? I vaguely suspect that is the case, but I can’t think of the broader class they are an example of, haha.

The play button they show seems to be a good one, though. It is really nice to have it overlaid on the video.

DonHopkins · 2 years ago
Fortunately the Play button disappears when the video starts playing, so it has no effect on the frame rate!

Or instead of a triangular Play button, you could draw a big funny nose in some position and orientation, and the game would be to pause the video on a frame with somebody's face in it, with the nose in just the right spot.

I don't know why the vlc project is ignoring my prs.

play_ac · 2 years ago
If you're writing a video player or a game or something else that wants direct scan-out, then you can disable the round corners in your CSS.
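For example, something along these lines — a hypothetical snippet, since the exact selector depends on your widget tree and GTK version:

```css
/* Opt this window out of rounded corners so the compositor can use
   direct scan-out; the "window" selector is illustrative and may need
   to match your app's actual CSS node names. */
window {
  border-radius: 0;
}
```
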
jiehong · 2 years ago
I’m not sure I understand why an overlay allows partial offloading while rounding the corner of the video does not.

Couldn’t the rounded corners of a video also be an overlay?

I’m sure I’m missing something here, but the article does not explain that point.

audidude · 2 years ago
If you have the video extend to where the corners are rounded, you must use a "rounded clip" on the video on top of the shadow region (since they abut).

That means you have to power up the 3d part of the GPU to do that (because the renderer does it in shaders).

Whereas if you add some 9 pixels of black above/below to account for the rounded corner, there is no clipping of the video and you can use hardware scanout planes.

That's important because keeping the 3d part of the GPU turned off is a huge power savings. And the scanout plane can already scale for you to the correct size.

andyferris · 2 years ago
I think it’s that the _window_ has rounded corners, and you don’t want the content appearing outside the window.
audidude · 2 years ago
No, you can already be sure it's the right size. This has to do with what it takes to occlude the rounded area from the final display.
play_ac · 2 years ago
>Couldn’t the rounded corners of a video also be an overlay?

No because the clipping is done in the client after the content is drawn. The client doesn't have the full screen contents. To make it work with an overlay, the clipping would have to be moved to the server. There could be another extension that lets you pass an alpha mask texture to the server to use as a clip mask. But this doesn't exist (yet?)

audidude · 2 years ago
And even if it did, you can't do clip masks with scanout overlays. So you have to composite (and therefore take the hit of ramping up the 3d capabilities of the GPU).
orra · 2 years ago
I'd love to know the answer to that. This is fantastic work, but it'd be a shame for it to be scunnered by rounded corners.
phkahler · 2 years ago
Because the UX folks want what they want. I want my UI out of the way, including the corners of video and my CPU load.
diath · 2 years ago
I wonder if there are plans to make it work with X11 in the future, I've yet to see the benefit of trying to switch to Wayland on my desktop, it just doesn't work as-is the way my 8 year old setup works.
PlutoIsAPlanet · 2 years ago
This is one of the benefits of the Wayland protocol over X, being able to do this kind of thing relatively straightforwardly.

Once support for hardware planes becomes more common in Wayland compositors, this can be tied to ultimately allow no-copy rendering to the display for non-fullscreen applications, which for video playback (incl. likes of Youtube) equals to reduced CPU & GPU usage and less power draw, as well as reduced latency.

AshamedCaptain · 2 years ago
> This is one of the benefits of the Wayland protocol over X

What.

The original design of X actually encouraged a separate surface / Window for each single widget on your UI. This was actually removed in Gtk+3 ("windowless widgets"). And now they are bringing it back just for wayland ("subsurfaces"). As far as I can read, it is practically the same concept.

maccard · 2 years ago
> I've yet to see the benefit of trying to switch to Wayland on my desktop

how about Graphics Offload?

diath · 2 years ago
This feature would be nice-to-have but is not impactful enough (at least to me) to outweigh the cons of having to switch to Wayland, which would include migrating my DE and getting accustomed to it as well as looking for replacement applications for these that do not work properly with Wayland (most notably ones that deal with global keyboard hooks). Admittedly I have never tried XWayland which I think could potentially solve some of these issues.
AshamedCaptain · 2 years ago
Frankly, it was X11 which introduced "Graphics Offload" in the first place, with stuff like XV, chroma keying, and hardware overlays. Then compositors came and we moved to texture_from_surface extensions and uploading things into GPUs. This is just the eternal wheel of reinventing things in computing (TM) doing yet another iteration and unlikely to give any tangible benefits over the situation from decades ago.
kaba0 · 2 years ago
There are plenty of wheel reinventions in IT, but let’s not pretend that modern graphics are anything like they used to be. We have 8k@120Hz screens now; the number of pixels that have to be displayed in a short amount of time is staggering.
play_ac · 2 years ago
No, nothing like this exists in X11. Xorg still doesn't really have support for non-RGB surfaces. DRI3 gets you part of the way there for attaching GPU buffers but the way surfaces work would have to be overhauled to work more like Wayland, where they can be any format supported by the GPU. There isn't any incentive to implement this in X11 either because X11 is supposed to work over the network and none of this stuff would.

Yes, you're technically right that this would have been possible years ago but it wasn't actually ever done, because X11 never had the ability to do it at the same time as using compositing.

audidude · 2 years ago
This would require protocol changes for X11 at best, and nobody is adding new protocols. Especially when nobody does Drawable of Drawables anymore and all use client-side drawing with Xshm.

You need to dynamically change stacking of subsurfaces on a per-frame basis when doing the CRTC.

AshamedCaptain · 2 years ago
I really don't see why it would need a new protocol. You can change stacking of "subsurfaces" in the traditional X11 fashion and you can most definitely do "drawables of drawables". At the very least I'd bet most clients still create a separate window for video content.

I agree though it would require a lot of changes to the server and no one is in the mood (like, dynamically deciding whether to composite this window or push it to an Xv port or hardware plane? practically inconceivable in the current graphics stack, albeit it is not a technical X limitation per se). This entire feature is also going to be pretty pointless in Wayland desktop space either way because no one is in the mood either -- your dmabufs are going to end up in the GPU anyway for the foreseeable future, just because of the complexity of liftoff, variability of GPUs, and the like.

knocte · 2 years ago
I doubt they have the energy to backport bleeding edge tech.
kelnos · 2 years ago
I would very much doubt it. This would likely require work on Xorg itself (a new protocol extension, maybe; I don't believe X11 supports anything but RGB, [+A, with XRender] for windows, and you'd probably need YUV support for this to be useful), which no one seems to care to do. And the GTK developers seem to see their X11 windowing backend as legacy code that they want to remove as soon as they can do so without getting too many complaints.
hurryer · 2 years ago
No screen tearing is a major benefit of using a compositor.
mrob · 2 years ago
And screen tearing is a major benefit of not using a compositor. There's an unavoidable tradeoff between image quality and latency. Neither is objectively better than the other. Xorg has the unique advantage that you can easily switch between them by changing the TearFree setting with xrandr.
mnd999 · 2 years ago
If it worked exactly the same there would indeed be no benefit. If you’re happy with that you have then there’s no reason to switch.
aktuel · 2 years ago
I am sorry to tell you that X11 is completely unmaintained by now. So the chances of that happening are zero.
NGRhodes · 2 years ago
FYI 21.1.9 was released less than a month ago (https://lists.x.org/archives/xorg/2023-October/061515.html), they are still fixing bugs.
ng55QPSK · 2 years ago
Is the same infrastructure available in Windows and MacOS?
knocte · 2 years ago
From the article:

> What are the limitations?

> At the moment, graphics offload will only work with Wayland on Linux. There is some hope that we may be able to implement similar things on MacOS, but for now, this is Wayland-only. It also depends on the content being in dmabufs.

PlutoIsAPlanet · 2 years ago
macOS supports similar things in its own native stack, but GTK doesn't make use of it.
jamesfmilne · 2 years ago
macOS has IOSurface [0], so it can be done there too. It would require someone to implement it for GTK.

[0] https://developer.apple.com/documentation/iosurface

audidude · 2 years ago
When I wrote the macOS backend and GL renderer I made them use IOSurface already. So it's really a matter of setting up CALayer automatically the same way that we do it on Linux.

I don't really have time for that though, I only wrote the macOS port because I had some extra holiday hacking time.

torginus · 2 years ago
On Windows and DirectX, you have the concept of Shared Handles, which are essentially handles you can pass across process boundaries. It also comes with a mutex mechanism to signal who is using the resource at the moment. Fun fact - Windows at the kernel level works with the concept of 'objects', which can be file handles, window handles, threads, mutexes, or in this case, textures, which are reference counted. Sharing a particular texture is just exposing the handle to multiple processes.

A bit of reading if you are interested:

https://learn.microsoft.com/en-us/windows/win32/direct3darti...

diath · 2 years ago
The last paragraph says:

> At the moment, graphics offload will only work with Wayland on Linux. There is some hope that we may be able to implement similar things on MacOS, but for now, this is Wayland-only. It also depends on the content being in dmabufs.

pjmlp · 2 years ago
Nope, it is yet another step making Gtk only relevant for Linux development.
jdub · 2 years ago
Supporting a feature on one platform does not make a toolkit less relevant or practical on another platform.
andersa · 2 years ago
Has it been relevant for something else before?
ori_b · 2 years ago
The thing that's always felt slow to me in GTK was resizing windows, not getting pixels to the screen. I'm wondering if adding all these composited surfaces adds a cost when resizing the windows and their associated out of process surfaces.
rollcat · 2 years ago
More likely it removes costs. This is very specifically an optimization.
ahartmetz · 2 years ago
It is strange that the article doesn't compare and contrast to full-screen direct scanout, which most X11 and presumably Wayland compositors implement, e.g. KDE's kwin-wayland since 2021: https://invent.kde.org/plasma/kwin/-/merge_requests/502

Maybe that is because full-screen direct scanout doesn't take much (if anything) in a toolkit, it's almost purely a compositor feature.

kaba0 · 2 years ago
Is there a significant difference? Hardware planes are basically that, just optionally not full-screen.
ahartmetz · 2 years ago
Fullscreen direct scanout doesn't require (probably tricky) coordination to blend the graphical outputs of multiple processes. How that coordination works is the interesting technical question.
matheusmoreira · 2 years ago
> A dmabuf is a memory buffer in kernel space that is identified by a file descriptor.

> The idea is that you don’t have to copy lots of pixel data around, and instead just pass a file descriptor between kernel subsystems.

So like sendfile for graphics?

https://www.man7.org/linux/man-pages/man2/sendfile.2.html
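Pretty much — the common idea is that only a file descriptor crosses the boundary, never the data itself. An illustrative sketch of that fd-passing mechanism using SCM_RIGHTS over a Unix socket (an ordinary pipe stands in for the dmabuf here):

```python
import os
import socket

# Two endpoints standing in for two processes (e.g. app and compositor).
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

# A pipe stands in for the dmabuf: a kernel object holding "pixel" data.
r, w = os.pipe()
os.write(w, b"pixels")

# Pass the read end across the socket as SCM_RIGHTS ancillary data.
# Only the descriptor travels; the kernel buffer is never copied.
socket.send_fds(parent, [b"x"], [r])
msg, fds, flags, addr = socket.recv_fds(child, 1, 1)

# The receiver can now read the shared object directly.
data = os.read(fds[0], 6)
print(data)  # b'pixels'
```

(`socket.send_fds`/`recv_fds` need Python 3.9+; dmabufs add GPU-specific plumbing on top, but the fd-sharing story is the same.)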

It's a pretty awesome system call. We should have more of those.