On Mac I finally got the example to run: I had to brew install glew and glfw3 and change the Makefile to point at the include and lib paths. A lot of the interactions don't feel native. For example, tap-to-click doesn't work; I have to do hard clicks on my Mac. Dragging also doesn't work like in other apps: it stops if I go outside the boundaries of the element. Overall this is pretty epic, and I wonder whether serious alternatives to Xcode and JS-based wrappers like Electron are possible.
The tap problem depends on how the input integration is done; I've been running into similar problems in Dear ImGui (and Nuklear too). You need to implement your own little input event queue which tracks button up- and down-events and still creates a "click" even if both happen in the same frame. TBF, it's a common oversight when creating an input-polling layer over event-based input, and it usually only shows up with touchpads, because physical buttons are usually too slow to trigger up and down in the same frame.
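A minimal sketch of such a queue/latch, with illustrative names (not Dear ImGui's or Nuklear's actual API):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative event codes; not from any real library. */
enum { EV_BUTTON_DOWN, EV_BUTTON_UP };

typedef struct {
    bool down;      /* state as of the last event */
    bool pressed;   /* latched: went down at some point this frame */
    bool released;  /* latched: went up at some point this frame */
} ButtonState;

/* Feed every OS event in; don't just sample the button once per frame.
 * Because pressed/released are latched, a touchpad tap whose down- and
 * up-events both arrive within a single frame is not lost. */
static void button_handle_event(ButtonState *b, int ev) {
    if (ev == EV_BUTTON_DOWN) { b->down = true;  b->pressed  = true; }
    else                      { b->down = false; b->released = true; }
}

/* A click completed this frame (works for same-frame taps too). */
static bool button_clicked(const ButtonState *b) {
    return b->released;
}

/* Clear the latches at the end of each frame; `down` persists. */
static void button_new_frame(ButtonState *b) {
    b->pressed  = false;
    b->released = false;
}
```

The key point is that the widget code polls `button_clicked()` once per frame, but the flags were accumulated from events, so nothing is dropped between polls.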
Check the sokol-headers integration example here, this should properly detect short touchpad taps:
https://floooh.github.io/sokol-html5/nuklear-sapp.html
The "drag doesn't work outside widget boundaries" seems to be a mix of Nuklear-specific and platform-specific problems.
It seems that scrollbars don't lose input when the mouse is not over them, but slider widgets do (that would be a Nuklear-specific problem).
For the problem that an app might lose mouse input when the mouse moves outside the window boundary, this must be handled differently depending on the underlying OS. For instance on Windows you need to call SetCapture() [1]. The platform integration isn't handled by Nuklear itself, so you'll see different problems like this pop up depending on how carefully the platform integration has been implemented (the official samples should be in better shape though I guess).
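Platform-independently, the capture logic boils down to remembering which widget the initial press landed on and routing all mouse events there until release, even when the pointer leaves its rectangle; on Win32 the SetCapture()/ReleaseCapture() pair extends this across the window boundary. A sketch with made-up names:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { int x, y, w, h; } Rect;

static bool rect_contains(Rect r, int x, int y) {
    return x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h;
}

/* -1 means "no widget holds the mouse". */
static int g_captured = -1;

/* Decide which widget receives a mouse event. While the button is held,
 * the widget hit on the initial press keeps receiving events even when
 * the pointer leaves its rectangle -- the behaviour sliders often get
 * wrong. Hover routing while no button is held is elided for brevity. */
static int route_mouse(const Rect *widgets, int count,
                       int x, int y, bool button_down) {
    if (g_captured >= 0) {
        int target = g_captured;
        if (!button_down) g_captured = -1;   /* release ends the capture */
        return target;
    }
    if (button_down) {
        for (int i = 0; i < count; i++) {
            if (rect_contains(widgets[i], x, y)) { g_captured = i; return i; }
        }
    }
    return -1;
}
```

The same idea extended to the window level is what SetCapture() gives you on Windows: events keep arriving even when the cursor is outside the window.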
Based on the gallery of examples it looks to be focused on what you might call "full screen" applications, like games. In these use cases, consistent cross-platform interaction is more important than feeling "native" on any given platform.
How relevant is C89 really nowadays? As far as I understand, it took MSVC a long time to catch up to C99, but it's been there for a long time now. Is the motivation more related to retrocomputing? I would expect that decades-old compilers for retro platforms might not be reliably C89-conforming either.
Should clarify that you're referring (I presume) to Dear ImGui. "imgui" refers to the immediate-mode GUI paradigm in general, as opposed to retained-mode GUI.
It's not important at all. "Single-header library" in C basically means #including a .c file in disguise. It's for people who are too lazy to add a source directory to their build system...
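For what it's worth, the stb-style single-header pattern looks like this in miniature; MYLIB_IMPLEMENTATION and the function are made-up examples:

```c
/* This file plays the role of the single translation unit that
 * compiles the implementation (hypothetical library and names). */
#define MYLIB_IMPLEMENTATION

/* ---- begin "mylib.h" ---- */
#ifndef MYLIB_H
#define MYLIB_H
/* Declarations: every file that includes the header sees these. */
int mylib_add(int a, int b);
#endif /* MYLIB_H */

#ifdef MYLIB_IMPLEMENTATION
/* Definitions: effectively a .c file hiding at the bottom of the
 * header, compiled only where MYLIB_IMPLEMENTATION was defined first. */
int mylib_add(int a, int b) { return a + b; }
#endif
/* ---- end "mylib.h" ---- */
```

Every other file just does a plain #include and gets only the declarations, which is why no build-system changes are needed.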
Even on the Web, in almost 40 years of developing software, there was only one project where it mattered; the customer was part of a government organization.
Like writing secure software, until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.
> until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.
"no matter what"? One of the more popular uses for these immediate-mode UI toolkits is to create interfaces for video games, either debug UIs or the menus painted over the game's main scene/content. I don't think people who need a screen reader are going to be playing a twitch-reaction shooter game.
I agree that these shouldn't be used for general use applications, but I strongly disagree with the sentiment that somehow all programs should be forced to work with screen readers. Some domains and applications are primarily visual and don't really translate well to textual interaction. I think these kinds of toolkits work best with those kinds of applications.
You shouldn't use these to write the next Discord or Slack or Firefox or LibreOffice etc. -- but I don't see a problem with making a debug UI or a menu for an action video game with an immediate mode toolkit.
Woah hang on there buddy, this is an open source project often used for video games. If you want accessibility so bad that you think it should be a “legal requirement” why don’t you write your own accessible GUI tool kit instead of complaining on the internet?
Are there any options, besides Qt, that are accessibility friendly? I don't consider anything but Qt for this very reason (GTK on Windows/MacOS is not accessibility-aware or whatever you want to call it, to my knowledge.)
And I generally like Qt, but I can see how you might consider it heavy and a bit unwieldy.
I believe that .NET MAUI, when it drops, will be accessibility-friendly from the get-go. Of course, if Qt is heavy for one's needs, then I imagine .NET is, too. And of course there's Electron.
My impression, from looking into this a bit a few months ago, is that cross-platform accessibility is just a huge effort, and may be beyond the reach of projects that lack commercial backing.
It's never openly stated, though, so there are a lot of people who DO use it for other things than games, and then end up with a completely non-accessible app.
The thing that makes it more unsuitable for video games, IMHO, is the lack of IME support, which absolutely is a concern for video games, as many gamedevs have found out The Hard Way.
As usual, though, this statement gets posted with no real references linked on how and what actually needs to be done to make a graphical user interface accessible; rules like providing a high-contrast color version for people with impaired eyesight. I tried to find resources about accessibility a while back to actively work towards it, but couldn't really find anything official, or sometimes only behind paywalls.
Not sure if the commenters posting about accessibility have a disability themselves or are speaking up for people with disabilities. The reality, however, is that in total it is a small percentage of users, and without guidelines, or having a disability yourself, it is really hard to work towards; especially given the variety and range of disabilities I have seen in games, and the lack of good guidelines and accessible interfaces for aiding tools. So requiring an unfunded GUI library to have a high level of accessibility, or labeling it worthless, is a somewhat cheap way of judging these libraries.
Good points overall. The reason nobody posts examples or references is because it's way too damn hard right now.
Current accessibility APIs are tightly coupled (conceptually and logically) to APIs that originated in the 80s: Win32 and Cocoa.
If you're only using native widgets it's virtually automatic to have full accessibility. But as soon as you need minimal customisation you have to interact with extremely verbose APIs. Cocoa's API is much better, but MS's Automation API is very arcane and complicated, and even MS employees acknowledge that.
On top of that, even if you're using the APIs as intended, the examples provided by Microsoft are low-quality and are not a good starting point for implementing accessibility on non-native.
Thus, only giant corporations have the resources to fully re-implement accessibility in non-native applications. Google can do it in Flutter and Chromium (Electron). Nokia for Qt. Facebook for React Native. But single developers just don't have the power to do it on their lightweight libraries.
What we need is a smaller lower-level accessibility API that gives accessibility to game engines, non-native UI toolkits, TUIs and command line apps. But I don't think there's much incentive coming from OS makers to do it.
Take, for instance, a web browser. You could implement one using ncurses, like links or lynx do. But to a screen reader this is just a terminal window with a bunch of text.
A GUI can help the screen reader know which part to read when, and how it relates to other parts of the GUI.
Also, as a blind person, you live in a world of people who see. You cannot expect every developer to take care of and cater to your needs. A GUI that handles this automatically for the developer means the dev can just continue doing their thing, while blind people benefit from it as well.
I have no experience with assistive software, but I suppose non-native UI libraries should be using the native OS accessibility APIs (e.g. [0][1]) or the specific libraries targeting NVDA, JAWS, Orca and others (this is the same idea shared in an answer on SO [2]). I guess web browsers and other native GUIs just do that behind the scenes.
Edit: IAccessible2 [3] seems to cover both Windows and Linux, whereas Apple's AppKit provides specific accessibility-focused UI elements [4]. Flutter also has something similar, called Semantics [5].
Are there any screen readers that can model a system from images? Immediate-mode GUIs render many frames per second, so it seems like there would be plenty of data points from which to build a dynamic model of the system.
This is possible at least for restricted domains: I've personally written software for image processing for text extraction and application steering from high frequency screenshots of a Windows app that didn't have an automation API.
Also: The DeepMind Starcraft 2 AI plays at a high level in real-time from, AIUI, an image stream.
Has anyone made a serious attempt at an immediate-mode frontend to desktop GUI toolkits (as opposed to single-application ones that are rendered by some general-purpose accelerated graphics library)? I've experimented a little bit in the past (https://github.com/blackhole89/instagui/blob/master/main.cpp, whose implementation is based on something pretty close to my understanding of Elm's "virtual DOM" diffing; don't mind the kooky custom macro system), but wound up bumping into a lot of nasty little problems that made hacking on it not a lot of fun.
Oh, yeah, the Stack Overflow post especially seems to talk about very similar problems to what I have been grappling with. Thanks for the pointer! The code is pretty opaque to me, though; it's been well over a decade since I've last had any interaction with the WINAPI programming style, Hungarian notation and all.
I wonder why he arrives at the conclusion that he needs a full-fledged DSL for what he is doing. I remember that at the time I was working on this, the impression I had was that a lot of my problems would go away if only there were some unique way to identify every distinct invocation of a function (so I could use data along the lines of "you are currently in the 3rd call of Button() in something.cpp"). __FILE__ and __LINE__ get close but don't disambiguate between multiple calls on the same line (and anyhow would need to be baked into the invocations with macro hackery).
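For what it's worth, one common workaround (not from the linked post) is to mix __FILE__ with __COUNTER__, which GCC, Clang and MSVC all support and which does disambiguate same-line calls, at the cost of IDs that shift whenever a call is added or removed:

```c
#include <assert.h>

/* FNV-1a hash mixing the file name and a number into a widget ID. */
static unsigned widget_id(const char *file, unsigned n) {
    unsigned h = 2166136261u;
    for (; *file; file++) h = (h ^ (unsigned char)*file) * 16777619u;
    return (h ^ n) * 16777619u;
}

/* __LINE__-based IDs collide when two widgets sit on one source line. */
#define WIDGET_ID_LINE() widget_id(__FILE__, __LINE__)

/* __COUNTER__ (a common compiler extension, not standard C) increments
 * at every expansion, so even same-line calls get distinct IDs -- but
 * the IDs are only stable as long as no call before them is added or
 * removed, which matters if state is keyed on them across code changes. */
#define WIDGET_ID_COUNTER() widget_id(__FILE__, __COUNTER__)
```

Neither scheme survives edits to the source, which is presumably part of why a DSL (with explicit, stable names) starts to look attractive.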
A cross-platform GUI library needs to have consistent layout across all platforms, so it uses a basic flexbox layout algorithm instead of the native macOS constraint system. I think this turns out better anyway, because flexbox is a lot simpler in my mind. It decouples the layout algorithm from the UI code, so you don't need to rerun your whole UI whenever the window changes size. It uses pure C++ native calls instead of Objective-C, which is kind of crazy, but the library user never needs to interact with it.
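A one-axis flex pass really is small. Here's a minimal sketch of the "flex-grow" idea (illustrative types and names, not from any particular library): children declare a fixed basis plus a grow factor, and leftover space is split in proportion to the factors.

```c
#include <assert.h>

/* Illustrative sketch; shrink and wrapping handling elided. */
typedef struct { float basis, grow, size; } FlexChild;

static void flex_row(FlexChild *c, int n, float container) {
    float used = 0.0f, total_grow = 0.0f;
    /* Pass 1: sum up fixed space and grow factors. */
    for (int i = 0; i < n; i++) { used += c[i].basis; total_grow += c[i].grow; }
    float leftover = container - used;
    if (leftover < 0.0f) leftover = 0.0f;
    /* Pass 2: hand out leftover space proportionally. */
    for (int i = 0; i < n; i++)
        c[i].size = c[i].basis +
            (total_grow > 0.0f ? leftover * c[i].grow / total_grow : 0.0f);
}
```

Compared to propagating a constraint system, the whole pass is two loops, and it only needs to rerun when the container size or the children change.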
I have used Nuklear in a medium sized hobby project, and it's kinda cool but I will be migrating away from it. I am not aware of it being used in shipped products, unlike Dear Imgui which is a more popular alternative.
The first reason is that it doesn't have a good layout system: it requires manually specifying positions and sizes in quite a few places, and it doesn't gracefully handle varying font sizes. My GUI code is littered with x*FONT_SIZE to do some kind of scaling to work on my low-res 27" screen and a high-res 13" laptop. I don't need it to do any magic behind the scenes with font sizes (such as when moving from monitor to monitor); just let me set the GUI font size to a reasonable value and stick with it, without manually specifying every row height.
The motivation for this probably is that the author wanted an extremely skinnable UI to be able to do fancy game UIs. However, if you look at a lot of modern game UIs, they have a flat "material design" type of UI with flexbox-like layouts and no fancy skinning, just flat colors and rounded corners. Another motivator is that Nuklear can target both triangle-based GPU rendering and pixel-pushing APIs. This is not valuable to me.
The second reason is that it's buggy. The developers have taken dependency avoidance to the extreme. This includes stuff like a home-brewed implementation of printf, which will hang in an infinite loop when you try to print a float that has an INF value. With some hacking, I was able to make Nuklear use the standard printf and some other standard library functions instead of the bundled implementations.
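That failure mode is easy to reproduce in any hand-rolled float printer. A generic sketch of the guard such formatters need (illustrative, not Nuklear's actual code):

```c
#include <math.h>
#include <string.h>

/* Integer-part-only float printer, for brevity. The point is the guard:
 * without the isnan/isinf checks up front, typical digit loops such as
 * `while (v >= 1.0) v /= 10.0;` never terminate for an infinity, and
 * casting an infinity to an integer type is undefined behaviour. */
static void fmt_float(double v, char *out) {
    if (isnan(v)) { strcpy(out, "nan"); return; }
    if (isinf(v)) { strcpy(out, v < 0.0 ? "-inf" : "inf"); return; }
    char tmp[32];
    int i = 0;
    long n = (long)(v < 0.0 ? -v : v);
    if (n == 0) tmp[i++] = '0';
    while (n > 0) { tmp[i++] = (char)('0' + n % 10); n /= 10; }
    int j = 0;
    if (v < 0.0) out[j++] = '-';
    while (i > 0) out[j++] = tmp[--i];   /* digits were produced in reverse */
    out[j] = '\0';
}
```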
I appreciate all the effort the developers have put into this, but it's not ready for prime time; expect to spend time fixing bugs if you use it.
I'll probably be moving away from the immediate-mode GUI paradigm as a whole (rather than switching to Dear ImGui or NanoGUI); it's a poor fit for the application I'm developing.
Recently I've been seeing a quite polished game UI toolkit used in several published games, like Rise of Industry and Space Haven. Does anyone know what this toolkit is? Something Unity offers or some proprietary library?
In my spare time I've also worked on a retained-mode, flexbox-based UI layout and rendering library that integrates like Nuklear or Dear ImGui. In other words, the GUI library doesn't have any dependencies or side effects. You feed in a tree of GUI elements and the events coming from the windowing system, and as output you get a vertex buffer and a list of triggered events. This concept shows some promise, but unfortunately I don't seem to have the time it takes to turn this into a polished product. I'm happy to talk about it if anyone is interested.
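That input/output shape can be sketched as a single pure function; all types and names below are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* A flattened element tree in, quads + fired events out. */
typedef struct { float x, y, w, h; int id; } Element;
typedef struct { float x, y, w, h; } Quad;

/* The whole "GUI" is one pure function: no globals, no side effects.
 * Returns the id of the clicked element, or -1. The caller hands the
 * quads to whatever renderer it likes. */
static int gui_run(const Element *els, int n,
                   float mx, float my, bool clicked, Quad *out_quads) {
    int fired = -1;
    for (int i = 0; i < n; i++) {
        out_quads[i].x = els[i].x; out_quads[i].y = els[i].y;
        out_quads[i].w = els[i].w; out_quads[i].h = els[i].h;
        bool hit = mx >= els[i].x && mx < els[i].x + els[i].w &&
                   my >= els[i].y && my < els[i].y + els[i].h;
        if (clicked && hit) fired = els[i].id;
    }
    return fired;
}
```

Because nothing is hidden inside the library, the caller can run it headless, record frames, or diff outputs between runs for testing.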
Thanks. I came here to ask how it compares to Dear ImGui (which I've used). Nuklear seems to be skinnable to make it more useful as an in-game UI (going by the screenshots), while Dear ImGui is targeted more at built-in tooling. So I was always curious about Nuklear, but I stuck with Dear ImGui because it's more popular and has more stuff available for it.
Your post has made it clear that I should stick with Dear ImGui and figure something else out for in-game UI. I'll probably roll my own simple thing (I'm just playing around for fun, so I can afford to do that).
Well it seems that the developer wanted to support embedded platforms too.
printf isn't a part of the freestanding C standard, and neither is snprintf. That said, there are portable freestanding snprintf implementations out there with permissive licensing.
This might provide a slightly better "experience" (it's about 10x smaller, doesn't look blurry on Retina displays, and doesn't suffer from the "touchpad taps are ignored" problem):
https://floooh.github.io/sokol-html5/nuklear-sapp.html
The radio buttons and checkboxes are weird. Radio buttons seem intuitively inverted (the empty circle is the selected one), whereas on the checkboxes I think the filled square means checked, but I can't say for sure because of the weird radio buttons.
I don't understand... the documentation says it "does not have any dependencies", but it still requires glfw3. Is that alright? I cannot compile the examples without glfw3, but maybe I'm doing something wrong.
Nuklear doesn't have any dependency, but you must provide it a backend so it can do its drawing. The examples use a glfw3 backend, but you can provide any other backend, it's just a handful of functions to implement and is usually not too hard.
> Nuklear doesn't have any dependency, but you must provide it a backend so it can do its drawing. The examples use a glfw3 backend, but you can provide any other backend, it's just a handful of functions to implement and is usually not too hard.
The text editor gcc doesn't have any dependency, but you must provide it a backend so it can do its text editing. The example uses a nano backend written in C, but you can provide any other backend like vim or emacs, it's just a handful of functions to implement and is usually not too hard.
The library itself doesn't depend on anything, instead it delegates the "platform integration" (rendering and input) to outside code.
The examples somehow need to connect to the underlying operating system, and that's why they depend on GLFW as intermediate layer, but this could also be SDL, or - shameless plug - the sokol headers (https://github.com/floooh/sokol), or (more commonly) a game engine like Unity, Unreal Engine, or your own code.
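As a sketch of that split, a backend can be as small as a loop over an abstract command list (illustrative command format, not Nuklear's actual one):

```c
/* The UI library appends commands; the "backend" is just this
 * interpreter loop, which is why swapping GLFW for SDL (or a software
 * rasterizer) is a handful of functions rather than a port. */
enum CmdType { CMD_RECT, CMD_TEXT };

typedef struct {
    enum CmdType type;
    int x, y, w, h;       /* used by CMD_RECT */
    const char *text;     /* used by CMD_TEXT */
} Cmd;

static int backend_draw(const Cmd *cmds, int n) {
    int drawn = 0;
    for (int i = 0; i < n; i++) {
        switch (cmds[i].type) {
        case CMD_RECT: /* e.g. an SDL_RenderFillRect call would go here */
            drawn++; break;
        case CMD_TEXT: /* e.g. your font renderer would go here */
            drawn++; break;
        }
    }
    return drawn;
}
```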
I think in this context they mean it doesn't depend on any particular backend, but at some point you need a way to get its output onto the screen. I guess you could trivially swap out glfw3 for something else, such as writing to a BIOS video mode directly.
[1] https://docs.microsoft.com/en-us/windows/win32/api/winuser/n...
> until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.
For America it already is. The ADA covers software too, AFAIK.
> It was designed as a simple embeddable user interface for application
So, at least rhetorically, it is more general than just video-game use.
"Game Maker's Toolkit" does an overview of accessibility in games: https://www.youtube.com/watch?v=RWQcuBigOj0
1. The disabled person is not the one who controls what gets run.
2. The disabled person is not the one who controls what is installed on the device.
3. Joint use of the same app by two people.
4. The disabled person wants to be able to ask for help from someone who can interact with the app "fully".
Take features like voice control, for example, where a motion-impaired person can still enjoy visually rich content. https://youtu.be/aqoXFCCTfm4
Does anyone have any good resources?
[0]: https://en.wikipedia.org/wiki/Microsoft_Active_Accessibility
[1]: https://en.wikipedia.org/wiki/Microsoft_UI_Automation
[2]: https://stackoverflow.com/questions/65168795/make-non-native...
[3]: https://wiki.linuxfoundation.org/accessibility/iaccessible2/...
[4]: https://developer.apple.com/documentation/appkit/nsaccessibi...
[5]: https://api.flutter.dev/flutter/widgets/Semantics-class.html
https://sourceforge.net/projects/dyndlgdemo/files/
https://github.com/MikeDunlavey/AucUI
He explains it a bit more here:
https://stackoverflow.com/questions/371898/how-does-differen...
example: https://pbs.twimg.com/media/EuQ-6vzXUAca_Ph?format=jpg&name=... (ignore the fact the UI goes from bottom to top, that will be fixed)
See https://immediate-mode-ui.github.io/Nuklear/doc/nuklear.html...