It's funny how every TUI developer eventually stumbles over Unicode, and then handling international characters and emojis correctly turns into its own project, close to the same scope as (or even bigger than) the original TUI project. It happened to me with rivo/tview, and through the resulting rivo/uniseg package I learned that all other TUI library maintainers deal with the same issues. In the end, everyone invents their own unique solution to the problem, because character width is not standardized and terminals are messy, as noted in the article. The OP supports only Unicode 9 (Unicode is at version 15.1 at the moment). Sooner or later, users will complain that certain emojis or international characters are not rendered correctly. So I'm not sure this is a great solution.
That isn't sufficient. Codepoints with ambiguous width can't be detected in a standard way. A large number of pre-emoji symbols have been upgraded to have emoji presentation: some systems default them to emoji (wide rendering), others keep the text presentation (narrow rendering). And many systems ignore the presentation selectors even if you use them to force one or the other.
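To make that concrete, here's a small stdlib-only Python illustration (nothing here is specific to any TUI library); the point is that the standard properties don't change just because a variation selector follows the base character:

```python
import unicodedata

# U+2764 HEAVY BLACK HEART predates emoji. Bare, it defaults to text
# presentation; followed by VS16 (U+FE0F) it requests emoji presentation,
# which many terminals draw two cells wide -- and some still draw narrow.
for label, s in [("text", "\u2764"), ("emoji", "\u2764\ufe0f")]:
    props = ", ".join(
        f"U+{ord(ch):04X} EAW={unicodedata.east_asian_width(ch)}" for ch in s
    )
    print(f"{label:>5}: {props}")

# The base character's East_Asian_Width is the same in both rows; nothing
# in the standard properties tells you how many cells the terminal will
# actually use for the cluster.
```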
really great read, thanks. I'm a little disappointed that no terminal emulator implements both the Kitty image protocol and mode 2027. I wish there were a terminal project that would just pick the best standards we have at the moment. I'm not a fan of sixel, for a lot of reasons. I'm looking forward to trying Ghostty, though.
I personally don't think this mode is all that useful, to be fair. First of all, grapheme clustering is not set in stone; it has changed from one Unicode version to the next.
Second, and this is mostly because my personal use cases are very humble, a much, much simpler workaround to implement, for everyone involved, would be a couple of OSC sequences that mark a part of the output text as the prompt (when the terminal is in canonical/cooked mode), so that a huge chunk of readline could simply be thrown away.
So your program could just print a prompt and then simply read the cooked line. Meanwhile, the terminal emulator would handle line editing, line-wrapping and asynchronous output: if you keep outputting text to the terminal while a prompt is active, the terminal would clear the prompt and the unfinished line, print the text, then re-display the prompt and the line; basically what all "async readline" libraries already do with rl_clear/rl_redisplay — but doing it in the terminal would take care of this properly, because the terminal definitely knows how wide it thinks all the symbols are. And tab completion could be supported by returning a <TAB>-terminated line to the program instead of an <LF>-terminated one.
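For what it's worth, the "mark the prompt" half of this already exists in the wild as the FinalTerm/iTerm2 shell-integration sequences (OSC 133), which several terminals understand (iTerm2, WezTerm, kitty, foot, among others as far as I know), though none of them take over line editing as proposed here. A rough sketch of emitting those markers (the helper names are mine):

```python
import sys

ST = "\x1b\\"  # string terminator

def mark_prompt(prompt: str) -> None:
    # OSC 133;A = prompt start, OSC 133;B = prompt end / user input starts
    sys.stdout.write(f"\x1b]133;A{ST}{prompt}\x1b]133;B{ST}")
    sys.stdout.flush()

def mark_output(exit_code=None) -> None:
    # OSC 133;C = command output starts, OSC 133;D;<code> = command finished
    sys.stdout.write(f"\x1b]133;C{ST}")
    if exit_code is not None:
        sys.stdout.write(f"\x1b]133;D;{exit_code}{ST}")
    sys.stdout.flush()

mark_prompt("$ ")
```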
Unfortunately, I don't think something like this can actually become even moderately widely adopted.
Edit: Or, you know, maybe we could extend terminfo? Like, introduce a twcswidth() function that takes your string plus the (somehow encoded) grapheme clustering data the current terminal actually uses, which you could query from terminfo, and returns the number of screen cells the string would take on this terminal.
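Short of a terminfo extension like that, the closest thing available today is to ask the terminal itself: print the string and read back the cursor position with a DSR query (CSI 6n). A crude Python sketch, assuming an interactive tty that answers the query and no line wrapping:

```python
import re
import sys
import termios
import tty

def rendered_width(s: str) -> int:
    """Measure how many cells *this* terminal uses for `s` by printing it
    and asking for the cursor position (CSI 6n). Crude: assumes a tty,
    no line wrap, and a terminal that answers the query."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        sys.stdout.write("\r" + s + "\x1b[6n")   # go to col 1, print, query cursor
        sys.stdout.flush()
        resp = ""
        while not resp.endswith("R"):            # reply looks like ESC [ row ; col R
            resp += sys.stdin.read(1)
        col = int(re.search(r"\[(\d+);(\d+)R", resp).group(2))
        sys.stdout.write("\r\x1b[K")             # erase the probe output
        return col - 1                           # cursor sits one cell past the string
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
```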
It's a design decision. On one end, if I'm reading your question correctly, you could use U+FFFD (the replacement character) for anything not recognized as a language-specific character in the BMP and SMPs; this can be done with practically every existing Unicode library by filtering on character class (see the sketch below). But that will inadvertently filter some non-emoji symbols, and it doesn't really convey any information. It can even look unprofessional; it reminds me a lot of the early web during the pre-Unicode growing pains of poorly implemented i18n/l10n.
There are libraries like Unidecode[0py] [0go] [0js] which convert Unicode to ASCII text and might be the easiest thing to include in a TUI. All the ones I looked at will convert emoji to `[?]`, but many other characters are converted to that too, including unknowns.
On the other end, you can keep a running list of what you mean by emoji[1] and pattern-match on those characters, then substitute a representative symbol for them. But it will still pose some difficulty around what to choose for the representative symbol and how to make it fit nicely within a TUI. An example of a library for pattern-matching on emoji is emoji-test-regex-pattern[2], but you can see it is based on a txt file that needs to be updated to keep up with additions to Unicode.
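For the first approach (substituting U+FFFD by character class), a minimal sketch using only Python's unicodedata; it deliberately shares the stated downsides, also hitting non-emoji symbols and missing ZWJ sequences and regional-indicator flags:

```python
import unicodedata

REPLACEMENT = "\ufffd"

def strip_symbols(text: str) -> str:
    """Replace 'symbol'-class codepoints (So/Sk), which covers most emoji,
    with U+FFFD. Crude: it also hits non-emoji symbols, and it doesn't
    collapse ZWJ sequences or flags built from regional indicators."""
    out = []
    for ch in text:
        if unicodedata.category(ch) in ("So", "Sk"):
            out.append(REPLACEMENT)
        else:
            out.append(ch)
    return "".join(out)

print(strip_symbols("deploy 🚀 done ✅"))   # -> 'deploy � done �'
```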
It doesn't matter; what matters is that both your (terminal-manipulating) program and the terminal emulator agree on the symbols' widths. Considering that they usually won't (lots of terminal emulators have their own hand-crafted, statically linked wcwidth/wcswidth functions; the readline library also has them hard-coded, by the way), it's quite frustrating.
My big complaint with Textual is that it wants to be React. I can see why it would want to be React (it's a very popular framework that a lot of people are already familiar with), but I don't think it's actually a good way of doing user interfaces. Still, the basic reactive design is a well-trodden road, and basing your system design on something that's known to work is a great way to de-risk the project. Fine, so we draw some heavy inspiration from React.
Alright, so we're using some bastardization of CSS as well? That might be going a little too far. The React model already breaks the idea of CSS in a lot of ways, preferring standardized components. Sure, developers still use CSS to customize components, but I view that more as a side effect of how React evolved than as a justifiable architectural choice. But as long as you don't have to use CSS, I suppose it's fine.
Last I tried it, you do have to use CSS. There are no good standard components, so you will be making your own, and instead of having a component be one nicely self-encapsulated Python class, the standard docs use things like list components and then style them with an external style sheet.
For those reasons, Textual just isn't for me yet. In Python there should be one, and preferably only one, obvious way to do something. By mirroring React so closely, they're also mirroring what I see as the JavaScript community's biggest vice.
I don't see how it is trying to be React. It definitely uses concepts from HTML and CSS (and perhaps even some JS?), but I actually found that it allowed me to move very quickly, since I didn't need to learn a whole new UI system from scratch. I didn't need to create any components of my own, and CSS is something I already know. If anything, as I wrote in another comment, the slight layout differences and deviations from CSS are what sometimes got me a little confused.
I want a way to embed a terminal (it doesn't have to support a myriad of terminal emulations, only one) inside a graphical program. macOS first, but other platforms would be nice.
So, imagine a normal GUI window, but one of the components in it is a terminal window. Is there something like that?
GNOME's Gtk has a companion library named libvte which provides the terminal as a widget. It is used by GNOME Terminal itself (and many other terminals) and now supports Gtk4.
What hurts me?
The only thing that hurts me is that the maintainer doesn't like background transparency in the official terminal application. The library supports it and many terminals use it. But that's another topic; with an embedded terminal you will likely not use that feature anyway ;)
PS: KDE likely has a similar solution within Qt. Since you named portability, either Gtk or Qt are your tools. The biggest hurdle is shipping the libraries on macOS. On Linux it is done automatically. Windows is rather easy. Regarding Gtk, ship the gdk-pixbuf loaders alongside; they are loaded at runtime via dlopen(). The small differences hurt during porting.
I've used xterm.js for this, but I was already in a webview. It's pretty good; it might even be worth pulling in a webview just for it: https://github.com/xtermjs/xterm.js
It's used by VS Code, among others.
I don't agree that React is FRP. FRP is a generic solution that builds data-UI synchronization around the data and assumes that incremental UI updates will be written in UI code. React instead puts the data-UI synchronization code inside UI components and provides an immediate-mode-like API to avoid writing incremental updates in UI code.
This TUI looks pretty, but I can't imagine a situation in which I would actually use it and be ready to pay for it. Probably I am not living in the right environment for it. But in my experience, people are either happy with something truly minimalistic, or they try to please the user with a GUI right away.
For example, the YouTube link in the article showed the possibility of displaying a table with highlighted cells. Why would I need that in a TUI? If I wanted to navigate a table with a highlighted active cell, I would probably also need a bunch of other stuff, and eventually I would need a proper GUI.
One reason to prefer textual UIs is that you can use them from any computer, anywhere, and fast (no slow VNC-style video). Also, if you are already using a terminal, then improving that experience is nice. There is a big enough market there for this company.
However, I'm not sure what a "proper" GUI is. Terminals have a widely used open standard, and these protocols are more standard and interoperable than any GUI framework; they are supported on every system. Once you add video and close a few more gaps, I think competing with the browser, or bringing the full computer experience to the terminal, is reasonable.
Obviously, if terminals add something like video, that's "copying" rendering pipelines from "proper" GUIs, but it makes the terminal experience better, and it could also be viewed as bringing UNIX zen to GUIs. If it makes it big, you'll love it too! <3
Or what if you just prefer not to use a browser to get your information from the internet? Textualize is about the only choice I can jump to.
It's much better, in my opinion, to hyperlink and open images and video in the browser from the terminal. Playing them would just block your terminal session, so it's pretty annoying to have an inline video playing rather than opening another tab. I am happy with the features in Textualize and haven't even thought about needing anything that plays over a duration beyond a few seconds.
Textualize then becomes the browser API on which I build my terminal browser, rendering only the components I care about.
None of that background junk, no JS libraries, no ads. It's well worth it.
The use case is something like medical billing entry, where people are trained to know all of the billing codes and they still use an AS400 green-screen console.
The employees work by keyboard shortcuts and are extremely efficient, and every time someone tries to replace the AS400 with a modern web app, their productivity drops 100x.
It’s the same scenario as a vim/emacs wizard vs a slick looking GUI that doesn’t have keyboard shortcuts.
The solution to manual medical billing is to have computers do the billing. The system can be configured via GUI with rules about billing codes, payers, DX codes, etc. and then the rest of the system Just Does It.
It does seem niche at this point. One scenario I can see is where you want something more user-friendly than a pure CLI and where providing a web UI might be too risky for some reason. A TUI could allow users to SSH into a server somewhere and just have the TUI app as their shell. It's a bit contrived, I grant you that.
Personally I found Textual a little weird to use, but better than ncurses. It didn't really yield what I wanted, though. I like the old mainframe-style TUI applications; those already struck me as being wildly efficient.
Seems like rich provides a TUI toolkit as well, and it looks a bit less weird than the approach Textual uses.
Was thinking of trying it out on a side project recently, but got pulled onto some other stuff instead, so I haven't started yet. Nor made the choice between the two. ;)
greybeard nix admin here: I agree
People have forgotten way too many of the lessons we learned in Ops
Companies like to slap "agile" and "devops" labels on things, but I see the same old fights...
(biased because I hate gui-ninjas)
This is how games were written back in the day, before DirectX was a thing. You'd write directly to the frame buffer, and instead of clearing and redrawing everything, you'd redraw only what changed and what was around and under it (because there was no time to refresh the entire view each frame in addition to everything else you needed to do).
There were at least two other techniques back then.
The first is to write to another buffer (possibly in normal RAM, not video RAM), then when the frame is done copy the whole buffer at once, so every pixel gets changed only once.
The second is to write to another buffer that must be in video RAM too, then change the registers of the graphics hardware to use that buffer to generate pixels for the monitor to show.
They had different tradeoffs. Copying the whole buffer when done was expensive; changing an address register was cheap. But the details of the register were possibly hardware-dependent, and there was no real graphics driver framework in place. Also, to be able to just "flip buffers" (as changing the address register was called), you had to render to an off-screen buffer in video RAM, which was (IIRC) slower to access than normal RAM (basically a NUMA architecture). So depending on how often a pixel gets overdrawn, rendering in normal RAM could be faster overall, even with the final copy taken into account.
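The off-screen-buffer idea is essentially what TUI libraries still do against a terminal: keep a back buffer of cells, diff it against what is known to be on screen, and emit cursor moves only for the cells that changed. A toy sketch (the flat cell-list representation and function name are mine, not any particular library's API):

```python
import sys

def paint_diff(front, back, width):
    """Emit escape codes only for cells that differ between the previous
    frame (`front`) and the new one (`back`), both flat lists of 1-char
    strings in row-major order. Ignores colors/attributes for brevity.
    Returns `back`, which becomes the new front buffer."""
    out = []
    for i, (old, new) in enumerate(zip(front, back)):
        if old != new:
            row, col = divmod(i, width)
            out.append(f"\x1b[{row + 1};{col + 1}H{new}")   # CUP is 1-based
    sys.stdout.write("".join(out))
    sys.stdout.flush()
    return back
```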
I learned this from ComputerCraft programming. It's become applicable many times since then, occasionally professionally. But the ability to tell the terminal when it's starting/finishing a frame is a lot more powerful, IMO.
It's the one other ubiquitously installed cross-platform GUI toolkit besides the web.
Additionally, it has a hacker aesthetic. The styling is aggressively not separated from the content, and the styling knobs are pretty limited, so tui apps kind of converge on a single style. That style reminds us of hacker movies and cool sci-fi shit :)
It's not loved by corporate designers. Companies don't make sales based on their TUIs (to either businesses or consumers). So without those commercial pressures, tuis are designed by developers for developers.
Because of this, the meme of tuis self-reinforces. Developers see and use TUIs, notice that they are usually tools built with developers in mind, and then want to go on to make their own TUIs.
To each their own. But running apps over SSH is a big plus. And some folks, like myself, enjoy the snappy, keyboard-focused experience that GUI apps could perhaps offer, but typically don't.
There isn't much obvious benefit in using TUIs, but writing TUIs has a big upside: it's easier in most ways. It's trivially cross-platform, you don't have to make pixel-perfect GUIs (because it's impossible), you don't need icons or graphics (because you can't show graphics), etc.
That's the main reason. TUI programs can be created quickly, and they're mostly developer-oriented, so they just need to be useful enough and don't need to be optimized as much as a GUI program.
It's great for running things in a terminal where you want more of a user interface than a CLI. Sometimes a TUI is faster and gives better oversight than a CLI.
I use the CLI heavily, but some tasks suit a GUI better, and a TUI is a middle ground between GUI and CLI: you can keep using your terminal emulator with your favorite font, but get a GUI-like interface. Also, all the TUI applications I've used have low response times. Most GUI apps also have a fast interface (most of the time), but all the frustratingly slow apps I've used were GUI apps.
Because you may have no issue using browsers with extended batteries + features.
How about displaying data in the CLI with Textualize vs. on an admin web interface?
I find it much easier and more direct to use some Python ORM with Textualize.
On an admin interface you'd have to worry about auth, plus some JS UI framework that turns your custom HTML directives into tables… that eventually display text.
I like TUIs because they are usually the exact opposite of the large range of UX fashions that absolutely suck, and have infected every other GUI app and website I use. The terminal is a breath of fresh “I’m not going to treat you like a moron” “more is more” “you’ll know if it’s a button” high contrast air.
I guess mainly because of nostalgia, from back in the days when TUIs were the only way to interact with computers.
Turbo Vision, curses and dialog were cool back in the 1990's.
Having started with computers in 1986, I really don't get the TUI fetish, not even remote access is an issue, given X Windows, VNC, RDP, Citrix,... exist for decades.
Having run X programs over network connections quite a bit, I'd say that they make sense for graphics stuff, and textual interfaces over SSH are significantly more responsive.
But once a fixed-width text grid stops being the right tool for the job, it's likely better to have a web UI.
> X Windows, VNC, RDP, Citrix,... exist for decades.
Yeah, but those are all awful. I mean, okay, they work almost okay over a wired connection and on the same network.
But as soon as you hit the WAN and throw wifi in there, it's painful. Noticeable latency and graphical artifacts do, in my opinion, hurt productivity. For software where completing tasks as fast and accurately as possible is the most important goal, TUIs are great, especially when you have comprehensive keyboard shortcuts. If you've ever seen an office worker rip through a TUI underwriting a loan, you'll know what I mean.
A GUI is not intrinsically any more resource-heavy than a TUI; ultimately you need to render stuff on screen somehow and get user input, and going through the tty layer is just extra bloat.
Anyone old and naive enough to share this observation? Almost everything I looked at after Turbo Vision was inspired by it, but not actually finished. Once you take the toolkit for a ride, you realize it's kind of cute but unfinished. Maybe another way of looking at this is to call many of these TUI frameworks, let's say, "opinionated", whatever that exactly means.
I am likely just dense and uncreative, but the truth is, when I switched from DOS to Linux in the 90s, I was never again as productive as I happened to be with B800. Granted, it likely took me a long time to understand the need for double buffering and the difference between a local/direct text mode vs a terminal, let alone escape sequences. But still. Whenever I tried to do something directly in ncurses, I pretty much gave up due to a distinct feeling of being unhappy. Completely different to what I was able to do with the simple ideal of B800.
I learned GUI programming using Win16, then Win32 — this would've been during the transition to WinXP. I must have a city-wide blind spot, but every post-message-pump GUI framework has left me completely befuddled. One thing I never understood was how these OO frameworks actually helped solve the multithreaded UI issues. In Win32, I just threw the main renderer into a thread, had support renderers build models "on the side", and then updated the difference. The code never really got out of hand.
I get what Turbo Vision is (was), but what's that B800 thing? Surely you aren't talking about a Celeron processor? Seems tricky to google, and there's no wiki page on it either. You've got me curious, plz spill it! =)
0xb800 is the segment address of the text-mode framebuffer. Simply write characters there, at offset ((row * width) + col) * 2 (each cell is a character byte followed by an attribute byte), and they'll appear on screen.
In graphics mode it was, IIRC, 0xa000. I once wrote a pacman-type easter egg inside the point-of-sale system[1], because doing direct graphics straight to an address is so easy and simple.
[1] Was removed after pilot and before actual release.
In real-mode addressing, with the graphics card in text mode, your video memory started at address 0xb800:0x0000. It was a two-dimensional array that you could poke, and whatever you changed there was reflected immediately on the display. Each element of the array was two bytes: the character itself and its color attributes.
I prefer WezTerm over Kitty because of the Kitty author's attitude towards feature requests and even pull requests.
And yes, you can do graphics on both, using the same protocols. If you really need graphics, a terminal is hardly the right solution; it's occasionally useful for tiny stuff like icons, though.
I maintain two TUI libraries which use this technique and emoji support has been (nearly) great. (One of which uses your uniseg library!)
https://mitchellh.com/writing/grapheme-clusters-in-terminals
[0py]: https://github.com/avian2/unidecode
[0go]: (actually there are a few of these) https://pkg.go.dev/github.com/gosimple/unidecode
[0js]: https://github.com/xen0n/jsunidecode
[1]: these aren't really contiguous ranges, and opinions vary, see https://en.m.wikipedia.org/wiki/Emoji#Unicode_blocks
[2]: https://github.com/mathiasbynens/emoji-test-regex-pattern
You don't have to use CSS (actually, you never did). Every style can be set in code, and the docs show the CSS and Python equivalents for every style.
> There are no good standard components
I guess it's been a while since you checked https://textual.textualize.io/widget_gallery/
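And setting styles from code looks roughly like this (API details recalled from the Textual docs, so treat it as a sketch rather than gospel):

```python
from textual.app import App, ComposeResult
from textual.widgets import Static

class NoCssApp(App):
    def compose(self) -> ComposeResult:
        yield Static("Styled without a stylesheet", id="banner")

    def on_mount(self) -> None:
        # every CSS property has a code-side equivalent on `styles`
        banner = self.query_one("#banner", Static)
        banner.styles.background = "darkblue"
        banner.styles.color = "white"
        banner.styles.border = ("heavy", "white")
        banner.styles.padding = (1, 2)

if __name__ == "__main__":
    NoCssApp().run()
```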
Or should I just use a mono-font text view?
https://gitlab.gnome.org/GNOME/vte
Basically its usage starts with vte_terminal_new().
Unicode? Emojis? https://gitlab.gnome.org/GNOME/vte/-/blob/master/doc/ambiguo...
This includes a lot of what you'd expect from HTML: classes, CSS, etc.
They have reactive attributes https://textual.textualize.io/guide/reactivity/
It has HTML (or at least a DOM), CSS, and you design widgets the same way.
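A minimal sketch of what a reactive attribute looks like in practice (again, details from memory of the Textual docs, so approximate):

```python
from textual.reactive import reactive
from textual.widgets import Static

class Counter(Static):
    count = reactive(0)  # assigning to self.count triggers watchers and a refresh

    def watch_count(self, new_value: int) -> None:
        # called automatically whenever `count` changes
        self.update(f"count = {new_value}")

    def on_click(self) -> None:
        self.count += 1
```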
(I work at a company automating medical billing.)
Out of curiosity, have you looked at its sibling project "rich"?
https://github.com/Textualize/rich
Sometimes that works OK (e.g. forwarding X over a high-bandwidth connection), but other times the proper GUI acts like a complete pig. :(
A text based GUI sounds like it might be the best of both worlds.
Did this change with 3dfx's Glide (or subsequently Direct3D, once Windows got a foothold in the gaming industry)?
https://github.com/Nokse22/ascii-draw
https://notcurses.com/