Funny story: using kilo was the final straw [1] in getting me to give up on terminals. These days I try to do all my programming atop a simple canvas I can draw pixels on.
Here's the text editor I use all the time these days (and base lots of forks off of): https://git.sr.ht/~akkartik/text2.love. 1200 LoC, proportional font, word-wrap, scrolling, clipboard, unlimited undo. Can edit Moby Dick.
[1] https://git.sr.ht/~akkartik/teliva
https://arcan-fe.com/2025/01/27/sunsetting-cursed-terminal-e...
I really enjoyed the plan9 way of an application slurping up the terminal window (not a real terminal anyway) and then using it as a full-fledged GUI window. No weird terminal windows floating around in the background, and you could still return to it after quitting for any logs or output.
Whatever works! I mostly use LÖVE, and it supports both. Some reasons to run it from the terminal rather than simply double-clicking or a keyboard shortcut in the OS:
* While I'm building an app I want to run from a directory rather than a .love file.
* I want to pass additional arguments. Though I also extensively use drag and drop for filenames.
* I want to print() while debugging.
Why?
Not GP, but the terminal is inefficient and limiting for input and UI. For one, you cannot detect key-up and key-down events, only a full key press. The press of multiple (non-modifier) keys at once can't be recognized either. There are also some quirks: in many terminals your application cannot distinguish the Tab key from Ctrl-I, because they send the same byte. But in some (e.g. Alacritty) it can work, so if you have two different keybindings for Tab and Ctrl-I, your program will behave differently in different terminals.
If you want to do anything that's not printing unformatted text right where the cursor is, you need to print control sequences that tell the terminal where to move the cursor or how to format the upcoming text. So you build weird strings, print them out, and then the terminal has to parse those strings to know what to do. As you can imagine, this is kind of slow.
If you accidentally print a line that's too long it might wrap and shift the rest of the UI. That's not too bad, because it's a monospaced font, so you only have to count the Unicode characters (not bytes)...until you realize Chinese characters are rendered twice as wide. Text is weird, and in the terminal there is nothing but text. To be fair it's still a lot simpler than proportional fonts, and a lot of fun, but I definitely understand why someone would decide to just throw pixels on a canvas and not deal with the historical quirks.
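For concreteness, here is a minimal sketch of all three points (raw key bytes, escape strings, display width). It assumes a POSIX terminal, a UTF-8 locale and the standard VT100/xterm escape codes; it is not taken from any of the editors discussed here.

    #define _XOPEN_SOURCE 700   /* for wcwidth() */
    #include <locale.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>
    #include <wchar.h>

    int main(void) {
        setlocale(LC_ALL, "");   /* so wcwidth() knows the locale's character set */

        /* 1. Input: keys arrive as bytes. Tab and Ctrl-I are both 0x09, and
           there is no key-up event at all. */
        struct termios orig, raw;
        tcgetattr(STDIN_FILENO, &orig);
        raw = orig;
        raw.c_lflag &= ~(ICANON | ECHO);           /* no line buffering, no echo */
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);
        printf("press Tab or Ctrl-I: ");
        fflush(stdout);
        unsigned char c = 0;
        if (read(STDIN_FILENO, &c, 1) == 1)
            printf("\ngot byte 0x%02x\n", c);      /* 0x09 either way */
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig); /* restore the terminal */

        /* 2. Output: the UI is driven by strings of escape sequences that the
           terminal parses, e.g. "ESC [ row ; col H" to move the cursor. */
        char seq[32];
        snprintf(seq, sizeof seq, "\x1b[%d;%dH", 10, 5);
        (void)write(STDOUT_FILENO, seq, strlen(seq));
        printf("now at row 10, column 5\n");

        /* 3. Width: columns != characters. CJK characters occupy two cells. */
        printf("wcwidth('a') = %d, wcwidth(U+6F22) = %d\n",
               wcwidth(L'a'), wcwidth(L'\u6f22'));  /* typically 1 and 2 */
        return 0;
    }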
"Backspace is known to not work in some configurations. As a workaround, typing ctrl-h tends to work in those situations." (https://git.sr.ht/~akkartik/teliva#known-issues)
This is a problem with every TUI out there built using ncurses. "What escape code does your terminal emit for backspace?" is a completely artificial problem at this point.
There are good reasons to deal with the terminal: I need programs built for it, or I need to interface with programs built for it. Programs that deal with 1D streams of bytes for stdin and stdout are simpler in text mode. But for anything else, I try to avoid it.
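For reference, the usual workaround (sketched here, not taken from teliva) is to accept both byte values a terminal might send for Backspace: 0x7f (DEL), which most emulators send today, and 0x08 (BS, i.e. Ctrl-H), which some configurations report instead.

    /* Sketch: inside a raw-mode key loop, treat both possible Backspace bytes
       the same way instead of trusting the terminal's advertised value. */
    enum { KEY_BS = 0x08 /* Ctrl-H */, KEY_DEL = 0x7f /* most terminals today */ };

    static int is_backspace(int c) {
        return c == KEY_BS || c == KEY_DEL;
    }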
Immature, obviously. Far fewer person-hours of labor have gone into it relative to the tools you use all the time. But I find it worthwhile to get off the constant treadmill of new versions with features I don't care about. Cutting down on complexity there creates headroom for me (or you) to try out new approaches we might care more about.
My most common development environments these days:
* A live-programming infinite surface of definitions that works well on a big screen: https://git.sr.ht/~akkartik/driver.love Has minimal syntax highlighting for just Lua comments and strings.
* An environment that lets me add hyperlinks, graphics and box-and-arrow diagrams in addition to code. Also works on mobile devices. Examples: https://akkartik.itch.io/sokoban, https://akkartik.name/post/2025-03-08-devlog, https://akkartik.name/post/2025-05-12-devlog
The second set of apps are built using the first approach.
My own editor is an array of lines in Ruby. In now about 8 years of using it daily, with the actual editor interacting with the buffer storage via IPC to a server holding all the buffers, it's just not been a problem.
It does become a problem if you insist on trying to open files of hundreds of MB of text, but my thinking is that I simply don't care to treat that as a text editing problem for my main editor, because files that size are usually something I only ever care to view or am better off manipulating with code.
If you want to be able to open and manipulate huge files, you're right, and then an editor using these kind of simple methods isn't for you. That's fine.
As it stands now, my editor holds every file I've ever opened and not explicitly closed in the last 8 years in memory constantly (currently 5420 buffers; the buffer storage is persisted to disk every minute or so, so if I reboot and open the same file, any unsaved changes are still there unless I explicitly reload). It's usually not even breaking the top 50 or so of memory use on my machine (those are all browser tabs...)
I'm not suggesting people shouldn't use "fancier" data structures when warranted. It's great some editors can handle huge files. Just that very naive approaches will work fine for a whole lot of use cases.
E.g. the 5420 open buffers in my editor currently are there because even the naive approach of never garbage collecting open buffers just hasn't become an issue yet - my available RAM has increased far faster than the size of the buffer storage so adding a mechanism for culling them just hasn't become a priority.
Oh, by "more complex" operations I referred to multiple cursors and multi-line regex searches. I've noticed some performance problems in my own editor, but it's mostly because "lines" become fragmented: if you allocate each line with its own allocation, they might be far away from each other in memory. It's especially true when programming, where lines are relatively short.
Regex searches and code highlight might introduce some hitches due to all of the seeking.
The core data structure (array of lines) just isn't that well suited to more complex operations.
Anyway here's what I built: https://github.com/lorlouis/cedit
If I were to do it again I'd use a piece table[1]. The VS Code folks wrote a fantastic blog post about it some time ago[2].
[1] https://en.m.wikipedia.org/wiki/Piece_table [2] https://code.visualstudio.com/blogs/2018/03/23/text-buffer-r...
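For anyone who hasn't seen one, here is a toy piece-table sketch (insert only, fixed capacities, no bounds checking, names invented for the example); the VS Code implementation described in that blog post is considerably more elaborate.

    /* Minimal piece table: the document is described by "pieces" pointing into
       two buffers: the original text (read-only) and an append-only "add"
       buffer. Insertion never moves existing text; it appends to the add
       buffer and splits one piece into at most three. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    enum { ORIG, ADD };

    typedef struct {
        int buf;       /* which buffer the piece points into */
        size_t start;  /* offset into that buffer */
        size_t len;    /* length of the piece */
    } Piece;

    typedef struct {
        const char *orig;   /* original contents, never modified */
        char add[4096];     /* append-only buffer for inserted text */
        size_t add_len;
        Piece pieces[128];  /* in document order */
        int npieces;
    } PieceTable;

    static void pt_init(PieceTable *pt, const char *orig) {
        pt->orig = orig;
        pt->add_len = 0;
        pt->pieces[0] = (Piece){ORIG, 0, strlen(orig)};
        pt->npieces = 1;
    }

    /* Insert text at document offset pos (pos must be within the document). */
    static void pt_insert(PieceTable *pt, size_t pos, const char *text) {
        size_t tlen = strlen(text);
        memcpy(pt->add + pt->add_len, text, tlen);
        Piece np = {ADD, pt->add_len, tlen};
        pt->add_len += tlen;

        /* find the piece containing pos */
        size_t off = 0;
        int i = 0;
        while (i < pt->npieces && off + pt->pieces[i].len < pos)
            off += pt->pieces[i++].len;

        Piece cur   = pt->pieces[i];
        Piece left  = {cur.buf, cur.start, pos - off};
        Piece right = {cur.buf, cur.start + (pos - off), cur.len - (pos - off)};

        /* replace pieces[i] with (left, new, right), dropping empty halves */
        Piece repl[3];
        int n = 0;
        if (left.len)  repl[n++] = left;
        repl[n++] = np;
        if (right.len) repl[n++] = right;

        memmove(&pt->pieces[i + n], &pt->pieces[i + 1],
                (pt->npieces - i - 1) * sizeof(Piece));
        memcpy(&pt->pieces[i], repl, n * sizeof(Piece));
        pt->npieces += n - 1;
    }

    static void pt_print(const PieceTable *pt) {
        for (int i = 0; i < pt->npieces; i++) {
            const char *src = pt->pieces[i].buf == ORIG ? pt->orig : pt->add;
            fwrite(src + pt->pieces[i].start, 1, pt->pieces[i].len, stdout);
        }
        putchar('\n');
    }

    int main(void) {
        PieceTable pt;
        pt_init(&pt, "the quick fox");
        pt_insert(&pt, 10, "brown ");   /* prints "the quick brown fox" */
        pt_print(&pt);
        return 0;
    }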
Modern CPUs can read and write memory at dozens of gigabytes per second.
Even when CPUs were 3 orders of magnitude slower, text editors using a single array were widely used. Unless you introduce some accidentally-quadratic or worse algorithm in your operations, I don't think complex data structures are necessary in this application.
The actual latency budget would be less than a single frame to be completely unnoticeable, so you are in fact limited to less than 1 GB to move per keystroke. And each character may hold additional metadata like syntax highlight state, so 1 GB of movable memory doesn't translate to 1 GB of text either. You are still correct in that a line-based array is enough for most cases today, but I don't think it's generally true.
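Rough numbers behind that ceiling, assuming a 60 Hz frame and an arbitrary 20 GB/s of effective copy bandwidth (measure your own machine):

    #include <stdio.h>

    int main(void) {
        double bandwidth = 20e9;        /* bytes/second, assumed, not measured */
        double frame     = 1.0 / 60.0;  /* ~16.7 ms to stay under one frame */
        printf("movable per keystroke: ~%.0f MB\n",
               bandwidth * frame / 1e6); /* ~333 MB */
        return 0;
    }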
> The core data structure (array of lines) just isn't that well suited to more complex operations.
Just how big (and how many lines) does your file have to be before it is a problem? And what are the complex operations that make it a problem?
(Not being argumentative - I'd really like to know!)
On my own text editor (to which I lost the sources way back in 2004) I used an array of bytes, had syntax highlighting (using single-byte start-stop codes), and used a moving "window" into the array for rendering. I never saw a latency problem back then on a Pentium Pro, even with files as large as 20MB.
I am skeptical of the piece table as used in VS Code being that much faster; right now on my 2011 desktop, a VS Code with no extra plugins has visible latency when scrolling by holding down the up/down arrow keys with a really high keyboard repeat setting. The same computer, keyboard repeat setting and file, using Vim in a standard xterm/uxterm, has visibly better scrolling; it takes half as much time to get to the end of the file (about 10k lines).
From what I have experienced, the complex data structures used here are more about maintaining responsiveness when overall system load is high, and that may result in slightly slower performance overall. Say you used the variable "x" a thousand times in your 10k lines of code and you want to do a find and replace on it to give it a more descriptive name like "my_overused_variable": think about all of the memory copying that happens if all 10k lines are in a single array. If those 10k lines are in 10k separate arrays, each twice the size of its line, you reduce that a fair amount. It might be slower than simpler methods when the system load is low, but it will stay responsive longer.
I think Vim uses a gap structure, not a single array, but I don't remember.
I am not a programmer; my experience could very well be due to failings elsewhere in my code, and my reasoning could be hopelessly flawed, so hopefully someone will correct me if I am wrong. It has also been a while since I dug into this. The project that got me to dig into it is one of the things that got me to finally make an account on HN, and one of my first submissions was Data Structures for Text Sequences.
https://www.cs.unm.edu/~crowley/papers/sds.pdf
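The "gap structure" mentioned is presumably a gap buffer, the classic Emacs approach (Vim's storage is, as far as I know, a tree of blocks backed by its swapfile). A toy sketch, with invented names, of why edits near the cursor stay cheap:

    /* Minimal gap buffer: the text is one array with a "gap" of free space at
       the cursor. Typing fills the gap; moving the cursor slides text across
       the gap, so edits near the cursor touch very little memory. */
    #include <stdio.h>
    #include <string.h>

    #define CAP 1024

    typedef struct {
        char buf[CAP];
        size_t gap_start;  /* cursor position */
        size_t gap_end;    /* first byte after the gap */
    } GapBuf;

    static void gb_init(GapBuf *g) { g->gap_start = 0; g->gap_end = CAP; }

    /* Move the cursor (= the gap) to position pos. */
    static void gb_move(GapBuf *g, size_t pos) {
        if (pos < g->gap_start) {          /* slide text right, gap moves left */
            size_t n = g->gap_start - pos;
            memmove(g->buf + g->gap_end - n, g->buf + pos, n);
            g->gap_start = pos;
            g->gap_end -= n;
        } else if (pos > g->gap_start) {   /* slide text left, gap moves right */
            size_t n = pos - g->gap_start;
            memmove(g->buf + g->gap_start, g->buf + g->gap_end, n);
            g->gap_start = pos;
            g->gap_end += n;
        }
    }

    /* Insert at the cursor: just fill the gap, nothing else moves. */
    static void gb_insert(GapBuf *g, const char *s) {
        size_t n = strlen(s);
        memcpy(g->buf + g->gap_start, s, n);  /* no capacity check, for brevity */
        g->gap_start += n;
    }

    static void gb_print(const GapBuf *g) {
        fwrite(g->buf, 1, g->gap_start, stdout);
        fwrite(g->buf + g->gap_end, 1, CAP - g->gap_end, stdout);
        putchar('\n');
    }

    int main(void) {
        GapBuf g;
        gb_init(&g);
        gb_insert(&g, "hello world");
        gb_move(&g, 5);                /* cursor right after "hello" */
        gb_insert(&g, ", gap-buffered");
        gb_print(&g);                  /* "hello, gap-buffered world" */
        return 0;
    }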
VS Code used 40-60 bytes per line, so a file with 15 million single character lines balloons from 30 MB to 600+ MB. kilo uses 48 bytes per line on my 64-bit machine (though you can make it 40 if you move the last int with the other 3 ints instead of wasting space on padding for memory alignment), so it would have the same issue.
https://github.com/antirez/kilo/blob/323d93b29bd89a2cb446de9...
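To make the 48-vs-40 arithmetic concrete, here is a sketch using roughly kilo's erow fields and assuming a typical 64-bit ABI with 4-byte ints and 8-byte pointers; erow2 is just a hypothetical reordering, not something kilo provides.

    #include <stdio.h>

    typedef struct erow {
        int idx, size, rsize;  /* 12 bytes, then 4 bytes of padding ... */
        char *chars;           /* ... so the pointers start on an 8-byte boundary */
        char *render;
        unsigned char *hl;
        int hl_oc;             /* 4 bytes + 4 bytes of tail padding -> 48 total */
    } erow;

    typedef struct erow2 {     /* same fields, ints grouped together */
        int idx, size, rsize, hl_oc;  /* 16 bytes, no padding needed */
        char *chars, *render;
        unsigned char *hl;            /* 3 * 8 = 24 bytes -> 40 total */
    } erow2;

    int main(void) {
        printf("erow: %zu bytes, reordered: %zu bytes\n",
               sizeof(erow), sizeof(erow2));  /* 48 and 40 on LP64 */
        return 0;
    }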
Would highly recommend the tutorial as it is really well done.
I played around with kilo when it was released, and eventually made a multi-buffer version with support for scripting with embedded Lua. Of course it was just a fun hack, not a serious thing; I continue to do all my real editing with Emacs, but it did mean I got to choose the best project name:
https://github.com/skx/kilua
The original in C: https://git.timshomepage.net/tutorials/kilo
Go: https://git.timshomepage.net/timw4mail/gilo
Rust: https://git.timshomepage.net/timw4mail/rs-kilo
And the more rusty tutorial version (Hecto): https://git.timshomepage.net/tutorials/hecto
PHP: https://git.timshomepage.net/timw4mail/php-kilo
...and Typescript: https://git.timshomepage.net/timw4mail/scroll
Here’s a second recommendation for that tutorial. It’s the first coding tutorial I’ve finished because it’s really good and I enjoyed building the foundational software program that my craft relies on. I don’t use that editor but it was fun to create it.
Author of hecto here, thank you for mentioning it! I wrote the first version around 5 years ago and I’m happy that people still use it. (I updated it in the meantime)
Reading through this code is a veritable rite of passage. You learn how C works, how text editors work, how VT codes work, how syntax highlighting works, how find works, and how little code it really takes to make anything when you strip away almost all conveniences, edge cases, and error handling.
I made a similar editor using Lazarus... since it has syntax highlighting components... I guess that's cheating. The more I think about it though, I wonder if Freepascal could produce a nice GUI for Neovim.
I did try to build one in Qt in C++ years ago, but stopped at trying to figure out how to add syntax highlighting, since I'm not really that much into C++. I pivoted it to work like Notepad, so I was still happy with how it wound up.
https://github.com/Giancarlos/qNotePad
Although it does cheat a bit in an effort to better handle Unicode:
> unicode-width is used to determine the displayed width of Unicode characters. Unfortunately, there is no way around it: the unicode character width table is 230 lines long.
Personally, this is the reason I don't really buy the extreme size reduction; such projects generally have to sacrifice some essential features that demand a certain, unavoidable amount of code.
A lot of those features are only "essential" for a subset of possible users.
My own editor exists because I realised it was possible to write an editor smaller than my Emacs configuration. While my editor lacks all kinds of features that are "essential" for lots of other people, it doesn't lack any features essential for me.
So in terms of producing a perfect all-round editor that will work for everyone, sure, editors like Kilo will always be flawed.
Their value is in providing a learning experience, something that works for the subset who don't need those features, or a basis for people to customise something just right for their needs in a compact way. E.g. my own editor has quirks that are custom-tailored to my workflow, and even to my environment.
Ah darn. Closing in on retirement age (will never happen, coding is too much fun for profit or charity), I resisted building an editor but I want to. Need to. I hacked so much on vim, emacs, eclipse, vs code and it's all crap (the newer, the worse: all these useless gimmicks you won't use past grade school aaarrr while lacking power user features). Can I do better? This seems a good start.
And these projects:
https://github.com/antirez/kilo/forks
Why are all the commenters so eager to get out of terminals?