I use ncdu almost daily across many different systems. It's especially handy when running macOS on a <512GB volume, where you need to keep an eye on ballooning caches and other "helpful" cruft.
`rmlint` and `jdupes` are also seeing a lot of use here lately, reclaiming terabytes of space from years of sloppy media organization (or the lack thereof!)
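Roughly the invocations I mean, for anyone curious (the directory paths are just examples):

```
ncdu -x /            # browse the root filesystem interactively, without crossing mounts
rmlint ~/Media       # find duplicates; writes an rmlint.sh script you can review and run
jdupes -r ~/Media    # list duplicate files recursively; add -d to delete interactively
```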
I love ncdu and install it on all of my machines. But at the risk of sounding like a broken record: why isn't its functionality baked into stock file managers on Windows and Linux?
Why can’t either of these systems do what the Mac has been able to do since the 90s, and display the recursive size of a directory in bytes in the file manager, allowing one to sort directories by recursive size?
I am not exaggerating when I say this is the single biggest roadblock to my permanent migration to Linux!
(I would love nothing more than to hear I’m wrong and “you fool, Dolphin can do that with flag: foo”!)
Except that running "ls" doesn't show you the directory content size, and "ncdu" requires the user to go make a cup of tea first. The above poster is right in saying that having this built into the filesystem's metadata would be a huge win.
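The classic stopgap, for what it's worth (GNU sort assumed, for -h):

```
du -sh ./*/ | sort -h           # recursive size of each subdirectory, ascending
du -sh ./*/ | sort -rh | head   # the ten biggest offenders
```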
I assume the restriction is file system related. It's probably not always cheap to calculate the full size of a directory, especially if it's heavily nested.
Windows will tell you the size of a dir in the right click -> properties menu, but it takes a while to calculate for large/complicated directories.
>Windows will tell you the size of a dir in the right click -> properties menu, but it takes a while to calculate for large/complicated directories.
Caja (and probably Nautilus/other Nautilus-based managers) does that as well. But although it can show the size in Properties, arranging by size doesn't take it into consideration (rather, it just sorts directories by the number of items inside).
Just lie to me a little bit. I wouldn't mind quick cached approximations, even if I've changed the disk between reboots or just moved huge files around (and the OS would know about that anyway).
> Why can’t either of these systems do what the Mac has been able to do since the 90s, and display the recursive size of a directory in bytes in the file manager
Many file managers can do that, although for obvious reasons it's built as a contextual action on a single directory rather than an always-on feature that would slow the filesystem down horribly by walking it recursively across many levels. In Thunar (XFCE's file manager), for example, it's accessible from the context menu opened by right-clicking a directory name; other file managers work in a similar way.
I'm sure filesystems could be modified so that any write would automatically update a size field in the containing directory, quickly propagating it up to the upper levels, but that would imply many more write accesses, which on SSD media, for example, would do more harm than good.
You fool, Dolphin's predecessor Konqueror had a directory view embedding the k4dirstat component! There you can sort by subtree percentage, subtree total (bytes) and amount of items, files, subdirs.
This broke some time in the past (KDE really jumped the shark) and is now available as stand-alone applications only: k4dirstat and filelight. The MIME type inode/directory is already associated with those, so you can run them from the context menu of a directory anywhere, including file managers.
I'm not sure what exactly you're asking for, but Dolphin shows me the size of a directory. You may have to right click and update it from time to time.
Almost every distro has a tool called "Disk Usage Analyser" that does exactly what you want. Very helpful when you start getting "no space left on device" errors.
One thing ncdu does not improve over `du | sort` is that it still needs to scan the full directory structure before displaying any result.
I would like something that starts estimating sizes immediately, and then refines those estimations as it is able to spend more time. I tried writing it myself, but I ended up not quite knowing how to go about it, because just getting the count of files in a directory takes about as long as getting the total size of said files...
(Another problem is that file sizes are notoriously fat tailed, so any estimation based on a subset of files is likely to underestimate the true size. Maybe by looking at how the estimation grows with more data one can infer something about the tail exponent and use that to de-bias the estimation?)
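For what it's worth, here's a rough sketch of the sampling idea (GNU find/shuf/stat assumed). Note that enumerating the paths is still the slow part, which is exactly the problem described above:

```
#!/bin/sh
# Sample n files at random, average their sizes, extrapolate by file count.
dir=${1:-.}
n=500
paths=$(find "$dir" -type f 2>/dev/null)
total=$(printf '%s\n' "$paths" | wc -l)
printf '%s\n' "$paths" | shuf -n "$n" \
  | xargs -d '\n' stat -c %s 2>/dev/null \
  | awk -v total="$total" '{s += $1; c++}
      END { if (c) printf "estimate: %.0f bytes (from %d of %d files)\n", s/c*total, c, total }'
```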
If you're okay with a GUI, I think that's how baobab works. I think it only shows the intermediate updates if the disk is slow enough, as I remember it doing that in the past, but checking my SSD just now it didn't.
If you like ncdu, you might also like dua[0]. You can run `dua i` to get an interface similar to ncdu, and can also run `dua` to list file sizes in the current directory, similar to `du`. Or `dua filename` to get the size of a given file.
[0] https://github.com/Byron/dua-cli
Actually, it is in the Arch community repositories and seems to be quite a bit faster than ncdu, so I will keep it in my toolbox for now.
Pain points are that there seems to be no progress bar in interactive mode, the UI is (IMHO) ugly/unintuitive (for instance, the usage bar seems to be relative? and the shortcuts look like glyphs), and there are functions missing (like exclude patterns; you can exclude dirs, though!).
So it won't replace ncdu, but if it gets an interactive progress bar, maybe it will be on all my machines (the Arch ones, anyway).
If you use a Btrfs filesystem with snapshots, I can recommend btdu (https://github.com/CyberShadow/btdu) as an alternative. Its advantage: it can handle files duplicated across snapshots, which only occupy the disk space once.
More interesting than its support of Btrfs features is its unusual statistical approach:
> btdu is a sampling disk usage profiler […] Pick a random point on the disk then find what is located at that point […] btdu starts showing results instantly. Though wildly inaccurate at first, they become progressively more accurate the longer btdu is allowed to run.
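A minimal invocation sketch (the device and mount point here are hypothetical); btdu needs root, and as I understand its README you point it at the top-level subvolume so every snapshot is visible:

```
sudo mkdir -p /mnt/pool
sudo mount -o subvol=/,ro /dev/sda2 /mnt/pool   # expose the whole pool, read-only
sudo btdu /mnt/pool
```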
Broadly, is anyone aware of a generalized list of "new versions of classic tools?"
There are so many now that are better than the old stuff; I almost feel like a unified round-up of these, maybe even in distro form, might be good for Linux enthusiasts, newcomers, old-timers, etc.
Zellij instead of tmux (not necessarily better, but it's easier to use)
Xonsh instead of bash (because you already know Python, why learn a new horrible language?)
bat instead of cat (syntax highlights and other nice things)
exa instead of ls (just nicer)
neovim instead of vim (just better)
helix instead of neovim (just tested it, seems promising though)
nix instead of your normal package manager (it works on Mac, and essentially every Linux dist. And it's got superpowers with devshells and home-manager to bring your configuration with you everywhere)
rmtrash instead of rm (because you haven't configured btrfs snapshots yet)
starship instead of your current prompt (is fast and displays a lot of useful information in a compact way, very customizable)
mcfly instead of your current ctrl+r (search history in a nice ncurses tui)
dogdns instead of dig (nicer colors, doesn't display useless information)
amp, kakoune (more alternative text editors)
ripgrep instead of grep (it's just better yo)
htop instead of top (displays stuff nicer)
gitui/lazygit instead of git cli (at least for staging, nice with file, hunk and line staging when you have ADHD)
gron + ripgrep instead of jq when searching through JSON in the shell (so much easier; quick sketch below)
keychain instead of ssh-agent (better CLI, IMO)
Wrote this on the train with my phone by checking https://github.com/Lillecarl/nixos/blob/master/common/defaul... for which packages I have installed myself :)
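To make the gron + ripgrep item above concrete, with a hypothetical repo.json as input:

```
gron repo.json | rg 'license'                   # every flattened path mentioning "license"
gron repo.json | rg 'license' | gron --ungron   # reassemble the matches into a JSON fragment
```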
One thing that comes to mind, as someone not in the know about these new tools: are they safe?
The old tools have been there forever and used everywhere. My assumption would be these are safe and don't change often. For better or for worse, I would be concerned about using the newer tools unless they are backed and/or approved by a large open source org.
If the tool is "$x, but with pretty colors", there's a good chance they are not safe to use in pipelines. It's distressingly common for colored output to be sent even when piped.
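A quick way to check a tool before trusting it in a pipeline (`sometool` is a placeholder):

```
# Any ^[[...m sequences in the output mean it emits ANSI colors even into a pipe.
sometool | cat -v | head
```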
I really respect this take -- and also kind of don't like it at the same time?
Basically, I don't do "mission critical" Linux things. I teach IT and I hack around on my own boxes with scripts and stuff because it's fun and useful to me. I'm always on the lookout for the hooks and such that can get more and different people into Linux.
I like iotop. Pretty much exactly what it sounds like - a top-like program for I/O operations. It's also just an apt/yum/dnf install away on most distros.
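The two flags I reach for, in case it helps:

```
sudo iotop -o    # only show processes/threads actually doing I/O right now
sudo iotop -ao   # accumulate totals since launch; good for catching slow, steady writers
```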
btdu seems to be specific to Btrfs, though.
gdu is another ncdu-style disk usage analyzer, written in Go: https://github.com/dundee/gdu
Exactly (re Xonsh): one horrible language is enough!
tuc, a more flexible take on cut: https://github.com/riquito/tuc/
lsd instead of exa (better formatting, icons)
mosh instead of ssh for interactive sessions (maintains the session even with bad connectivity; example below)
hyprland instead of sway instead of i3 instead of XMonad
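In case it's useful, mosh usage looks just like ssh (the host here is hypothetical):

```
mosh user@example.com                          # survives roaming and flaky links
mosh user@example.com -- tmux new -A -s main   # drop straight into a tmux session
```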
Go take a trip through the GNU userland tools and you'll find a lot of dodgy code that hasn't been touched in 30+ years.
> A collection of modern/faster/saner alternatives to common unix commands.
http://www.etalabs.net/sh_tricks.html
Not exactly what you had in mind but might still be interesting.
[0] https://jvns.ca/blog/2022/04/12/a-list-of-new-ish--command-l...
https://news.ycombinator.com/item?id=19967138
There's also more detailed CPU usage you can turn on, including I/O wait.
Use it, it's great.