The part that resonated most with me is "Show things that are normally hidden".
Tools that do this make things clearer almost immediately. Consider the developer tools in a web browser. Do you remember the "dark ages" before such things existed? It was awful because you had to guess instead of seeing what was going on.
Tools like Wireshark show you every last byte of the network packets they have access to AND parse them to help you see the structure. This isn't just for debugging network problems; it's hugely beneficial in teaching networking concepts because nothing is hidden.
This is also one of my favorite things about open source software. I can view the source to understand what's causing a bug, to fill in knowledge gaps left by the documentation, or just learn more about programming concepts. Nothing is hidden.
Wireshark is great but it does not show you every byte the network carried. For example it never shows Ethernet preambles, only sometimes shows Ethernet frame checksums, and never shows interpacket gaps (which are a required part of the Ethernet protocol).
So yes it comes close but it just goes to show you, there is always more detail hiding somewhere!
Yes, the toughest "hidden things" problems are pulling together data that is related, but not part of the same system. In this case, Wireshark can only show you what the OS gives to it.
In the article, it was pointed out that DNS caches can be hidden. They're especially hidden when they're upstream and in another computer!
My biggest problem with Wireshark is that it can't do anything with HTTPS traffic - which is most of the traffic I'd be interested in. I understand that's kind of the point of HTTPS, and that a MITM proxy with cert replacement is somewhat out of scope for Wireshark, but it still limits the usefulness of the program.
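One partial workaround for traffic from clients you control: Wireshark can decrypt TLS if you hand it the session secrets via the SSLKEYLOGFILE convention (Firefox and Chrome support it; curl does too when built against a TLS library that does). A sketch, with file paths as placeholders:

    export SSLKEYLOGFILE="$HOME/tls-keys.log"
    firefox &    # secrets for new TLS sessions get appended as you browse
    # point Wireshark's TLS preference "(Pre)-Master-Secret log filename" at
    # that file, or from the command line:
    tshark -r capture.pcap -o tls.keylog_file:"$HOME/tls-keys.log" -Y http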
Are the Ethernet frame checksums even visible to Wireshark, which hooks into the IP layer? Would some of the Ethernet stuff be visible only within the Ethernet card itself, not to the software stack?
We are visualizing things in our head already. And any explanation of anything in computing is a diagram. But we have zero diagrams when coding.

Just dynamically instrument all code to send messages to a GUI.

> My dream is to make everything visualizable at runtime.
Check out demos of the old Lisp Machines: [1] is a brief overview demo, [2] links to a timestamp with a view of some simple diagramming, but I’ve seen TI-Symbolics beasts routinely display complex relationships in Lisp code on their massive (for the time) bitmapped screens. The limitation was the end user managing the visualization complexity.
With open source llvm, clang and similar making available abstract syntax trees and even semantic analysis and type checking results, LLMs assisting with decompiling binary blobs, and modern hardware (goggles, graphics cards, and so on), I sometimes wonder how close we can come to reproducing that aspect of the Lisp Machine experience on open source operating systems.

[1] https://youtu.be/o4-YnLpLgtk

[2] https://youtu.be/jACcgLfyiyM?t=43m52s

It's one of the areas where homoiconicity helps: code is data, data is code, so visualization tools can work on both sides.
> "Show things that are normally hidden". Tools that do this make things clearer almost immediately.
At least with all the DevOps shit going on now, it seems that many of the tools increasingly hide things. And the gurus who had that knowledge and could teach you are now concentrated in those tool companies instead of in your org.
This is one thing I love about Magit for Emacs. The UI is really clever and slick—maybe the best Git frontend that I've ever used—but the way you interact with the UI is by toggling flags and options that actually map to the underlying command line Git arguments. I can seamlessly hop into the command line and feel right at home using it directly.
On top of the UI having a very tight and visible relationship to git CLI commands, there's also a log buffer showing you the exact commands executed and their output, and it's only a single $ press away.
Man, having spent waaaay too long troubleshooting errors where it's not remotely clear what part of the config the program is even consulting makes this hit home.
This is what I like about SQL databases. They frequently have tables that you can query to find system information. This makes it easy to explore the system within the system.
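For example, in SQLite the catalog is itself a table you can query (the database and table names below are placeholders; Postgres has information_schema and pg_catalog for the same job):

    $ sqlite3 mydb.sqlite3
    sqlite> SELECT name, type FROM sqlite_master;  -- every table, index, and view
    sqlite> PRAGMA table_info(users);              -- column metadata for one table
    sqlite> .schema users                          -- the original CREATE statement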
The proc filesystem on Linux is philosophically similar. It allows you to understand processes by working with files.
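For example, with 1234 standing in for whatever PID you're curious about:

    ls -l /proc/1234/fd                # every file and socket the process has open
    readlink /proc/1234/exe            # the binary actually being executed
    tr '\0' '\n' < /proc/1234/environ  # the environment it was started with
    cat /proc/1234/limits              # its resource limits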
> This is also one of my favorite things about open source software. I can view the source to understand what's causing a bug, to fill in knowledge gaps left by the documentation, or just learn more about programming concepts. Nothing is hidden.
On the contrary, you don't need the source, and in some cases it may even be misleading in many ways, when you can look directly at the instructions the machine executes. Tools like disassemblers and decompilers would be equivalent to what you speak of.
Nothing stops you from doing this with open source software either, though; making something open source is strictly an increase in the information available, and at least to me that seems in the spirit of "show me things that are normally hidden".

Is there a way to get Awk to emit a non-terse version of the script passed in? i.e. awk '/test/' -> '{ if($0~/test/){print $0} }'

(Or back in the day, looking at the source code because we ran uncompiled stuff in Basic and whatever and that was pretty cool)
That's one of the advantages of using programming languages where source code is distributed (e.g. Python, JavaScript, PHP), not compiled binary artifacts (C/C++, Java). You can see the source. You can even modify it and run the modified version without compilation.
It's also awesome to use Java IDEs that can show both the bytecode of .class files and also perform decompilation.

It’s really easy to bleed one into the other.
Julia has to be one of the most likable people in tech! Every time I read one of her articles I feel that same bubbly rush of excitement I got when I was a kid, just starting to unfurl the secrets of reality through my own little experiments. Absolutely lovely.
Yes, it's rare to find someone who possesses deep technological know-how and is also a brilliant teacher and communicator. Andrej Karpathy is another who comes to mind. Fortunately, I've discovered that more people fit this mold recently.
Very much. I'm usually not a fan of the "omg awesomesauce" style of overexcited blogposts or tutorials (I much prefer the drier, high signal-to-noise ratio, concise, beautiful texts of the Landau & Lifshitz style), but her posts all make me feel that giddy rush of excitement you're talking about :)
> One thing that I sometimes hear is -- a newcomer will say "this is hard", and someone more experienced will say "Oh, yeah, it's impossible to use bash. Nobody knows how to use it."
> But I would say this is factually untrue. How many of you are using bash?
I think the meaning of the statement is not that straightforwardly literal.
I think what it means is, "We don't have strong confidence in our understanding of our bash code or confidence that it will behave as we expect in untested scenarios. If anything out of the ordinary happens, we kind of expect that something will fail and we will learn something new about bash that will make us cringe and/or strike a nearby object hard enough to injure ourselves."
Bash is a complex language, and for most programmers, it is unlike any other language they use. Most companies have a little bit in production somewhere, and most of them don't have a single person who writes enough bash to know it well. I think it's no accident that build tools, CI tools, and cloud orchestration tools are evolving in the direction of minimizing the need for shell scripting.
Personally I think the complexity of tools like bash comes from a lack of evolution.
As a thought experiment, why couldn't bash have a better assignment statement available?
In other words, something like:

    set --goodass
    a = string1 + '.' + string2
This would cut through SO much of the shell quoting nonsense that you deal with.
Another tool like "make" would benefit too.

I think 6 months of development on "make" to get usable variables, clear ways of manipulating paths and filenames, and more usable targets... that would be better than 6 months of creating complex makefiles.
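For what it's worth, plain bash assignment is already immune to splitting and globbing on the right-hand side; it's the expansions everywhere else that demand the quoting. A quick illustration (variable names invented):

    string1='hello world'; string2='tar.gz'
    a="${string1}.${string2}"  # the assignment itself is safe, no quoting games
    echo $a                    # BUT: the unquoted expansion splits on the space
    echo "$a"                  # the quoted expansion prints the value verbatim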
I think the question is whether a newcomer will understand the implicit meaning or if they might interpret it more literally than it's meant. In particular, "for most programmers, it is unlike any other language they use" is not something someone new would necessarily be able to infer, because that sentiment requires enough experience to tell the difference between "uncommon" and "extremely esoteric".
On a related note, most software is over-engineered. I think it's partly because of centralization of the industry; it's pushing everyone towards a small number of tools for the benefit of a small number of people who control them and so many of these tools end up becoming 'everything tools' and cover more use cases than they should.
Companies want developers to all know the same tools; that way they are easily replaceable across projects and companies and have little bargaining power in the industry. This is why software has a single mainstream trunk and alternative approaches are shunned with no jobs available. The industry is not being allowed to decentralize despite the fact that it naturally 'wants' to.
On the bright side, I think that eventually, some new, far superior non-mainstream approaches are going to materialize and they will erode the mainstream approaches.
Tech is not like math and not even like science; it can support MANY different branches solving any given problem in many different ways.
I agree. In some ways it feels like we have gone backwards in web dev since, say, the early days of ASP.NET and Rails. Back then we had browser wars to keep us busy. But now browsers are broadly compatible, yet we have invented all this front end complexity for web apps that often don’t need it.
Stuff like DNS, IP, https can’t be helped as they are fundamental things that need backwards compatibility and are somewhat political too.
I feel that learning those things well is a better investment though than learning the frameworks.
… if I keep going I will start talking about innovation tokens!
You can learn both, though. As much as people like to trash talk it, I think learning from "the bottom up", as long as you remember to follow the 80/20 principle and not go too deep into unnecessary rabbit holes, is still the best approach in terms of long term ROI on your time. I got a degree in EE because I wanted to be a "true" full stack engineer; last year I finally got a chance to learn React and log a lot of cockpit flight hours setting up Microsoft Azure. It took longer but I feel I'm on much steadier ground to keep climbing up.
To make hard things easy you have to find the right way to abstract them, so you hold only some bits of the hard things in your head (plus, maybe, the frequently-used details), and everything else you look up as needed. That's what I do, and that's roughly what TFA says.
The problem is that people don't necessarily bother to form a cognitive compression of a large topic until they really have to. That's because they already carry other large cognitive burdens with them, so they (we!) tend to resist adding new ones. If you can rely on someone else knowing some topic X well, you might just do that and not bother getting to know topic X well-enough. For those who know topic X well the best way to reduce help demand is to help others understand a minimal amount of topic X.
> So, bash is a programming language, right? But it's one of the weirdest programming languages that I work with.
Yes, `set -e` is broken. The need to quote everything (default splitting on $IFS) is broken. Globbing should be something one has to explicitly ask for -- sure, on the command-line that would be annoying, but in scripts it's a different story, and then you have to disable globbing globally, and globbing where you want to gets hard. Lots of bad defaults like that.
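To make the splitting and globbing defaults concrete (filenames invented):

    f='my file.txt'
    rm $f       # oops: tries to remove 'my' and 'file.txt' ($IFS splitting)
    rm "$f"     # quoting suppresses both splitting and globbing
    echo *      # globbing is on by default; 'set -f' turns it off globally,
                # which then breaks globbing in the places you DO want it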
It's not just Bash, but also Ksh, and really, all the shells with the Bourne shell in their cultural or actual lineage.
As for SQL, yes, lots of people want the order of clauses to be redone. There's no reason it couldn't be -- I think it'd be a relatively small change to existing SQL parsers to allow clauses to come in different orders. But I don't have this particular cognitive problem, and I think it's because I know to look at the table sources first, but I'm not sure.
> The need to quote everything (default splitting on $IFS) is broken

> Globbing should be something one has to explicitly ask for

By the way, OSH runs existing shell scripts and ALSO fixes those 3 pitfalls, and more. Just add one line to the top of your script, and those 3 things will go away. If anyone wants to help the project, download a tarball, test our claims, and write a blog post about it :)

Details:

https://www.oilshell.org/release/latest/doc/error-handling.h...

https://www.oilshell.org/release/latest/doc/simple-word-eval...

These docs are comprehensive, but most people don't want that level of detail, so having someone else test it and write something short would help!

For awhile I didn't "push" Oils because it still had a Python dependency. But it's now in pure C++, and good news: as of this week, we're beating bash on some compute-bound benchmarks!

(I/O-bound scripts have always been the same speed, which is most shell scripts)

(Also, we still need to rename Oil -> YSH in those docs, which will probably cause some confusion for awhile - https://www.oilshell.org/blog/2023/03/rename.html )

https://lobste.rs/s/6gycoi/making_hard_things_easy#c_sjfxif

Feedback is welcome (especially based on upgrading real scripts)
The problem is that we're still using these ancient shells when we have better ones. Users shouldn't be wasting time memorizing arcana like "set -e". At least we have search engines now...

Also, we have ChatGPT now. That helps a lot.
I'm quite partial to the fish shell myself for this reason.
But I SSH into a lot of embedded systems these days, where you don't exactly have the luxury of installing your own shell all the time. For those times I like to whip out the "minimal safe Bash template" and `sftp` it to the server.

https://betterdev.blog/minimal-safe-bash-script-template/
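The heart of that template (from memory, so check the link for the real thing) is a handful of defensive defaults plus a cleanup trap:

    #!/usr/bin/env bash
    set -Eeuo pipefail   # exit on errors, unset variables, and pipeline failures
    trap cleanup SIGINT SIGTERM ERR EXIT
    cleanup() {
      trap - SIGINT SIGTERM ERR EXIT
      # remove temp files, restore state, etc.
    }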
When explanations include superfluous detail, I find it very confusing. Like Chekhov's gun, I keep trying to fit it into the plot but it doesn't fit.
My super power is a terrible memory. So I have to understand things in order to remember them (aka a cognitive compression). I can't just learn things like normal people.
> We should peel off SQL and get access to the underlying layers.

What if my SQL engine is Presto, Trino [1], or a similar query engine? If it's federating multiple source databases we peel the SQL back and get... SQL? Or you peel the SQL back and get... S3 + Mongo + Hadoop? Junior analysts would work at 1/10th the speed if they had to use those raw.

[1] https://trino.io/
TIL: The shell does not exit if the command that fails is part of any command executed in a && or || list except the command following the final && or ||.

Reference: https://www.gnu.org/software/bash/manual/bash.html#index-set
"Fails" is a higher-level concept than the shell is concerned with. Failure conditions and reactions are entirely at the discretion of the programmer and are not built as an assumption into the shell.
The only thing /bin/false does is return 1. Is that a failure? No, that's how it was designed to work and literally what it is for. I have written hundreds of shell scripts and lots of them contain commands which quite normally return non-zero in order to do their job of checking a string for a certain pattern or whatever.
Programs are free to return whatever exit codes they want in any circumstance they want, and common convention is to return 0 upon success and non-zero upon failure. But the only thing that the shell is concerned with is that 0 evaluates to "true" and non-zero evaluates to "false" in the language.
It would be pretty inconvenient if the shell exited any time any program returned non-zero; if statements and loops would be impossible.
If a script should care about the return code of a particular program it runs, then it should check explicitly and do something about it. As you linked to, there are options you can set to make the shell exit if any command within it returns non-zero, and lots of beginner to intermediate shell script writers will _dogmatically_ insist that they be used for every script. But I have found these to be somewhat hacky and full of weird hard-to-handle edge cases in non-trivial scripts. My opinion is that if you find yourself needing those options in every script you write, maybe you should be writing Makefiles instead.
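A small demonstration of non-zero statuses as ordinary control flow (pattern and filename invented):

    if grep -q 'ERROR' app.log; then   # grep returns 0 if found, 1 if not;
      echo 'found errors'              # neither is a "failure" to the shell,
    else                               # just true and false
      echo 'clean log'
    fi
    /bin/false; echo "status: $?"      # prints 'status: 1' and the script carries on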
> It would be pretty inconvenient if the shell exited any time any program returned non-zero, otherwise if statements and loops would be impossible.
In another life I worked as a Jenkins basher and if I remember correctly I had this problem all the time with some Groovy DSL aborting on any non-zero shell command exit. It was so annoying.

What's more pernicious is that pipelines don't cause the shell to exit (assuming set -e) unless the last command fails:

    cat README | wc -l

does not fail if README doesn't exist, unless you've also used `set -o pipefail`. Some time ago I gave an example: https://news.ycombinator.com/item?id=22213830
In my opinion, it is one of the biggest flaws in the shell language design, because it means that a function can lead to different results independent of its arguments, depending on the context from which you call it. And it even overrides explicitly setting `set -e` within a function.
There are more arcane things to learn about shell; at some point one has to shrug: it's a fine tool for getting quick results but not for writing robust programs.
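The override described above, sketched out:

    set -e
    f() {
      set -e    # has no effect here...
      false
      echo 'this still prints'
    }
    if f; then :; fi   # ...because f runs in an if-condition, where -e is suppressed
    f                  # called bare, the same function exits the script at 'false'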
This is a great description of things that seem like they shouldn't be so difficult but can have many complications. The SQL part seems to double-down on a conceptual failure rather than demystifying it though.
A query's logic is declarative: it defines the output. It's the query plan that has any sense of execution order or procedural nature to it. That's the first thing to learn. Then one can learn the fuzzy areas like dependent subqueries etc. But being able to see the equivalence between not-exists and an anti-join enables understanding and reasoning.

Using an analogy such as a procedural reading of written queries only kicks the can further down the road; then, when you're really stuck on something more complicated, you have no way to unravel the white lies.
> The SQL part seems to double-down on a conceptual failure rather than demystifying it though.
She talked about a mental model to help her understand the query (it can be useful), and mentioned that it probably is not how the database actually processes the query.
My point is that there should be two mental models. One for getting the correct results. Then another for doing so performantly. Being able to write many different forms of obtaining the same correct results is where this leads to combined understanding and proficiency.
An example of where muddling these ends up is real questions like "how does the db know what the select terms are when those sources aren't even defined yet?" By 'yet' they mean lexically but also procedurally.
Excellent talk. She seems to be a very likable person. She is right about Bash being full of "gotchas" and trivia and memorizing them all is very hard, but I think it is nice to memorize some trivia. For instance, I tended to forget the order of the arguments of the find command, and I would lose time trying to remember its syntax when I'm in front of a machine with no readily available internet connection. So I committed to learning and memorizing the most common command line tools and some of their "gotchas". I used Anki for that, and some mnemonics, and the return on the investment has been worth it I think.
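The find syntax in question, for reference (paths and tests invented):

    find /var/log -name '*.log' -mtime -7 -size +1M
    # the start directory comes first, then the tests; quote the patterns
    # so the shell doesn't glob them before find ever sees them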
I came here to say Anki is my lifeline for grokking difficult things like DNS.
It was in fact on jvns.ca's book recommendation that I got Michael W. Lucas's _Networking for Systems Administrators_, and strip-mined it for Anki cards containing both technical know-how and more than a little sysadmin wisdom.
It might be one of the highest ROI books I've ever read, considering I actually remember how to use things like netcat and tcpdump to debug transport layer issues at a moment's notice now.
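The kind of thing I mean (host and port are examples):

    nc -vz db.internal 5432        # is anything listening there at all?
    tcpdump -i any -nn port 5432   # watch the handshake happen (or not)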
I maintain a file with commands that I don't use often (ex: increase volume with ffmpeg, add a border to an image with convert, etc). I even have a shortcut that'll add the last executed command to this file and another shortcut to search from this file.
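In case anyone wants to replicate it, a rough bash version of those two shortcuts (the file location is my invention, and the search assumes fzf is installed):

    SNIPPETS="$HOME/.snippets"
    # append the last executed command to the file
    alias keep='fc -ln -1 | sed "s/^[[:space:]]*//" >> "$SNIPPETS"'
    # fuzzy-search the saved commands
    alias snip='fzf < "$SNIPPETS"'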
I wrote an Emacs package that works fairly well for saving commands but also making them reusable from its file manager without the need to tweak input or output file paths https://github.com/xenodium/dwim-shell-command
While Emacs isn’t everyone’s cup of tea, I think the same concept can be applied elsewhere. Right click on file(s) from macOS Finder or Windows Explorer and apply any of those saved commands.

Edit: More examples…

- Stitching multiple images: https://xenodium.com/joining-images-from-the-comfort-of-dire...

- Batch apply on file selections: https://xenodium.com/emacs-dwim-shell-command
If you don't mind, it would be awesome to see your cheatsheet. I think this would be a great thing for people to share - like their dotfiles. But maybe they already do and I don't pay much attention to it because I'm lazy - like their dotfiles.

Just today I saved a new one for trimming borders on video screenshots https://xenodium.com/trimming-video-screenshots to https://github.com/xenodium/dwim-shell-command/blob/main/dwi... (that’s my cheat sheet).
Oh, that's a great idea. I have a doc that I maintain by hand, either via ">>" or editing directly. Time to go and make a shortcut. Do you do any annotation to help with the search?
I like `fzf`'s default override of Ctrl+R backwards search for this purpose, along with the fish shell's really good built in autocompletion.
I've been thinking about updating the GIFs in my fzf tutorial to show off fish, but I think I'd rather leave them as-is just so I don't dilute the pedagogical message.
If you’re already putting them in a file, you might as well put them in a shell script on $PATH: at a certain point I started writing shell scripts and little utilities for relatively infrequently used commands and other tasks (e.g. clone this repo from GitHub to a well-known location and cd to it)
Pretty much the same, though I usually just keep the file open in a side terminal. I want to use stuff like cheat.sh (ex. curl cheat.sh/grep) but I never remember.
> I tended to forget the order of the arguments of the find command, and I would lose time trying to remember its syntax when I'm in front of a machine with no readily available internet connection.
The man pages are readily available.
The bash man page is huge and hairy, but comprehensive. I've found it pretty valuable to be familiar with the major sections and the visual shape of the text in the man page, so I can page through it quickly to locate the exact info I need. This is often faster than using an Internet search engine.
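For paging through it quickly, less's search commands do most of the work:

    man bash
    # then, inside the pager:
    #   /^SHELL GRAMMAR   jump to a major section (headings are flush-left, uppercase)
    #   n / N             repeat the search forward / backward
    #   &pipefail         filter the display to matching lines only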
True, but I find the man pages not easy and quick to parse.

It might be better to invest in something more general like better docs/cheatsheets (the bad old man pages which you could convert to a text editor friendly format, or something better like tldr, or something like Dash) so you don't depend on the internet, but also don't have to memorize bad designs (since find wouldn't be the only one).

Bash is terrible.