titanomachy · 8 months ago
“Good debugger worth weight in shiny rocks, in fact also more”

I’ve spent time at small startups and on “elite” big tech teams, and I’m usually the only one on my team using a debugger. Almost everyone in the real world (at least in web tech) seems to do print statement debugging. I have tried and failed to get others interested in using my workflow.

I generally agree that it’s the best way to start understanding a system. Breaking on an interesting line of code during a test run and studying the call stack that got me there is infinitely easier than trying to run the code forwards in my head.

Young grugs: learning this skill is a minor superpower. Take the time to get it working on your codebase, if you can.

demosthanos · 8 months ago
There was a good discussion on this topic years ago [0]. The top comment shares this quote from Brian Kernighan and Rob Pike, neither of whom I'd call a young grug:

> As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

I tend to agree with them on this. For almost all of the work that I do, this hypothesis-logs-exec loop gets me to the answer substantially faster. I'm not "trying to run the code forwards in my head". I already have a working model for the way that the code runs, I know what output I expect to see if the program is behaving according to that model, and I can usually quickly intuit what is actually happening based on the incorrect output from the prints.

[0] The unreasonable effectiveness of print debugging (349 points, 354 comments) April 2021 https://news.ycombinator.com/item?id=26925570

johnfn · 8 months ago
On the other hand, John Carmack loves debuggers - he talks about the importance of knowing your debugging tools and using them to step through a complex system in his interview with Lex Fridman. I think it's fair to say that there's some nuance to the conversation.

My guess is that:

- Debuggers are most useful when you have a very poor understanding of the problem domain. Maybe you just joined a new company or are exploring an area of the code for the first time. In that case you can pick up a lot of information quickly with a debugger.

- Print debugging is most useful when you understand the code quite well, and are pretty sure you've got an idea of where the problem lies. In that case, a few judicious print statements can quickly illuminate things and get you back to what you were doing.

josephg · 8 months ago
There's another story I heard once from Rob Pike about debugging. (And this was many years ago - I hope I get the details right).

He said that he and Brian K would pair while debugging. As Rob Pike told it, he would often drive the computer, putting in print statements, rerunning the program and so on. Brian Kernighan would stand behind him and quietly just think about the bug and the output the program was generating. Apparently Brian K would often just - after being silent for a while - say "oh, I think the bug is in this function, on this line" and sure enough, there it was. Apparently it happened often enough that he thought Brian might have figured out more bugs than Rob did, even without his hands touching the keyboard.

Personally I love a good debugger. But I still think about that from time to time. There's a good chance I should step away from the computer more often and just contemplate it.

recursivedoubts · 8 months ago
I think a lot of “naturals” find visual debuggers pointless, but for people who don’t naturally intuit how a computer works they can be invaluable in building that intuition.

I insist that my students learn a visual debugger in my classes for this reason: what the "stack" really is, how a loop really executes, etc.

It doesn't replace thinking & print debugging, but it complements them both when done properly.

neogodless · 8 months ago
> time to decide where to put print statements

But... that's where you put breakpoints, and then you don't need to "single-step" through code. It takes less time to put a breakpoint than to add (and later remove) temporary print statements.

(Now if you're putting in permanent logging that makes sense, do that anyway. But that probably won't coincide with debugging print statements...)

BlackFly · 8 months ago
The tools are not mutually exclusive. I also do quite a lot with print debugging, but some of the most pernicious problems often require a debugger.

> It takes less time to decide where to put print statements than to single-step to the critical section of code

Why would you ever be single-stepping? Put a breakpoint (conditional if necessary) where you would put the print statement. The difference between a single breakpoint and a print statement is that the breakpoint will allow you to inspect the local variables associated with all calls in the stack trace and evaluate further expressions.

So when do you debug instead of using print statements? When you know that no matter what the outcome of your hypothesis is, that you will need to iteratively inspect details from other points up the stack. That is, when you know, from experience, that you are going to need further print statements but you don't know where they will be.
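Rough sketch of that workflow in Python's pdb (file, line number, and variable names here are invented for illustration):

    $ python -m pdb app.py
    (Pdb) break app.py:42, order.total < 0   # conditional breakpoint where the print would have gone
    (Pdb) continue
    > app.py(42)process_order()
    (Pdb) pp order                           # inspect locals at the stop
    (Pdb) up                                 # walk up the stack and inspect the caller's locals
    (Pdb) p customer.balance                 # evaluate further expressions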

nmeofthestate · 8 months ago
I disagree - using an interactive debugger can give insights that just looking at the code can't (tbf it might be different for different people). But the number of times I have found pathological behaviour from just stepping through the code is many. Think "holy f**, this bit of code is running 100 times??" type stuff. With complex event-driven code written by many teams, it's not obvious what is happening at runtime by just perusing the code and stroking one's long wizard beard.
samsepi01 · 8 months ago
But can't you instead just set a breakpoint next to wherever you are gonna put that print stmt and inspect value once code hits? print stmt seems like extra overhead
kapildev · 8 months ago
Exactly, these judiciously placed print statements help me locate the site of the error much faster than using a debugger. Then, I could switch to using a debugger once I narrow things down if I am still unsure about the cause of the problem.
winrid · 8 months ago
There's this idea that the way you use a debugger is by stepping over line after line during execution.

That's not usually the case.

Setting conditional breakpoints, especially for things like break on all exceptions, or when the video buffer has a certain pattern, etc, is usually where the value starts to come in.

james_marks · 8 months ago
Adding these print statements is one of my favorite LLM use cases.

Hard to get wrong, tedious to type and a huge speed increase to visually scan the output.

jacques_chester · 8 months ago
> Brian Kernighan and Rob Pike

Most of us aren't Brian Kernighan or Rob Pike.

I am very happy for people who are, but I am firmly at a grug level.

throwaway173738 · 8 months ago
I tend not to use a debugger for breakpoints but I use it a lot for watchpoints because I can adjust my print statements without restarting the program
nradov · 8 months ago
You probably just don't know how to use conditional breakpoints effectively. This is faster than adding print statements.
owlstuffing · 8 months ago
Their comment conflates debugging with logging.

Professional debuggers such as the one in IntelliJ IDEA are invaluable regardless of one's familiarity with a given codebase; to say otherwise is utter ignorance. Outside of logging, unless attaching a debugger is impractical, using print statements is at best wasting time.

never_inline · 8 months ago
I use single-stepping very rarely in practice when using a debugger, except when following through a "value of a variable or two". Yet it's more convenient than pprint.pprint() for that because of the structured display of values, expression evaluation, and the ability to inspect callers up the stack.
titanomachy · 8 months ago
I do a lot of print statements as well. I think the greatest value of debuggers comes when I’m working on a codebase where I don’t already have a strong mental model, because it lets me read the code as a living artifact with states and stack traces. Like Rob Pike, I also find single-stepping tedious.
edanm · 8 months ago
This depends on a lot of things.

For example, one thing you wrote that jumps out at me:

> I already have a working model for the way that the code runs [...]

This is not always true. It's only true for code that I wrote or know very well. E.g. as a consultant, I often work on codebases that are new to me, and I do tend to use debuggers there more often than I use print debugging.

Although lots of other variables affect this - how much complicated state there is to get things running, how fast the "system" starts up, what language it's written in and if there are alternatives (in some situations I'll use a Jupyter Notebook for exploring the code, Clojure has its own repl-based way of doing things, etc).

Xeoncross · 8 months ago
That is the difference between complex state and simple state.

I use a debugger when I've constructed a complex process that has a large cardinality of states it could end up in. There is no possibility that I can write logic checks (tests) for all source inputs to that state.

I don't use one when I could simply add more test cases to find my logical error.

Consider the difference between a game engine and a simple state machine. The former can be complex enough to replicate many features of the real world while a simple state machine/lexer probably just needs more tests of each individual state to spot the issue.

danfunk · 8 months ago
This feels a little like "I don't use a cordless drill because my screwdriver works so well and is faster in most cases". Grug brain says use best tool, not just tool grug used last.
gorjusborg · 8 months ago
> substantially faster

Than what? In languages with good debugger support (see JVM/Java) it can be far quicker to click a single line to set a breakpoint, hit Debug, then inspect the values or evaluate expressions to get the runtime context you can't get from purely reading code. Print statements require rebuilding the code and backing them out, so it's hard to imagine that technique being faster.

I do use print debugging for languages with poor IDE/debugger support, but it is one big thing I miss when outside of Java.

eikenberry · 8 months ago
One thing this quote doesn't touch on is that speed of fixing the bug isn't the only variable. The learning along the way is at least as important, if not more so. Reading and understanding the code serves the developer better long term if they are regularly working on it. On the other hand, debuggers really shine when you're jumping into a project to help out and don't have or need a good understanding of the code base.
jemmyw · 8 months ago
I personally use both, and I'm not sure I find the argument about needing to step through convincing. I put the debugger breakpoint at the same place I might put a print. I hardly ever step through, but I do often hit continue to reach the same line again. The real advantage is that you can inspect the current state live and make calls with the data.

However, I use prints a lot more because you can, as you say, usually get to the answer faster.

splitframe · 8 months ago
I feel like you need to know when to use what. The debugger is so much faster and easier for me when looking for errors in the use of (my own) abstractions. But when looking for errors in the bowels of abstracted or finely partitioned code, print logs make it far easier to see the devil in the details.
ozim · 8 months ago
I wonder if Brian Kernighan was using modern tooling, or whether that comment was a quote from the 70s.
roncesvalles · 8 months ago
I'd love to use a real debugger but as someone who has only ever worked at large companies, this was just never an option. In a microservices mesh architecture, you can't really run anything locally at all, and the test environment is often not configured to allow hooking up a stepping debugger. Print debugging is all you have. If there's a problem with the logging system itself or something that crashes the program before the logs can flush, then not even that.
alisonatwork · 8 months ago
This is basically it. When I started programming in C, I used a debugger all the time. Even a bit later doing Java monoliths I could spin up the whole app on my local and debug in the IDE. But nowadays running a dozen processes and containers and whatnot, it's just hopeless. The individual developer experience has gone very much backwards in the microservice era so the best thing to do is embrace observability, feature toggles etc and test in prod or a staging environment somewhere outside of your local machine.
idontwantthis · 8 months ago
At my company our system is composed of 2 dozen different services and all of them can run locally in minikube and easily be debugged in jetbrains.
soseng · 8 months ago
Curious to learn more about why it is difficult to debug. I'm not familiar with service mesh. I also work at a large corp, but we use gateways and most things are event driven with Kafka across domain boundaries. I spend most of my time debugging each service locally by injecting mock messages or objects. I do this one at a time if the problem is upstream. Usually, our logging helps us pinpoint the exact service to target for debugging. It's really easy. Our devops infrastructure has built out patterns and libraries for when teams need to mint a new service. Everything is standardized with Terraform. Everything has the same standard Swagger pages, everything is using Okta, etc. Seems a bit boring (which is good)
frollogaston · 8 months ago
Same, this isn't my choice, debuggers don't work here. And we don't even have microservices.
mananaysiempre · 8 months ago
> In a microservices mesh architecture, you can't really run anything locally at all, and the test environment is often not configured to allow hooking up a stepping debugger.

I don't often use a debugger, and I still feel the need to point out Visual Studio could step into DCOM RPCs across machines in the first release of DCOM, ca. 1995. (The COM specification has a description of how that is accomplished.)

nyarlathotep_ · 8 months ago
> and I’m usually the only one on my team using a debugger. Almost everyone in the real world (at least in web tech) seems to do print statement debugging.

One of the first things I do in a codebase is get some working IDE/editor up where I can quickly run the program under a debugger, even if I'm not immediately troubleshooting something. It's never long before I need to use it.

I was baffled when I too encountered this. Even working collaboratively with people they'd have no concept of how to use a debugger.

"No, set a breakpoint there"

"yeah now step into the function and inspect the state of those variables"

"step over that"

: blank stares at each instance :

joshvm · 8 months ago
For desktop GUI development I can't imagine not using breakpoints and stepping. Especially when you have several ways that some callback might be triggered. It's super helpful to break on a UI element signal (or slot) and then follow along to see why things aren't working.

I don't use debuggers as often in Python, probably because it's easier to throw code in a notebook and run line by line to inspect variables, change/inject state and re-run. That's possible but a lot harder to do in C++.

Also for embedded work, using a debugger and memory viewer is pretty powerful. It's not something people think about for Arduino but almost every commodity micro supports some sort of debugwire-like interface (which is usually simpler than JTAG).

geophile · 8 months ago
I am also in the camp that has very little use for debuggers.

A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values. I add more detailed logging as I start to use the system and find out what needs extra scrutiny. This approach is very easy to get started with and maintain, and provides powerful insight into problems as they arise.

I also put a lot of work into formatting log statements. I once worked on a distributed system, and getting the prefix of each log statement exactly right was very useful -- node id, pid, timestamp, all of it fixed width. I could download logs from across the cluster, sort, and have a single file that interleaved actions from across the cluster.
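For what it's worth, the fixed-width prefix idea is easy to sketch with Python's stdlib logging (the field names here are just an example, not the original system's):

    import logging

    # Fixed-width prefix: timestamp, node id, pid, level. Logs pulled from many
    # nodes can be concatenated, sorted, and read as one interleaved file.
    fmt = "%(asctime)s node=%(node_id)-6s pid=%(process)-7d %(levelname)-7s %(message)s"
    logging.basicConfig(level=logging.INFO, format=fmt)
    log = logging.LoggerAdapter(logging.getLogger("cluster"), {"node_id": "n03"})

    log.info("replica sync started")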

AdieuToLogic · 8 months ago
> A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values.

This is an anti-pattern which results in voluminous log "noise" when the system operates as expected. To the degree that I have personally seen gigabytes per day produced by employing it. It also can litter the solution with transient concerns once thought important and are no longer relevant.

If detailed method invocation history is a requirement, consider using the Writer Monad[0] and only emitting log entries when either an error is detected or in an "unconditionally emit trace logs" environment (such as local unit/integration tests).

0 - https://williamyaoh.com/posts/2020-07-26-deriving-writer-mon...
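A loose Python approximation of that idea (not a real Writer monad, just buffering entries and emitting them only on failure; all names are made up):

    import logging

    logging.basicConfig(level=logging.ERROR)
    log = logging.getLogger("checkout")

    def charge(order):
        raise RuntimeError("card declined")       # stub failure so the example has something to report

    def place_order(order):
        trace = []                                # accumulate entries instead of emitting them
        trace.append(f"validating order {order['id']}")
        trace.append(f"charging card for {order['total']}")
        try:
            charge(order)
        except Exception:
            for line in trace:                    # emit the buffered trace only when something failed
                log.error(line)
            raise
        # on success the buffered entries are simply dropped

    place_order({"id": "o-1", "total": 42})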

hobs · 8 months ago
A log is very different than a debugger though, one tells you what happened, one shows you the entire state and doesn't make you assemble it in your head.
switchbak · 8 months ago
What I find annoying is how these async toolkits screw up the stack trace, so I have little idea what the real program flow looks like. That reduces much of the benefit off the top.

Some IDEs promise to solve that, but I’ve not been impressed thus far.

YMMV based on language/runtime/toolkit of course. This might get added to my wishlist for my next language of choice.

orthoxerox · 8 months ago
This sounds like TRACE level information to me.
avhception · 8 months ago
Using a debugger on my own code is easy and I love it. The second the debugger steps deep into one of the libs or frameworks I'm using, I'm lost and I hate it. That framework / lib easily has many tens of thousands of person-hours under its belly, and I'm way out of my league.
ronjakoi · 8 months ago
But you can just use the "step out" feature to get back out when you realise you've gone into a library function. Or "step over" when you can see you're about to go into one.
never_inline · 8 months ago
IDEs tend to have a "just my code" option.
msluyter · 8 months ago
"Step out" is how to get out of the lower level frameworks, and or "step over" to avoid diving into them in the first place. I can't speak for other IDEs, but all of the JetBrains products have these.
jalapenos · 8 months ago
Debuggers are great. I too have tried and failed to spread their use.

But the reason is simple: they are always 100x harder to set up and keep working than just typing a print statement and being done with it.

Especially when you're in a typical situation where an app has a big convoluted docker based setup, and with umpteen packers & compilers & transpilers etc.

This is why all software companies should have at least one person tasked with making sure there are working debuggers everywhere, and that everyone's been trained how to use them.

There should also be some kind of automated testing that catches any failures in debugging tooling.

But that effort's hard to link directly to shipped features, so if you started doing that, management would home in on that dev like a laser guided missile and task him to something with "business value", thereby wasting far more dev time and losing far more business value.

never_inline · 8 months ago
I am young grug who didn't use debuggers much until last year or so.

What sold me on the debugger are the things you can do with it:

    * See values and eval expressions in calling frames.
    * Modify the course of execution by eval'ing a mutating expression.
    * Set exception breakpoints which stop deep where the exception is raised.
One other such tool is REPL. I see REPL and debugger as complementary to each other, and have some success using both together in VSCode, which is pretty convenient with autoreload set. (https://mahesh-hegde.github.io/posts/vscode-ipython-debuggin...)
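A plain-Python taste of the first two bullets above, no IDE required (names invented):

    def apply_discount(cart, rate):
        breakpoint()                      # Python 3.7+: drops into pdb right here
        return cart["total"] * (1 - rate)

    cart = {"total": 100}
    print(apply_discount(cart, 0.25))

    # At the (Pdb) prompt:
    #   up                                # move to the calling frame and inspect its locals
    #   p cart, rate                      # evaluate expressions
    #   cart["total"] = 0                 # mutate state to change the course of execution
    #   c                                 # continue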

mikepurvis · 8 months ago
The rise of virtualization, containers, microservices, etc has I think contributed to this being more difficult. Even local dev-test loops often have something other than you launching the executable, which can make it challenging to get the debugger attached to it.

Not any excuse, but another factor to be considered when adding infra layers between the developer and the application.

bluefirebrand · 8 months ago
Debuggers are also brittle when working with asynchronous code

Debuggers actually can hide entire categories of bugs caused by race conditions when breakpoints cause async functions to resolve in a different order than they would when running in realtime

PaulHoule · 8 months ago
I would tend to say that printf debugging is widespread in the Linux-adjacent world because you can't trust a visual debugger to actually be working there, given the general brokenness of GUIs in the Linux world.

I didn't really get into debuggers until (1) I was firmly in Windows, where you expect the GUI to work and the CLI to be busted, and (2) I'd been burned too many times by adding debugging printfs() that got checked into version control and caused trouble.

Since then I've had some adventures with CLI debuggers, such as using gdb to debug another gdb, using both jdb and gdb on the same process at the same time to debug a Java/C++ system, automating gdb, etc. But there is the thing, as you say, is that there is usually some investment required to get the debugger working for a particular system.

With a good IDE I think JUnit + debugging gives an experience in Java similar to using the REPL in a language like Python, in that you can write some experimental code and experiment with it, but in this case the code doesn't just scroll out of the terminal but ultimately gets checked in as a unit test.

ses1984 · 8 months ago
Debuggers exist in the terminal, in vim, and in emacs.
bandrami · 8 months ago
Why would you want a GUI debugger?
MangoToupe · 8 months ago
> Breaking on an interesting line of code during a test run and studying the call stack that got me there is infinitely easier than trying to run the code forwards in my head.

I really don't get this at all. To me it is infinitely easier to iterate and narrow the problem rather than trying to identify sight-unseen where the problem is—it's incredibly rare that the bug immediately crashes the program. And you can fit a far higher density of relevant information through print statements over execution of a reproduced bug than you can reproduce at any single point in the call stack. And 99% of the information you can access at any single point in the call stack will be irrelevant.

To be sure, a debugger is an incredibly useful and irreplaceable tool.... but it's far too slow and buggy to rely on for daily debugging (unless, as you indicate, you don't know the codebase well enough to reason about it by reading the code).

Things that complicate this:

* highly mutable state

* extremely complex control or data flow

* not being able to access logs

* the compiler lying to you or being incorrect

* subtle instruction ordering issues

acureau · 8 months ago
I refuse to believe there are professional software developers who don't use debuggers. What are you all working on? How do you get your bearings in a new code-base? Do you read it like a book, and keep the whole thing in your mind? How do you specify all of the relevant state in your print statements? How do you verify your logic?
bloomca · 8 months ago
Personally, I think a debugger is very helpful in understanding what is going on, but once you are familiar with the code and data structures, I am very often pretty close in my assessment, so scanning code and inserting multiple print lines is both faster and more productive.

I only used a debugger recently in C# and C, when I was learning and practicing them.

oh_my_goodness · 8 months ago
A lot of people think that. That's why it's important to read the essay.
wagwang · 8 months ago
I find that debuggers solve a very specific class of bugs of intense branching complexity in a self contained system. But the moment there's stuff going in and out of DBs, other services, multithreading, integrations, etc, the debugger becomes more of a liability without a really advanced tooling team.
eternityforest · 8 months ago
I think of "don't break the debugger" as a top important design heuristic.

It's one point in favor of async, and languages that make async debugging easy.

And at the very least, you can encapsulate things so they can be debugged separately from the multi threading stuff.

If you're always using the debugger, you learn to build around it as one of your primary tools for interacting with code, and you notice right away if the debugger can't help with something.

norir · 8 months ago
Here is my circular argument against debuggers: if I learn to use a debugger, I will spend much, possibly most, of my time debugging. I'd rather learn how to write useful programs that don't have bugs. Most people believe this is impossible.

The trouble of course is that there is always money to be made debugging. There is almost no incentive in industry to truly eliminate bugs and, indeed, I would argue that the incentives in the industry actively encourage bugs because they lead to lucrative support contracts and large dev teams that spend half their time chasing down bugs in a never-ending cycle. If a company actually shipped perfect software, how could they keep extracting more money from their customer?

dieortin · 8 months ago
So you think that not knowing how to operate a debugger will exempt your code from having bugs, or you from having to debug them?

GrumpyYoungMan · 8 months ago
Debuggers are great until you have to work in a context where you can't attach a debugger. Good old printf works in every context.

By all means, learn to use a debugger well but don't become overly dependent on them.

mrngm · 8 months ago
I'll introduce you to our bespoke tool that automatically restarts processes when they exit... and redirects stdout/err by default to /dev/null :D

thesz · 8 months ago
I used to use a debugger when I was young - disk space was small, disks were slow and logging was expensive.

Now, some 35 years later, I greatly prefer logs. This way I can compare execution paths of different use cases, I can compare outcomes of my changes, etc. I am not confined to a single point of time with tricky manual control as with debugger, I can see them all.

To old grugs: learning to execute and rewind code in your head is a major superpower. And it works on any codebase.

TOGoS · 8 months ago
Some people are wizards with the debugger. Some people prefer printfs.

I used to debug occasionally but haven't touched a debugger in years. I'm not sure exactly why this is, but I'm generally not concerned with exactly what's in the stack on a particular run, but more with some specific state and where something changes time after time, and it's easier to collect that data programmatically in the program than to figure out how to do it in the debugger that's integrated with whatever IDE I'm using that day.

And the codebase I deal with at work is Spring Boot. 90% of the stack is meaningless to me anyway. The debugger has been handy for finding out "what the f*$% is the caller doing to the transaction context" but that kind of thing is frustrating no matter what.

Anyway, I think they're both valid ways to explore the runtime characteristics of your program. Depends on what you're comfortable with, how you think of the problem, and the type of codebase you're working on. I could see living in the debugger being more useful to me if I'm in some C++ codebase where every line potentially has implicit destructor calls or something.

ay · 8 months ago
There might be another factor in not using the debugger beyond pure cluelessness: often you can’t really run it in production. Back when I started with coding (it was Turbo Pascal 3.0, so you get the idea :-), I enjoyed the use of the debugger quite a lot.

But in 2000 I started working in a role which required understanding the misbehavior of embedded systems that were forwarding the live traffic, there was a technical possibility to do “target remote …” but almost never an option to stop a box that is forwarding the traffic.

So you end up being dependent on debugs - and very occasional debug images with enhanced artisanal diagnostics code (the most fun was using gcc’s -finstrument-functions to catch a memory corruption of an IPSec field by unrelated IKE code in a use-after-free scenario)

Where GDB shined, though, was in the analysis of crash dumps.

Implementing a “fake” gdb stub in Perl, which sucked in the crash dump data and allowed leisurely exploring it with the debugger rather than decoding hex by hand, was a huge productivity boon.

So I would say - it’s better to have more than one tool in the toolbox and use the most appropriate one.

leojfc · 8 months ago
Wholeheartedly agree. There’s often good performance or security reasons why it’s hard to get a debugger running in prod, but it’s still worth figuring out how to do it IMO.

Your experience sounds more sophisticated than mine, but the one time I was able to get even basic debugger support into a production Ruby app, it made fixing certain classes of bug absolutely trivial compared to what it would have been.

The main challenge was getting this considered as a requirement up front rather than after the fact.

Buttons840 · 8 months ago
Another underutilized debugging superpower is debug-level logging.

I've never worked somewhere where logging is taken seriously. Like, our AWS systems produce logs and they get collected somewhere, but none of our code ever does any serious logging.

If people like print-statement debugging so much, then double down on it and do it right, with a proper logging framework and putting quality debug statements into all code.
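A minimal sketch of what "doing it right" could look like with Python's stdlib logging (the names and the LOG_LEVEL convention are just assumptions):

    import logging, os

    log = logging.getLogger("billing.reconcile")

    def reconcile(account_id, entries):
        # quality debug statements that stay in the code instead of throwaway prints
        log.debug("start account=%s entries=%d", account_id, len(entries))
        total = sum(e["amount"] for e in entries)
        log.debug("computed total=%s", total)
        return total

    if __name__ == "__main__":
        # turn the firehose on per run (LOG_LEVEL=DEBUG) instead of editing code
        logging.basicConfig(level=os.environ.get("LOG_LEVEL", "WARNING"))
        reconcile("acct-42", [{"amount": 10}, {"amount": 32}])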

mrkeen · 8 months ago
If you want to double-down on logging and do it right: make your logs fit for computer consumption and the source of truth.

That's all event-sourcing is.

novia · 8 months ago
Well, what's your workflow? Is there a particular debugger that you love?
titanomachy · 8 months ago
I’ve learned not to go against the grain with tools, at least at big companies. Probably some dev productivity team has already done all the annoying work needed to make the company’s codebase work with some debugger and IDE, so I use that: currently, it’s VS Code and LLDB, which is fine. IntelliJ and jdb at my last job was probably better overall.

My workflow is usually:

1. insert a breakpoint on some code that I’m trying to understand

2. attach the debugger and run any tests that I expect to exercise that code

3. walk up and down the call stack, stepping occasionally, reading the code and inspecting the local variables at each level to understand how the hell this thing works and why it’s gone horribly wrong this time.

4. use my new understanding to set new, more relevant breakpoints; repeat 2-4.

Sometimes I fiddle with local variables to force different states and see what happens, but I consider this advanced usage, and anyway it often doesn’t work too well on my current codebase.

cyberax · 8 months ago
> Almost everyone in the real world (at least in web tech) seems to do print statement debugging. I have tried and failed to get others interested in using my workflow.

Sigh. Same. To a large extent, this is caused by debuggers just sucking for async/await code. And just sucking in general for webdev.

sensanaty · 8 months ago
I try all the time, but I always end up having to wrestle through a trillion calls into some library code that has zero relevance to me, and if the issue is happening at some undetermined point in the chain, you basically have to step through it all to get an idea of where things are going wrong.

On the other hand, the humble console.log() just works without requiring insanely tedious and frustrating debugger steps.

DangitBobby · 8 months ago
Sometimes I need a debugger because there's a ton of variables or I just have no idea what's wrong and that's the easiest way to see everything that's going on. It's really frustrating to feel like I need a debugger and don't have a good way to add the IDEs visual debugger (because I'm using a CLI on a remote session or something). It's also really frustrating to be inside a debugging session and wish you knew what the previous value for something was but you can't because you can't go backwards in time. That happens so often to me, in fact, that print debugging is actually more effective for me in the vast majority of cases.
jmull · 8 months ago
I find the differences between printf debugging and line debuggers (or whatever you call them) unimportant in most circumstances.

Line debuggers usually have some nice conveniences, but the major bottlenecks are between the ears, not in the tool.

voidUpdate · 8 months ago
I usually use a normal debugger to find a problem when I can see its symptoms but not the original cause. That way I can break on the line that is causing the symptom, check what the variables are like and go back up the call stack to find the origin of the incorrect state. I can do all that in one shot (maybe a couple if I need to break somewhere else instead) rather than putting prints everywhere to try and work out what the call stack is, and a load of prints to list off all the local variables.
kakuri · 8 months ago
I loved Chrome's debugger for years, then build tools and React ruined debugging for me.

Built code largely works with source maps, but it fails often enough, and in bizarre ways, that my workflow has simply gone back to console logs.

React's frequent re-renders have also made breakpoints very unpleasant - I'd rather just look at the results of console logs.

Are there ways I can learn to continue enjoying the debugger with TS+React? It is still occasionally useful and I'm glad it's there, but I have reverted to defaulting to console logs.

duderific · 8 months ago
I find myself doing a mix of both. Source maps are good enough most of the time, I haven't seen the bizarre failures you're seeing - maybe your bundling configuration needs some tweaking? But yes, the frequent re-renders are obnoxious. In those cases logging is generally better.

Conditional breakpoints help alleviate the pain when there are frequent re-renders. Generally you can pinpoint a specific value that you're looking for and only pause when that condition is satisfied. Setting watch expressions helps a lot too.

dgb23 · 8 months ago
Console logs in the browser have some unique advantages. You can assign the output to a variable, play with it etc.

But yes, any code that is inside jsx generally sucks to debug with standard tooling. There are browser plugins that help you inspect the react ui tree though.

jesse__ · 8 months ago
Agreed, debugging tools for the browser are almost comically incapable, to the point of not even useful in most cases.
nardi · 8 months ago
Having worked in many languages and debuggers across many kinds of backend and front end systems, I think what some folks miss here is that some debuggers are great and fast, and some suck and are extremely slow. For example, using LLDB with Swift is hot garbage. It lies to you and frequently takes 30 seconds or more to evaluate statements or show you local variable values. But e.g. JavaScript debuggers tend to be fantastic and very fast. In addition, some kinds of systems are very easy to exercise in a debugger, and some are very difficult. Some bugs resist debugging, and must be printf’d.

In short, which is better? It depends, and varies wildly by domain.

fellatio · 8 months ago
I thought debugging was table stakes. It isn't always the answer. If a lot is going on logs can be excellent (plus grep or an observability tool)

However debugging is an essential tool in the arsenal. If something is behaving oddly even the best REPL can't match debugging as a dev loop (maybe lisp excepted).

I even miss the ability to move the current execution point back in .NET now I use Go and JS. That is a killer feature. Edit and continue even more so!

Then next level is debugging unit tests. Saved me hours.

octo888 · 8 months ago
My hypothesis is that because we generally don't/can't use debuggers in production but rather rely on logging and tracing, that extends to local dev.
Nezteb · 8 months ago
Agreed!

If you usually aren't able/allowed to use a debugger in production and must rely on observability tools, it's helpful to know how to utilize those tools locally as effectively as possible when debugging.

connicpu · 8 months ago
I love debuggers, but unfortunately at my current job I've found that certain things we do to make our application more performant (mainly using giant structs full of fixed size arrays allocated at the start of the application) cause LLDB to slow to a crawl when `this` points to them. It really really doesn't like trying to read the state of a nearly 1GB struct...
oehpr · 8 months ago
This is one of those reasons why you really, really need to get other people on board with your workflows. If you're the only one who works like that and someone does something insane that technically works but blows your workflow up... that's your problem. "You should just develop how I'm developing. Putting in a print statement for every line, then waiting 5 minutes for the application to compile."

So long as no one sees your workflow as valuable, they will happily destroy it if it means getting the ticket done.

shadowgovt · 8 months ago
It has gotten to the point where when somebody wants to add a DSL to our architecture one of my first questions is "where is your specification for integrating it to the existing debuggers?"

If there isn't one, I'd rather use a language with a debugger and write a thousand lines of code than 100 lines of code in a language I'm going to have to black box.

lhamil64 · 8 months ago
I work with some pretty niche tech where it's usually, ironically, easier to use a debugger than to add print statements. Unfortunately the debugger is pretty primitive, it can't really show the call stack for example. But even just stopping at a line of code and printing variables or poking around in memory is pretty powerful.
ghiculescu · 8 months ago
I’m a debugger guy too, but print statements can be very powerful. I keep this bookmarked https://tenderlovemaking.com/2016/02/05/i-am-a-puts-debugger...
btreecat · 8 months ago
I've had devs more senior than me tell me they don't see a benefit to using a debugger, because they have a type system.

Wtaf?

scotty79 · 8 months ago
What's insane is that debuggers could support at least 80-90% of print debugging workflow with a good UI.

The only thing that's really missing is history of the values of the watches you added, all in one log. With filters because why not.

For some reason I've never seen this option in any debugger I've tried.

scotty79 · 8 months ago
UPDATE:

Heh, who knew - debuggers already have this feature under the name tracepoints (Visual Studio) or logpoints (VS Code).

https://code.visualstudio.com/blogs/2018/07/12/introducing-l...

It should be way more well known among print debug crowd.

Cthulhu_ · 8 months ago
It's been years since I last used a debugger, but then, it's been years since I last worked on code that was complicated enough to warrant it.

Which is a good thing! Easily comprehended code that you can reason about without stepping through it is good for grug brain.

perrygeo · 8 months ago
Running a debugger on test failure is a ridiculously effective workflow. Instead of getting a wall of text, you drop right into the call stack where the failure/error happened. `pytest --pdb` in python, worth its weight in shiny rocks for sure!
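Tiny made-up example of that loop:

    # test_pricing.py (invented example)
    def net_price(price, discount):
        return price - discount           # bug: should be price * (1 - discount)

    def test_net_price():
        assert net_price(100, 0.25) == 75

    # $ pytest --pdb test_pricing.py
    # On the failing assert, pytest drops into pdb post-mortem: inspect locals,
    # walk the stack with `up`/`down`, evaluate expressions - no wall of text.
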
flmontpetit · 8 months ago
My current position has implemented a toolchain that essentially makes debugging either impossible or extremely unwieldy for any backend projects and nobody seems to think it's a problem.
mlinhares · 8 months ago
I don't use debuggers in general development but use them a lot when writing and running automated tests, much faster and easier to see stuff than with print statements.
ipsento606 · 8 months ago
I've been doing this professionally for over a decade and have basically never used a debugger

I've often felt that I should, but never enough to actually learn how

pydry · 8 months ago
>Young grugs: learning this skill is a minor superpower. Take the time to get it working on your codebase, if you can.

TIL Linus Torvalds is a young grug.

profsummergig · 8 months ago
I want to master JS/React and Python debugging. Am an "advanced beginner" in both. What tools do you recommend?
steve_adams_86 · 8 months ago
For JavaScript, you're actually able to debug fairly easily by default by adding a `debugger` statement in your code. Browsers will stop at that statement (when dev tools are open) and start the debugger.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Another way (and probably a better idea) is creating a launch definition for VS Code in launch.json which attaches the IDE's debugger to Chrome. Here is an article describing how that works: https://profy.dev/article/debug-react-vscode

Breakpoints are nice because they can't accidentally get shipped to production like debugger statements can.

For Python I essentially do the same thing, minus involving Chrome. I run the entry point to my application from launch.json, and breakpoints in the IDE 'just work'. From there, you can experiment with how the debugger tools work, observe how state change as the application runs, etc.

If you don't use VS Code, these conventions are similar in other IDEs as well.
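For reference, a Python entry in launch.json looks roughly like this (the entry-point path is a placeholder; older setups use "type": "python" rather than "debugpy"):

    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Debug app entry point",
                "type": "debugpy",
                "request": "launch",
                "program": "${workspaceFolder}/main.py",
                "console": "integratedTerminal"
            }
        ]
    }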

HPsquared · 8 months ago
Indeed. Let the machine do the work!

Drew_ · 8 months ago
I'm looking around and I don't see anyone mentioning what I think is one of the biggest advantages of debugging: not having to recompile.

Even assuming print statements and debuggers are equally effective (they're not), debuggers are better simply because they are faster. With print statements, you might need to recompile a dozen times before you find whatever it is you're looking for. Even with quick builds this is infuriating and a ton of wasted time.

rendaw · 8 months ago
Are you more productive with a debugger? For all bugs? How so?
mdavid626 · 8 months ago
Nobody has time for that. Need time for prompting AI!
whateveracct · 8 months ago
I use a repl instead of a debugger. White box vs black box.
never_inline · 8 months ago
They can be used in conjunction. You can drop into debugger from REPL and vice versa.
slt2021 · 8 months ago
debugging is useful when your codebase is bad: imperative style, mutable state, weak type system, spaghetti code, state and control variables intermixed.

i'd rather never use a debugger, so that I'm forced to keep my coding style clean: strong type system, immutable variables, explicit error control, explicit control flow, etc.

commandlinefan · 8 months ago
You've gotten downvoted but I think you're correct - if there were no debuggers, the developers would be forced to write (better) code that didn't need them.
butterlesstoast · 8 months ago
Professor Carson if you're in the comments I just wanted to say from the bottom of my heart thank you for everything you've contributed. I didn't understand why we were learning HTMX in college and why you were so pumped about it, but many years later I now get it. HTML over the wire is everything.

I've seen your work in Hotwire in my role as a Staff Ruby on Rails Engineer. It's the coolest thing to see you pop up in Hacker News every now and then and also see you talking with the Hotwire devs in GitHub.

Thanks for being a light in the programming community. You're greatly respected and appreciated.

recursivedoubts · 8 months ago
i'm not crying your crying
deadbabe · 8 months ago
Wasn’t HTMX just a meme? I can’t really tell if it’s serious because of Poe’s Law.
brushfoot · 8 months ago
Solopreneur making use of it in my bootstrapped B2B SaaS business. Clients don't need or want anything flashy. There are islands of interactivity, and some HTMX sprinkled there has been a great fit.
dgb23 · 8 months ago
I started using htmx relatively early on, because it's a more elegant version of what I've been doing anyway for a series of projects.

It's very effective, simple and expressive to work this way, as long as you keep in mind that some client side rendering is fine.

There are a few bits I don't like about it, like defaulting to swap innerHTML instead of outerHTML, not swapping HTML when the status code isn't 200-299 by default and it has some features that I avoid, like inline JSON on buttons instead of just using forms.

Other than that, it's great. I can also recommend reading the book https://hypermedia.systems/.

anthomtb · 8 months ago
So many gems in here but this one about microservices is my favorite:

grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too

jiggawatts · 8 months ago
I keep trying to explain this to tiny dev teams (1-2 people) that will cheerfully take a trivial web app with maybe five forms and split it up into “microservices” that share a database, an API Management layer, a queue for batch jobs to process “huge” volumes (megabytes) of data, an email notification system, an observablity platform (bespoke!) and then… and then… turn the trivial web forms into a SPA app because “that’s easier”.

Now I understand that “architecture” and “patterns” is a jobs program for useless developers. It’s this, or they’d be on the streets holding a sign saying “will write JavaScript for a sandwich”.

mattmanser · 8 months ago
It's all they've seen. They don't get why they're doing it, because they're junior devs masquerading as architects. There's so many 'senior' or 'architect' level devs in our industry who are utterly useless.

One app I got brought in late on, the architect had done some complicated mediator pattern for saving data with a microservice architecture. They'd also semi-implemented DDD.

It was a ten page form. Literally that was what it was supposed to replace. An existing paper, 10 page, form. One of those "domains" was a list of the 1,000 schools in the country. That needed to be updated once a year.

A government spent millions on this thing.

I could have done it on my todd in 3 months. It just needed to use simple forms, with some simple client side logic for hiding sections, and save the data with an ORM.

The funniest bit was when I said that it couldn't handle the load because the architecture had obvious bottlenecks. The load was known and fairly trivial (100k form submissions in one month).

The architect claimed that it wasn't possible as the architecture was all checked and approved by one of the big 5.

So I brought the test server down during the call by making 10 requests at once.

frollogaston · 8 months ago
The only useful definition of a "service" I've ever heard is that it's a database. Doesn't matter what the jobs and network calls are. One job with two DBs is two services, one DB shared by two jobs is one service. We once had 10 teams sharing one DB, and for all intents and purposes, that was one huge service (a disaster too).
someothherguyy · 8 months ago
> Now I understand that “architecture” and “patterns” is a jobs program for useless developers.

Yet, developers are always using patterns and are thinking about architecture.

Here you are doing so too, a pattern, "form submission" and an architecture, "request-response".

djeastm · 8 months ago
>I keep trying to explain this to tiny dev teams

I'm curious what role you have where you're doing this repeatedly

default-kramer · 8 months ago
I'm convinced that some people don't know any other way to break down a system into smaller parts. To these people, if it's not exposed as a API call it's just some opaque blob of code that cannot be understood or reused.
dkarl · 8 months ago
That's what I've observed empirically over my last half-dozen jobs. Many developers treat decomposition and contract design between services seriously, and work until they get it right. I've seen very few developers who put the same effort into decomposing the modules of a monolith and designing the interfaces between them, and never enough in the same team to stop a monolith from turning into a highly coupled amorphous blob.

My grug brain conclusion: Grug see good microservice in many valley. Grug see grug tribe carry good microservice home and roast on spit. Grug taste good microservice, many time. Shaman tell of good monolith in vision. Grug also dream of good monolith. Maybe grug taste good monolith after die. Grug go hunt good microservice now.

isoprophlex · 8 months ago
I swear I'm not making this up; a guy at my current client needed to join two CSV files. A one off thing for some business request. He wrote a REST api in Java, where you get the merged csv after POSTing your inputs.

I must scream but I'm in a vacuum. Everyone is fine with this.

(Also it takes a few seconds to process a 500 line test file and runs for ten minutes on the real 20k line input.)

9rx · 8 months ago
To be fair, microservices is about breaking people down into smaller parts, with the idea of mirroring services found in the macro economy, but within the microcosm of a single business. In other words, a business is broken down into different teams that operate in isolation from each other, just as individual businesses do at the macro scale. Any technical outcomes from that are merely a result of Conway's Law.
cjfd · 8 months ago
Well, if people are really that stupid maybe they should just not be developers.
demosthanos · 8 months ago
> To these people, if it's not exposed as a API call it's just some opaque blob of code that cannot be understood or reused.

I think this is correct as an explanation for the phenomenon, but it's not just a false perception on their part: for a lot of organizations it is actually true that the only way to preserve boundaries between systems over the course of years is to stick the network in between. Without a network layer enforcing module boundaries code does, in fact, tend to morph into a big ball of mud.

I blame a few things for this:

1. Developers almost universally lack discipline.

2. Most programming languages are not designed to sufficiently account for #1.

It's not a coincidence that microservices became popular shortly after Node.js and Python became the dominant web backend languages. A strong static type system is generally necessary (but not sufficient) to create clear boundaries between modules, and both Python and JavaScript have historically been even worse than usual for dynamic languages when it comes to having a strong modularity story.

And while Python and JS have it worse than most, even most of our popular static languages are pretty lousy at giving developers the tools needed to clearly delineate module boundaries. Rust has a pretty decent starting point but it too could stand to be improved.

closeparen · 8 months ago
The network boundary gives you a factoring tool that most language module systems don't: the ability for a collection of packages to cooperate internally but expose only a small API to the rest of the codebase. The fact that it's network further disciplines the modules to exchange only data (not callbacks or behaviors) which simplifies programming, and to evolve their interfaces in backwards compatible ways, which makes it possible to "hot reload" different modules at different times without blowing up.

You could probably get most of this without the literal network hop, but I haven't seen a serious attempt.
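A hedged sketch of the in-process version in Python (names invented): one module exposes a tiny, data-only facade and keeps everything else underscore-private - though here only convention enforces the boundary, which is rather the point.

    from dataclasses import dataclass

    # "billing" module: only the two names below are meant to cross the boundary.
    __all__ = ["Invoice", "create_invoice"]

    @dataclass(frozen=True)
    class Invoice:                        # plain data crosses the boundary, no callbacks or behaviors
        customer_id: str
        total_cents: int

    def create_invoice(customer_id: str, line_items: list[tuple[str, int]]) -> Invoice:
        return Invoice(customer_id, _total(line_items))

    def _total(items):                    # underscore-private helper, not part of the API
        return sum(cents for _, cents in items)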

jakewins · 8 months ago
Any language that offers a mechanism for libraries has formal or informal support for defining modules with public APIs?

Or maybe I’m missing what you mean - can you explain with an example an API boundary you can’t define by interfaces in Go, Java, C# etc? Or by Protocols in Python?

alganet · 8 months ago
grug hears microservice shaman talk about smol api but then grug see single database, shared queue, microservice smol but depend on huge central piece, big nest of complexity demon waiting to mock grug
api · 8 months ago
I have a conspiracy theory that it’s a pattern pushed by cloud to get people to build applications that:

- Cannot be run without an orchestrator like K8S, which is a bear to install and maintain, which helps sell managed cloud.

- Uses more network bandwidth, which they bill for, and CPU, which they bill for.

- Makes it hard to share and maintain complex or large state within the application, encouraging the use of more managed database and event queue services as a substitute, which they bill for. (Example: a monolith can use a queue or a channel, while for microservices you’re going to want Kafka or some other beast.)

- Can’t be run locally easily, meaning you need dev environments in cloud, which means more cloud costs. You might even need multiple dev and test environments. That’s even more cloud cost.

- Tends to become dependent on the peculiarities of a given cloud host, such as how they do networking, increasing cloud lock in.

Anyone else remember how cloud was pitched as saving money on IT? That was hilarious. Knew it was BS way back in the 2000s and that it would eventually end up making everything cost more.

pphysch · 8 months ago
Those are all good points, but missing the most important one, the "Gospel of Scalability". Every other startup wants to be the next Google and therefore thinks they need to design service boundaries that can scale infinitely...
nyarlathotep_ · 8 months ago
It's 100% this; you're right on the money (pun intended).

Don't forget various pipelines, IaC, pipelines for deploying IaC, test/dev/staging/whatever environments, organization permissions strategies etc etc...

When I worked at a large, uh, cloud company as a consultant, solutions were often tailored towards "best practices"--this meant, in reality, large complex serverless/containerized things with all sorts of integrations for monitoring, logging, NoSQL, queues etc, often for dinky little things that an RPI running RoR or NodeJS could serve without breaking a sweat.

With rare exceptions, we'd never be able to say, deploy a simple go server on a VM with server-side rendered templates behind a load balancer with some auto-scaling and a managed database. Far too pedestrian.

Sure, it's "best practices" for "high-availability" but was almost always overkill and a nightmare to troubleshoot.

npodbielski · 8 months ago
I think mostly this is to break down the system between teams. It is easier to manage this way. Nothing to do with a technical decision - more the way of development. What is the alternative? Mono-repo? IMHO it is even worse.
nothrabannosir · 8 months ago
microservices and mono repo are not mutually exclusive. Monolith, is. Important distinction imo, Micro services in mono repo definitely works and ime is >>> multi repo.

Of course the best is mono repo and monolith :3

jppope · 8 months ago
The frequency that you use the term "re-factor" over the term "factor" is often very telling about how you develop your systems. I worked a job one time where the guys didn't even know what factoring was.
zelphirkalt · 8 months ago
Probably many people don't pick up on the word "to factor" something these days. They do not make the connection between the thing that mathematicians do and what that could relate to in terms of writing code. At the same time everyone picks up the buzzword "to refactor". It all depends on what ecosystems you expose yourself to. I think I first heard the term "to factor" something in math obviously, but in software when I looked at some Forth. Most people will never do that, because it is so far off the mainstream, that they have never even heard of it.

Deleted Comment

fellatio · 8 months ago
Unfortunately it is useful to do this for many other reasons!
chamomeal · 8 months ago
Unfortunately indeed. I lament the necessity of microservices at my current job. It’s just such a silver bullet for so many scaling problems.

The scaling problems we face could probably be solved by other solutions, but the company is primed and ready to chuck functionality into new microservices. That’s what all our infrastructure is set up to do, and it’s what inevitably happens every time

arturocamembert · 8 months ago
> given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex

I think about this line at least once a week

boricj · 8 months ago
grug obviously never took on invisible t-rex

this grug keeps one on one invisible t-rex, grug cursed

dev0p · 8 months ago
I felt like my third eye had been opened after reading that. Truly inspiring.
EstanislaoStan · 8 months ago
"...even as he fell, Leyster realized that he was still carrying the shovel. In his confusion, he’d forgotten to drop the thing. So, desperately, he swung it around with all his strength at the juvenile’s legs.

Tyrannosaurs were built for speed. Their leg bones were hollow, like a bird’s. If he could break a femur …

The shovel connected, but not solidly. It hit without breaking anything. But, still, it got tangled up in those powerful legs. With enormous force, it was wrenched out of his hands. Leyster was sent tumbling on the ground.

Somebody was screaming. Dazed, Leyster raised himself up on his arms to see Patrick, hysterically slamming the juvenile, over and over, with the butt of the shotgun. He didn’t seem to be having much effect. Scarface was clumsily trying to struggle to its feet. It seemed not so much angry as bewildered by what was happening to it.

Then, out of nowhere, Tamara was standing in front of the monster. She looked like a warrior goddess, all rage and purpose. Her spear was raised up high above Scarface, gripped tightly in both hands. Her knuckles were white.

With all her strength, she drove the spear down through the center of the tyrannosaur’s face. It spasmed, and died. Suddenly everything was very still."

hackable_sand · 8 months ago
I know this is fiction because everyone is focused on the same problem.
dgb23 · 8 months ago
One thing to appreciate is that this article comes from someone who can do the more sophisticated (complex) thing, but tries not to based on experience.

There is of course a time and place for sophistication, pushing for higher levels of abstraction and so on. But this grug philosophy is saying that there isn't any inherent value in doing this sort of thing and I think that is very sound advice.

Also, I've noticed AI assistance is more effective with consistent, mundane, and data-driven code. YMMV

cowthulhu · 8 months ago
I feel like this would fit the bell curve meme -

Novice dev writes simple code

Intermediate dev writes complex code

Expert dev writes simple code

bluefirebrand · 8 months ago
I gave this advice to an intermediate dev at my company a couple of years ago

Something along the lines of "Hey, you're a great developer, really smart, you really know your stuff. But you have to stop reaching for the most complicated answer to everything"

He took it to heart and got promoted at the start of this year. Was nice to see. :)

ahartmetz · 8 months ago
The time and place for sophistication and abstraction is when and where they make the code easier to understand without first needing a special course to explain why it's easier to understand. (It varies by situation which courses can be taken for granted.)
cortesoft · 8 months ago
> Everything should be made as simple as possible, but not simpler
GMoromisato · 8 months ago
One of the many ironies of modern software development is that we sometimes introduce complexity because we think it will "save time in the end". Sometimes we're right and it does save time--but not always and maybe not often.

Three examples:

DRY (Don't Repeat Yourself) sometimes leads to premature abstraction. We think, "hey, I bet this pattern will get used elsewhere, so we need to abstract out the common parts of the pattern and then..." And that's when the Complexity Demon enters.

We want as many bugs as possible caught at compile-time. But that means the compiler needs to know more and more about what we're actually trying to do, so we come up with increasingly complex types which tax our ability to understand them.

To avoid boilerplate we create complex macros or entire DSLs to reduce typing. Unfortunately, the Law of Leaky Abstractions means that when we actually need to know the underlying implementation, our head explodes.

Our challenge is that each of these examples is sometimes a good idea. But not always. Being able to decide when to introduce complexity to simplify things is, IMHO, the mark of a good software engineer.

mplanchard · 8 months ago
For folks who seek a rule of thumb, I’ve found SPoT (single point of truth) a better maxim than DRY: there should ideally be one place where business logic is defined. Other stuff can be duplicated as needed, and it isn’t inherently a bad thing.

To modulate DRY, I try to emphasize the “rule of three”: up to three duplicates of some copy/paste code is fine, and after that we should think about abstracting.

Of course no rule of thumb applies in all cases, and the sense for that is hard to teach.
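
A tiny illustration of how the rule of three tends to play out (Python; the validation logic and function names are invented):

```python
# Two call sites duplicating a small validation: fine, leave it alone.
def create_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    return {"email": payload["email"].lower()}

def invite_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    return {"email": payload["email"].lower(), "invited": True}

# When a third duplicate shows up (password reset, say), extract the helper,
# because at that point the copies are likely to drift apart by accident.
def normalized_email(payload: dict) -> str:
    email = payload.get("email", "")
    if "@" not in email:
        raise ValueError("invalid email")
    return email.lower()
```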

GMoromisato · 8 months ago
100% agree. Duplication is far cheaper than the wrong abstraction.

Student: I notice that you duplicated code here rather than creating an abstraction for both.

Master: That is correct.

Student: But what if you need to change the code in the future?

Master: Then I will change it in the future.

At that point the student became enlightened.

bluefirebrand · 8 months ago
> To modulate DRY, I try to emphasize the “rule of three”: up to three duplicates of some copy/paste code is fine, and after that we should think about abstracting

Just for fun, this more or less already exists as another acronym: WET. Write Everything Twice

It basically just means exactly what you said. Don't bother DRYing your code until you find yourself writing it for the third time.

ghosty141 · 8 months ago
> I’ve found SPoT (single point of truth) a better maxim than DRY

I totally agree. For example, having 5 variables that all hold the same value but mean very different things is good. Combining them into one variable would be "DRY" but would defeat separation of concerns. With variables it's obvious, but the same applies, to a degree, to more complex concepts like functions, classes, and programs.

It's fine to share code across abstractions, but you've got to make sure it doesn't end up tying these things too tightly together just for the sake of DRY.
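
A trivial (invented) example of that point:

```python
# All four happen to be 30 today, but they answer different questions.
# Collapsing them into one shared constant would be "DRY" while destroying
# meaning: raising the session timeout should not silently change the upload cap.
SESSION_TIMEOUT_MINUTES = 30
DEFAULT_PAGE_SIZE = 30
MAX_UPLOAD_MB = 30
PASSWORD_MIN_ENTROPY_BITS = 30
```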

Deleted Comment

PaulHoule · 8 months ago
I still believe that most code, on average, is not DRY enough, but for projects I do on my own account I've recently developed a doctrine of "there are no applications, only screens", and funnily enough this has been with HTMX, which I think the author of that blog wrote.

Usually I make web applications using Sinatra-like frameworks such as Flask or JAXB, where I write a function that answers URLs matching a pattern; a "screen" is one or more of those functions working together, plus whatever HTML templates go with them. For instance there might be a URL for a web page that shows data about a user, and another URL that HTMX calls when you flip a <select> to change the status of that user.

Assuming the "application" part has the stuff to configure the database connection and file locations, draw HTML headers and footers, and so on, there is otherwise little coupling between the screens. If you want to make a new screen you can cut and paste an old screen and modify it, or you can ask an LLM to make you a screen or endpoint; if it "vibe codes" you a bad screen you can just try again and make another one. It can make sense to use inheritance or composition to make a screen that can be specialized, or to write screens that are standalone (other than fetching the db connection and such).
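
A bare-bones sketch of the shape of one such screen, in Flask with HTMX (illustrative only: invented routes, an in-memory dict standing in for the database):

```python
from flask import Flask, render_template_string, request

app = Flask(__name__)
USERS = {1: {"name": "Alice", "status": "active"}}  # stand-in for the real database

# A "screen": one page route plus the small endpoints HTMX calls from it.
PAGE = """
<script src="https://unpkg.com/htmx.org"></script>
<h1>{{ user.name }}</h1>
<select name="status" hx-post="/users/{{ uid }}/status" hx-target="#status">
  <option>active</option><option>disabled</option>
</select>
Status: <span id="status">{{ user.status }}</span>
"""

@app.get("/users/<int:uid>")
def user_page(uid: int):
    # Full page; in a real app this would also pull in shared headers/footers.
    return render_template_string(PAGE, user=USERS[uid], uid=uid)

@app.post("/users/<int:uid>/status")
def set_status(uid: int):
    # HTMX posts here when the <select> is flipped; return only the fragment it swaps in.
    USERS[uid]["status"] = request.form["status"]
    return USERS[uid]["status"]
```

The point is that nothing here needs a client-side build step or a second deployable: the "screen" is two small functions and a template.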

The origin story was that I was working on a framework for making ML training sets called "Themis" that used microservices, React, Docker and such. The real requirement was that we were (1) always adding new tasks, and (2) having to create simple but always optimized "screens" for those tasks, because if you are making 20,000 judgements it is bad enough to click 20,000 times; if you have to click 4x for each one and it adds up to 80,000, you will probably give up. As it was written, you had to write a bunch of API endpoints as part of a JAXB application and React components in a monolithic React app, then wait 20 minutes for TypeScript and Docker and javac to do their things; if you were lucky it booted up, otherwise you had to start over.

I wrote up a criticism of Themis and designed "Nemesis" for rapid development of new tasks. It was a path not taken at the old job, but Nemesis and I have been chewing through millions of instances of tasks ever since.

GMoromisato · 8 months ago
Fascinating!

I also recoiled at the complexity of React, Docker, etc. and went a different path: I basically moved all the code to the server and added a way to "project" the UI to the browser. From the program's perspective, you think you're just showing GUI controls on a local screen. There is no client/server split. Under the covers, the platform talks to some JavaScript on the browser to render the controls.

This works well for me since I grew up programming on Windows PCs, where you have full control over the machine. Check it out if you're interested: https://gridwhale.com.

I think pushing code to the server via HTMX and treating the browser like a dumb terminal has the same kind of advantage: you only have to worry about one system.

Fundamentally, IMHO, the client/server split is where all the complexity happens. If you're writing two programs, one on the client and one on the server, you're basically creating a distributed system, which we know is very hard.

Symmetry · 8 months ago
On my laptop I have a yin-yang sticker with the yin labeled DRY and the yang labeled YAGNI.
GMoromisato · 8 months ago
I love it!
frollogaston · 8 months ago
DRY isn't very dangerous. It's not telling you to spin off a helper that's only used in one place. If a ton of logic is in one function/class/file/whatever, it's still DRY as long as it's not copied.

Premature abstraction is a thing. Doesn't help that every CS course kinda tells you to do this. Give a new grad a MySQL database and the first thing they might try to do is abstract away MySQL.
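
For illustration (Python, with sqlite3 standing in for MySQL and invented names), the reflex versus what is usually needed:

```python
import sqlite3

# The reflex: wrap the database behind an interface "in case we switch engines someday".
class AbstractUserRepository:
    def get_user(self, user_id: int):
        raise NotImplementedError

class SQLiteUserRepository(AbstractUserRepository):
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
    def get_user(self, user_id: int):
        return self.conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()

# What the app usually needs: one plain function next to the code that uses it.
def get_user(conn: sqlite3.Connection, user_id: int):
    return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()
```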

0xbadcafebee · 8 months ago
sometime grug spend 100 hours building machine to complete work, but manual work takes 1 hour. or spend 1 hour to make machine, lead to 100 hours fixing stupid machine.

dont matter if complex or simple, if result not add value. focus on add more value than detract, worry complexity after

zie1ony · 8 months ago
You must be a Rust developer.
GMoromisato · 8 months ago
Worse--C++
mcqueenjordan · 8 months ago
One of my favorite LLM uses is to feed it this essay, then ask it to assume the persona of the grug-brained developer and comment on $ISSUE_IM_CURRENTLY_DEALING_WITH. Good stress relief.
CactusRocket · 8 months ago
I am not very proficient with LLMs yet, but this sounds awesome! How do you do that, to "feed it this essay"? Do you just start the prompt with something like "Act like the Grug Brained Developer from this essay <url>"?
rm_-rf_slash · 8 months ago
Could put it in a ChatGPT project description or Cursor rules to avoid copy pasting every time.
prmph · 8 months ago
> complexity very bad

Oh boy, this is so true. In all my years of software engineering this is one of those ideas that has proved consistently true in every single situation. Some problems are inherently complex, yes, but even then you'd be much, much better off spending time to think things through to arrive at the simplest way to solve them. Again and again my most effective work has been after I questioned my prior approaches and radically simplified things. You might lose some potential flexibility, but in most cases you don't even need all that you think you need.

Some examples:

- Now that reasonably good (and agentic) LLMs are a thing, I started avoiding overly complex TypeScript types that are brittle and hard to debug, in favor of writing spec-like code and asking the LLM to statically generate other code based on it.

- The ESLint dependency in my projects kept breaking after version updates, many rules were not sophisticated enough to avoid false positives, and keeping it working properly with TypeScript and VSCode was getting complicated. I switched to Biome.js, and it was simpler and just as effective. However, I've recently been having bugs with it (not sure whether Biome itself or the VSCode extension is to blame). But whatever: I realized that linting is a nice-to-have, not something I should be spending inordinate amounts of time babying. So I removed it from the build tool-chain, and I don't even need to have it enabled all the time in VSCode. I run Biome every now and then to check code style and formatting, and that's it, simple.

- Working on custom data migration tooling for my projects, I realized that forward migrations are necessary, but backward migrations are not worth the time and complexity to implement. In case a database with data needs to be rolled back, just restore the backup. If there was no data, or it is not a production database, just run the versioned initialization script(s) to start from a clean state. Simple.
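
A rough sketch of what forward-only migrations can look like (Python with sqlite3; the file layout and table name are invented):

```python
import sqlite3
from pathlib import Path

# Versioned, forward-only scripts: 001_create_users.sql, 002_add_index.sql, ...
MIGRATIONS_DIR = Path("migrations")

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.stem in applied:
            continue  # already ran on this database
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version VALUES (?)", (script.stem,))
        conn.commit()
    # Rolling back means restoring a backup (prod) or rebuilding from the init
    # scripts (dev); there are no per-migration "down" scripts to write and keep in sync.
```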

ttoinou · 8 months ago
Your first two examples: you just hide the complexity by using another tool, no?

And I don’t see how number 3 is simpler. In my maths head I can easily create bijective spaces. Emulating backward migration through other means might be harder (depending on the details, of course; that's not a general rule).

johnfn · 8 months ago
> Your two first examples, you just hide the complexity by using another tool, no ?

The article says that the best way to manage complexity is to find good cut-points to contain complexity. Another tool is an excellent cut-point, probably the best one there is. (Think about how much complexity a compiler manages for you without you ever having to worry about it.)

prmph · 8 months ago
I'm not sure where the complexity is hiding in my examples.

For the code generation, note that some types are almost impossible to express properly, but code can be generated using simpler types that capture all the constraints you wanted. And, of course, I only use this approach for cases where it is not that complicated to generate the code, so I can be sure that each time I need to (re)generate it, it will be done correctly (i.e., the abstraction is not leaky). Also, I don't use this approach for generating large amounts of code, which would hide the inherent structure of the code when reading it.

For the ESLint example, I simply made do without linting as a hard dependency that is always active. That is one of my points: sometimes simply dropping some "niceties" simplifies things a lot. As another example in this vein, I avoid overly complex configuration and modding of my dev environment; that allows me to focus on what matters.

In the migration example, the complexity with backward migration is that you then need to write a reverse migration script for every forward migration script. Keeping this up and managing and applying them properly can become complex. If you have a better way of doing it I'd like to hear it.

sesm · 8 months ago
Many talk complexity. Few say what mean complexity. Big brain Rich say complect is tie together. Me agree. Big brain Rich say complexity bad. Me disagree. Tie things necessary. If things not connected things not solve problem.
pramodbiligiri · 8 months ago
Haha. But I thought Rich Hickey was making the simple point that you shouldn't intertwine things that can be kept separate!

P.S: For those wondering what this refers to, here is his talk: https://youtu.be/SxdOUGdseq4?t=1896