_bxg1 · 6 years ago
Over my first five years of professional programming, I've been thirstily chasing the dragon of "perfect description". Early on I thought it was OOP. Then entity/component. Then FP. Then it was really about the type system.

Possibly the biggest lesson I've learned - both from the kiln of real-world project requirements (within a multi-paradigm language) and from my intentional ventures into different and diverse programming languages and frameworks - is that there's no such thing. There's no perfect way of describing, not within a domain and certainly not across them. It's not just a matter of abstracting farther and farther from real-world concerns, sacrificing optimization until you're in descriptive nirvana. There are many good ways to describe a given thing in code, and there are many more bad ways, but there's no perfect way. Once I grasped that I became a much better (and less stressed) programmer.

quickthrower2 · 6 years ago
Yes, definitely. The essence is finding the right abstraction. The computer doesn't care if you get this wrong, and the code could work perfectly, but it can be a pain to maintain something if it is abstracted the wrong way. And aiming to reduce the file size of your source files via "Don't Repeat Yourself" isn't necessarily the best way to make code maintainable. I've breathed a sigh of relief when I saw a code base that was your usual scaffolded MVC app rather than something with a tonne of metaprogramming. I've seen both, and the Keep It Simple principle has some merit.

In fact, the best abstraction may depend on the team who will be maintaining that code - so whether to use Tech A or B or Pattern X or Y might have, as an important factor, whether you are moving office from one city to another, and whether the job market is good or bad, affecting the flow of people in or out of the company, etc.

hn_throwaway_99 · 6 years ago
I feel like engraving this paragraph on a wall:

Taboos tend to accrete over time. For example, overzealous object-oriented design has produced a lot of lasagna code (too many layers) and a tendency towards overly complex designs. Chasing semantic markup purity, we sometimes resorted to hideous and even unreliable CSS hacks when much simpler solutions were available in HTML. Now, with microservices, people sometimes break up a trivial app into a hard-to-follow spiderweb of components. Again, these are cases of people taking a valuable guideline for an end in itself. Always keep a hard-nosed pragmatic aim at the real goals: simplicity, clarity, generality.

From Java's "AbstractFactoryBuilderDelegator" insanity to "nanoservices", the common thread to me seems to be overzealous decoupling, to the point where I need to look in 10 different locations just to find out what happens during a single request.

clarry · 6 years ago
> the common thread to me seems to be overzealous decoupling, to the point where I need to look in 10 different locations just to find out what happens during a single request

If it were decoupling in any meaningful sense, you wouldn't need to look in 10 different locations. But you need to look, because it's all related and tightly coupled!

commandlinefan · 6 years ago
It’s become dogmatic in Java to automatically create getters and setters for every single private variable in the class - including mutable objects like Lists and Maps (so a reference from a getter can actually change the referenced object). I’ve pointed out more than once that these getters and setters - usually auto-generated by an IDE or an XML compiler - serve absolutely no purpose and you might as well just cut out the middle man and mark the variables public at that point. Nothing makes a Java “architect” recoil in horror like the suggestion of just admitting that you’re not actually doing object-oriented programming and making the variables in your de-facto “struct” public, so I’ve given up arguing with them; now I just shrug my shoulders and create reams of pointless “getters” and “setters”.
tluyben2 · 6 years ago
When you get more experienced, most of these things make you laugh or cry (depending on the situation); it does not matter what companies like FB and Google do; people on HN or Reddit will take it and do it to the extreme: we now ‘need to’ use React for everything; if it does not fit, just beat it with a hammer until it does. Kubernetes and microservices must be used for every tiny little part of the app even if it causes a lot more overhead in performance/memory use (computers are cheap and fast!) or debugging. Abstract almost everything! (Java + OOP, Javascript and the npm mess) to Abstract almost nothing! (Go w/o generics), Make everything reusable (left-pad), Rewrite everything in JS!, Rust!, Go! etc etc. Everyone is running after each other and doing it more extreme, and the end result is just as shit as if you had not done that at all and had just thought about it a bit before opening some IDE and code-generating your million lines of boilerplate with the unstable and slow framework-du-jour. As an older coder I sigh when a codebase is taken out of the ‘mothballs’ even 6-12 months after creation and people cannot get it running, because everything they used is outdated, because the framework and library authors move fast and break everything all the time. And of course it is in an outdated language / framework (Ruby on Rails is soooo passé) so no one knows anything, and it uses the 358 most popular DSLs of the time (350 unmaintained since January), so unless you drank the same Kool-Aid it is a nightmare spelunking adventure.

At least Dijkstra had sound mathematical reasoning for his arguments and wrote about them eloquently (and with good humor, I might add); most of what is peddled in the hipster coding circles is smooth talk by a gifted social media frontman, with no solid basis in anything besides the fact that the person is popular. I do not even understand how people dare to put their name on complete messes like npm or one-line npm packages unless it is a joke. I assume things like left-pad are in fact a joke; if they are not, I would have to cry myself to sleep every night. So I just lie and say it is funny.

Only when someone codes something without any of that and it gets popular or makes a lot of money do people come with ‘it was best for this occasion’. The best example I can think of is anything Arthur Whitney (k/kdb+) does; his software makes a ton of money, and it is faster, smaller and, in my opinion, easier to debug and uses fewer resources than most things I have ever seen passing here (including what people call embedded; no people, something with a gig of memory is not embedded), and yet it pukes over almost all the rules and style guides that everyone loves so much. Not to mention: he does something a lot of programmers are jealous of (including me); he makes money with a programming language and is always used here as a counterexample when people shout that programming languages that are not open source and/or are commercial (even very costly) do not work.

I wanted to write one sentence; it became slightly more, but I guess most of it is on topic.

ritty · 6 years ago
I'm probably going to take a lot of heat from all the young whippersnappers out there for this, but I absolutely love your comment about React. I'm going to save it. It totally describes my experiences with other developers. They want to use React to re-write major portions of our codebase that work perfectly well as is, just because React is super awesome! Can you guess how many of our customers have complained that our website isn't a single page application? I'll give you a hint: it's less than one. The devs will also take little teeny projects that would take less than an hour to write in Vanilla JS and turn them into big 20-hour development projects with a monolithic codebase that all of a sudden needs routers and back-button integration and URL mangling and gigantic switch statements to draw the correct "page." Oh, and don't forget you have to set up webpack and all those compilation routines so that you can compile all that garbage into other garbage. And then you also have to do that build over and over again for every change. This is JavaScript. Script is in the name. It's not meant to be a compiled language. And contrary to our devs' beliefs, React does not run or draw faster than Vanilla JS, unless you are constantly redrawing the whole page in Vanilla JS, which no one does. I hate React.
ridiculous_fish · 6 years ago
I was surprised by the number of gotos in the Python runtime. The link in the article was down so here:

https://github.com/python/cpython/search?q=goto

There's a lot of "goto exit", which is obviously a CPython runtime convention - fair enough. However, there's plenty of classically bad code; for example:

https://gist.github.com/ridiculousfish/ffe4fa2a17c831ed06e57...

These are old-school-bad gotos: `if` statements would do the job more clearly. Is this a broken-window phenomenon: one planted `goto` opens the door for the rest? Or is there a deeper motivation for this style?

naniwaduni · 6 years ago
To be clear, these are not the "classically bad" go-tos that Dijkstra et al. railed against. C's goto is restricted to local jumps. You'll rarely see a go-to used in the classically bad style these days outside of something like a hand-written assembly interpreter main loop.

As for why you'd want to write a goto where an if would be semantically equivalent, there's a mix of style and human-level semantics: gotos are for "exceptional" cases, so the "normal" case looks flat and falls through directly to unconditional code, keeping the main logic flow at a consistent indentation level. (And apparently this heuristic is also baked into simple branch predictors, though I doubt that's something that comes up nearly as much.)
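
A rough C sketch of the style being described (hypothetical function and names, not taken from CPython itself): error cases jump forward to a single cleanup label, while the happy path stays flat and falls through.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical "goto exit" example: exceptional cases jump away,
     * the main logic stays at one indentation level. */
    static char *read_copy(const char *path, size_t limit)
    {
        char *buf = NULL;
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            goto error;                /* exceptional case */

        buf = malloc(limit + 1);
        if (buf == NULL)
            goto error;                /* exceptional case */

        size_t n = fread(buf, 1, limit, f);
        buf[n] = '\0';                 /* normal flow, no nesting */

        fclose(f);
        return buf;

    error:
        free(buf);                     /* free(NULL) is a no-op */
        if (f != NULL)
            fclose(f);
        return NULL;
    }

Written with nested ifs instead, every later step of the happy path would sit one indentation level deeper, which is exactly the trade-off being discussed.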

arcticbull · 6 years ago
This feels less like 'folly of dogma' and more like these (C/C#) programming languages don't have the constructs to safely and properly express what the programmer is trying to do. 'goto exit' is an unsafe and dangerous version of Rust's '?' operator.

> We should be willing to break generic rules when the circumstances call for it. Keep it simple.

I argue we should instead iterate on the programming language design to make sure we don't need to make these kinds of trade-offs.

ridiculous_fish · 6 years ago
C++ has solid "cleanup" constructs so I wonder why CPython is in C instead of C++. Is it portability, compilation speed, complexity control, transition cost, something else...
kstenerud · 6 years ago
It's the problem of worse programmers seeing something used, and then misusing it because they don't understand the fundamental reasoning that led to its use in the first place.

Graceful error exiting is to this day an unsolved problem in computer science (at least as far as the popular languages are concerned). Even after GOTO elimination hit its stride, Knuth noted:

"Another important class of go to statements is an error exit. Such checks on the validity of data are very important, especially in software, and it seems to be the one class of go to's that still is considered ugly but necessary by today's leading reformers. Sometimes it is necessary to exit from several levels of control, cutting across code that may even have been written by other programmers; and the most graceful way to do this is a direct approach with a go to or its equivalent. Then the intermediate levels of the program can be written under the assumption that nothing will go wrong."

dvfjsdhgfv · 6 years ago
> I was surprised by the number of gotos in the Python runtime.

And yet, there is no goto/label in the Python language itself.

shakna · 6 years ago
Well, not first class, but you do have access to longjmp.

    from libc.setjmp cimport jmp_buf, longjmp, setjmp
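
For context, the underlying C primitives behave roughly like this (a toy sketch, not how CPython actually uses them):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf recovery_point;

    static void deep_work(int value)
    {
        if (value < 0)
            longjmp(recovery_point, 1);   /* non-local jump back past every caller */
        printf("processed %d\n", value);
    }

    int main(void)
    {
        /* setjmp returns 0 on the initial call, and the value passed
         * to longjmp when control comes flying back here. */
        if (setjmp(recovery_point) != 0) {
            printf("recovered from bad input\n");
            return 1;
        }
        deep_work(42);
        deep_work(-1);                    /* triggers the longjmp */
        return 0;
    }

Unlike C's local goto, this really can jump across function boundaries, which is much closer to the kind of unrestricted control flow discussed elsewhere in this thread.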

maxxxxx · 6 years ago
Dogma is a real problem in this industry. When OO came up, suddenly everything had to be objects. So instead of writing

A=add(B,C)

You had to write

Adder AA; A=AA.Add(B,C)

I remember endless discussions about this, and people always argued that functions are not OO, whereas I said OO is about state, so no OO is needed for adding two numbers.

Same with goto. In FORTRAN it was an essential tool but suddenly it became illegal and you had to write complex if statements and other things just to get the same effect.

I guess software is so complex that it’s very hard to always understand all the drawbacks and advantages of something, so you have to live by a set of rules that usually work and follow them blindly.

seanmcdirmid · 6 years ago
> Adder AA; A=AA.Add(B,C)

Did anyone actually ever do that or is this just a huge red herring?

Also, the above looks more like a data flow language where adders are necessarily components in the wiring diagram (try building a CPU without adders!).

Add can be a virtual method on B (so B.add(C)), but then you really want Dylan-esque multidispatch on both B and C. But those kinds of debates fell out of style with the 90s.

maxxxxx · 6 years ago
“Did anyone actually ever do that or is this just a huge red herring?”

Yes people did that and still do.

worik · 6 years ago
Borland's C++ Windows toolkit back in the early nineties would do that sort of thing, if memory serves.

Really deeply convoluted OO.

jeltz · 6 years ago
Yes, it was a real issue. I once had the misfortune to work on such a project; it was by far the hardest legacy code base to understand of any I have seen (even compared to those written by self-taught PHP guys or those from academia).
deanCommie · 6 years ago
Cargo culting OO helps no one, but neither does dismissing it out of hand.

Here are some other things that Adder AA; A=AA.Add(B,C) has over A=add(B,C) that you are glossing over

1) You move the adding logic to its own file, with other adding-only responsibilities

2) You allow injecting different implementations of Adder - maybe introducing a more efficient one in some cases but not others

3) You enable mocking Adder so that your logic that verifies that B and C are added can be tested without having to re-add B and C all the time.

Not sure how much of a concern this was at the time when OO paradigms were first being created, but in today's world, where .Add might be a call to different cloud services and where unit tests with good mocking are essential for any serious application, these concepts matter.

clarry · 6 years ago
> 1) You move the adding logic to its own file, with other adding-only responsibilities

The code snippet above says nothing about where the adding logic is, other than that one places it in a method while another uses a straight function. Said method/function can live anywhere, unless you have a specific language in mind that prevents you from moving functions to a file?

> 2) You allow injecting different implementations of Adder - maybe introducing a more efficient one in some cases but not others

The thing the code wants to do is add something, so if that can be made more efficient, it can be made more efficient by implementing a more efficient add function. Why is it that you need to introduce something that is not an add function, in order to inject a more efficient add function? Why can you inject an Adder but not add()? What language is that again?

> 3) You enable mocking Adder so that your logic that verifies that B and C are added can be tested without having to re-add B and C all the time.

This makes no sense to me. If you add B and C, you test it once, so what's the deal with this re-adding? And why can't you mock add()?
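
For what it's worth, a plain function can be swapped or mocked without any wrapper object; here's a small C sketch with made-up names, using a function pointer as the "injected" dependency:

    #include <stdio.h>

    typedef int (*add_fn)(int, int);

    static int add(int a, int b)      { return a + b; }

    /* Test double: ignores its inputs and returns a fixed value. */
    static int fake_add(int a, int b) { (void)a; (void)b; return 42; }

    /* The dependency is just a function pointer, no Adder object needed. */
    static int sum_range(int lo, int hi, add_fn adder)
    {
        int total = 0;
        for (int i = lo; i <= hi; i++)
            total = adder(total, i);
        return total;
    }

    int main(void)
    {
        printf("%d\n", sum_range(1, 4, add));      /* real: prints 10 */
        printf("%d\n", sum_range(1, 4, fake_add)); /* mocked: prints 42 */
        return 0;
    }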

maxxxxx · 6 years ago
I hope you are kidding. But yes let’s worry about injection for even the simplest things. Maybe we should start with language injection where you write code and later on you inject a different language. That would be the ultimate maintainable system.
enriquto · 6 years ago
Bonus points for good sarcasm!
idlewords · 6 years ago
This rant kind of has it backwards, and Dijkstra's argument against GOTO has been the victim of its own success. The use of GOTO statements he was critiquing doesn't really exist in the wild anymore, so people see the tamed version of GOTO we use to break out of nested loops and so on, and wonder what the big deal was.

It's almost like an anti-vax argument. "This disease doesn't exist anymore, why are we cargo-culting by vaccinating against it?"

The argument in the original rant was about the limits of our ability to reason about code, and remains a deep and useful insight. The fact that we don't really have examples of non-structured codebases to point to in 2019 shows how essential the invention of it was to our work.

kstenerud · 6 years ago
GOTO is an easy target due to its cultural notoriety (regardless of how it actually looked in the past), but the overarching argument is indeed against dogma. To quote Donald Knuth:

"In the late 1960's we witnessed a "software crisis", which many people thought was paradoxical because programming was supposed to be so easy. As a result of the crisis, people are now beginning to renounce every feature of programming that can be considered guilty by virtue of its association with difficulties. Not only go to statements are being questioned; we also hear complaints about floating-point calculations, global variables, semaphores, pointer variables, and even assignment statements. Soon we might be restricted to only a dozen or so programs that are sufficiently simple to be allowable; then we will be almost certain that these programs cannot lead us into any trouble, but of course we won't be able to solve many problems."

It's a problem as old as time itself: A smart person makes an observation based on deep understanding, and the rest, rather than go through the cognitive load of learning its fundamental roots, convert it to an easy statement of morality and dogma, shrouding it deeper and deeper with ceremony and pomp to create a mystique that none dare investigate.

Thinking is hard, and takes much energy. Most people prefer to keep that to a minimum, thus our superstitions, dogmas, cults, and priesthoods.

jerf · 6 years ago
"GOTO is an easy target due to its cultural notoriety (regardless of how it actually looked in the past), but the overarching argument is indeed against dogma."

I agree.

But I think it's worth pointing out that if we're going to use reluctance to use goto as an example of dogma, it strengthens the anti-dogma argument even more to point out that the dogma isn't even correct on its own terms; the goto that the dogma is rejecting historically isn't the same goto that exists today.

Under many dogmas lies a kernel of truth. That kernel can be worth extracting, and is often quite enlightening, unlike the dogma.

wglb · 6 years ago
> The use of GOTO statements he was critiquing doesn't really exist in the wild anymore

Very much this.

The programming world at that time was very much different than it is today. Fortran, which was thought of as a higher level language, had this abomination of an IF statement that (a) required an arithmetic comparison and (b) had three possible destinations: one for a negative result, one for zero, and one for a positive result. It was extremely easy to get yourself into full conceptual overload for any interesting program. Targets of branches were statement numbers. No language in wide use today has this problem.

>the original rant

If you go back and read it, it isn't so much a rant as a very logical, reasoned description of the difficulties we were all feeling at the time as programmers. It was the rest of us (me included) who were part of the screaming crowd saying "Down with the GOTO totally!" Some of the unfortunate resulting hype was caused by the title of the article, which was chosen by the editor rather than by Dijkstra.

Knuth's response, as usual, has good humor in it. He notes that a Dr. Eiichi Goto of Japan complained that he was always being eliminated. The concept of GOTO-less languages was also put forth in the XPL compiler written by McKeeman, Horning and Wortman, which incidentally was my introduction to compilers. Knuth also mentions BLISS, a very fascinating language, whose designers ultimately recognized that they had gone too far.

The author of TFA touches on another overzealousness in today's design thinking, and that is object-oriented programming. In another article or talk, Dijkstra is quoted as classifying object-oriented programming, along with other endeavors, as part of a flourishing snake-oil business - a position which obviously enraged Alan Kay.

wahern · 6 years ago
This comment, https://news.ycombinator.com/item?id=19962895, from a few days ago explains that any use of goto broke Dijkstra's theoretical formalization of program structure. Accordingly, even today's minimal use of goto would still be harmful as it would break his formalization model.

Fortunately we have better models that can handle goto. So it's not really like the anti-vax movement because the most substantial thing that changed isn't our use of goto (the disease burden in your analogy) but better formalizations (i.e. we have better medicine that makes fewer demands of the patient).

jerf · 6 years ago
No, it explains that any use of the original goto operator would break the formalization model. No current language has the goto model that Dijkstra was advocating against; he won. To put it in modern terms, the goto in question would amount to exposing the raw "jmp" assembly instruction to the programmer. No high level language does that.

Indeed, the best way to understand what today's restricted gotos can and cannot do is precisely to understand that formalization model, and to see when you can't do a certain thing because it would break it.

ncmncm · 6 years ago
It has been decades since I was tempted to "goto". This is not because of dogma or "drinking the kool-aid". It is because I use an expressive language that has constructs that mean what I mean, so they don't need to be cobbled together out of such fragmentary primitives.

That so much C code is littered with them just demonstrates a deep weakness in C, and not any kind of fundamental principle. I admit surprise that C# turns out similarly weak.

asveikau · 6 years ago
Would you also consider the assembly code that is generated by your high level language to be so "littered" with jmp instructions, arising from a "deep weakness"?

It's one thing to prefer to work with another abstraction, but this is awfully judgmental phrasing that denies or unfairly maligns its usefulness and necessary ubiquity at a different level.

AnimalMuppet · 6 years ago
Yes, assembly language is a language of deep weakness. There's a reason we don't use it unless we have to - it's too hard to write anything in assembler. In fact, assembler is weaker than C - in C, you can usually avoid goto if you want to badly enough, but in assembly, it's impossible.
ncmncm · 6 years ago
Machine instructions are the archetype of fragmentary primitives. We use better languages for good reasons.
wool_gather · 6 years ago
Swift, for example, has a nice construct called `defer` that runs some code immediately after the current scope exits -- no matter the means of exit. This is pretty much all I would want to use `goto` for.

I sort of disagree that C is "weak" -- I think its simplicity, which even includes `goto`, is a strength. But I agree that other languages can certainly, within their own contexts, come up with nice things.

timwaagh · 6 years ago
C# is one of the very few modern languages that has kept goto. A similar comparison in most other modern languages does not make sense because you cannot even use it.
aikah · 6 years ago
It's funny how Go's limitations made me go back to using the GOTO statement to deal with errors in an HTTP handler.
noelwelsh · 6 years ago
There is some nuance here that the author misses. Goto jumps to a location in program text. Other techniques, like (single-shot) continuations, jump to program state. The former is dangerous - not just because you can write spaghetti code, which was the original critique against goto, but because you can make jumps that have no meaning. For example, you can jump into code whose variables have not been initialised yet. With continuations you can still write complicated control flow, but you can only make jumps that are meaningful.

So I argue the issue is not with goto per se; it is with the lack of better tools provided by the languages in question to express complicated control flow. Like many things in programming languages, better tools are well studied but not available in most mainstream languages, which are stuck in a ~1980s paradigm.
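
As a contrived C sketch of a jump "that has no meaning" - the language will happily let you jump past an initialisation:

    #include <stdio.h>

    int main(void)
    {
        goto skip;          /* jumps forward, past the initialisation below */

        int x = 42;         /* never executed */

    skip:
        /* x is in scope here, but its initialiser never ran,
         * so reading it is undefined behaviour. */
        printf("%d\n", x);
        return 0;
    }

A continuation, by contrast, can only resume a state the program actually reached.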