cc_ashby · 3 years ago
It's not the number of abstractions, but the quality of abstractions. If abstractions reduce cognitive load, they are good. If abstractions increase cognitive load, they are bad. Oftentimes, good abstractions are obvious within the problem domain.

To this end, I think Haskell, in its pursuit of the purest, can lead to incredibly-high cognitive load abstractions that make reading code daunting. It feels good when writing it, but feels awful when reading it. Sometimes applying the DRY principle can be bad if the thing you are generalizing will lead to higher cognitive load for people referencing it.

Also I think abstractions vary in quality depending on the language the abstraction is embedded in. Functional paradigms in C++ are thorny to work with; even though functional paradigms are good abstractions in general, if the language doesn't support them first-class (i.e. the cognitive load of using them is high), then they are not good abstractions.

You can also consider maps in a language like Clojure, where maps are almost "zero-class"; it's like the language was built for maps. Nested maps have incredibly low cognitive load in Clojure (not only because the syntax supports it, but also because functions are standardized). Nested maps in Common Lisp, however, are not as nice as those in Clojure; you have to import non-standard libraries in order to deal with them (and even more obscure libraries to support Clojure-like read macros).
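For readers who haven't used Clojure: `get-in` walks a nested map along a path of keys, with a default on a miss. A rough Python analogue (the helper name mirrors Clojure's; this is a sketch, not a library API):

```python
def get_in(m, path, default=None):
    """Walk a nested dict along `path`, returning `default` on any miss."""
    for key in path:
        if not isinstance(m, dict) or key not in m:
            return default
        m = m[key]
    return m

order = {"customer": {"address": {"city": "Oslo"}}}
assert get_in(order, ["customer", "address", "city"]) == "Oslo"
assert get_in(order, ["customer", "phone"], "unknown") == "unknown"
```

The point about standardized functions is that every Clojure map, nested or not, works with the same small vocabulary (`get-in`, `assoc-in`, `update-in`), so there's nothing extra to learn per codebase.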

The key question is: when someone else reads this code, how much do they need to "reconstruct" the context? Can they get away with not reconstructing the context? And if they need to, how fast can they do so, via appropriate naming, comments, and types?

vlzdr · 3 years ago
I fully agree with you. You mentioned that sometimes applying the DRY principle is a bad thing. Really, people often adhere to the DRY principle far too dogmatically.

It’s better to have some duplication than to end up with a wrong abstraction.

I like the AHA principle much more. It suggests:

  - Avoid Hasty Abstractions
  - Prefer duplication over the wrong abstraction because duplication is far cheaper than the wrong abstraction
I found it here: https://kentcdodds.com/blog/aha-programming.

The point about duplication being better than wrong abstraction is made here: https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction.

And the principle doesn’t suggest avoiding abstractions altogether. It’s really only about avoiding the hasty ones. Wait with the abstraction until you feel it’s necessary or when duplication itself becomes a problem. At that point, you’ll have a clearer understanding of how the abstraction should function.

ughitsaaron · 3 years ago
I really liked the way Sussman discussed repetition in the SICP lectures. Something like, “If you find yourself repeating something, that’s often a hint that there could be a useful abstraction.” It’s a far less prescriptive (and more appropriately flexible) way of expressing the same general idea than DRY.
rightbyte · 3 years ago
> people often adhere to the DRY principle far too dogmatically

This is a problem with all rules of thumb for programmers. It's like we as a group are far more dogmatic than normal. Or our rules are worse. Or both. Dunno.

wudangmonk · 3 years ago
I think abstraction is promoted way too much in programming. People use lisp macros as an example of too much abstraction, since they allow you to write a DSL that only you understand, yet they celebrate the poor man's DSL they create using their OOP abstractions.

I like the math homework approach to programming. First you get in there with a vague idea of what you need to do, you bang your head against it until it starts to make some sense. You then move on to more complicated examples only to realize that what you thought you knew was wrong. After going through many of these iterations you might start to notice some patterns common to all examples and if you're lucky they might turn out to be true. This is rare and only happens when you have a very good understanding of the problem.

kazinator · 3 years ago

  s/People use/People who have never written a line of code in it use/

AlexCoventry · 3 years ago
The values you're espousing here are the core lesson I took from John Ousterhout's A Philosophy of Software Design.
valcron1000 · 3 years ago
> To this end, I think Haskell, in its pursuit of the purest, can lead to incredibly-high cognitive load abstractions that make reading code daunting.

Hard disagree. The Haskell abstractions in the standard library are (mostly) great. Classes like `Monoid` or `Functor` are as basic as they get (in the sense that they describe a very small set of behaviors) and include laws.
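For readers who haven't met these classes, here's a rough illustration of the monoid and functor laws using Python lists (an analogy, not Haskell semantics):

```python
# Lists form a monoid: an associative combine (+) with an identity ([]).
xs, ys, zs = [1], [2], [3]
assert (xs + ys) + zs == xs + (ys + zs)  # associativity
assert [] + xs == xs + [] == xs          # identity

# map is the list functor: it preserves identity and composition.
f = lambda x: x + 1
g = lambda x: x * 2
nums = [1, 2, 3]
assert list(map(lambda x: x, nums)) == nums                              # identity law
assert list(map(lambda x: f(g(x)), nums)) == list(map(f, map(g, nums)))  # composition law
```

The "small set of behaviors plus laws" combination is what keeps the cognitive load down: once you know the laws, every instance behaves predictably.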

Of course, user-defined classes can incur high cognitive load, but that applies to any language that supports abstraction.

TeMPOraL · 3 years ago
I feel the problem is more that Haskell is one of those languages that push against the unmovable barrier of dimensionality. Yes, monads and related ideas that get increasingly adopted by other languages are great. They let you express some things more clearly than before - especially when it comes to cross-cutting concerns like error handling - but only to a point.

At some point, you're trying to optimize readability of code across multiple concerns, each having a good claim to being of prime importance. But you can't. You can try and invent increasingly obscure mathematical abstractions to make some concerns special cases of a more general one - but in the end, there's only so much you can cram into the same piece of plaintext. You have to trade readability for some readers (e.g. those interested in "golden path" business rules) against readability for others (e.g. those interested in error propagation).

Or, you can double down on further abstractions and make everything unreadable for everyone equally :).

The underlying problem is partially about plaintext format itself, but mostly about the fact that we always work on the same, single representation, and demand it to be everything for everyone. I feel the only way to progress is to finally give up the constraint of directly reading and writing the final "single source of truth" code. It will definitely simplify things day-to-day, as it'll allow you to plain ignore and hide things you don't care about at a particular moment, instead of trying to keep them concise with design patterns, clever syntax, advanced algebra, etc.

kmoser · 3 years ago
> The key question is: when someone else reads this code, how much do they need to "reconstruct" the context? Can they get away with not reconstructing the context? And if they need to, how fast can they do so, via appropriate naming, comments, and types?

That is indeed the key question, but the answer will have a lot to do with their experience as a developer, their knowledge of the domain, and their familiarity with the code.

I'll take poor quality abstractions any day, provided they are coupled with amazing documentation, even if in the form of well written comments, that describe the reasoning behind them.

EVa5I7bHFq9mnYK · 3 years ago
I always told my devs: imagine the dumbest programmer you ever knew. Take me, your manager, for example. Will I be able to understand and fix/modify your code 10 years down the road when you all have left?
sebazzz · 3 years ago
That pretty much excludes using any npm library like redux or webpack.
koboll · 3 years ago
The key determinant, imo, is precisely and comprehensively descriptive names

Even if this leads to humourously Java-esque code, it's worth it

An abstraction that has one purpose and is named to make that purpose crystal clear is a good abstraction

lukeramsden · 3 years ago
> Even if this leads to humourously Java-esque code, it's worth it

Some people seem to get mad about the verbose naming common in Java - but it’s one of the biggest blessings I’ve ever experienced. If you name things after what they do, and that name is stupid, then it’s the quickest indicator of bad design I’ve ever seen. Good design is where every name is patently obvious and encompasses the entire purpose of the class / record / method / whatever.

throwuwu · 3 years ago
Doesn’t matter how good your names are if your control flow is incomprehensible or if the data representation is some bloated or mangled monstrosity.
i_am_a_peasant · 3 years ago
and every time i complain about names during code reviews i get accused of bike shedding
ughitsaaron · 3 years ago
I was coming here thinking the same thing. The important thing isn’t abstractions in general, it’s whether those abstractions are useful (defined here as reducing the cognitive load of a reader, but there’s probably dozens of other meaningful definitions of a “useful” abstraction). That can depend on a number of factors, e.g. the particular programming idioms, flavor, style, etc. of a team; the quality of documentation and onboarding of new engineers to a team; the specific task at hand, etc.
tikhonj · 3 years ago
Abstractions, like so many other things, can be "difficult" along two axes: they can be hard to learn, or they can be hard to use. The former increase cognitive load as you're learning them, sure, but they can also pay off massively once you do—abstract math is hard but also lets us manage complexity and express ideas we would not be able to handle otherwise. On the other hand, lots of abstractions aren't particularly hard to learn (usually because they aren't very abstract!), but either don't do much or carry an ongoing amount of complexity as you use them. It doesn't matter how well you've learned about pointers or malloc + free, non-trivial code using them requires a lot of care and is easy to mess up. (This is a controversial point, but it should be clear given the number of bugs and security vulnerabilities we find due to memory safety issues in real code!)

I see it as a trade-off between abstract thought and working memory. You can spend time up-front to reduce the amount of details you need to keep in your head as you go along, or you can continue to juggle more details to learn less up-front.

The problem, of course, is that most people don't differentiate between the two. In some sense, it's only fair: if an abstraction carries some up-front cognitive cost as you're learning it, how do you know it'll get better? Do you even have the time to learn something new right this moment? But, ultimately, it's a massive difference and avoiding abstractions that are hard up-front is self-defeating in the long term.

Haskell abstractions mostly fall in the former camp: hard to learn, but powerful once you have. I've worked with some pretty poor Haskell code and the abstractions that seemed hard when I was a beginner were exactly what made it easier to understand messy code! Turns out that expressive types and effect management mean that I don't have to carefully understand which parts of the code can affect each other indirectly and which parts can't; it's all explicit in the structure of the code. I had to learn the language and the concepts to understand it but, once I did, I could read it directly from the program rather than needing to simulate the code in my head.
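Outside Haskell you can only approximate this, but a small Python sketch shows the flavor: failure appears in the return type instead of as a hidden effect (the `Result` types here are hand-rolled for illustration, not a standard library):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    reason: str

Result = Union[Ok[T], Err]

def parse_port(raw: str) -> Result[int]:
    # The possibility of failure is visible in the signature;
    # callers must handle Err rather than hope no exception escapes.
    if raw.isdigit() and 0 < int(raw) < 65536:
        return Ok(int(raw))
    return Err(f"invalid port: {raw!r}")

assert parse_port("8080") == Ok(8080)
assert isinstance(parse_port("http"), Err)
```

This is the "explicit in the structure of the code" property: you read which parts can fail directly from the types instead of simulating the code in your head.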

I've found that, most of the time, the KISS design philosophy is heavily weighted in the opposite direction: it's all about avoiding concepts and abstractions that a reader would need to learn, but at the expense of having everybody keep more details in their head as they're writing and reading the code. The small pieces might each be easier to understand, but there are a whole bunch more pieces to track for the same amount of logic!

The problem with a blanket "how hard will somebody else find this code to read?" is that so much of it rides on familiarity rather than anything fundamental to the code. And that's what matters in any specific situation, to be sure... but it doesn't answer the broader question of how we should be writing (and reading) code. After all, even if familiarity dominates in the immediate short term, it might still be worth up-front learning to have a better experience in the medium and long term.

hu3 · 3 years ago
Worst offender for me is when I can't access the implementation by just clicking on the type or variable, because it is an interface.

For example with dependency injection in languages where devs put Interfaces in constructors and some magic injects the Class that satisfies that Interface at runtime. When you click the type or variable, the IDE opens the Interface file which is just a bunch of function signatures, no implementation. It's infuriating when all you want is to see some concrete code. You just want to see exactly what code is being executed and then carry on with your day.

I get the benefits of dependency injection, but most of the time a single class implements that interface and yet I'm forced to scavenge the code to find it. I also know some tooling is powerful enough to list the classes that satisfy that interface. But it's not perfect, and it's still a layer of indirection adding to my cognitive load.
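For concreteness, the shape being complained about, sketched in Python with hypothetical names: the constructor names only the interface, so "go to definition" lands on signatures rather than code. Explicit wiring at the call site at least keeps the one concrete class visible:

```python
from typing import Protocol

class Mailer(Protocol):  # the "interface" the IDE jumps to: signatures, no code
    def send(self, to: str, body: str) -> str: ...

class SmtpMailer:  # the lone concrete implementation you actually wanted to read
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

class Signup:
    def __init__(self, mailer: Mailer):  # constructor only names the interface
        self.mailer = mailer

# Explicit construction instead of container magic: the concrete class
# appears right where the object is built.
service = Signup(SmtpMailer())
assert service.mailer.send("a@b.c", "hi") == "sent to a@b.c: hi"
```

With a DI container, the `Signup(SmtpMailer())` line lives in configuration somewhere else, which is exactly the indirection being described.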

_gabe_ · 3 years ago
> I get the benefits of dependency injection

I don't.

I think dependency injection encourages thoughtless code. It makes it easier to "magically" make your dependencies appear wherever you want them without having to think about anything except the lifetime. Once your project gets significantly complex, it becomes very difficult to understand what your web of dependencies is actually doing, and to comprehend how the lifetimes all interact.

Screw dependency injection. Do it the "hard way" and make the pain points ridiculously clear so that you can see where the flaws in your architecture are revealing themselves.

aakresearch · 3 years ago
I am with you. The analogy I came up with is "map layers". Usually on a geographical map the features are drawn using contours, colours and accompanying names. Such a map is, certainly, not the "territory", but it is a good representation and easy to work with. Using (capital DI!) Dependency Injection is like having the map "lobotomized", split into layers you can never see together. You either see contours devoid of names, or names hanging in the void. Bonus point if the names don't even follow the contours. For any non-trivial number of classes/concepts my brain explodes.

Another bonus point when DI is used for some deep-context "dependencies". Hey, I need an HttpClient object here, to post something somewhere. Why, instead of having a URL and instantiating my client right here, do I need to dig through the sands of DI bindings to find a way to "configure" HttpClient for this particular instance? Well, I understand the theoretical reasons, but in practice they never make sense and only add pain.

Another bonus point for run-time binding errors. Those sometimes slip through even when there is a "test" in the CI/CD suite for that. I think dependency injection has its place, but as used presently it resembles a cargo-cult.

vsareto · 3 years ago
Visual Studio has “Go To Definition” and “Go To Implementation” - the first will take you to the interface, the second will take you to the class that implements the interface (or show results for multiple implementations if those exist). I would try to find that functionality in your tools because it makes a world of difference.
andix · 3 years ago
My IDE (Jetbrains) has a „jump to implementation“ feature. If there is just one class implementing the interface it just jumps there. If there are multiple you get a drop down.
briHass · 3 years ago
This is one of the nice things about Visual Studio: F12 jumps to the declaration (the interface) and Ctrl-F12 jumps to the implementation (the service) or presents a chooser if there are multiple implementors.

I find that shortcut easy/powerful enough that it's no additional burden on my cognitive load. At least, it's no more taxing than inversion of control is generally.

LeonB · 3 years ago
I agree. In Visual Studio I’ve gotten used to using “Ctrl F12” to go to “implementation” instead of F12 for go to definition. If there’s only one implementation it takes you straight to it, and if there’s multiple it lists them so you can pick one.
titzer · 3 years ago
For me, peak-design-pattern hit right about the time I worked on two different JVM-in-Java projects (2002-ish and 2008-ish). The problem, I finally realized, is that Java programmers are taught that every class is a special snowflake in need of armor and adornments. A good Java class should be honking and hollering! It should never expose a field, never have coupling with another class that couldn't be interposed upon, and should always factor commonality into a superclass. At a minimum it should have fifteen methods, repeat the same names at least 7 times each, and be JavaDoc'd to the point your eyes are bleeding. Even if it just holds three fields and does nothing at all. ESPECIALLY IF IT DOES NOTHING AT ALL. It must pretend to be important! It should be long enough that you spend most of your time scrolling. If you aren't scrolling, you aren't coding!

What. A. Nightmare. It's like trying to build an internal combustion engine and every moving part has three extra degrees of freedom, in case you wanted to pull a piston out in the middle of the freeway and use it to wipe your windshield! Just in case you want to do that, a piston can also function as a wheel, a cigarette lighter, or a potted plant. And of course it has the, ahem, armor (?) to do all of that. In fact, every nut and bolt and rod and wire has extra armor on it. They aren't bolted together, they are held with miniature six-axis robot arms, just in case. And everything has plating. Thick metal plating. Bullet-proof plating. Everything indestructible. Getters and setters everywhere! Can't you see how much better they make things! The design is so much better with getters and setterrrrss!

Needless to say, I don't write code like that anymore. I prefer to write in Virgil, to make my fields public and immutable, initialized in the constructor or the anonymous zone, and when I need six classes to cooperate to do a job, they do, and they don't put armor on for friends. Private applies to a file, everybody's friends there, no surprises.

But I don't expect anyone to follow me or worship it. I won't claim amazing design skills. Just trying to make things work without a lot of extra work or surprises down the line.

lemmsjid · 3 years ago
I have had the same experience, interestingly in the same time frame. I felt that when doing Java projects I needed to enter this special telepathic flow-state with my IDE and cut-paste shortcuts and change huge chunks of code every time the model changed a little bit. Transitioning to immutable data classes with functions operating on the classes, and abstractions focusing on what functions operated on them and in what context made life much easier.
phendrenad2 · 3 years ago
Today this is the microservice. Why reverse a string when you can build a string-reverse-api microservice to handle it?
titzer · 3 years ago
Microservices are a good way to turn a 15 person project requiring 3 subteams and a little project planning into a 150 person project with 50 teams and no planning.
_lx4l · 3 years ago
One of the strangest ironies of my career is that the smartest developers often write the worst code. Their perfect memories enable them to effortlessly work with infinite layers of abstraction and write the most clever solutions imaginable to any problem.
pschuegr · 3 years ago
+1. One of the single best pieces of programming advice I ever received when I was young was my TL saying "I don't like this, it's too clever". I spent a brief period at a FAANG company and IMO the people there are too smart to be good developers.

My coding heuristics now pretty much boil down to:

1) Less code is better code. The only code with no bugs is no code. After you get something working, remove as much code as you can.

2) Simple code is better code. Don't waste time making your code complex for the sake of being efficient until you've used it and proven that it's a bottleneck with a profiler. Otherwise, just do the simplest thing you can think of that will work.

3) Don't think too far ahead unless you specifically have been tasked with it. Otherwise, do what will work now. Refactor when necessary.

peterashford · 3 years ago
Yep. 100% on board with this. The more code and the more complexity, the harder the system is to understand, maintain and extend.
Groxx · 3 years ago
Yeah. I summarize 1+2 as "optimize for reading" personally - none of us is as dumb as we are in the past/future, explain yourself and don't be clever without an explicit need and a way to recover from it later, etc.
briHass · 3 years ago
The key is consistency across the organization. Even with a complex (perhaps even over-engineered) standard template, it's nice to be able to open an application/service you've never worked on and have a pretty good idea how to find the key bits.

The worst is a huge monolith that has dozens of styles and favorite patterns sprinkled all over from years of opinionated developers leaving their mark. For anything non-trivial, it might take an hour or two just to get your bearings and enter the mindset of the original developer. Then, you have to make the decision of whether your new function is going to follow that (broken) pattern or add yet another new one.

Instantix · 3 years ago
Are you sure those "smartest developers" would still be smart if they had to maintain someone else's abstract code? It's easy to memorize layers of abstraction when they follow your own logic and you built them step by step.
Hermitian909 · 3 years ago
Still happens. I work in a codebase that is definitely on the upper end of complexity for the industry. ~18 months ago we hired a very smart developer who had one of the more rapid onboardings to our codebase I've seen, he was very productive in under a month. It turned out he was too productive and people were starting to find new layers being added to the code base. A few of the more senior folks had to come in and shut him down.
leetrout · 3 years ago
Bingo. It's different when you grow up with it.
aleksiy123 · 3 years ago
Gotta dumb yourself down to write your best code sometimes.

In this vein, this often-posted article is fun: https://grugbrain.dev/

bqmjjx0kac · 3 years ago
And that's why my shitty memory is kind of a super power. Except for the way it makes everything else difficult.
mjevans · 3 years ago
Long ago I read something like...

A debugger needs to be smarter than the person who wrote the code being debugged.

If someone is at their smartest when writing a section of code, they are thus unable to successfully debug problems involving that code.

alwaysbeconsing · 3 years ago
Indeed; this is called Kernighan's Law:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

https://en.wikiquote.org/wiki/Brian_Kernighan

analog31 · 3 years ago
Being a savant at creating and then solving a particular kind of puzzle is certainly a form of smartness. Figuring out whether it's a form that you want to have involved in your business is another form of smartness.

The principal-agent problem looms large.

ldjkfkdsjnv · 3 years ago
If they work with developers of similar intelligence it's not a problem, and it even has massive returns.
baconforce · 3 years ago
A lot of it is hubris as well: they just assume that others will be able to understand it because it's easy and intuitive to them, and if others can't understand it, they must not be good developers.
tacitusarc · 3 years ago
This is partly true, I think. But it's more a problem of focus: the interesting part to many smart developers (and developers in general) is the initial solving of a problem. Very clever developers can go much longer without refactoring, since their cognitive capacity is higher. They can get through the initial problem solve in a single go, potentially, and have a complete mess on their hands. This is only bad if they stop there, though (many do). If they then put that cleverness to use _simplifying_ the code, they can produce some of the cleanest and best models for problems that you'll see. There's often no upside to it, though, so many just skip that step. Unfortunately.
biorach · 3 years ago
Nope, those are not the smartest developers. Those are the cleverest developers.

The best developers solve the hard problems without writing the worst code.

tester756 · 3 years ago
how do you define smartest developers?
ngngngng · 3 years ago
I mean clearly the highest IQ people I’ve ever worked with. The smartest people I’ve worked with that are developers.
intelVISA · 3 years ago
ability to solve Hard Problems rather than create them
epolanski · 3 years ago
I find these arguments going back in circles and concluding nothing.

All teams are different, and a good rule of thumb is to write code that the least skilled or experienced person can understand on their own, without guidance. Since teams vary, so do these limits. I've had teams where it would've been odd to even try any abstraction, let alone algebraic data types, and teams practicing lots of abstraction and architectural patterns, all on top of functional effects. The teams differed widely in the proficiency of their best and weakest members, and so did the codebase patterns.

It's like in sports: different teams demand different approaches because their qualities and proficiencies differ. There are no universal truths, only solutions that fit different teams better.

m000 · 3 years ago
> All teams are different, and a good rule of thumb is to write code the least skilled or experienced person can understand on its own without guidance.

This. Also, if your team develops on framework X and people are hired based on that, stick to the damn patterns used by your framework!

I once worked with a developer who one day unilaterally decided that the abstractions provided by the framework we used were not good enough. Instead, we should use the abstractions from a "design pattern" described in a shitty blog post they dug up, which "everyone should read". There wasn't enough pushback, so we ended up with some seriously unmaintainable code.

pmontra · 3 years ago
I write code that I can understand when I'll come back to it after one year or two. Sometimes I don't understand what my old code does and I realize that I failed. It wasn't simple enough.
thex10 · 3 years ago
OP, I think you’d really enjoy the book “A Philosophy of Software Design” by John Ousterhout. It discusses this topic at length. With respect to reducing complexity, you pretty much drew the same conclusion he did.
avindroth · 3 years ago
I came into this thread thinking of recommending this book. Are there any other books like it that discuss managing complexity, maybe not just for software engineering?
khazhoux · 3 years ago
In C++ the over-abstraction starts in your first lessons. I know and understand all the arguments, but I will never type this:

    std::cout << "You have " << itemCount << " items of type " << itemType << "." << std::endl;
When I could do this:

    printf("You have %d items of type %s.\n", itemCount, itemType.c_str());

LaLaLand122 · 3 years ago
I will never type this:

    printf("You have %d items of type %s.\n", itemCount, itemType.c_str());
When I could do this:

    std::println("You have {} items of type {}.", itemCount, itemType); 

https://en.cppreference.com/w/cpp/io/print

khazhoux · 3 years ago
That's perfect -- as easy to write and to read as printf. I feel like an idiot for not using it all these years.

Edit: Oh, it's in C++23 standard. Only took them 38 years! ;-)

waiseristy · 3 years ago
C++, OOP, and whatever crazy async shit is going on in JS ruined the conceptual model of “code” for going on 2 generations of programmers.

We do students a disservice by teaching them how to “code” first, instead of teaching them how to solve problems with code.

aswanson · 3 years ago
Yeah, that async stuff is brutal.
QuantumSeed · 3 years ago
I would go a step further and teach them how to read code before teaching them how to write it
pmontra · 3 years ago
Many languages have string interpolation, which looks more similar to C++'s << than to printf

  puts "You have #{itemCount} items of type #{itemType}."

  print(f"You have {itemCount} items of type {itemType}.")

  console.log(`You have ${itemCount} items of type ${itemType}.`)

epgui · 3 years ago
I’m just a guy, so take what I say with a grain of salt.

This problem is broadly solved by a combination of domain-driven design (DDD) and functional programming (FP).

Yes, I know you’re tired of hearing about FP and you think FP engineers are all white bearded monad salespeople. I don’t care.

Odds are, if you’re an engineer, you’re not doing purely abstract work: your code has some correspondence to “stuff in the real world”. FP is unencumbered by the class/object model, and you can abstract over anything trivially using plain old functions: don’t “choose” your abstractions; use functions, together with your domain models, to figure out what the correct/real(est) abstractions are. It’s not so much about too many or too few abstractions, it’s about how suitable the abstractions are.

keyle · 3 years ago
One of the best posts on FP I've read was John Carmack's realization that FP has a strong place in any code base [0].

It's a really valuable lesson to keep most functions pure and stateless: not mutations of existing state, but transformations of it.

Build your entire codebase around this idea and frankly, abstraction will take care of itself.

[0] http://www.sevangelatos.com/john-carmack-on/
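Roughly sketched in Python (a hypothetical game-state example, not from the linked piece): a pure function returns a new state instead of mutating the old one.

```python
def apply_damage(player: dict, amount: int) -> dict:
    """Pure: builds and returns a new state, never mutates the input."""
    return {**player, "hp": max(0, player["hp"] - amount)}

before = {"name": "p1", "hp": 100}
after = apply_damage(before, 30)
assert after == {"name": "p1", "hp": 70}
assert before["hp"] == 100  # the original state is untouched
```

Because nothing is mutated, any caller can hold onto `before` for replay, undo, or testing, which is much of what Carmack found valuable about the style.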

syntheweave · 3 years ago
Carmack's ideas did a lot for my code. He has good insights around what to take from different problem domains in order to get a result that gets right to the point and solves a specific category of error. You should never settle for "squishy" code that looks pretty just because: the code must do this, it cannot do that. Write it at the top of the function, and the best code will just go right to the goal without grabbing on to any boilerplate.

That's something I was really missing in my earlier programming years, when all I saw were Java, Python and Ruby bloggers regurgitating OOP and Agile terms and I would try out their idea, marvel at the visible complexity of the abstraction, but then see it fail in a practical scenario. WTF.

leetrout · 3 years ago
Speaking for myself I think FP is going to have its moment and it will be when average devs can ignore the syntax and theories in the abstract and just solve problems.

Pretty much everything is better with FP, and walking a reasonable line when showing benefits _without_ immediately diving into all the esoteric parts is what wins people over.

We need the MVC of FP type of trendy bandwagon. The FP rails.

I am optimistic about Roc and hope Rust also continues to help make FP-style programming more commonplace, even if I still don't know how to implement a monad.

epgui · 3 years ago
Re: monad

For my part, the most confusing thing was that I was trying to understand monads by looking at both haskell and category theory.

However, a monad in computer code is a different beast than a monad in category theory (I believe this may be true in every programming language currently, but if you have a counter-example I'd love to hear about it). They look vaguely similar, but aren’t the same thing. Nobody told me this, but it greatly simplified comprehension once I had that epiphany.

A lot of monad explanations just confuse monads and functors too (functors are your burritos / wrapper data types; and functors should probably be understood before monads). IMO seeing monads as being primarily about function composition, kind of like the functions `compose` or `pipe` or `thread_first` or `thread_last`… but in a context that makes use of functors, is the best way to really see the core simplicity of the idea.
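To make the "function composition in a functor context" framing concrete, here's a hedged Python sketch of Maybe-style chaining (`bind` is the conventional name; the lookup functions are made up):

```python
def bind(value, fn):
    """Chain functions that may return None; short-circuit on the first None."""
    return None if value is None else fn(value)

users = {"alice": {"email": "a@x.io"}}

def find_user(name):  # each step may fail by returning None
    return users.get(name)

def get_email(user):
    return user.get("email")

def domain(email):
    return email.split("@")[1]

# bind handles the "did the previous step fail?" plumbing, so the
# composition reads like an ordinary pipeline.
assert bind(bind(bind("alice", find_user), get_email), domain) == "x.io"
assert bind(bind(bind("bob", find_user), get_email), domain) is None
```

Here `None`-or-value plays the role of the Maybe functor, and `bind` is the composition glue; Haskell's `>>=` generalizes this to other contexts (lists, IO, state) with the same shape.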

avindroth · 3 years ago
I like FP, but I don't think the situation is solved by FP. There's a lot more room for discussion of abstraction and complexity within FP.
epgui · 3 years ago
My comment said it was “broadly solved” by a combination of FP and DDD, not by FP on its own, and not completely solved either (the problem described is multifaceted and you can find many ways to screw it up regardless of what tools or frameworks you pick up… but certain tools are a bit harder to point at your foot).
intelVISA · 3 years ago
Functional is generally accepted as The Best Abstraction imo
epgui · 3 years ago
Generally accepted… But by whom? If you mean “the software engineering community at large”, I’m not so sure… I’d say from my experience: [*]

- >98% of engineers are mostly familiar with OOP

- about 1% of engineers “write FP code imperatively” without knowing that they’re kind of missing the point

And among the >98%:

- 20% think FP is about using map/filter/reduce (not a bad start, but misses the main ideas)

- 35% are openly hostile to FP for a whole variety of reasons, some better than others but none great

- The rest don’t really know what FP is, and if they’d have to guess they’d say it’s when you use functions.

We no longer have the same performance constraints that computer scientists had to deal with in the very early days of computing, when the theory was being written and the von Neumann style of programming began to take hold. It will still take a long time (I’d wager decades) before the findings of academics are accepted in the industry.

[*] These figures are all made up and are presented only as a rough way to communicate my perspective based on my experience… But again I am not particularly qualified to speak to this.