taurath · 4 years ago
In my mind, state is the real enemy: it impairs comprehension, makes changes brittle, and widens the surface area exposed to potential bugs. OOP as frequently implemented, while claiming to encapsulate state, ends up creating so much more.

In accordance with this view, I think project architecture should be approached with an emphasis around how much state is necessary for it to run. This is why simulations like say someone making a game or simcity with like relatively independent entities that map to something in real life use OOP. If you're writing a service doing requests, you want as minimal state as possible. Singletons are state. Initialized/non-static objects are state. The less of it you have, the easier the system is to reason about.

As I write this however, I worry a little that my view is overly simplistic, or maybe applicable only to domains that I have worked in. If anyone wouldn't mind poking holes in this argument or offering examples I would appreciate it.

agency · 4 years ago
There's a really wonderful talk that I've recommended to almost everyone I've ever worked with: Simple Made Easy[1] by Rich Hickey. I also struggled to explain why I hated state so much. You can point to races with shared mutable state, but I found I couldn't stand it even in single-threaded code; it made things harder to reason about and change. It's because state is complex, in the sense Rich discusses in the talk: State intertwines "value" and "time", so that to reason about the value of a piece of state you have to reason about time (like the interleaving of operations that could mutate the state).
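To make the "value and time" point concrete, here's a minimal Python sketch (names invented for illustration): with mutable state, the answer depends on when you read it; with plain values, it's derivable by substitution alone.

```python
# Mutable state: the "value" of totals depends on *when* you look at it.
totals = [0]

def add(n):
    totals[0] += n  # callers must reason about the order of these calls

add(3)
add(4)
mutable_result = totals[0]  # 7, but only after both calls have run

# Value-oriented: each result is a timeless value; no ordering to reason about.
def added(total, n):
    return total + n

value_result = added(added(0, 3), 4)  # 7, by substitution alone
```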

I don't know if it's just me but I watched that talk a couple years into my career and it was like something clicked into place in my brain. It changed the way I think about software.

[1] https://www.infoq.com/presentations/Simple-Made-Easy/

butwhywhyoh · 4 years ago
The problem I have with talks like this is that they sound fantastic on the surface. They almost sound self-evident! "Duh! I want to make simple things, not easy things! That was great!"

But where are the examples? Not a single example of something easy versus simple, or how something "easy" would resist change or be harder to debug. All of these concepts sound fantastic until you begin to write code. How do I apply it? It's a great notion to carry around, but I often wonder if this is just someone's experience/opinion boiled down to a really well done talk, and not much else.

usrusr · 4 years ago
That time part is what you are wrestling with when you are battling state, so it's natural to think about it that way. But there's also this somewhat dumbed-down version of the argument: every piece of state a method reads is like an additional function argument, and every piece it writes is like an additional return value. What a mess.
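That desugaring can be made literal. A Python sketch (class and names invented for illustration): rewrite a method so the state it reads becomes a parameter and the state it writes becomes a return value.

```python
# Stateful version: reads and writes self.total behind the caller's back.
class Tally:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n   # hidden extra argument (read) and return value (write)

# Desugared version: the state read is now an argument, the write a return value.
def add(total, n):
    return total + n      # the "extra return value" is explicit

t = Tally()
t.add(5)
new_total = add(0, 5)     # same effect, but every dependency is visible
```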
qazpot · 4 years ago
> State intertwines "value" and "time", so that to reason about the value of a piece of state you have to reason about time (like the interleaving of operations that could mutate the state)

Chapter 3 of SICP deals with this topic in great detail.

troupe · 4 years ago
I think I was at that talk. If I remember right the Sussmans were there as well and Gerry was the first to his feet giving Rich a standing ovation after that talk.
allenu · 4 years ago
This is one of my favorite talks. It also helped things click for me regarding state. I try to use immutability wherever I can now and when there are unavoidable state changes, I try to understand and constrain the factors that could lead to such a state change. It's simplified things so much for me.
dwohnitmok · 4 years ago
I enjoyed the talk and agree with it in many ways, but perhaps a contrarian stance will stimulate some interesting discussion. Here's the steelman I can think of against that talk.

Hickey's fundamental contention is that whether something is easy is an extrinsic property whereas whether something is simple is an intrinsic property. Whether something is easy is dictated often by whether it is familiar, whereas simplicity lends us the more ultimately useful property of being understandable.

To which I'll counter with Von Neumann's famous quote about mathematics: "You don't understand things [simple]. You just get used to them [easy]."

There is no fundamental difference between ease and simplicity. Simplicity (of finite systems) is ultimately a function of familiarity. There's a formal version of this argument (effectively: most properties of Kolmogorov complexity, when applied to finite strings, are determined by your choice of complexity function, even in the presence of an asymptotically optimal universal language; in particular, there is no unique asymptotically optimal universal language, i.e. the Invariance Theorem is overhyped), but the informal version is that both simplicity and easiness arise from familiarity.

Indeed the fact that there is "ramp-up" speed for simplicity suggests that what is actually going on is familiarity. E.g. splitting state into "value" and "time" is one way of thinking about it. But I could just as easily claim that "time" complects "cause" and "state", and that state machines, whose essential primitives are "cause" and "effect", are the proper foundations from which "value" and "time" then flow (you can think of "effect" nondeterministically, a la infinite universes, with "value" and "time" falling out as a way of identifying a single path among a set of infinite universes). Likewise, Hickey claims that syntax mixes together "meaning" and "order", whereas I could just as easily say that "order" complects syntax and semantics!

What of the idea of "being bogged down?" That "simple" systems allow you to continue composing and building whereas merely "easy" systems collapse and are impossible to make progress on past a certain threshold? I claim that these are not intrinsic properties of a system. They are rather extrinsic properties that demonstrate that the system no longer aligns well with the mental organization of a human programmer. However this is dependent on the human! A different human might have no problem scaling it.

Now hold on: while simplicity may be dependent on the human mind, humans all more or less share the same mental faculties. Perhaps we can't find a truly intrinsic property to call simplicity, but perhaps there's one that's "intrinsic enough," relying only on the mental faculties common to all humans. That is, returning to the idea of "being bogged down," there are systems whose complexity puts them beyond the reach of all, or at least most, humans. We could then use that as our differentiator between "simple" and "easy."

To which I would reply that this is probably true in broad strokes. There are probably systems so arcane as to be un-understandable by any human even after a lifetime of study. But at a more specific level, the way humans think is very varied. The ways we learn and the ways we develop differ hugely from person to person. Hence I find this criterion of "bogging down" far too weak to support Hickey's more concrete theses, e.g. that queues are simpler than loops or folds.

When you're talking about things like love, hate, and fear, sure maybe those are universal enough among humans to be called "objective" or to have associated "intrinsic properties," but when you're talking about whether a programming language should have a built-in switch statement, I don't buy it.

For the purposes of programming languages, simple is not made easy. Simple is easy. Easy is simple. The search for the Platonic ideal of software, one that relies on a notion of intrinsic simplicity, is a false god. Code is an artifact made for consumption by humans and execution by machines, and therefore any measure of its quality must be extrinsic, relative to the humans that consume it.

Sometimes X is simple. Sometimes it's not. It all depends on the person.

As empirical evidence of this I leave this final exchange between Alan Kay and Rich Hickey, where the two keep talking past each other, each finding his own system the simple one: https://news.ycombinator.com/item?id=11945722

Deleted Comment

simongray · 4 years ago
If you're going to reference a Rich Hickey take-down of OOP, I think "Are We There Yet?" is the most pertinent: https://www.youtube.com/watch?v=ScEPu1cs4l0

Of course, Simple Made Easy is excellent too, probably his most influential talk.

kazinator · 4 years ago
Time does not go away from the concept of value when you remove state.

What state takes away is access to a given value at any time other than now.

It's always now; every value is the current value, and no other version of that value exists.

kraf · 4 years ago
Not just you, I had the same experience. I rewatched it several times over the years and understood something new every time.
mycall · 4 years ago
> State intertwines "value" and "time"

Reminds me of a deterministic finite automaton. Is that what you mean?

cutler · 4 years ago
Me as well but I was already sold on Clojure by then.
Supermancho · 4 years ago
> ends up creating so much more.

This is primarily because of inheritance, which seems counter-intuitive. In a meta-analysis of OOP-based designs, inheritance is used as the primary form of composition with other strategies being either last-resort or added later when the inheritance is already deeply embedded as part of the design.

Inheritance is a brittle form of composition (no re-inherit) that nests state in a deep tree-like type system, rather than isolating it into attachable modules. Most OOP-based languages have slowly had to adopt additional forms of composition, as inheritance is not well suited to cross-cutting concerns. Ironically, almost anything added after the base class (and maybe some abstracts above that) is a cross cutting concern added after the core functionality is established.

eternalban · 4 years ago
> In a meta-analysis of OOP-based designs, inheritance is used as the primary form of composition

Whose meta-analysis came up with that? Like to see that.

> Ironically, almost anything added after the base class (and maybe some abstracts above that) is a cross cutting concern added after the core functionality is established.

That's a bold statement (unless you have a novel definition of "cross cutting concerns") and actually backwards: The super provides the generalization and subs specialize. A cross cutting concern is a 'general' concern. AFAIK, cross cutting concern is a term originated by the inventor of AOP, and the typical garden variety CCC deals with matters that rarely have anything to do with the types to which it is applied. (Debug log in-args is a garden variety example.)

taurath · 4 years ago
> This is primarily because of inheritance, which seems counter-intuitive

I agree that inheritance creates a lot more problems, but the use of non-static methods and internal state, even in classes with no inheritance at all, can feel just as bad when you have a high-level method utilizing instantiated objects. Internal state as a whole can be avoided fairly often.

interpol_p · 4 years ago
I would say a big contributor is also reference semantics being the default behaviour for classes in many languages. You end up sharing the state, and with every pass-by-reference in the code base you increase the surface area of code that can touch it.

I know there are mechanisms to avoid this, but they are often opt-in rather than opt-out, so the language encourages this access-to-state propagation through the codebase, where changes have far-reaching consequences.
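A small Python sketch of that propagation (function names invented): lists are passed by reference, so a mutating helper silently changes every holder of the reference, and the value-semantics alternative is opt-in.

```python
def discount_all(prices):
    # Mutates the caller's list: every reference to it sees the change.
    for i, p in enumerate(prices):
        prices[i] = p * 0.9

catalog = [100.0, 200.0]
report_view = catalog        # another reference, not a copy
discount_all(catalog)
# report_view has silently changed too: [90.0, 180.0]

# Opt-in value semantics: build and return a new list instead.
def discounted(prices):
    return [p * 0.9 for p in prices]
```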

stickfigure · 4 years ago
If only we could completely eliminate state! Thankfully, I am working on a plan for this. It should take around 10^106 years... give or take.

The serious comment here is that the real world imposes a minimum floor on the amount of mutable state that you have to model. Databases are giant piles of mutable state. Maybe we should start talking about "essential state" and "accidental state" the way we talk about complexity.

jkaptur · 4 years ago
I agree and would add that the UI is also a pile of mutable state. Even if you model the DOM using a pure function, there's still scroll position, selection, animation, history, and so on.

At the end of the day, users interact with state. We need languages and techniques that manage it well.

mmis1000 · 4 years ago
There is a common pattern in OOP land where you use ooo.setXXX(yyy) to duplicate state across objects/fields, instead of using some kind of getter to map between them (probably due to the difficulty, in most languages, of linking objects together?).

You end up with state that should be one thing living in multiple places.

And this is one of the biggest sources of bugs, because you WILL get it wrong. As the code grows, the number of places where you need to manually synchronize data grows. Eventually you miss one and create a lot of bugs.

In most FP languages, on the other hand, duplicating state is pointless most of the time, so this kind of problem doesn't happen so much in the first place.

BTW: Personally I like the [computed](https://v3.cn.vuejs.org/api/computed-watch-api.html#computed) primitive of Vue, because it makes getters/setters first-class and encouraged, and getters/setters never add state. If you use it properly, state can be reduced a lot with much shorter code, while it looks the same on the surface as manual duplication, so you don't need to refactor everything just to use it.
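A Python sketch of the difference (Vue's `computed` corresponds roughly to the property here; the classes are invented for illustration):

```python
# Duplicated state: full_name must be manually re-synced after every change.
class UserDuplicated:
    def __init__(self, first, last):
        self.first = first
        self.last = last
        self.full_name = first + " " + last  # a copy that can go stale

# Derived state: computed on read, so it can never disagree with its sources.
class User:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    @property
    def full_name(self):
        return self.first + " " + self.last

u = User("Ada", "Lovelace")
u.first = "A."
# u.full_name is now "A. Lovelace" with no manual synchronization
```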

Too · 4 years ago
I like the term accidental state. This is also the type of state you see in a lot of OOP code, as referred to by the parent.

Beginners want to keep functions short, and the way to do that is to chunk a bigger method into several smaller ones; then they realize, oops, that variable was needed in both functions. Store it on “this”, and now instead of one decoupled function you have two coupled functions.

Contrived example written on a phone, but code like the below is extremely common, especially from Java coders who have been misled into making classes for everything and haven’t learned the static keyword yet. Here, obviously, self.stuff is the accidental state creating coupling between the functions, which now have to be called carefully in the correct order, and any of their mutations to self can impact the other.

    class Worker:
        def __init__(self):
            self.setup()
            self.foo()
            self.bar()

        def foo(self):
            self.stuff = fluff

        def bar(self):
            do_work(self.stuff)
Rather than just do_work(bar(foo(fluff))).
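Spelled out in Python (the function bodies are invented stand-ins), the decoupled shape is just composition:

```python
def foo(fluff):
    return fluff + 1            # produce a value instead of storing it on self

def bar(stuff):
    return stuff * 2            # transform it; no hidden coupling, no call order

def do_work(stuff):
    return f"worked on {stuff}" # stand-in for the real work

result = do_work(bar(foo(20)))  # every input and output is explicit
```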

taurath · 4 years ago
I agree with this, and of course without any state whatsoever the program is unlikely to be useful. A database, a network connection pool, and initialized configuration are required pieces of state for just about any backend service, and you can't really get rid of them. But having clear lines around how state is stored and utilized, and minimizing it in business logic, creates a much more sane program in my view.
ajuc · 4 years ago
This is true, and another thing about state is also true: relational databases are much better suited than programming languages (except maybe Prolog) to handling state cleanly and minimizing the amount of it. OOP languages especially are bad at state minimization. For example, there's no commonly used equivalent of normal forms in OO; there are no indexes and no materialized views.

The funny effect is, if you want to minimize state and you're serious about it, you keep everything you can in the database and build a two-layered (relatively thin) client <-> relational DB architecture, with stored procedures. We were there in the 90s, and we moved away because the web and OOP became fashionable.

So we took our clean, normalized, minimal state from db and made it messy and complicated with ORMs to satisfy OOP gurus :)

kaba0 · 4 years ago
I don’t think ORMs are a fair choice as prototypical examples of (good) OOP. But I agree that having data (record) primitives is very important in some domains (and that relational databases are really powerful). There are other cases as well which are not as data-oriented, though.
hardolaf · 4 years ago
People also moved away from that paradigm because databases are slow. I work in the world of optimizing TLP level communications over PCI-e buses. To me, a database access is already in the world of "why bother?".
carlmr · 4 years ago
>project architecture should be approached with an emphasis around how much state is necessary for it to run. This is why simulations like say someone making a game or simcity with like relatively independent entities that map to something in real life use OOP.

In the beginning of my career I did a lot of engineering simulations (Simulink), to me signal flow diagrams have always been a very obvious way to model programs. All the state is explicit in that it becomes a delayed output->input mapping of the signal flow graph. Each block behaves the same, because it has no internal state.

I always thought about programs in the same way. What goes in, what goes out, what goes back in (if multiple iterations). Only later did I find out about functional programming, which basically is the same idea, and that instantly clicked.

Except for the auto-completion after writing the . on an object, I've never really seen OOP (in the Java way, not the Erlang way) be intuitive or simple. Always keeping state in mind, class hierarchies spanning tens of files where the only way to know what your object really does is step through with a debugger, interfaces for everything because otherwise you can't mock the classes for the test, the list goes on.

lostcolony · 4 years ago
I think in a lot of ways you're correct. FP and imperative code tend to make state explicit; OOP hides state. The latter MAY make things "easy"; it never makes them simple.
taeric · 4 years ago
To continue my hot take from earlier, OO wasn't supposed to "hide" it any more than FP was. Rather, OO was supposed to be about changing the metaphor of the program as you are writing it.

This is most easily seen if you consider a TON of the domain specific languages out there. Logo, PostScript, DVI, GCode, etc. Many of these are "move to X" "put down pen", "move to Y", "pick up pen", etc. Very imperative and how you would talk to someone on how to do something.

So, if your objects give meaningful verbs to control the state that they maintain, it works rather naturally to reduce the code that you have.

Now, most OO today, that I see, embraces objects as records and goes out of its way not to encode any language of behavior in the code it lets you write. But I don't think that is enough of an argument to say that abstracting some active objects into an OO paradigm is a waste and can never help.

zozbot234 · 4 years ago
"Hiding" state is necessary to endow it with well-defined invariants. This can be done in many FP languages, too. The semantics-side implications of "encapsulated" state w/ proper invariants have yet to be explored, though, and this is where newer PL formalisms like "homotopy types" might end up being quite helpful.

Deleted Comment

halpert · 4 years ago
Sorry, but state is everything. If you don’t have state, then you’re essentially doing useless work computing an answer that is already known. Computation is only useful because of state.
momentoftop · 4 years ago
Here's a pure computation:

    import Data.List (nubBy)

    refuteGoldbach :: Integer
    refuteGoldbach = head $ [ n
                            | n <- [4,6..]
                            , not $ n `elem` [ p1 + p2 | p1 <- primesTo n, p2 <- primesTo n ]
                            ]
      where primesTo n = takeWhile (< n) $ nubBy isMultiple [2..]
            isMultiple m n = n `rem` m == 0
If you think you already know the answer to this computation, get yourself a Fields Medal.

And then there are pure functions. Every time you compute a function using an input no-one has tried before, you are probably computing something that is not already known. You do this routinely even with a calculator.

lloydatkinson · 4 years ago
This is a really poor take and you know it. State can exist in functional systems - as results of computations passed to other computations. Recursion where the new argument is the state.

No one was saying "don't use state" they were saying we need to adjust how we use it.

jay_kyburz · 4 years ago
This is what I don't understand about the mutable vs. immutable debate: my functions only exist to mutate state.
ahartmetz · 4 years ago
I have come to the same conclusion. State is the problem. State should be:

- minimal (amount and lifetime)

- well conceptualized (~= easy to understand the organization)

- well named

- minimally exposed

- coherent by construction (make inconsistency impossible by design of the format or by offering updating functions that ensure the invariants)

OOP can actually help with some of these things! I develop mainly in C++, which doesn't encourage a purely OOP style like Java.
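A sketch of the last bullet (coherence by construction) in Python; the invariant, a non-negative balance, is invented for illustration:

```python
class Account:
    """State whose invariant (balance >= 0) cannot be broken from outside."""

    def __init__(self, balance):
        if balance < 0:
            raise ValueError("balance must be non-negative")
        self._balance = balance   # minimally exposed: no public setter

    @property
    def balance(self):
        return self._balance

    def withdraw(self, amount):
        # The updating function enforces the invariant at every mutation.
        if amount < 0 or amount > self._balance:
            raise ValueError("invalid withdrawal")
        self._balance -= amount
```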

rawoke083600 · 4 years ago
I like your "bullet points" and agree with them all.

What are your thoughts on (super-simplified examples):

* 1 state var with 3 values?

* 2 state vars with 2 values each?

Sometimes I steer my design too much toward the first example, and other times toward the last. Both extremes can make things ugly.

WA · 4 years ago
All well and good, but where do you put the damn state?
rbanffy · 4 years ago
> OOP as frequently implemented

This is the real issue with OOP. Abusing a type system, using inheritance when composition or interfaces would be more appropriate, etc.

I've seen pretty much every programming silver bullet implemented in the most horrifying ways by people who didn't understand the reasoning behind each approach.

You can write great FORTH and terrible LISP. You can write readable FORTRAN or APL (I'm stretching it a bit here) and elegant 6502 assembly. You can even write resilient and reusable JavaScript and PHP if you have the discipline to do it.

> If you're writing a service doing requests, you want as minimal state as possible.

The service can have a lot of state. What you really don't want is your client trying to keep track of it. When tempted to do so, you need to change the service.

xupybd · 4 years ago
I like the Elmish way of explicitly managing state. You have one model that changes, and any model state can be rendered. The state is obvious and clear. If you want to test something, just create that state and test it. No need to click 10 buttons just to get the UI into the state where your bug was found.
rosenjcb · 4 years ago
Elegant Objects by Yegor Bugayenko actually argues against mutation in OOP for the same reasons FP advocates do (bad for concurrency, hard to test, hidden state is hard to keep in your head, et cetera), but then all you get are namespaces with functions that act on a (usually) single data type (i.e. the class itself and its properties).

OOP itself could be good for problems where you need state machines. The Erlang Actor model is successful for a reason, but I wouldn't apply OTP to general programming.

pishpash · 4 years ago
Nothing that the typical junior engineer does with OOP can't be done with functions and well organized files. If they can't be trusted to do that well they shouldn't be writing OOP code.
Lutger · 4 years ago
Your view isn't simplistic, I think a vast majority of developers would agree with it. State, however, is necessary for the purpose of creating useful and in some cases performant programs. Furthermore, in some cases state can make a program quite a bit easier to write and even read.

I think there are much more interesting things to be said about state than just to minimize it. For example, you can limit stateful computations inside a function in such a way that the function itself is still referentially transparent (it behaves as if it wouldn't have state). In this way, you can still do a for-loop, or do a quick-sort on a copy of the input data, without losing the benefits of pure functions. In the D programming language, this can be expressed in the type system.
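A Python sketch of that idea: the function mutates freely inside (an in-place insertion sort on a private copy), but callers can't observe any of it, so it behaves as a pure function.

```python
def sorted_copy(xs):
    out = list(xs)                # defensive copy keeps all mutation local
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]  # in-place swap
            j -= 1
    return out                    # same input, same output, no side effects

data = [3, 1, 2]
assert sorted_copy(data) == [1, 2, 3]
assert data == [3, 1, 2]          # the caller's list is untouched
```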

We need to 'deal with' all the risks that state involves; reducing it is the first thing to do, but then there are a lot of other options as well.

eschaton · 4 years ago
Wow, now we just need to stop interacting with anything stateful, like the real world!
reificator · 4 years ago
> This is why simulations like say someone making a game or simcity with like relatively independent entities that map to something in real life use OOP.

I don't think this is the case, or at least it hasn't been for quite a while.

Any gamedev I've known in the last decade or so would reach for an ECS[0] if they wanted to clone SimCity, in other words they would design it somewhat like a normalized database.

Each character, building, zone, or whatever in the game would be an Entity, represented by a unique ID.

Then there would be collections of Components, which are basic structs like Position or Sprite. A set of components that are all tied to the same ID would represent a single Entity the same as if it were an instance of an Entity class. These components together hold all of the game data, to the point where a naive save game system could just serialize all the (non-pointer) component data and be done. How these are stored varies by implementation and configuration, but a table in a database is a reasonable mental model to understand the core concept.

The game logic is executed in Systems, which are functions that read and write component data on some schedule. The simplest example is a velocity system, that would find all the entities with both a Position component and a Velocity component and update the Position accordingly.

In the case of a SimCity style game, the ECS approach is much more cache friendly, for both instructions and data, because you're handling all of the same work at the same time, instead of updating each entity one at a time which leads to cache miss after cache miss. This can bump the max number of agents in your simulation by multiple orders of magnitude.
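A minimal, illustrative ECS sketch in Python (real engines store components in contiguous arrays to get the cache behavior described; dicts keyed by entity ID are just the mental model):

```python
# Components: plain data, stored per type, keyed by entity ID.
positions = {1: (0.0, 0.0), 2: (5.0, 5.0)}
velocities = {1: (1.0, 2.0)}     # entity 2 has no Velocity component

def velocity_system(positions, velocities, dt):
    # A System: runs over every entity that has *both* components.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        vx, vy = velocities[eid]
        positions[eid] = (x + vx * dt, y + vy * dt)

velocity_system(positions, velocities, dt=1.0)
# entity 1 moved to (1.0, 2.0); entity 2, lacking a Velocity, stayed put
```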

Some other benefits are:

* Empower designers to iterate more quickly by giving them an editor where they can change components out without changing code. Say you have Players and Enemies and they both have Health, and Walls which are simply props with Collision. If you want to try destructible environments you can simply add a Health component to your Walls in the editor rather than try to move Walls into your LivingEntity inheritance chain or modify everywhere that does damage to check for WallEntity in addition to LivingEntity.

* Easier to parallelize. If you're using objects and you want to start multithreading you quickly start feeling like mutexes are the only answer. But with an ECS if your Systems only operate on their arguments, then you can run any systems in parallel that you want, as long as anything you mutate is not referenced in any other currently running systems. For instance every Rust-based ECS I've ever seen does this out of the box, because they can tell what fields are mutable from the function signature.

* Easier to test. If all your movement system cares about are entities with both Position and Velocity, then that's all you need to setup to perform a test. No MockPlayerInput or headless rendering required, except where those are actually the thing under test.

[0]: https://en.wikipedia.org/wiki/Entity_component_system

taurath · 4 years ago
I appreciate that info! It's an abstraction I had used in the past to demonstrate to other engineers what I thought was somewhat useful about OOP, but as I was typing I told myself I'd bet that modern performant games wouldn't use active mutable entities, but would rather abstract into systems that change state at certain ticks, so state can be much more easily managed, reasoned about, and optimized.

This raises the question: where does classical OOP, the kind taught to undergrad CS majors in programs that use Java or C++, really fit in nowadays?

rawoke083600 · 4 years ago
Even if you're not a game dev, I urge you to look into ECS just as a mental exercise. There are some excellent YouTube talks on the subject.

Many of us are stuck writing the same old CRUD-variation apps. Learning about ECS is a great way to get those "original programming excitement" juices flowing :)

The gaming industry is renowned for some fantastic programming solutions that eke out every bit of performance.

Anywhoo, it makes for a nice change from arguing with colleagues over ORMs and fat-vs-thin models :P

jay_kyburz · 4 years ago
I doubt any professional gamedev would reach for ECS. No major game engine has a finished ECS system yet. You would have to roll it all yourself and shoehorn it into the engine somehow.

Unity has not shipped DOTS. They actually removed it from the Package Manager last year. Joachim says it has a bright future, but I suspect it will never ship in Unity itself.

Epic has nothing in Unreal yet, though apparently a few months ago somebody spotted some changes in the repository that suggest one may be on the way.

AtlasBarfed · 4 years ago
Does Freeciv use this approach? Or any other significant open-source game or roguelike?

ECS sounds somewhat like how I was imagining a civ-like game engine while falling asleep a couple of months ago, wondering why Civ scaled so poorly under some circumstances: traditional games used to keep everything in memory, but a civ-type game could just use a database with some regular and spatial indexes for quick lookups, and otherwise sweep the various tables. The size of your game would then be constrained more by disk space than by RAM.

Per your discussion of cache conservation and thrash avoidance, does ECS work well for pinning entities to specific processors, so that cache-hopping doesn't occur on modern multicore and NUMA systems?

a-dub · 4 years ago
what about when the abstractions age and a new feature for a system needs the data from the components from another system?

the long term answer is probably a refactor, but what's the common quick fix? copying/duplicating data between component types? systems that examine other components as well as their native ones? merging systems?

asking the hard question... how does it stand up to the unpleasant cases?

taeric · 4 years ago
Hot take: OO is more powerful when you embrace stateful objects. As long as you are dealing with stateless objects, many other techniques have plenty of advantages.

But, consider, OO grew in a time when the likes of Logo was strong. How do you draw a square in Logo? Usually, some form of:

    pen down
    repeat 4
        straight 10
        right 90 degrees
But this /only/ works if you keep track of the state of the system in your mind while you are figuring it out. Which works really well if you are being taught that your program doesn't exist in and of itself, but to manipulate something else.

Functional advocates lose many learners because they don't acknowledge that writing a local function can be done with the more global state in mind.

snovv_crash · 4 years ago
But then you're mixing up the state of the system with the shape you want to draw. If you were now working with 2 pens, you'd have to rewrite your shape from scratch too, not just your rendering, to speed up the output.

Better to separate the shape data, which is immutable (and basically declarative), and the rendering method, which does need to know about the previous work which was already completed and what it is doing right now.

toomanydoubts · 4 years ago
A polygon is a set of lines

A square is a polygon with four lines of the same length, forming 4 equal angles of 90°

A drawing can be made given a pen and a shape

The output is a drawing of a square made with a pen

grumpyprole · 4 years ago
I personally think "encapsulation" as us used in OOP, is a misnomer. State is usually not encapsulated, it is just hidden. Proper state encapsulation would be to use mutable state internally for efficiency, but for that state to be unobservable externally.

OOP does unfortunately encourage introducing mutable state into the domain model. The canonical example being the bank account, with a mutable bank balance!

The good parts of OOP are interfaces and first-class modules. Obviously we should try and keep those.

dvt · 4 years ago
> Proper state encapsulation would be to use mutable state internally for efficiency, but for that state to be unobservable externally.

This is literally how private/public keywords work, so I think your criticism is unfounded. However, I do agree with the overall sentiment that OOP implementations tend to "leak" far more state than they need to.

Deleted Comment

a-dub · 4 years ago
if you're writing something like a wrapper service around a database, there should be no state at all. (i'd argue it's high time that databases moved forward with respect to security and hardening such that they can be accessed nearly directly or... directly)

if you're building a thing where state is required, then yes, it should be minimized. but i'd argue against dogmatic use of immutable data records and really think about what that sort of design is attempting to achieve: reduction of sprawl in places in the code where a piece of data is updated.

the goal is to put all that stuff in one easy to find place. that can be manifested or violated (sometimes with great acrobatics) regardless of whatever rules are followed. (even oop!)

scns · 4 years ago
> it's high time that databases moved forward with respect to security and hardening such that they can be accessed nearly directly or... directly

Check out Hasura, Postgraphile and Postgrest.

mabbo · 4 years ago
> I worry a little that my view is overly simplistic, or maybe applicable only to domains that I have worked in. If anyone wouldn't mind poking holes in this argument or offering examples I would appreciate it.

I think this line alone is proof that you're doing this right.

hota_mazi · 4 years ago
I used to think that the solution to mutable state was to prohibit it and code with immutable structures all the time, but after a few years of Rust, I think it's the wrong approach.

The right way to handle mutable state is not to pretend it doesn't exist but to accept it as a reality of complex systems and to encode its management in the type system of the language. And that's exactly what Rust does.

With Rust, I no longer feel dirty whenever I have mutable state and I trust Rust to not just keep my code bug free but also to make me think carefully about mutable state and how to design my code with it in mind.

zozbot234 · 4 years ago
Rust prohibits shared mutable state as part of the basic language (or rather its 'safe' subset), relegating it to special-cased "interior mutability" constructs. This is essentially "as immutable as you can get" in a low-level, systems programming language. (Other languages can thread state mutations explicitly as part of a generally "immutable" design, but that doesn't give you support for the expected low-level features, so instead it's part of the language in Rust.)

Deleted Comment

Fire-Dragon-DoL · 4 years ago
It's definitely that, but to be fair the problem is caused by class-based programming, not OOP itself.

Put state on an object only if there is a hard requirement for it. The occurrence is incredibly rare, state is mostly introduced to save re-typing method arguments...

ummonk · 4 years ago
Games are actually moving away from OOP by separating out state into a data oriented system.
_9omd · 4 years ago
But aren't they doing that mostly for performance via better memory layout and thus better cache locality? ie: arrays of structs vs structs of arrays (AoS vs SoA). It's a sacrifice of code architecture for performance. I feel like games are actually a good example where OOP makes sense, since there is inherently so much state and encapsulation is useful.
nomel · 4 years ago
I think I need help understanding this.

My understanding of data oriented systems is there's a desire to put related data physical close, in memory. It doesn't remove state, it just packs it differently.
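A toy Python illustration of that repacking (AoS vs SoA): the state in both layouts is identical, only the arrangement differs. In a language with flat arrays, the SoA form lets a system that only touches x stream through memory without loading y:

```python
# Array of structs: each entity's fields live together (OOP-style objects).
aos = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]

# Struct of arrays: each field lives in its own contiguous array.
soa = {"x": [1.0, 3.0], "y": [2.0, 4.0]}

# Same state, same result; only the memory layout (and, in a systems
# language, the cache behavior) differs.
sum_aos = sum(e["x"] for e in aos)
sum_soa = sum(soa["x"])
assert sum_aos == sum_soa == 4.0
```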

To be honest, I'm confused by most of the comments here.

Wohlf · 4 years ago
In my experience many game companies were never big on OOP in the first place.

Deleted Comment

kaba0 · 4 years ago
Weren’t games already written with entity component systems?
762236 · 4 years ago
I believe that we're interacting on a server that was written to these principles.

Deleted Comment

vbezhenar · 4 years ago
What is state? Every local variable is state. I can convert any global variable to a local variable or function argument with trivial changes. Does it mean that every local variable is the real enemy? Only an empty program is perfect. But useless.

I'd say that state as such is not the real enemy. It's all about the lifetime of particular bytes. `(x, y) => (x + y)` is a good function. Its state is discarded pretty fast. `(x, y) => (z) => (x + y * z)` probably is a bad function; its state is preserved for a long time. Global variable state is preserved during the whole program run. Database state is preserved during the whole software lifecycle, or maybe even longer.

And it's not even that state is the enemy. It's about how much time you need to spend designing a particular piece of code. When you're thinking about database structure, that's a real deal: take a lot of time, that's important, and the decisions will have impact for years or even decades. When you're designing a pure function which does not leak any state, you barely need to think at all; just slap something together and move on, since you can easily replace it later if the need arises.

TLDR: prefer state with a short lifetime; be very careful with code that works with long-lived state.
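The two example functions above can be sketched concretely; the closure's captured variables are exactly the longer-lived state being described:

```python
def add(x, y):
    return x + y          # state lives only for the duration of the call

def make_op(x, y):
    def op(z):
        return x + y * z  # x and y now live as long as `op` itself does
    return op

op = make_op(2, 3)
print(add(2, 3))  # 5
print(op(4))      # 14 -- the captured x=2, y=3 outlived their creating call
```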

pklausler · 4 years ago
Mutable state, you mean, yes?
taurath · 4 years ago
I think so, though I had in mind internal (hidden/abstracted) vs explicit state. And yeah, only in a very few cases would you want mutable state: abstracting naturally transient entities like network connections, times where memory or performance constraints demand entity reuse, etc. Though if you end up in that situation with no guardrails around that state (i.e. it should usually be a finite state machine) you're definitely going to have to work a lot harder.
sunjester · 4 years ago
Preach it brother.
sirwhinesalot · 4 years ago
I have not read the article but I've seen other blog posts on the subject. The issue with "OOP is bad" is that OOP means different things to different people.

Abstract Data Types are sort of a subset of OOP and are massively useful; I certainly don't think it's a good idea to expose the internal implementation of a data structure most of the time. Any sort of plugin system works in an OO manner. It is a useful tool, no question.

Even things like Actors like in Erlang (which are much closer to the Alan Kay style OO) are massively useful in a distributed setting where state is naturally distributed.

Where you get into trouble is when you go the whole "let's model a taxonomy of the world" style with inheritance hierarchies, or go ham on design patterns, layers upon layers of abstraction, and other nonsense like that.

The saddest thing about the "object–relational impedance mismatch" is that the focus went totally to the wrong side. The relational model is a much nicer way to model relations than a graph of objects (that's the whole point after all). SQL sucks but that's a separate issue, Entity Component Systems are a form of relational modeling for example and work really well, or even better Datalog.

almostdeadguy · 4 years ago
The problem with the "inheritance isn't part of OO" take is that every single one of the languages we call object-oriented that gained mass adoption has inheritance, including Alan Kay's own Smalltalk. There have been entire books written about object-oriented design describing ways to use (or not use) inheritance in program design. So whether or not it was intended to be important by the author of the term is immaterial; it for all intents and purposes is a crucial part of object-oriented languages.

Inheritance is not the only problem w/ OO either; many things are straight up awkward to express when you must couple data and code together. Many 2+-argument functions w/ disparate parameter types create confusion about which class "owns" the definition of the function.

You also don't need to marry data and code to get the benefits of encapsulation. Many functional languages in the ML family have developed clever solutions to encapsulate the definition of a datatype while exposing it to module-local functions. There's no need for the datatype to carry a method around with it to support this.

dools · 4 years ago
"The saddest thing about the "object–relational impedance mismatch" is that the focus went totally to the wrong side. The relational model is a much nicer way to model relations than a graph of objects (that's the whole point after all)."

I found the same thing. When I was using ORMs I always found them clunky for all but the simplest tasks, where I would long for an easy way to use SQL and have it "just work" for objects, so I created this:

https://github.com/iaindooley/PluSQL

It's obviously not been maintained but I think it's a model that has legs: that is, simply the creation of SQL with some convenience methods, allow the use of completely arbitrary SQL, and then intuit the object mapping automatically without loading the entire result set into memory.

Semaphor · 4 years ago
There are many such MicroORMs, C# has a few extensions for Dapper, as well as several newer projects that directly have different querybuilders and convenience methods.
nyanpasu64 · 4 years ago
Do relational models support sum types? I find them an essential feature in programming languages, nearly as important as structs or rows.
sirwhinesalot · 4 years ago
Unfortunately many don't. Standard Datalog for example doesn't have disjunction which you need to model sum types as relations.

My preference is to have sum types modeled separately from relations as just a complex type that can be related as well. This is the approach taken by Souffle, a C++ Datalog implementation which supports sum types.

zamalek · 4 years ago
My issue with OOP is: Design Patterns: Elements of Reusable Object-Oriented Software.

I don't take issue with the authors, with their insights, or anything related to the content of the book. It's that the book exists at all: it's a book filled with solutions to imaginary problems.

When using a procedural language the first thing you do is start implementing a solution. When using OOP, you first have to solve the imaginary problems created by OOP, only then can you start down the path of typing out a solution.

It's significantly more difficult to refactor OOP (even with great tools) than it is procedural.

If you could spend 20% less brain power, possibly 40% fewer keystrokes to describe the exact same solution, why wouldn't you?

wackro · 4 years ago
>solutions to imaginary problems

This is a fundamental misunderstanding of what patterns are. The GOF book contributes to this, though.

A design pattern is something that will naturally crop up if you adhere to certain design principles. If you follow a principle of separating instantiation logic from other logic, then you will start to see factories. If you combine multiple complex parts of your code into simpler ones, then you will see facades.

GOF is a reference book for some patterns that have been observed as being common, with examples that are essentially academic. It is not a how-to guide for OOP.

rolisz · 4 years ago
But many programmers use it as a how-to guide. I've had people tell me "but that's how it's done in GOF", even if the design pattern was a poor fit for the problem at hand, just because there was some superficial resemblance to one of the examples in the book.
zamalek · 4 years ago
That exact sentiment exists deep in my comment history here: I think I used the word "blueprint" if you care enough to fact check that. That is why I said I don't hold issue with the content of the book.

My issue is that the book is useful. It helps solve the artificial complexity introduced by OOP.

agumonkey · 4 years ago
But even factories and all these terms are very vacuous and only present because objects are giving bad solutions.
jcelerier · 4 years ago
I'm sorry but this is absolutely ridiculous. I sometimes teach programming to uni students, we make toy software like paint programs with a GUI in Java, networked game in C, that kind of thing.

I always have a good proportion of the class coming up with many patterns entirely on their own without any prior exposure, and I love their looks when I show them that the nebulous concept they came up with actually has a name and is well defined (and the best solution to the problems they encounter given the tools they have).

activitypea · 4 years ago
If a good proportion of an undergrad class is intuitively coming up with solutions that have some commonality, is that commonality really valuable? Observing it is neat, I just take issue with people treating design patterns as a northstar for quality software development
ecshafer · 4 years ago
Design patterns as a concept exist without OOP. Even in procedural or functional code you have concepts like a strategy pattern, or "how do I get version X vs Y of object A". These things arise in any code you are writing of sufficient complexity.
sunny3 · 4 years ago
My experience with the book and its patterns has been that there's an issue, and some developers recognize that it could be best resolved with x, y, or z pattern. Then the developers go to extreme lengths to make sure the implementation adheres to the 'principles' or 'spirit' of those patterns, so much so that it creates unnecessary complexity (e.g., unneeded layers of abstraction) and friction in other components (to accommodate the 'purity' of that implementation). What's worse, it's very difficult to challenge such an implementation in code reviews, since it originated with GOF, which automatically makes it legitimate. The worst part, though, is that you have to maintain the implementation; any deviation from those patterns is seen as corner-cutting.
_pmf_ · 4 years ago
> it's a book filled with solutions to imaginary problems.

Have you even read it? I'd say it has more deep real world examples than almost any other SW engineering book I've read.

fulafel · 4 years ago
IME it happens often that these categories overlap, when software gets entangled in incidental complexity.
zozbot234 · 4 years ago
This is basically a survey of a bunch of posts, and doesn't do much to provide a consistent critique.

Regardless, the true weak point of OOP is arguably implementation inheritance, which just doesn't leave you with a consistent semantics that's open to extension and changes in the base/derived classes (that is, the well-known "fragile base class" problem is still a showstopper). But that has always been a pretty ad-hoc feature anyway. The other components of what people call "OOP", including encapsulation and interface inheritance, are all pretty well defined and not as prone to misuse.
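A minimal Python sketch of the fragile-base-class problem (hypothetical Counter classes): the subclass works only because it relies on an implementation detail the base class never promised, so an "internal" refactor of the base can silently break it:

```python
class Counter:
    def __init__(self):
        self.count = 0

    def add(self, item):
        self.count += 1

    def add_many(self, items):
        for item in items:
            self.add(item)  # if a refactor inlines this, subclasses break

class LoudCounter(Counter):
    def add(self, item):
        super().add(item)
        # The subclass assumes add_many() routes through add() -- an
        # implementation detail, not part of the base class's contract.
        print(f"added {item}")

c = LoudCounter()
c.add_many([1, 2])
print(c.count)  # 2 -- but only because add_many() happens to call add()
```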

ctxc · 4 years ago
I still don't understand why it's bad! Feels like spaghetti sentences tied together as a single article.

Why is OOP so bad, anybody? With scenarios, code samples or alternate implementations?

jerome-jh · 4 years ago
Inheritance in itself conflicts with encapsulation. First because it gives access to protected members of the super class, so it breaks encapsulation to the letter. Secondly because it hurts code locality so much, to such an extent that it is difficult for the average programmer to determine what is going on by simply looking at the program. We have a recruitment test on this. Although candidates usually have a good intuition about what the test program does, it is amusing how easy it is to make them doubt by asking simple questions. The dynamic dispatch principle in itself is well understood, but it opens so many questions about member access, non-virtual method access, and overloaded method access to which people usually have no firm answer.
creakingstairs · 4 years ago
I think it’s hard to explain because the very definition of OOP varies per person. Often the people who hate it worked with a codebase that took some OOP principles to the extreme. E.g. I worked in a code base with extreme levels of abstraction where literally everything is a class.

That said, any principle including FP, taken to an extreme can produce a very tiring codebase (I’m guilty of doing this :p). So its not really OOP’s fault

hutzlibu · 4 years ago
In my opinion it is just the current anti-hype.

A few years ago OOP was supposed to be the magic bullet for everything, which it apparently was not. And now people criticize OOP for not delivering magic bullets. At the same time everybody seems to mean different things by OOP.

So no, it is not bad. Certain practices from OOP, like deep inheritance turned out to be worse in reality than expected in theory. But nothing dramatic. And by my definition of OOP - it is still a major part of every major language.

unnouinceput · 4 years ago
OOP is not bad. Also the entire discussion is pointless. Problems require solutions. You do the solution in OOP or not, nobody's gonna care, just that the solution solves the problem. Rest is just whispers in the wind.
jcranberry · 4 years ago
There is a famous paper called 'out of the tar pit' which may be somewhat related.
LadyCailin · 4 years ago
I stopped reading after they said they don’t know much about what unit tests are really. It’s clearly not a well informed opinion, even if it happens to be right, which, from what I did read, it very well may not be.
throwawaylinux · 4 years ago
I'm not sure that it is so bad.

But I still don't understand why it's good! Why is it so good? It is said to be better than (non-OOP) alternatives. Where's the evidence for that?

lamontcg · 4 years ago
I think the major problem is that base classes are really two different interfaces (public and protected) combined with a default implementation, all exposed as a public symbol that anyone can reference.

So if you have a Vehicle base class with Car and Truck that inherit from it people will naturally externally do things like pass around List<Vehicle> and will extend functionality with Lorry : Vehicle and start using it. This creates a problem because you cannot change the base class of a Car or else it will no longer fit in a List<Vehicle> but if you modify Vehicle you may break people who inherit from Vehicle.

If you wrote it out explicitly without using inheritence though you'd have something a bit more sane and split up into the three different concerns:

    class Car : IPublicVehicle {
      IProtectedVehicle _vehicle = new Vehicle();

      public void Drive() {
        _vehicle.Drive();
      }
    }
Now you have a public interface where you'd create List<IPublicVehicle>'s and toss them around, but you would be free to write a VehicleV2 class that could be injected into _vehicle (you could even do it dynamically and go nuts with dependency injection). Then other people using Vehicle() wouldn't have their behavior changed and everyone would still live happily inside of List<IPublicVehicles>'s next to each other.

At the same time, by having to write down the interface IProtectedVehicle (not shown) you'd be more likely to do actual design about the interfaces between your subclasses and your base class. This is the real problem: people do lazy shit design over the base class interface, just use it to shove shared code into the base, don't consider it a private interface, and then go and break behavior as they mess around with more crap in there. And the "DRY" principle pushes software devs to ALWAYS push shit into their base classes, without even thinking beyond "because DRY", which is guaranteed to be bad design. With good design you may have to tolerate some level of necessary repetition in your base classes. Otherwise you wind up realizing that you need derived classes that do not share behavior (e.g. consider adding a Boat subclass of Vehicle when previously you had taken the SinksInWater() implementation of Car(), Truck() and Lorry() and pushed it into the Vehicle() base class. Oops. Now your boats all SinksInWater() unless you override that, and if you override it you may be heading down the road of violating Liskov).

So that's the source of the objection that OO programmers will just tell you to write interfaces everywhere for loose coupling and only depend on interfaces not types (not abstract/virtual base classes).

But so far I don't know what Go or rust programmers do that is any different. And Go has a really fairly slick system for delegating interfaces to contained structs by composition which is just the same model I outlined above but where the two interfaces are the same. I'm still searching for where other programming languages produce a fundamentally better model. And as far as I've been able to ascertain Go's model is to just replace inheritance with interfaces and delegation -- which you can do in any OO model just by not using inheritance and writing interfaces and delegating. But then the complaint against OO is that proponents will just tell you to write interfaces, and for some reason that is bad. IDK if I'm missing something here...

contravariant · 4 years ago
I've come around to Julia's point of view that it makes more sense for the methods to be separate from the object they're acting on.

I just wish Julia had a succinct way of saying "This object needs to have implementations for functions X,Y,Z", rather than duck typing everything and just seeing if it works. Maybe it isn't too bad in practice I just don't like it when a function can fail because the implementation changed even though the type signature didn't.

Anyway at the very least the approach of keeping methods separate helps to prevent objects that do not have any state and do not in fact represent any entity at all. Seriously who came up with naming a class "MyObjectHelper"? What on earth is it? It could represent literally anything. Does it even have state? And why?

memco · 4 years ago
There was a presentation at PyCon 2021 describing how protocols enable better typing in several cases: https://m.youtube.com/watch?v=kDDCKwP7QgQ&list=PL2Uw4_HvXqvY.... They sound similar to what you describe, in that a protocol allows you to describe the features of the thing you need, and the type checker can then help you determine that the thing you have has those features. Doing this kind of feature detection allows you to check ahead of time whether a call is likely to succeed, but especially for a language like Python you still need runtime guards of some kind to limit the impact of unexpected cases.
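For reference, a small sketch of that structural-typing approach using Python's typing.Protocol (hypothetical Drawable/Circle names): the protocol declares "needs a draw() method" without requiring inheritance from a common base, and a static checker such as mypy verifies conformance:

```python
from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> str: ...

class Circle:  # note: no inheritance from Drawable
    def draw(self) -> str:
        return "circle"

def render(shape: Drawable) -> str:
    # A static checker (mypy/pyright) confirms Circle satisfies Drawable
    # purely by its structure; at runtime this is ordinary duck typing.
    return f"rendering a {shape.draw()}"

print(render(Circle()))  # rendering a circle
```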
crabmusket · 4 years ago
This is known as "multiple dispatch" or "multimethods":

https://en.wikipedia.org/wiki/Multiple_dispatch

mdoms · 4 years ago
Who is Julia?
hashimotonomora · 4 years ago
Julia is quite hot right now.
indymike · 4 years ago
I'm convinced the big win with OOP was more about modularity and encapsulation than OOP. The whole object.method() or object.variable model was pretty nice after spending years dealing with global soup or using naming conventions. The not-so-good part in particular happened when you inherited one too many times... very easy to write, and very hard to debug, maintain, and sometimes test.

A lot of OOP's success was timing - OO showed up right around the time that we started building large GUI apps. Now we have a lot of languages that do a great job of encapsulation at the module level, which seems to be "good enough", and functional and procedural code seems to work pretty well and be pretty easy to maintain at the module level.

deepsquirrelnet · 4 years ago
This is my thought as well. There are still good reasons for using inheritance. In the right use cases, overloading methods can be a life saver. But generally, I have gravitated away from using those patterns, and toward things like composition, which are simpler and seem to be much less fragile.

Recently, I've enjoyed the simple utility of python's functools library. Small-inheritance-like functionality seems to find a happy medium in a lot of cases, while avoiding a lot of unnecessary abstraction and boilerplate.
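One concrete reading of that functools remark: functools.singledispatch gives per-type behavior (something like method overriding) without building an inheritance tree. A minimal sketch:

```python
from functools import singledispatch

@singledispatch
def describe(obj) -> str:
    # fallback for types with no registered implementation
    return "something"

@describe.register
def _(obj: int) -> str:
    return f"the integer {obj}"

@describe.register
def _(obj: list) -> str:
    return f"a list of {len(obj)} items"

print(describe(42))       # the integer 42
print(describe([1, 2]))   # a list of 2 items
print(describe("hello"))  # something
```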

activitypea · 4 years ago
"Why isn't functional programming the norm" discusses your points. If you haven't seen it, it makes a great in-depth argument for why you're right :)
indymike · 4 years ago
Thank you for confirming my biases :-)

Dead Comment

throwawaylinux · 4 years ago
Back in the 90s OOP was touted as revolutionary. The next big thing, would completely change programming. If something wasn't object oriented, it was looked down upon. SQL even got on the bandwagon. It was said that very complex inheritance structures, operator overloading, and all this other stuff would (somehow) make it far easier to write and understand complex projects. Many seemed to have taken and repeated this on faith and not much else. I'd never seen any real justification for these assertions, it was never explained to me why those things would perform as claimed with sound reasoning, let alone actual data.

Were there ever any real objective [hah] studies done about how much it improved software development? And did they show a significant improvement? Even if you're still pro-OOP today, you would have to admit it fell vastly short of its promises even if it does help a little bit.

Today it seems like there's been very little accountability or learning from all this. Some people have sheepishly climbed down off the bandwagon, but there's been very little overall reflection. I'm not talking about witch hunts -- there will always be more snake oil salesmen -- I mean learning as individuals and an industry to demand data and reason rather than handwaving and assertions. The sad thing too is that a lot of the baseless hype came out of academia too (microkernels are another one that comes to mind).

I still see this today. The new languages and language features. New database concepts like NoSQL. "AI". Blockchain. All the way down to the CPU (transactional memory, various "security" features, etc). Proponents can make extremely compelling-sounding cases for these things, and make it sound like they'll solve all the world's problems. And some may well turn out to be a net win in the end. But the only thing that actually matters is the real world results, and you can only evaluate that by studying the data.

In general, if something sounds too good to be true, it usually is. Maybe the incredible trajectory of the computing industry has dulled peoples' common sense when it comes to detecting this kind of hype. It's absolutely rife in the computing industry and academia.

LadyCailin · 4 years ago
Wait, what’s wrong with NoSQL? It’s not good for shoving relational paradigms into, but it’s basically infinitely horizontally scalable, which, as far as I’m aware, isn’t possible with relational DBs, not at the same performance at massive scale, anyways.

A bit annoying when people shove a relational DB into a NoSQL schema though.

vkazanov · 4 years ago
Why do you think it's impossible to scale relations (aka tables) to infinite scale? It is totally possible; just look at various analytical SQL-ish DB-likes (Apache Hive, Presto, BigQuery, Snowflake, etc).

Now, what's harder is to provide some of the stronger ACID guarantees, say, fully atomic distributed commits. Most of the time it's just a question of the time it takes to reach full consensus in a distributed context.

But this has nothing to do with the relational data model itself, which is just tables of uniform rows referencing each other. Say what you like about SQL, but the core model is perfectly fine.

Nursie · 4 years ago
> Wait, what’s wrong with NoSQL?

For a few years back there, it was going to take over the world and we were all going to throw away 'old fashioned' DBMSs because they were slow, clunky and overcomplicated.

Like many of these overhyped technologies, when the dust cleared about 5 years down the line, we are left with something useful that definitely has its place, but isn't like wow huge it's taken over everything maaaaan. Meanwhile SQL is still with us and still good at what it does too.

throwawaylinux · 4 years ago
I don't believe I said anything was wrong with it or anything else there. Most of the things I listed have their uses. That was completely not my intention to say they're bad, I hope that's not the point people are getting from my post.

The point is how uncritically some of these things get taken, and how easily people will believe fantastic, unfounded claims. And not just a few gullible idiots, but huge swaths of academia and industry.

deepsquirrelnet · 4 years ago
Nothing is wrong with NoSQL except for how it (often) gets used. NoSQL is just a dumping ground for less-structured data that lets startups accumulate tech debt more rapidly, while providing enough functionality to be useful.

Where I've seen it used is to delay the decision making process of adding structure to data, or a prototype database, before you are certain what your application's needs are. For simple disconnected data in low performance applications, they provide a low barrier to entry. But eventually people start embedding foreign keys into documents and the whole thing goes South.

gls2ro · 4 years ago
> A bit annoying when people shove a relational DB into a NoSQL schema though

This is what is annoying with NoSQL the same as it was with OOP and now with FP.

People learn this as the new better way of doing something mostly because they heard at a conference a FAANG dev sharing it and then everything should be built with it.

I saw a lot of projects where the developer(s) used NoSQL just because it was available or it was hot or it was what they learned in a bootcamp/article. But then they added relations so now a User has Projects and each project has categories and with constraints on relations and more ...and everything is glued together with NoSQL and suddenly they are reimplementing relational DBs logic in code with NoSQL being only a pure data storage.

wainstead · 4 years ago
> Were there ever any real objective [hah] studies done about how much it improved software development? And did they show a significant improvement?

I think years of hard experience across the industry found out that, for example, multiple inheritance and operator overloading caused more problems than they solved. Both features were taught and advocated back in the day, and now "there be dragons" signs have sprung up and most of the literature today warns the journeyman programmer to avoid them.

jcranberry · 4 years ago
I've actually never really encountered issues with operator overloading. Is it just ADL, or are there any other canonical operator overloading issues?
throwawaylinux · 4 years ago
Right. What I want to know is, what was the basis for claiming all this would be so great in the first place? It appears to have been almost entirely free of any evidence, as far as I've been able to tell.

It's mind boggling to me when we see the kinds of people in the industry and their demands for data and evidence when it comes to other subjects.

spacechild1 · 4 years ago
> operator overloading caused more problems than they solved

[citation needed]

Just because operator overloading can be abused doesn't mean that it isn't a massive boon in certain problem spaces (e.g. math libraries, SIMD libraries, etc.)

Dead Comment

spaetzleesser · 4 years ago
I can pretty much guarantee that whatever the "OOP is bad" guys think people should do instead will be viewed as "bad" in a few years. I was around when OOP got into the mainstream and it started out as a very nice and practical approach. Then the ideologues came in and complained loudly "this is not OOP" so suddenly you had to wrap simple functions into meaningless objects. Then you had to inherit a lot and create these huge inheritance trees. And so on. Complaints that things got too complex were squashed.

The same will happen with every technique or process that becomes popular and consultants, authors and mediocre but loud people take over. Happened with OOP, Agile and will happen with other things too.

When I look at the Kubernetes, microservices, "need to scale just in case we may grow by 100000% soon" monstrosities we are building to deploy simple CRUD apps I don't think we have learned much.

People still take useful techniques, make them into a religion and push the techniques to the point where they are becoming a liability.