That's the point: you can reuse code without paying the price of inheritance. You DON'T have to accept co-recursion or shared state just for "code-reuse".
And this, I think, is the key point: behavior inheritance is NOT a good technique for code-reuse... Type-inheritance, however, IS good for abstraction, for defining boundaries, and for enabling polymorphism.
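A minimal sketch of what I mean (all names made up): reuse by composition, with a small interface only for the polymorphic boundary:

```python
from typing import Protocol


class Renderer(Protocol):
    """Type-level boundary: anything that can render a report."""
    def render(self, data: dict) -> str: ...


class CsvFormatter:
    """Reusable logic, stateless and self-contained."""
    def format_rows(self, rows: list[dict]) -> str:
        if not rows:
            return ""
        header = ",".join(rows[0].keys())
        body = "\n".join(",".join(str(v) for v in r.values()) for r in rows)
        return f"{header}\n{body}"


class SalesReport:
    """Reuses CsvFormatter by delegation, not by inheriting from it."""
    def __init__(self, formatter: CsvFormatter) -> None:
        self._formatter = formatter

    def render(self, data: dict) -> str:
        return self._formatter.format_rows(data["rows"])


print(SalesReport(CsvFormatter()).render({"rows": [{"region": "EU", "total": 42}]}))
```

SalesReport satisfies the Renderer type without any implementation hierarchy: you get the reuse and the polymorphism, but no co-recursion and no shared state.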
> I'd say this is a fact of life for all pieces of code which are reused more than once
But you want to minimize that complexity. If you call a pure function, you know it only depends on its arguments... done. If you call a method on a mutable object, you have to read its implementation line-by-line, and navigate a web of possibly polymorphic calls which may even modify shared state.
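To make the contrast concrete (a toy sketch, names invented):

```python
# Pure function: the result depends on the arguments and nothing else.
def total(prices: list[float], tax_rate: float) -> float:
    return sum(prices) * (1 + tax_rate)


class Cart:
    """Mutable object: to know what checkout() returns, you must know the
    whole history of calls that shaped self._items and self._discount,
    and whether any subclass has overridden _apply_discount()."""
    def __init__(self) -> None:
        self._items: list[float] = []
        self._discount = 0.0

    def add(self, price: float) -> None:
        self._items.append(price)

    def _apply_discount(self, subtotal: float) -> float:
        return subtotal * (1 - self._discount)

    def checkout(self, tax_rate: float) -> float:
        return self._apply_discount(sum(self._items)) * (1 + tax_rate)
```

`total([10.0], 0.2)` is fully determined at the call site; `cart.checkout(0.2)` is not.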
> This is another reason why low coupling high cohesion is so important
Exactly. I would phrase it the other way around, though: it's because "low coupling high cohesion is so important" that using inheritance of implementation for code-reuse is often a bad idea.
I actually can't imagine for the life of me why I'm defending OOP implementation hierarchies here. I guess I got so used to them at work that I've changed my strategy from opposing them to "it's okay as long as you use them sparingly". I've found that argument does a lot better with my colleagues...
That is: an instance of a subclass calls a method defined on a parent class, which in turn may call a method that's been overridden by the subclass (or even by another sub-subclass further down the hierarchy), and that one in turn may call another parent method, and so on. It can easily become a pinball of calls bouncing around the hierarchy.
Add to that the fact that "objects" have state, and each class in the hierarchy may add more state and modify state declared on its parents. A perfect combinatorial explosion of state and control-flow complexity.
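A tiny illustration of that pinball (hypothetical classes):

```python
class Base:
    def __init__(self) -> None:
        self.count = 0            # state declared on the parent

    def run(self) -> None:
        self.count += 1
        self.step()               # 2. dispatches DOWN to Child.step()

    def step(self) -> None:
        pass                      # default hook, meant to be overridden

    def finish(self) -> None:
        print(f"ran {self.count} time(s)")


class Child(Base):
    def __init__(self) -> None:
        super().__init__()
        self.log: list[int] = []  # extra state added by the subclass

    def step(self) -> None:
        self.log.append(self.count)  # reads state the parent just mutated
        self.finish()                # 3. bounces back UP to Base.finish()


# 1. run() is only defined on Base, and the pinball begins:
Child().run()  # Base.run -> Child.step -> Base.finish
```

Even in this two-class toy, understanding `run()` requires reading both classes and tracking state mutated at both levels. Now imagine five levels.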
I've seen this scenario way too many times in projects, and the worst thing is: many developers think it's fine... and are even proud of navigating such a mess. Heck, many popular "frameworks" encourage this.
Basically: every time you modify a class, you must review the inner implementation of every other class in the hierarchy, and all the call paths, to ensure your change is safe. That's a horrendous way to write software, and it goes against the most basic principles of modularity and low coupling.
This is why hierarchies should have limited depth. I'd argue some amount of "co-recursion" is to be expected: after all, the point of the child class is to reuse the parent's logic while overriding some of it.
But if the lineage goes too deep, it becomes hard to follow.
> every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe.
I'd say this is a fact of life for all pieces of code which are reused more than once. This is another reason why low coupling and high cohesion are so important: if the parent method does one thing and does it well, then when it needs to be changed, it probably needs to be changed for all child classes. If not, the question arises why they're all using that same piece of code, and whether this refactor shouldn't include breaking it apart into separate methods.
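Something like this (a hypothetical before/after, not from any real codebase):

```python
# Before: one parent method doing two things; a change to the
# normalisation step would silently hit every subclass.
class Exporter:
    def export(self, rows: list[dict]) -> str:
        rows = [dict(r, name=r["name"].strip().lower()) for r in rows]  # normalise
        return "\n".join(str(r) for r in rows)                          # serialise


# After: split into cohesive methods, so a child that needs different
# normalisation can override just that, without forking serialisation.
class SplitExporter:
    def normalise(self, rows: list[dict]) -> list[dict]:
        return [dict(r, name=r["name"].strip().lower()) for r in rows]

    def serialise(self, rows: list[dict]) -> str:
        return "\n".join(str(r) for r in rows)

    def export(self, rows: list[dict]) -> str:
        return self.serialise(self.normalise(rows))
```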
This problem also becomes less pressing if the test pyramid is followed properly, because that parent method should be tested in the integration tests too.
But why would they? What problems are they solving by being able to paste text into your web browser's address bar? Or load a PDF into an LLM? Or some other incredibly specific-to-you ability you've added?
If simply adding a Lisp interpreter to a program is enough to impress people, why not add it to something other than a 1970s terminal text editor? Surely an LLM plus Lisp can do more of these inane tricks than a '70s text editor plus Lisp?
You're saying this with derision, but the ability to quickly add "incredibly specific-to-you" features is precisely what is so cool about it!
For one thing, the output was an algorithm, not a theorem (except in the Curry-Howard sense). More importantly, though, AlphaEvolve has to be given an objective function to evaluate the algorithms it generates, so it can't be considered to be working "without human guidance". It only uses LLMs for the mutation step, i.e. generating new candidate algorithms. Its outer loop is an optimisation process capable only of evaluating candidates according to the objective function. It's not going to spontaneously decide to tackle the Langlands program.
Correct me if I'm wrong about any of the above. I'm not an expert on it, but that's my understanding of what was done.
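For what it's worth, my mental model of that architecture is roughly the following toy sketch (all names invented; this is not AlphaEvolve's actual code or API, just the shape of "LLM as mutation operator inside a plain optimiser"):

```python
import random
from typing import Callable


def evolve(
    seeds: list[str],
    objective: Callable[[str], float],   # human-supplied scoring function
    llm_mutate: Callable[[str], str],    # LLM proposes a variant program
    generations: int = 100,
) -> str:
    """Toy evolutionary loop: the LLM only supplies mutations; the outer
    loop is a dumb optimiser that keeps whatever the objective scores highest."""
    population = list(seeds)
    for _ in range(generations):
        parent = random.choice(population)
        child = llm_mutate(parent)            # the only step an LLM performs
        if objective(child) > objective(parent):
            population.append(child)          # selection purely by objective
    return max(population, key=objective)
```

The point being: everything interesting about *what* gets optimised lives in `objective`, which a human writes.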
You're right, of course, that this was not without human guidance, but to me even successfully using LLMs just for the mutation step was, in and of itself, surprising enough to revise my own certainty that LLMs absolutely cannot think.
I see this more like a step in the direction of what you're looking for, not as a counter example.
LLMs have seen huge improvements over the last 3 years. Are you going to make the bet that they will continue to make similarly huge improvements, taking them well past human ability, or do you think they'll plateau?
The former is the boring, linear prediction.
Surely you meant the latter? The boring option follows previous experience: no technology has ever failed to reach a plateau, except for evolution itself, I suppose, till we nuke the planet.