Readit News
Chyzwar · 6 years ago
In mature OOP you have ways to write nice models and have good validations. https://guides.rubyonrails.org/active_record_validations.htm...

I will argue that the complexity of software development is not because of OOP vs Functional. Tooling, documentation, quality of libraries and people are what matter most. Ruby's massive success is largely attributable to the above. We are humans; we can understand and deal with a fixed amount of complexity. If I can offload some of it to a framework, library or tool, I will have more time to work on my problem.

Every time I try to play with anything Functional I get hit by a bus of undocumented frameworks (Erlang), multiple standard libraries (OCaml), competing half-finished implementations (Lisp), arcane tooling (Scala), no tooling (Haskell) and broken tooling (F# on Linux).

jes5199 · 6 years ago
but have you ever had to maintain a ruby project after the first year? The cost just goes up and up, and I think it’s because the language is so hostile to static analysis.
losteric · 6 years ago
Yes. I've been working on a corporate internal RoR tool that launched in 2010. Various engineers over the years continued releasing new features and updating the language/framework... as of earlier this year, development/maintenance cost was no higher than any other software of that nature and age.

Cost only goes up and up when engineers go overboard with "clever" Ruby magic... which is human error, don't blame the tool.

jumpinalake · 6 years ago
Couldn’t agree more. I was part of a $30 million Rails project that got unmanageable and burned after 2-3 years. Golang is so much more forgiving to human error.
taeric · 6 years ago
Have you maintained anything after the first year where that wasn't the case? Especially if you had a high degree of churn on the developers.

I'll grant that it is easier in more stable API environments. But we are our own worst enemies in that race.

lsd5you · 6 years ago
And probably also the non-local effects of using a framework, and a fairly magic one at that. All of its effects are understood when the functionality is created, but the maintainer (even when it is the author) has a much harder time, with probably only a partial understanding and certain things being out of mind.
gitgud · 6 years ago
I would say dynamic languages like Ruby are inherently more flexible and powerful, which more easily leads to complexity in the system... (if left unchecked)
cottsak · 6 years ago
Agree. You can write terse, maintainable, well encapsulated, testable and readable code in most any language. The problem is about the humans not the language they select.

But yes, complexity is also a major problem.

alkonaut · 6 years ago
What I have learned is that what makes a language (or platform, or tool) "good" isn't how easy it is to write good code, but how hard it is to write bad code. Does a beginner following the path of least resistance, on a tight schedule, end up with maintainable code or not?

I'd argue that this is the weakness of (the traditional) OO languages. An experienced developer with plenty of time can write good software with almost any tool. But that's not what's interesting. I want to see tools that don't just let but actively guide inexperienced developers into making maintainable software.

Things like "no nulls" or "immutable by default" in Rust and most functional languages are two examples of such designs. OO itself doesn't necessarily mean the developers get trapped in poor code, but the traditional 3 (C++, Java, C#) sure do give developers lots of guns to shoot at their feet. Perhaps not mainly because they are OO, but because they inherited some poor fundamental decisions about mutability and nulls (from C) and about inheritance as a default method of abstraction (from C++) etc.
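As a rough illustration of those two designs ("no nulls" and "immutable by default") in TypeScript with `strict` mode on — everything here is a sketch with made-up names:

```typescript
// With "strict": true in tsconfig, null must be handled before use;
// `readonly` makes mutation a compile-time error.
type User = {
  readonly name: string;
  readonly email: string | null; // nullability is opt-in and visible in the type
};

function emailDomain(u: User): string | null {
  if (u.email === null) return null; // the compiler forces this check
  return u.email.split("@")[1] ?? null;
}

const ada: User = { name: "Ada", email: "ada@example.com" };
console.log(emailDomain(ada)); // → "example.com"
// ada.name = "Bob"; // compile error: name is read-only
```

The point is exactly the one above: the inexperienced developer on a tight schedule is steered toward handling the null case, because the path of least resistance is the one the compiler accepts.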

fnordsensei · 6 years ago
On the other hand, it seems a bit wild to propose that tools don’t have inherent affordances of their own.

What complicates assessment is that for many attributes, it’s impossible to assess the tool and the user in isolation. This is not unique to programming languages.

jacknj · 6 years ago
When you mention "no tooling (Haskell)" what do you mean? I am using Haskell (mostly for hobby projects) and am wondering what I am missing out on, since I feel the tools available are sufficient.
apta · 6 years ago
This is already achievable using annotations in Java or attributes in C#, in a less verbose way. You just tag your method parameter with `@Valid` or `[Valid]` or what have you, and the framework you're using automatically ensures that the validations you specified on the data model are valid at that point in time.
pjmlp · 6 years ago
The irony is that Smalltalk already offered many FP patterns.

In the end multi-paradigm languages will win.

What many FP advocates fail to acknowledge is that all FP languages that got some kind of mainstream adoption, might be FP first, but they are actually multi-paradigm.

tabtab · 6 years ago
No FP language has gotten mainstream adoption. Mainstream languages may have had FP features added, but that's different.

I confess I don't "get" the benefits of FP for the type of applications I work on. Most examples are from a completely different domain, make unrealistic assumptions about the patterns of change I actually see in my domain, or fill in for weaknesses of a given language's OOP model.

merlincorey · 6 years ago
> competing half finished implementations (lisp)

As well as one of the most complete specifications in ANSI Common Lisp.

agumonkey · 6 years ago
I'm curious about lispers. I thought quicklisp made a lot of things frictionless.
walshemj · 6 years ago
I think the argument is that for a lot of cases OOP isn't the right paradigm
flukus · 6 years ago
I'm sure this is partly because I don't read F#, but it looks like they've moved all the complexity into meta-programming madness. This is just being way too clever to play code golf at a high level; this is exactly the sort of complexity we should be fighting against.

Even the initial c# version was over complicated. The complex fluent interface with lambdas and callbacks could be done with a few if statements that would be simpler, faster and require no knowledge of the FluentValidation library. Unnecessary getters and setters to satisfy the encapsulation gods.

If you want to fight complexity, go back to basics: you can have a static method returning a validation result with code like this:

  if (string.IsNullOrEmpty(card.CardNumber) || !CardNumberRegex.IsMatch(card.CardNumber))
    validations.Add("Oh my");
Converting if statements to more elaborate constructs is creating complexity, not fighting it.

UK-AL · 6 years ago
The problem is when you want validation errors which contains the field name, and a descriptive error message. Oh you want them localised as well?

You then want each field to be validated individually. So you get an error for each field which is wrong.

So you have if statements for each field creating a localised validation error object then placing in a list.

You have 8 fields coming in on your request. It's starting to look like a big method now with 8 if statements creating these localised validation objects.

You also want to share your validation rules between different use cases.

FluentValidation makes that quite quick and terse to achieve compared to simple if statements.
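To make that concrete, here's a toy fluent-rule builder sketched in TypeScript. This is purely illustrative and not FluentValidation's actual API; the card regex is a placeholder, not real card validation:

```typescript
type Err = { field: string; message: string };

// Toy fluent validator: each ruleFor call replaces one if statement
// plus the per-field error-object boilerplate.
class Validator<T> {
  private rules: Array<(obj: T) => Err | null> = [];

  ruleFor<K extends keyof T>(field: K, check: (v: T[K]) => boolean, message: string): this {
    this.rules.push(obj => (check(obj[field]) ? null : { field: String(field), message }));
    return this;
  }

  validate(obj: T): Err[] {
    return this.rules.map(r => r(obj)).filter((e): e is Err => e !== null);
  }
}

interface Card { cardNumber: string; expirationMonth: number }

const cardValidator = new Validator<Card>()
  .ruleFor("cardNumber", v => /^\d{16}$/.test(v), "InvalidCCNumber")
  .ruleFor("expirationMonth", v => v >= 1 && v <= 12, "InvalidCCExpiration");

console.log(cardValidator.validate({ cardNumber: "abc", expirationMonth: 13 }).length); // 2 errors
```

The whole builder is about fifteen lines, which is roughly the argument on both sides of this thread: the fluent layer is thin, so whether it's worth a library dependency is a judgment call.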

flukus · 6 years ago
> The problem is when you want validation errors which contains the field name, and a descriptive error message. Oh you want them localised as well?

So the above example would become something like this:

  if (string.IsNullOrEmpty(card.CardNumber) || !CardNumberRegex.IsMatch(card.CardNumber))
    validationContext.Add("CardNumber", Localizer.MessageFor("InvalidCCNumber"));
  if (card.ExpirationMonth < 1 || card.ExpirationMonth > 12)
    validationContext.Add("ExpirationMonth", Localizer.MessageFor("InvalidCCExpiration"));
Throw in some lambdas for the property name and static strings for the message names if you really need to be type safe. Also I'm not sure if FluentValidation handles this, but you need somewhere for root-level errors; not all errors map neatly to a property.

> You have 8 fields coming in on your request. It's starting to look like a big method now with 8 if statements creating these localised validation objects.

There's no local state; the errors are stored in a glorified dictionary. It's a simple imperative series of if statements that anyone who's gone beyond hello world in any language can understand. Big methods are not bad just because they're big (not that 8 if statements is big); they're bad when there is a lot of mutable state that the programmer has to track in their head, and validation logic rarely has that problem. It would be fine if there were 1000 properties, because the complexity is flat.
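A sketch of that "glorified dictionary" shape in TypeScript (names and the placeholder regex are illustrative):

```typescript
// Flat, imperative validation: each field check is independent, so adding
// a field adds one if statement and nothing else — complexity stays flat.
function validateCard(card: { cardNumber: string; expirationMonth: number }): Map<string, string> {
  const errors = new Map<string, string>(); // field name -> message key

  if (card.cardNumber === "" || !/^\d{16}$/.test(card.cardNumber))
    errors.set("CardNumber", "InvalidCCNumber");
  if (card.expirationMonth < 1 || card.expirationMonth > 12)
    errors.set("ExpirationMonth", "InvalidCCExpiration");

  return errors;
}

console.log(validateCard({ cardNumber: "", expirationMonth: 13 }).size); // 2
```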

> You also want to share your validation rules between different use cases.

So you make a function. It doesn't look like FluentValidation offers any improvement here; it seems like custom rules with this library basically just wrap a function call: https://fluentvalidation.net/start#including-rules or you create a "function" at runtime with rulesets: https://fluentvalidation.net/start#including-rules

> FluentValidation makes that quite quick and terse to achieve compared to simple if statements.

From the examples I'm not sure it's any quicker or more terse after you include the extra boilerplate setup. All it seems to do is turn if statements into where/must calls, for loops into RuleForEach calls and functions into custom RuleSets. It also adds the complexity of using a library.

kazinator · 6 years ago
The main source of complexity is requirements. Gatekeeping against the influx of requirements will keep complexity down.

Then there is unnecessary complexity from doing incomplete refactorings and rewrites. If some code cannot handle the addition of a new requirement, it should be replaced. Otherwise you add complexity that roughly takes the logical (if not actual) form if (these cases) { new code } else { old code }. And there is overlap! new code has taken over requirements for which old code still exists, but because of some lingering requirements that only the old code handled, all of it is still there (due to laziness, dependencies or whatever). It's not obvious that some of that code is never used; someone diving into it faces the complexity of figuring out what is the real payload in production now and what is the historic decoy.

marcc · 6 years ago
It's easy for us to blame "requirements" as the main source of complexity. This isn't accurate. Software exists to serve the needs of the business. Depending on the maturity and stage of the business or the software itself, it's possible that there's a changing set of requirements. As developers, it's our job to figure out how to deliver, not to say "no" to new enhancements and requirements.

The main source of complexity is how we write software, not that the software has requirements.

celticmusic · 6 years ago
I think it's fair to say that changing requirements coupled with a limited amount of time results in non-incidental complexity.

It's a systemic problem that results when the entire leadership stack isn't aware of how good software is created. Because of the limited amount of time and resources given, quite often it's a business/management problem.

And that's not to say that it isn't also a software dev problem. We've all seen some horrific things. But I've also seen horrific things because there was no one senior there because they wouldn't pay enough for it.

It's all intertwined; there's not a simple explanation. But changing requirements are definitely a source of complexity.

kazinator · 6 years ago
One source of complexity is accretion. We have certain requirements for our application. Some of those requirements, we don't implement ourselves; we need libraries and frameworks. For instance, we don't make a GUI toolkit from scratch because we need a GUI. But those third-party components are built to requirements of their own. Those requirements outnumber ours. Many of them aren't required for our use cases (like everything that is implemented in any API function we don't use, directly or indirectly). Many are. A simple requirement like "provide a screen where the user can edit their profile settings" translates to numerous detailed requirements, down to how the pixels are pushed to the frame buffer.
Aeolun · 6 years ago
I feel like I’ve said this before, but people often attribute their increased knowledge of how to develop systems without bugs to the fancy new language they switched to.

Fact is they could build better software in the old language as well, assuming they started from scratch.

DecoPerson · 6 years ago
I strongly disagree. We use a strongly typed Lua-like language at my company and it has everything you need to build decent applications, but we hit so many bugs. It took me 12 hours over 5 days to make a simple modification to the business logic (half of that was figuring out and fixing bugs). It took me 4 hours to write something far more complex in Rust with virtually zero bugs; I attribute this almost entirely to sum types (Rust enums), a better type system, an unforgiving compiler, async/await, a better module system, lifetime checking, and an ecosystem of easy-to-grok libraries.

These things just make bugs disappear.

When it comes to IDE experience, the parts that I use often are mostly the same between Rust and our language.

Edit: I'd say it's both, in a multiplicative manner. You need experience and a good set of tools (the language itself being the most important tool) to write good code fast.

gitgud · 6 years ago
> I attribute this almost entirely to sum types (Rust enums), a better type system, an unforgiving compiler, async/await, a better module system, lifetime checking...

This seems like an example of a language effectively abstracting common complexities and pain points, which were probably discovered in earlier languages...

lifthrasiir · 6 years ago
> We use a strongly typed Lua-like language at my company [...]

Is this homegrown? In my experience this alone has a major impact on productivity, because it is generally hard to create a good implementation, and a new language and/or implementation only pays off when the existing solution is too bad. (Source: I have made a Lua type checker at work. It worked, but fell into disuse as I moved on, and the entire org abandoned Lua in spite of my work.)

codr7 · 6 years ago
Which is a longish way of giving the answer no one wants to hear: Experience is everything.
lsd5you · 6 years ago
Erm surely ability is important as well. Programming is no different in this regard to pro sports - some people have more talent (which probably in turn breaks down into innate attributes, determination and learning opportunity earlier in life).
jumpinalake · 6 years ago
Are you arguing that all tools are equal? Or that some tools are better than others but the difference is negligible compared to experience? I don’t agree that experience is everything (if by everything you mean that all other factors have zero influence).
collyw · 6 years ago
In my opinion it's usually worse using a new framework or language as you don't know the ins and outs of it. Your first project is going to be a learning experience in that technology. I have seen plenty of Python code that was clearly written by Java developers wanting to try out something new.
hinkley · 6 years ago
Figuring out the Mikado method was one of the bigger shocks of my career. I thought I already knew all this stuff, and of course once I saw it I could explain it all. But knowing something is true and seeing it first hand can be a very different experience.

The simpler solution is often hard to see. We get attached to the wrong details or suffer sunk cost fallacies.

When you switch languages the cost of porting is higher, so it shouldn’t be a surprise that you end up with something much simpler. And if the target language attracted you because it makes some part of the problem simpler, that’s important but maybe not the dominant contributing factor to the experience.

kh7uky · 6 years ago
I'd agree, but some tools just make it easier to design and build a complex system, whatever the level of experience of the programmer(s), designers and so on. This programming language and IDE https://en.wikipedia.org/wiki/Clarion_(programming_language) is behind some of the biggest databases in the world. Because of the openness of the IDE, in one instance it was possible to migrate one country's main cancer charity app from ISAM files to MS SQL, and rewrite it from procedural to OOP code, in just two hours! Admittedly it took a week to build a program to do the coding changes, but that program became a tool in its own right to migrate other programs. The original devs thought it would take a human 3 months to do the work, which is already several months less than if the program had been written in another language!

These are just some of the big corps who use Clarion: https://en.wikipedia.org/wiki/LexisNexis https://en.wikipedia.org/wiki/DBT_Online_Inc. https://en.wikipedia.org/wiki/Experian

Various banks and other stock market listed companies. Even various military use it for their own top secret work.

The key to its success is the template language, which enables programmers to work at a higher level of abstraction, something that for some reason just doesn't seem popular amongst many programmers. You can use the templates to write code in other languages, including Java, PHP, ASP.net, JavaScript and more.

It's safe to say that everyone in the Western world will have some of their details stored in a Clarion-built database, and it's not just limited to building databases; it's even been used to build highly scalable webservers. There's also C/C++, Assembler and Modula-2 built into the compiler, so you can get right down to low-level coding if required, and there's a Clarion.net version which is mainly like C# but has some of the data handling benefits of F#.

mixedCase · 6 years ago
I can attest to this, but it's definitely misleading. Some languages have features and patterns that are straightforward and easy to learn in that language, and that, once you've learnt them, you can emulate/replicate in almost every other language — but only because you already know how they work, why they work and where the limits of your abstraction are.

And these abstractions will often be overlooked or misused by developers who have not used them in languages where they're native, making them a net negative instead of an obvious benefit.

davesmith1983 · 6 years ago
"Started from Scratch". I have never seen any project where something has been started from scratch actually turn out well.
cookiecaper · 6 years ago
People are going to jump on the imprecise wording here but this is generally correct. People need to be very wary of "let's just throw this out and start over". There are cases where it makes sense, but 95% of the time it amounts to "the old code is complicated and it's a lot more fun to start over than it is to figure out the old stuff some other guy wrote". The old code is complex because it actually works and has taken the punches of production deployment.

Most often, people start down this path bright-eyed and bushy-tailed, and end up realizing after about 4 months that actually all that complication was doing something pretty useful. People need to be careful before they dismiss real working-in-the-wild code.

m0zg · 6 years ago
Now read that again. Every single piece of software that exists, successful or otherwise, was at some point started from scratch.
agumonkey · 6 years ago
It does play a part, but no. Languages and paradigms do make a difference. I had to come up with a tiny dynamic programming answer for a Java interview and it was a massive burden, even though I could write the same (quality and perf) version in other languages in 2 minutes.
kazinator · 6 years ago
This can be validated or refuted by going back to that language and building something. If the same old roadblocks reappear, it was the darned language, after all.
twodave · 6 years ago
On the C# API I develop for we overcome these issues in a few ways.

1. Our way of implementing DDD helps us organize code into infrastructure and domain. Domain objects typically aren’t allowed to have external dependencies. Infrastructure code is primarily for data access and mapping. Our API code (controllers and event handlers) ties the two together.

2. Given the above we are able to write a) very clear and concise unit tests around domain objects and API endpoints and b) integration tests that don’t have to bother with anything but data and other external dependencies.

The result is that when we go to ask, “How does the system respond given X?” we can either point to a test we have already written or else add a new test or test case to cover that scenario.

We can even snapshot live data in some critical areas that we can then drive through our high level domain processes (we process payroll so it’s a lot of data to consider). If someone wants to know how a particular pay run would play out, they can just craft the data snapshot and write some assertions about the results.

We also use FluentValidation (on API objects only) and test those as well (but only if the rules are non-trivial).

realshowbiz · 6 years ago
I’m quite happy to be seeing conversations about the benefits of simplicity, boring tech, etc lately.

It’s a breath of fresh air from the sadly too common (IMO) flavor-of-the-month new tech promotion.

tw1010 · 6 years ago
Sometimes it's hard to disentangle if a conversation is having an upward trending trajectory, or if you just happen to pay attention more to the links that mention some subject you happened to have caught an interest in.
jes5199 · 6 years ago
Yeah, I’ve been hearing a lot about the Baader-Meinhof effect recently
UK-AL · 6 years ago
Domain Modeling Made Functional is a fantastic book (which this article is inspired by). It taught me a lot about encoding business logic using functional programming techniques.

Functional programming is basically my go-to now when there is complicated business logic involved.

redact207 · 6 years ago
I really love the concepts provided by Domain Driven Design (DDD), regardless if you choose OOP or FP to implement it with.

It's fine if you want to choose C#, and there're better ways of addressing the approaches to validation in OOP than were provided in the examples. Value objects are a nice way to ensure strong immutable types like credit cards can be created and passed around without requiring separate validation classes or wild abstract base classes.

I like exceptions in C#. When I used to code in it, I'd make a lot of domain/business exceptions that the code would throw any time there was a violation. Here I think Java is a lot stronger, in that you are forced to declare what types of errors can be thrown from a function, so you have a chance of handling them. In C# and TypeScript, I find myself having to lean on codedoc "@throws" to do the same thing (though not as reliably).

That said, I'm generally fine for most exceptions to not be handled and instead bubble up "globally". If it happened because of an API request? Let middleware map it back to a 400 Bad Request with the error body. If it happened because of a message being handled? Log it and retry the message until it gets dumped to the DLQ. If it's not a violation, then it may not be an exception in the first place, in which case it can be returned with a compensating action performed.
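A framework-agnostic sketch of that "bubble up globally" pattern in TypeScript — DomainError and the handler are illustrative stand-ins, not any real middleware API:

```typescript
// Domain violations throw; one boundary layer maps them to HTTP responses.
class DomainError extends Error {
  constructor(message: string, readonly status: number = 400) {
    super(message);
  }
}

function chargeCard(amount: number): void {
  if (amount <= 0) throw new DomainError("Amount must be positive");
  // ...perform the charge...
}

// The only place exceptions become HTTP status codes (stand-in for middleware).
function handle(action: () => void): { status: number; body: string } {
  try {
    action();
    return { status: 200, body: "ok" };
  } catch (e) {
    if (e instanceof DomainError) return { status: e.status, body: e.message };
    return { status: 500, body: "internal error" };
  }
}

console.log(handle(() => chargeCard(-5)).status); // 400
```

The domain code never mentions HTTP at all, which is the point: the mapping lives in one place at the edge.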

I really like F#, but I struggled to find the actual benefit of it in this article from a DDD perspective.

UK-AL · 6 years ago
I find doing things like having types that flow through different stages -> UnvalidatedEmail, ValidatedEmail, VerifiedEmail is a lot better in F#.

In C# you need to create a lot of value object classes for that, or have some kind of property inside the value object which indicates the current state of the email.

Even then you won't be able to do exhaustive pattern matching on it to guarantee each situation is handled.

Flow · 6 years ago
Maybe I'm missing something, but what stops you from having Unvalidated<T> etc wrapper classes in C#?
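For illustration, such wrapper classes can be sketched in TypeScript (names are illustrative; the private member is needed to make Validated nominally distinct, since TypeScript types are otherwise structural):

```typescript
// Wrapper classes that put the validation stage into the type itself.
class Unvalidated<T> {
  constructor(readonly value: T) {}
}

class Validated<T> {
  private readonly brand = "validated"; // makes Validated<T> non-interchangeable with Unvalidated<T>
  private constructor(readonly value: T) {}

  // The only way to obtain a Validated<T> is to pass the check.
  static from<T>(u: Unvalidated<T>, isValid: (v: T) => boolean): Validated<T> | null {
    return isValid(u.value) ? new Validated(u.value) : null;
  }
}

// Only a Validated<string> can reach this function; the compiler enforces it.
function send(email: Validated<string>): string {
  return `sent to ${email.value}`;
}

const raw = new Unvalidated("a@b.com");
const checked = Validated.from(raw, v => v.includes("@"));
if (checked !== null) console.log(send(checked)); // → "sent to a@b.com"
```

The trade-off raised upthread still applies: each stage needs its own wrapper class, and without a closed union there is no exhaustive matching over the stages.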