Posted by u/curious16 2 years ago
Ask HN: Why has functional programming become so popular outside academia?
Why is functional programming suddenly the new thing? Why is industry also vouching for it and slowly shifting from purely imperative code to as much functional style as possible?

And I am not talking about functional languages like Haskell, OCaml, etc. I mean the style of avoiding state wherever possible.

I get that academia has been programming functionally for quite a long time. Academia does many things that seem weird at the time but may later prove fruitful in practical settings.

xupybd · 2 years ago
Because side effects cause bugs and make code harder to reason about.

The tools we use to avoid side effects, such as immutable data types, can be inefficient, and in the past the performance trade-off often wasn't worth it. Nowadays we're often running in environments where raw speed comes not from single-core performance but from distributed systems. All of a sudden immutability makes more sense: not only does it help with removing side effects, it also helps parallelize workloads.
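
To make that parallelism point concrete, here is a minimal Haskell sketch (assuming the async package; the function names are illustrative): because the input list and the work function are immutable and pure, items can be handed to separate threads with no locks and no shared mutable state.

    -- Minimal sketch (assuming the async package): pure work over immutable
    -- data can be spread across threads without locks.
    import Control.Concurrent.Async (mapConcurrently)

    expensiveStep :: Int -> Int
    expensiveStep n = sum [1 .. n]   -- stand-in for real work; pure, so safe to run anywhere

    main :: IO ()
    main = do
      results <- mapConcurrently (pure . expensiveStep) [100000, 200000, 300000]
      print results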

I think that's why it's happening now.

I've recently switched to using F# where I can and it's been a huge win. The bug count is way down, I can refactor with confidence, and my code is more readable.

The learning curve was hard. I still am not sure I truly understand what a Monad is but I can see the pattern and use it as a tool. It feels more constrained at first but it brings order to chaos.

bboygravity · 2 years ago
Can it be summarized with: multi-threading in non-functional languages is the most confusing and most time consuming thing?
xupybd · 2 years ago
Not exactly. I wasn't just talking about multi-threaded systems but multi-machine systems too, where there is no shared memory. If you need to serialize your data to send out to a cluster for processing, you've already constrained yourself to immutable data.
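
As a hedged illustration of that point (a Haskell sketch assuming the aeson package; the Job type is made up): once a value is serialized and shipped to another machine, the worker gets its own copy, so there is no shared mutable memory left to race on.

    {-# LANGUAGE DeriveGeneric #-}
    -- Sketch (assuming the aeson package): a serialized job is an immutable
    -- snapshot; the worker that decodes it holds an independent copy.
    import Data.Aeson (ToJSON, FromJSON, encode, decode)
    import GHC.Generics (Generic)

    data Job = Job { jobId :: Int, payload :: [Double] } deriving (Show, Generic)

    instance ToJSON Job
    instance FromJSON Job

    main :: IO ()
    main = do
      let job  = Job 42 [1.0, 2.5, 3.0]
          wire = encode job                  -- bytes sent to the cluster
      print (decode wire :: Maybe Job)       -- the worker's independent copy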
badpun · 2 years ago
Most webapps are actually embarrassingly parallel (https://en.wikipedia.org/wiki/Embarrassingly_parallel) and require little synchronisation. Most of the hard synchronisation problems are solved for you in the DB (at least if you've chosen a good one).
ActorNightly · 2 years ago
It's neither time-consuming nor confusing if you understand the concept of a mutex, which is that only one thread can access a resource at any given time. Anything beyond that is irrelevant for 99% of the use cases for multithreading.

And furthermore, it's highly likely that your project is better suited to async programming instead.
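
For reference, the mutex idea described above, sketched in Haskell with an MVar (just to stay in one notation; any language's mutex illustrates the same thing):

    -- Sketch: an MVar acts as a mutex-protected cell, so the read-modify-write
    -- below cannot race even with several writer threads.
    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar (newMVar, modifyMVar_, readMVar)
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
      counter <- newMVar (0 :: Int)
      forM_ [1 .. 4 :: Int] $ \_ ->
        forkIO $ replicateM_ 1000 (modifyMVar_ counter (pure . (+ 1)))
      threadDelay 100000                 -- crude wait for the workers (sketch only)
      readMVar counter >>= print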

YourDadVPN · 2 years ago
> I still am not sure I truly understand what a Monad is but I can see the pattern and use it as a tool. It feels more constrained at first but it brings order to chaos.

Ha, I learned Haskell over 6 years ago and I still can't explain what a monad is other than in programming terms. It's a container which defines two operations: put something inside the container (Haskell's return function), and chain two operations on the container together (Haskell's >>= operator).
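
Concretely, those two operations look like this in Haskell (a small sketch using the Maybe monad; safeDiv is an illustrative helper, not a standard function):

    -- return puts a value in the container; each >>= feeds the contained
    -- value to the next step, short-circuiting to Nothing on division by zero.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    example :: Maybe Int
    example = return 100 >>= safeDiv 1000 >>= safeDiv 500   -- Just 50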

Laaas · 2 years ago
That's because there's not much more to it.

The Monad tutorial problem is entirely due to academically oriented people attempting to explain monads through category theory etc.

noloblo · 2 years ago
As far as Haskell goes:

Declarative by nature, with parallelism and concurrency from the world-class GHC runtime; avoid callback-style JS, locks, mutexes, and semaphores

Type classes

Type inference

IO capturing the messy real world of state, side effects, I/O, async concurrency, and parallelism

Pure functions and expressions as opposed to imperative statements

Pattern matching

Monads >>=

Monad transformers

The Maybe monad, equivalent to the option type in Rust and OCaml, to avoid null (F# might still have nulls)

Hackage, Cabal, Stack, and world-class libraries

Green threads, as in Erlang/OTP and Elixir

Cloud Haskell is Haskell's answer to Erlang

Lazy evaluation by default, with strictness annotations if needed

Unsafe IO is possible, but only explicitly

Immutable by default

STM (software transactional memory) when you need mutation that composes well (see the sketch at the end of this list)

Heavily optimized GHC compiler with a fast GC

Type safety: the compiler gives errors on type mismatches, so you can trust the compiler and refactor with abandon; if it compiles, it works

Haskell compiles to native code that can be very fast, as with Go / OCaml
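
To give one concrete taste of the list above, here is a minimal STM sketch (assuming the stm package; the transfer/account names are made up): the two writes compose into a single atomic transaction, so no other thread can observe a half-done transfer.

    -- Minimal STM sketch (stm package): composed reads and writes either
    -- happen entirely or not at all.
    import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar, readTVarIO)

    transfer :: Int -> TVar Int -> TVar Int -> IO ()
    transfer amount from to = atomically $ do
      a <- readTVar from
      b <- readTVar to
      writeTVar from (a - amount)
      writeTVar to   (b + amount)

    main :: IO ()
    main = do
      alice <- newTVarIO (100 :: Int)
      bob   <- newTVarIO (0 :: Int)
      transfer 30 alice bob
      mapM_ (\tv -> readTVarIO tv >>= print) [alice, bob]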

TL;DR: Haskell makes easy things hard (a steep learning curve) and hard things easy, because composition is far more useful than OOP. Theoretically, functional programs should: have fewer bugs, be easier to optimize for performance, allow you to add features more quickly, achieve the same outcome with less effort, require less time to get familiar with a new codebase, etc.

As the venerable Prof. Joe Armstrong said: you need Haskell or Erlang for FP; you need C for high performance; you really don't need C++, Java, Python, or C#.

Most of these points apply, to varying degrees, to Common Lisp, Racket, Erlang, Elixir, and Rust.

badpun · 2 years ago
> Most of these points apply, to varying degrees, to Common Lisp, Racket, Erlang, Elixir, and Rust.

... and Scala, with libraries explicitly aping Haskell, such as Cats.

ActorNightly · 2 years ago
>Because side effects cause bugs and make code harder to reason about.

Not really. All functional programming does is force you to shoehorn your computation into predefined pipelines. For some cases this works; for others, you are doing a lot more work than necessary compared to imperative programming, and the cycle of developing that code is usually longer than writing the code imperatively and then debugging it.

In general, if you look at the history of tooling, there is a recurring pattern where the pitch for a language or tool alludes to programmers being incompetent.

For example, in the case of functional programming, your statement above essentially means "As an incompetent programmer, you may write code that sets a variable's value unexpectedly, which is in turn used by some other thread, which will create a bug. So you need functional programming to avoid making this mistake."

History has shown that this doesn't work in practice, because you create arbitrary boundaries, friction, and inefficiencies that slow down development by a large amount. There is a reason why Python is the top language on GitHub (second only to JavaScript, because of web stuff), and as far as it goes, it's a free-for-all in terms of how you code.

xupybd · 2 years ago
> "As an incompetent programmer, you may write code that sets a variables value unexpectedly, which is turn used by some other thread, which will create a bug. So you need functional programming to avoid making this mistake"

Ouch, don't hold back, tell us what you really think.

But that is not the entire meaning. If I have a function that has side effects, such as a typical OO setter, it may change the state of the object in ways that are encapsulated away from the consumer of the function. If someone has to come in years later and change that function, it's not always obvious what state changes the function is required to make in order for the other functions in the class to operate. So we have regression tests to ensure any misunderstandings are caught.

Pure functions take in state as data. So we lose the encapsulation but gain clarity.
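
For instance, here is a hedged Haskell sketch of the difference (the Account type is made up): the "setter" becomes a pure function from old state to new state, so every state change is visible at the call site.

    -- Sketch: instead of a setter mutating hidden object state, a pure
    -- function takes the old state and returns the new one.
    data Account = Account { owner :: String, balance :: Int } deriving (Show)

    deposit :: Int -> Account -> Account
    deposit amount acct = acct { balance = balance acct + amount }

    main :: IO ()
    main = do
      let before = Account "ada" 100
          after  = deposit 25 before
      print before   -- unchanged: Account {owner = "ada", balance = 100}
      print after    -- new value: Account {owner = "ada", balance = 125}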

You are correct, however: if we programmers could only do our jobs competently, we'd have no need of such things.

Personally I like to set up tools that catch my mistakes, because I make lots of them.

orwin · 2 years ago
A bug you didn't catch can cost the entire project, sometimes more.

And it has nothing to do with competency. Carmack himself let a lot of 'easy' bugs into his games.

Let's say you want performance. The easiest way to avoid bugs in critical software would be to code in Ada (at least it is in my country).

If you still want some abstraction, at the cost of security, say for the program that will run the 3M experiment that your Ada code just launched into space, you'll use OCaml, despite the fact that a huge part of your scientists know Python better (real-world examples, BTW).

Also, Python code uses more and more generators everywhere (even when more imperative solutions exist). I don't think I've seen any complex codebase that did not use itertools in years (in a professional context), and the lambda keyword is used more than ever (I even saw it last week in pydantic code, which is typically the kind of code where imperative is better).

Also, React-based frameworks encourage a functional style, and as people understand the paradigm more, it will be used more.

For some stuff imperative is better, BTW; I'm not saying functional is better. I worked for three years for a PaaS company selling big-data capacity to universities and some companies, and I don't think I needed functional programming once. But clearly, even if you code in Python, if you want to do big-data stuff, you'll do functional. If you work in data science, you'll do functional. I'm in $bigbusiness now, and in the first big library I've written, one of the 3 PR comments was 'you should just use zip here', so I guess in big business you'll have to do some functional programming too.

jillesvangurp · 2 years ago
I think John Carmack called it right. It just makes sense if you are dealing with a lot of complicated state in situations where side effects on state can cause some really hairy issues (threading, memory, real time behavior, etc.).

So a lot of languages have adopted the more popular mechanisms for doing things in a functional style from each other. Even Java has had lambda functions and streams for a while now, and they seem to be popular features. And of course John Carmack was talking about this in the context of C++. If you are going to do stuff with elements in a list, you might as well use something called map. Is that functional programming or just common sense?

So, the short answer is "because it works and adds value". Of course, people tend to get a bit carried away with being pure and pedantic around this topic. And never mind things like monads, which bring out all the armchair philosophers. But some of that stuff is pretty neat and not that complicated.

The role of academia is to move the field forward and experiment with different ways of doing things. Not all of those things work well in the real world. E.g. logic programming (Prolog) is cool but ultimately never really caught on. And there have been quite a few dead ends, with whole families of languages never really getting a lot of traction, only for later languages to embrace bits and pieces of them. The influence of other languages on JavaScript, for example, is fairly interesting.

nequo · 2 years ago
> Not all of those things work well in the real world. E.g. logic programming (Prolog) is cool but ultimately never really caught on.

It does have its niches though. For example, there is a trait solver for Rust called Chalk that uses a Prolog-inspired language because trait bounds basically define a logic:

https://github.com/rust-lang/chalk

SleepyMyroslav · 2 years ago
"suddenly" HN is discussing Carmack 2012 papers that were not results but wishes. People started trying to control side effects have almost nothing to do with FP. It is like adding epsilon to zero and calling it moving in positive direction. It would be nice if people updated their articles and said sorry it did not worked out.
noloblo · 2 years ago
Erlang is Prolog-inspired.
noloblo · 2 years ago
In a sense, Erlang is the most used dialect of Prolog in industry.
0xB31B1B · 2 years ago
IMO, there are a few things going on here: 1) Iteration patterns. It's much simpler, easier, and less bug-prone to reason about programs that use map, reduce, and foreach to iterate over collections than it is to reason about programs iterating over collections with for loops (see the sketch after this list). This makes it easier to test and maintain programs over time with minimal regressions.

2) Distributed computing. Our general frameworks and performance reasoning have led us to a place where local state is generally not used in the kinds of places it used to be. Things (like query results, etc.) are cached in distributed data stores, not in heap memory. This leads to a more functional style.

3) People feel burned by old-school J2EE-style enterprise object-oriented programming and do not want to create or maintain "XXXFactoryWhatever" classes or have deep pre-instantiated object pools to do a simple thing. FP is a way to avoid those design patterns.
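
As a small illustration of point 1 (a Haskell sketch; the function name is ours):

    -- The pipeline names what happens to the collection; there is no index
    -- variable or mutable accumulator to get wrong.
    sumOfEvenSquares :: [Int] -> Int
    sumOfEvenSquares = foldr (+) 0 . map (^ 2) . filter even

    main :: IO ()
    main = print (sumOfEvenSquares [1 .. 10])   -- 220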

__rito__ · 2 years ago
I can tell you about my experience using JAX, a functional deep learning framework.

My work is full-time PyTorch. I use tflite for hobby work in Edge AI.

I really like using PyTorch and the experience is far superior to using TF.

But JAX is in another league altogether. A purely functional approach to deep learning makes so much sense; I personally find this approach much better and it gives me more peace of mind.

When working with large datasets and many kinds of transformations, and passing those through functions thousands of times, one can't help appreciating the whole functional framework. It is so much simpler and more effective. And I don't need to tell you about the advantages for parallel processing.

And it was not with Haskell or an ML-family language that I had my functional enlightenment, but with JAX.

In my limited exposure, many of the Haskell examples and use cases I found are related to some form of text processing, an area in which I am least interested. (Please be kind if I am saying something inaccurate.)

So JAX is not only a good approach to deep learning; it also finally made me appreciate the functional paradigm.

When getting started, I also wrote a notebook [0] with a beginners' guide to JAX. Head over there if you are interested; it was written after my first exposure to JAX.

[0]: https://www.kaggle.com/code/truthr/jax-0

snovv_crash · 2 years ago
State makes things hard to manage. The more state there is in a program, the more things there are that can go wrong. If you can minimize state and express as much as possible in pure functions, then it's also much easier to write tests with well-defined inputs and outputs, and if anything causes problems it's easy to see which section caused it, because all state originates from a single source.

Thus, a stateless design style is easier to test, and even high-level tests are more useful because of their high specificity when analysing test failures.
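
A small sketch of why that helps testing (assuming the QuickCheck library; the clampTo function and the property are illustrative): a pure function is fully pinned down by its inputs and outputs, so a property over them is the whole test, with no state to set up or tear down.

    -- Sketch (assuming QuickCheck): no mocks or fixtures, just a property
    -- relating inputs to outputs of a pure function.
    import Test.QuickCheck (quickCheck)
    import Data.List (sort)

    clampTo :: Int -> Int -> Int -> Int
    clampTo lo hi = max lo . min hi

    prop_clampInRange :: Int -> Int -> Int -> Bool
    prop_clampInRange a b x =
      let [lo, hi] = sort [a, b]
          y = clampTo lo hi x
      in lo <= y && y <= hi

    main :: IO ()
    main = quickCheck prop_clampInRange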

sweezyjeezy · 2 years ago
100% pure functions are the way to go where possible. Writing simple code on state is already hard; asking the developer to invent their own abstractions to tie methods and state together is often asking for disaster when the specification changes and the abstraction no longer holds.
__rito__ · 2 years ago
^ This.

Even when I didn't know what functional programming was, or even that there was such a thing as a "programming paradigm", writing programs with a lot of functions made intuitive sense to me.

This approach was much simpler.

necovek · 2 years ago
Seeing how you are only referring to the most basic of functional programming tenets (stateless functions): it's through a lot of experience with complex, heavily abstracted systems that plenty of software engineers have been bitten by silly bugs resulting purely from the inability of different people to sanely follow otherwise complex code.

The move to natively distributed architectures (e.g. just running two single-threaded app instances in the cloud gets you there) has organically driven that complexity up by a lot.

After you realize that it's perfectly possible to build complex and performant systems out of purely stateless components, things like DDD and functional programming suddenly start to make a lot more sense.

Oh, and 2-hour test suites, which are incompatible with a TDD approach ;)

masklinn · 2 years ago
And the move to concurrent systems, which we're never stepping back from.

In concurrent systems, mutable state is pain; you want to limit it.

When most languages make mutability ubiquitous, it becomes very hard to figure out that the issue is that five hours earlier a mutable array got unwittingly shared between two threads when it was sent as a message through a (thread-safe!) queue, and now they're stepping on each other's toes whenever the stars align, and you end up with fishmen everywhere you go.
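
By contrast, a minimal Haskell sketch (using Chan from base's Control.Concurrent.Chan): the message sent through the channel is an immutable value, so the receiver can never be bitten by later changes on the sender's side, because there are none.

    -- Sketch: messages passed over the channel are immutable values, so two
    -- threads can never end up secretly sharing and mutating the same array.
    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, writeChan, readChan)

    main :: IO ()
    main = do
      chan <- newChan
      _ <- forkIO $ writeChan chan [1, 2, 3 :: Int]   -- the sender's list cannot change later
      msg <- readChan chan
      print msg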

austin-cheney · 2 years ago
Functional programming is not new.

OOP still exists only because of institutionalization. Outside of embedded systems there is no other good reason to continue OOP for new applications at this point.

OOP was introduced with Simula 67 in 1967 but was popularized with C++ in the 1980s. The primary benefit of OOP was that it let developers scale applications via inheritance, which conserves memory: you instantiate an object in memory once and extend it as necessary on child instances without duplicating that memory. In the early 1980s, memory was about $1700 per megabyte and most computers were operating with 128 KB of total memory.

Most modern languages use garbage collection: Java, JavaScript, Go, C#, and many more. In a garbage-collected environment you, the programmer, are largely removed from memory management. Modern languages also prioritize speed over memory conservation because modern computers have tremendous amounts of available memory.

OOP increases complexity compared to functional programming. Complex here means many parts, not difficult, and that is objectively measurable by counting those parts.

The reason OOP remains popular is that it is taught by universities, and most programmers are produced by universities. Universities continue to teach OOP primarily because of a broken feedback loop: they produce developers primarily reliant upon OOP because that's what industry says it needs, and industry says it needs that because that's what's available from the universities.

There is no motivation for universities to prioritize other programming paradigms when their stature is determined by ratings based upon industry demand. Ratings, and thus rankings, determine how much money a university can generate. Although most universities are nonprofit organizations, revenue remains a primary consideration in all activities, both athletic and academic.

It is nearly impossible to impose functional considerations as first principles in the workplace. In open source or personal projects, the owner of the risk is the person writing the code. At work it's all about hiring flexibility and the least common denominator.

codewiz · 2 years ago
> Outside of embedded systems there is no other good reason to continue OOP

I work on embedded systems, and I see few good reasons to use OOP in this domain, at least not the classic version with runtime polymorphism.

In the codebase I'm currently working on, someone thought it would be a good idea to abstract the hardware behind a HardwareInterface, so you could have a concrete implementation and a mock for testing. And now every access to a GPIO pin goes through a virtual call that can't be inlined, even though the production firmware contains only the concrete Hardware class.

A better design would be compile-time polymorphism with templates, or even separate header files for each implementation, which is how the Linux kernel handles portability across a dozen CPU architectures.

colinjoy · 2 years ago
As someone who dabbles in code without too much of an idea of what he is doing, using pure functions whenever possible helps me reduce cognitive complexity and makes the code more easily testable.
xupybd · 2 years ago
I'd say this answer is the most succinct and accurate here.