denismenace · 3 months ago
I think that, step by step, functional programming will in the end be widely adopted. Concepts that have a strong theory behind them tend to last longer. Similar to how relational databases and SQL are still the gold standard today.
roenxi · 3 months ago
> Concepts that have a strong theory behind them tend to last longer.

The programming paradigms (functional, imperative, declarative) as implemented generally don't have strong theories behind them. For any major language apart from SQL, we have no sharp line to detect which of those categories it sits in. One can, for example, write something that is recognised as functional in an imperative language. And even the best example of purity, SQL, in practice needs to be hosted in an imperative or functional environment to get useful results; the declarative part of the language isn't enough on its own to get complex, performance-friendly results.

One of the interesting things about being a member of the functional programming community is that I genuinely can't tell what the claim to fame of the community actually is, beyond a sort of club-like atmosphere where people don't like mutability. Are we claiming that C++ programmers don't know how to write pure functions? I hope not; they clearly know. Many are good at it.

ux266478 · 2 months ago
I see this misconception a lot. Functional programming has nothing to do with mutability or immutability. Just like product types (or associative arrays) don't make for object oriented programming, the paradigm is about the patterns you use and the total structure of the program. In the case of functional programming, it's that your entire program is built around function composition and folds. This changes how you build your program on a fundamental level.
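As a minimal sketch of that last point, here is how familiar list operations reduce to folds in Haskell (the function names are my own, chosen for illustration):

```haskell
-- sum and map are both just specializations of a right fold:
sumViaFold :: [Int] -> Int
sumViaFold = foldr (+) 0

mapViaFold :: (a -> b) -> [a] -> [b]
mapViaFold f = foldr (\x acc -> f x : acc) []
```

Once you see most of a program as compositions of functions like these, the structure of the whole program changes, which is the point being made above.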

Additionally, functional programming does have a strong theory behind it: that of the lambda calculus. Everything is formalization and constructions built on top of that. We don't have a "hard line" to detect whether a language is "functional" because it's not a language paradigm but a property of the code itself. Grammar is orthogonal to this stuff. You can, however, make a pretty good guess by looking at the standard library of a language. There is no ambiguity that Idris or Haskell are languages designed for functional programming, for example.

I think this stems from the fact that there are a lot of programmers who, through no fault of their own, have engaged with tools for functional programming but skipped building things with the alien and weird stuff, so in the end there's no actual learning of functional programming itself. Every new programming paradigm is a shift of your core perspective; they require you to relearn programming from the ground up. It shouldn't feel like "this is just ABC with XYZ". You are only truly beginning to learn when you are put back in your shoes as a middle schooler unlocking the power of nested conditionals. If you don't get that feeling, you're just learning a different language, not a programming paradigm.

> And even the best example of purity, SQL

Purity is not a claim of functional programming. Also, SQL is not anywhere close to being a functional language: not only is it not a general-purpose programming language, but even if it were (and with extensions it can be), its paradigm is far more in line with logic programming.

implicit · 2 months ago
I agree. It's happening.

Lambdas are now a requirement for any modern programming language. Monadic promises are how asynchronous programming is done. Rust is basically OCaml except with typeclasses and without a GC.
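A toy illustration of the "monadic promises" point, using only Haskell's Maybe monad: each step runs only if the previous one succeeded, exactly the shape of promise chaining (the `safeDiv` helper is hypothetical):

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Short-circuits to Nothing at the division by zero,
-- much like a rejected promise skips later handlers.
chain :: Maybe Int
chain = safeDiv 10 2 >>= \a -> safeDiv a 0 >>= \b -> safeDiv b 1
```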

Inch by inch, ideas from FP bubble up into mainstream programming.

I predict that algebraic effects (a la Koka) are next, but it'll be a while before the industry is ready for it.

layer8 · 2 months ago
To play the devil's advocate: From an OOP perspective, lambdas are just syntactic sugar for a special form of objects. Regarding monads, we had smart pointers and the like in C++ already in the 1990s. Both concepts are orthogonal to mutability.

I'm convinced that imperative programming will endure alongside functional code, because being stateful is how the real world works, and most programs have to repeatedly interact with the real world over the course of their execution.

VeejayRampay · 2 months ago
python will be the last man standing with basically no functional goodies

people will keep on trucking with their "pythonic" for loops containing appends to a list that was initialized right before, their disgust for recursion, the absence of proper lambdas, the lack of monadic return types or containers

cubefox · 3 months ago
That seems improbable. Pure functional languages are very unpopular according to TIOBE. In particular, interest in Haskell seems to be in decline. Functional features are mostly just an add-on to programming languages with mutable state. By far the most popular language, Python, doesn't even have static typing.
Fraterkes · 3 months ago
I know powerful typing features are very important for Haskell in particular, but is static typing considered a functional feature more broadly? Most lisps don't have static typing as far as I know. Clojure was one of the main functional languages that actually saw industry use for a while, and it is dynamically typed.
ahf8Aithaex7Nai · 3 months ago
That's already been happening for quite some time. Mainstream programming has done little else in recent years than converge toward functional programming.

Also, they wrote "functional," not "purely functional."

By the way, from a formal perspective, imperative programming is an extension of purely functional programming.

HelloNurse · 2 months ago
Haskell is suitable for, and designed for, bleeding edge experiments, not for practical usage. Its low popularity says very little about the "market penetration" of better engineered functional languages.
bheadmaster · 3 months ago
> By far the most popular language, Python, doesn't even have static typing.

Arguably, Python is statically typed - it's just that it only has one type: object.


motorest · 2 months ago
> Similar to how relational databases and SQL are still the gold standard today.

...except they aren't. The world is gradually slipping towards nosql databases and stepping away from normalized data stores, to the point where tried and true RDBMS are shoehorning support for storing and querying JSON documents.

All flagship database services from major cloud providers are nosql databases.

Perhaps you're confusing pervasiveness with being the gold standard. Yes, RDBMS are pervasive, but so are non-compliance with the standard and vendor-specific constructs. The biggest factor was not the theoretical background behind each implementation but ease of use and ease of adoption. SQLite is an outstanding database not because of its compliance with SQL standards and adherence to theoretical purity, but because it's FLOSS and trivial to set up and use.

I expect the same factors to be relevant in the adoption of functional programming. This stuff has existed for decades on end, but mass adoption was somewhere between negligible and irrelevant. What started to drive adoption was the arrival of modules and components that solved real-world problems and helped developers do what they wanted to do with less effort. In JavaScript, the use of map, filter and reduce is not driven by any new-found passion for functional programming but because it's right there, and it's concise and simple and readable and capable. In C#, LINQ is gaining traction for the same reasons but is largely held back by performance issues.

The key aspect is pragmatism; theoretical elegance is a mere nice-to-have by comparison.

efitz · 2 months ago
> ..except they aren't. The world is gradually slipping towards nosql databases and stepping away from normalized data stores

The 2010s called and want their database paradigm back.

NoSQL is definitely not the zeitgeist today. Graph DBs, KV stores, document DBs, and OLTP DBs are what is popular now. I would say the two shifts I see in the 2020s are the rise of cache-as-database (e.g. Redis) and an "all of the above" mentality rather than one-size-fits-all.

jghn · 2 months ago
> The world is gradually slipping towards nosql databases

My observation has been that this pendulum is swinging back the other direction. You're correct that some NoSQL-isms found their way into RDBMS, but at least in the circles in which I travel, people are by and large heading back to RDBMS stores.

taffer · 2 months ago
> to the point where tried and true RDBMS are shoehorning support for storing and querying JSON documents

That was in 2012. Nowadays, NoSQL companies are shoehorning SQL into their products.

> All flagship database services from major cloud providers are nosql databases.

AWS Aurora, Google's Spanner and BigQuery, and Snowflake are all SQL DBs.

randomNumber7 · 3 months ago
Why is type theory always tied to functional programming? Wouldn't it be possible to use some of the ideas in imperative languages?
athrowaway3z · 3 months ago
FP has a small set of primitives that encode everything you can computationally do. One of the first things you'd learn is how to use those primitives to 'emulate' anything an imperative language does.

Most definitions of imperative include "impure editing of global state", which is just the equivalent of passing a state object implicitly to every function. That is merely extra verbosity for whatever fundamental type-theory point you're trying to make, and it would not affect the argument in terms of those fundamental primitives.
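That implicit state object can be made explicit in a few lines of Haskell. A sketch (the counter and its names are hypothetical): each "mutation" becomes a pure function taking the old state and returning the result paired with the new state.

```haskell
-- An imperative counter as a pure state-passing function:
tick :: Int -> (Int, Int)   -- returns (result, new state)
tick s = (s, s + 1)

-- "Two mutations in sequence" is just threading the state by hand:
twoTicks :: Int -> ((Int, Int), Int)
twoTicks s0 =
  let (a, s1) = tick s0
      (b, s2) = tick s1
  in ((a, b), s2)
```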

pjmlp · 3 months ago
Not necessarily; in our compiler design lectures we also covered type systems for imperative languages.

The Eiffel reference is one of the few manuals to fully specify the type system with denotational semantics, for example.

Here is an example of type theory in OOP languages,

https://www.sciencedirect.com/science/article/abs/pii/009605...

zelphirkalt · 3 months ago
I am guessing that it is because imperative processes are very hard to describe properly using type theory. For example: if you have an object with some members that are initially not set, that is a different type than when they are set. So you cannot say that the object has exactly one type; after every setting of a member, you have a new type. That's not something you can easily work with statically. Mutation destroys the type guarantees. The type system can only operate on a more limited, rougher level and doesn't give as many guarantees, unless you restrict what you do to your objects: not mutating them, for example, or not having nullable members. Things like that.

So I think FP is more suitable for type theory research and pondering, and people researching the matter will not even consider doing it for imperative languages, because they understood long ago that these languages lack the guarantees you can get with FP, and are therefore not worth spending the time on.
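For what it's worth, the "new type after every set" idea can be made precise in a typed FP language. A sketch using a phantom type parameter (all names here are hypothetical), where setting the member really does produce a value of a different type:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

data Status = Unset | Set

-- The phantom parameter records whether the host has been set yet.
newtype Conn (s :: Status) = Conn String

emptyConn :: Conn 'Unset
emptyConn = Conn ""

setHost :: String -> Conn 'Unset -> Conn 'Set
setHost h _ = Conn h

-- Only a fully initialized connection is accepted here;
-- passing emptyConn directly is a compile-time type error.
connect :: Conn 'Set -> String
connect (Conn h) = "connected to " ++ h
```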

brabel · 3 months ago
That’s just not true. Rust has a wonderful type system and it can encode mutability, which is a must for anything performance oriented. That is a benefit that FP misses out on. You can still analyze Rust programs with a great deal of assurances. The fact that it’s procedural seems to not matter much at all. An example of a language that allows mutability and has a research-level type system is Flix.


js8 · 3 months ago
I think it comes down historically to Church's https://en.m.wikipedia.org/wiki/Simply_typed_lambda_calculus . It originated the most popular form of type theory and also used the simplest functional programming language, lambda calculus.
lou1306 · 3 months ago
I recently started to look at this the other way around. A functional paradigm allows you to describe very precisely what a function does through its type. In imperative languages, OTOH, the type signature of a function (which really should be called a procedure) gives you only a little information about what happens when you call it, due to mutable state, side effects, etc.
4ad · 3 months ago
It's very simple: it's because pure, typed functional programming is not arbitrary but fundamental. Natural deduction from logic corresponds to various typed lambda calculi, and functional programming is but a practical manifestation of the lambda calculus.

Under Curry-Howard correspondence simply typed lambda calculus is the term calculus for intuitionistic propositional logic. System F (polymorphic lambda calculus) corresponds to impredicative second-order propositional logic. System Fω corresponds to a kind of higher-order logic. Dependent types correspond to intuitionistic predicate logic, etc.
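As a tiny concrete instance of the correspondence: writing a total term of a given type is a proof of the corresponding proposition. In Haskell notation (setting aside nontermination), this term proves that implication is transitive in intuitionistic propositional logic:

```haskell
-- A term of type (A -> B) -> (B -> C) -> (A -> C),
-- i.e. a proof of hypothetical syllogism.
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f
```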

Other correspondences, based on sequent calculus instead of natural deduction, are more exotic. For example, classical logic corresponds to the μ~μ-calculus, a calculus of continuations which (very) roughly can be understood as continuation-passing style (but in a principled and careful way). Classical linear logic corresponds to a form of session-typed process calculus. Intuitionistic linear logic corresponds to either a process calculus or to a lambda calculus that uses futures (which can be thought of as mutable shared-memory concurrency using write-once cells).

Note however that languages corresponding to sequent calculus, especially ones that come from a dual calculus (classical logic or classical linear logic) contain some sort of commands, choices that you request from a value, which more or less makes them object-oriented languages, albeit without imperative, mutable assignment. In some sense you can escape functional programming by moving to a dual calculus, but you can't escape purity as long as you care about having propositions as types.

From a Curry-Howard point of view no logic corresponds to a general imperative calculus. Imperative programming is simply not fundamental and generally undesirable when doing logic (so when doing type theory). Mutable state with imperative updates can easily be encoded into FP when needed, e.g. via monads, by using linear types, or by having algebraic effects.
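A sketch of the monadic encoding mentioned here, assuming the standard `Control.Monad.State` from the transformers/mtl packages (the counter itself is a made-up example):

```haskell
import Control.Monad.State

-- Looks imperative, but is a pure function of the initial state:
tick :: State Int Int
tick = do
  n <- get        -- read the "mutable" counter
  put (n + 1)     -- "update" it
  return n

-- Running two ticks from 0 yields (0, 1) with no actual mutation.
runTwoTicks :: (Int, Int)
runTwoTicks = evalState ((,) <$> tick <*> tick) 0
```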

That doesn't mean that types are not useful for imperative languages; of course they are. But types in imperative programming are very weak and logically not very interesting, however useful they might be for engineering purposes. Also note that type theory does not mean type system. Many languages have type systems, some more ad hoc than others, but type theories are special, very specific mathematical objects that embody logic (under the Curry-Howard correspondence). All programs written in a type theory terminate, and this is fundamental. Usual programs, which are not concerned with mathematical proofs, certainly don't always terminate.

Of course understanding type theory is a very good way of producing (weaker) type systems that are useful in practical programming, including imperative programming (see for example Rust, which does not employ an ad-hoc type system). Occasionally new logic correspondences are discovered which illuminate certain language features of existing languages. For example Rust's borrowing system was thought to be ad hoc, but now we understand that shared borrows correspond to a logic that arises from semi-axiomatic sequent calculus. The cuts that remain after evaluation (normalization), called snips, are precisely shared borrows, while general cut is memory allocation.

The book in the link is a book about Martin-Löf type theory, which means it is a book about a certain kind of lambda calculus by necessity, there is no other choice.

zozbot234 · 2 months ago
> From a Curry-Howard point of view no logic corresponds to a general imperative calculus.

From a CH point of view the logic associated with any Turing-complete language is inconsistent, but this applies to both imperative and functional languages. One could imagine a broadly imperative language without a fully-general "while" construct that could have a useful logical counterpart under CH.

This might be similar to how systems like combinatory logic can in some sense be considered "imperative" languages, since they explicitly act on some kind of logical "state" which works much like state in imperative languages; the only rule of inference being "do A then B then C".

claude-ai · 3 months ago
Type theory absolutely can enhance imperative languages! In fact, we're seeing this happen:

Rust is the prime example - it uses affine types (linear logic) to track ownership and borrowing in imperative code. The type system prevents memory safety bugs at compile time without garbage collection.

C++ concepts (C++20) bring type-level constraints to template metaprogramming. You can express "this function works for any type T that satisfies these constraints."

Refinement types in languages like Dafny let you encode invariants directly in the type system for imperative code: int{x | x > 0} for positive integers.
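In languages without refinement types, a common (weaker) stand-in is a smart constructor that checks the invariant once at the module boundary. A Haskell sketch (names hypothetical):

```haskell
newtype Positive = Positive Int deriving (Show, Eq)

-- If the Positive constructor is not exported, this check is the
-- only way to build a Positive, so any value of the type holds x > 0.
mkPositive :: Int -> Maybe Positive
mkPositive x
  | x > 0     = Just (Positive x)
  | otherwise = Nothing
```

Unlike a true refinement type, the guarantee here is enforced at runtime at construction, not proven at compile time.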

The challenge isn't technical compatibility - it's that imperative programming often emphasizes mutation and side effects, while type theory shines at reasoning about pure transformations. But when you can encode the "shape" of your mutations in types (like Rust's ownership), you get incredible safety guarantees.

The real question might be: why don't more imperative languages adopt these ideas? Legacy compatibility and learning curves are probably the main barriers.

vermilingua · 2 months ago
I don’t know what’s worse, the idea that AI agents are participating in the community (slightly more directly than a commenter copy-pasting the output of a prompt), or that someone may want to cosplay as one.


lugu · 3 months ago
Is it good material for diving into this topic today?
quchen · 3 months ago
TTOP is the standard work on the topic. Some parts I would say fell out of fashion (using Miranda for example), but many parts are either timeless or still just as relevant.

That said, the book is very dense; for me it was just too much the first time I tried to read it. Coming back to it after a while, it got much easier, because by then you know which parts you really need (a common pattern, for me at least, with everything academic).

taolson · 2 months ago
Miranda should still be readable for those with some Haskell background (it was a major influence on developing the Haskell language). I also have a more modern update to Miranda called Admiran: https://github.com/taolson/Admiran
mogoh · 3 months ago
TTOP? Do you mean TTFP?
cjfd · 3 months ago
Sure, why not. It seems to be a pretty good exposition of the material. When I got interested in this stuff many years ago, I worked my way through the 'typing rules' in the Coq (nowadays Rocq) manual. That is a 'slightly' higher-friction way of learning this stuff. This document seems to be more pedagogical.
avodonosov · 3 months ago
Have you found this stuff useful in the many years since you learned it? Or do you mean you didn't master it enough to judge its usefulness?
ks2048 · 3 months ago
A related book (seems to be more often cited) is Types and Programming Languages by Benjamin C. Pierce. That seems to be more concrete (as opposed to theoretical), with chapter titles like "An ML Implementation of {some concept}".
4ad · 2 months ago
They are very different books. TAPL is a book about programming language semantics, TTAFP is a programmer-oriented book about Martin-Löf type theory.

There is very little overlap.

TAPL is definitely the book to pick up if you are interested in programming language semantics. But if you are interested in logic, dependent types, or the Curry-Howard correspondence, there are potentially better and more modern materials than TTAFP (not to say that TTAFP is bad). If you care about formalizing programs, Software Foundations is a much better resource, and if you care about mathematics and logic, there are resources specifically suited to that.

lo_zamoyski · 2 months ago
TAPL is a perfectly good book, but some in academia will tell you that it is somewhat dated.
YellowSuB · 2 months ago
Is this book available in print version anywhere? I am unable to find it except for a used hardcover.
lo_zamoyski · 2 months ago
It's out-of-print.


bedman12345 · 2 months ago
Man I hate functional programming so much. Please, to anyone reading this. Don’t waste your time on this stuff. They always try to claim functional code makes things safer and more elegant, but I have never seen that in reality. These ideas seem to produce terrible code first and foremost.
jghn · 2 months ago
Can you provide objective measures for "terrible code" and then examples of what you're describing?
bedman12345 · 2 months ago
It’s slow. Example:

  quicksort [] = []
  quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where lesser  = filter (< p) xs
          greater = filter (>= p) xs
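(For reference, that two-liner quicksort is a pedagogical example; the idiomatic way to sort in Haskell is the merge sort shipped in base:)

```haskell
import Data.List (sort)

-- sort from Data.List is a stable O(n log n) merge sort.
sorted :: [Int]
sorted = sort [3, 1, 2, 1]
```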

It’s really complicated. Everybody includes lenses (https://hackage.haskell.org/package/lens). I don’t want to read all that stuff just to get imperative syntax.

It’s both slow and hard to read. Compare these two:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

The C example’s complexity lies in its use of SIMD. If you know SIMD, it’s easy. You should know SIMD; it’s useful. The Haskell example is drivel: long, complicated, and in the end hopelessly slow compared to the C example. You could also use SIMD in Haskell, but then it would just be C with extra steps.