Readit News
iso8859-1 · 2 years ago
> We can generalise this idea of being forced to handle the failure cases by saying that Haskell makes us write total functions rather than partial functions.

Haskell doesn't prevent endless recursion. (try e.g. `main = main`)
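
A one-liner makes the point (hypothetical names, any non-productive definition will do): this program type-checks, so "total" in GHC's sense only means every case is covered, not that evaluation terminates.

```haskell
-- Well-typed, covers all cases, yet never terminates:
-- GHC's type system does not check totality.
loop :: Int -> Int
loop n = loop (n + 1)

main :: IO ()
main = print (loop 0)
```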

As the typed FP ecosystem is moving towards dependent typing (Agda, Idris, Lean), this becomes an issue, because you don't want the type checker to run indefinitely.

The many ad hoc extensions to Haskell (TypeFamilies, DataKinds) are weighing it down. Even the foundations might be a bit too ad hoc: I've seen the type class resolution algorithm compared to a bad implementation of Prolog.

That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell? It's not bleeding edge any more.

Haskell had the possibility of being a standardized language, but look at how few packages MicroHS compiles (Lennart admitted to this at ICFP '24[0]). So the standardization has failed. The ecosystem is built upon C. The Wasm backend can't use the Wasm GC because of how idiosyncratic GHC's RTS is.[1]

So what unique value proposition does GHC have left? Possibly the GHC runtime system, but it's not as sexy to pitch in a blog post like this.

[0]: Lennart Augustsson, MicroHS: https://www.youtube.com/watch?v=uMurx1a6Zck&t=36m

[1]: Cheng Shao, the Wasm backend for GHC: https://www.youtube.com/watch?v=uMurx1a6Zck&t=13290s

samvher · 2 years ago
For a long time already I've wanted to make the leap towards learning dependently typed programming, but I was never sure which language to invest in - they all seemed either very focused on just proofs (Coq, Lean) or just relatively far from Haskell in terms of maturity (Agda, Idris).

I went through Software Foundations [0] (Coq) which was fun and interesting but I can't say I ever really applied what I used there in software (I did get more comfortable with induction proofs).

You're mentioning Lean with Agda and Idris - is Lean usable as a general purpose language? I've been curious about Lean but I got the impression it sort of steps away from Haskell's legacy in terms of syntax and the like (unlike Agda and Idris) so was concerned it would be a large investment and wouldn't add much to what I've learned from Coq.

I'd love any insights on what's a useful way to learn more in the area of dependent types for a working engineer today.

[0] https://softwarefoundations.cis.upenn.edu/

bmitc · 2 years ago
When I last looked into Lean, I was highly unimpressed, even for doing math proofs. There's no way I'd invest in it as a general-purpose language.

Idris at least does state that they want people building real programs with it and don't want it to be just a research language.

For dependent types, I myself am skeptical about languages trying to continuously push more and more stuff into types. I am not certain that such efforts are a net positive for writing good software. By their very definition, the more typed a language gets, the fewer programs it can represent. That obviously rules out buggy programs, but it also rules out non-buggy programs that you could otherwise implement. Highly typed languages force more and more effort into the pre-compile phase, and you will often find yourself trying to fit a problem into the chains of the type system.
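
The trade-off is often illustrated with length-indexed vectors (a hypothetical sketch using GHC's DataKinds and GADTs, not any particular library): the type rules out `head` on an empty vector, but it also rejects some perfectly fine programs whose lengths can't be established statically.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total by construction: calling vhead on an empty vector is a
-- *type* error, so the partiality of Prelude's head is ruled out
-- at compile time -- but so is any call site where the compiler
-- can't prove the vector is non-empty.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```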

Rather, I think reasonably multi-paradigm languages like F# are the sweet spot. Just enough strict typing and functional core to get you going for most of your program, but then it allows classes and imperative programming when those paradigms are appropriate.

I think the way to go to write better software is better tooling and ergonomics. I don't think type systems are going to magically save us.

iso8859-1 · 2 years ago
Lean aims to be a general purpose language, but I haven't seen people actually write HTTP servers in it. If Leo de Moura really wanted it to be general purpose, what does the concurrent runtime look like then? To my knowledge, there isn't one?

That's why I've been writing an HTTP server in Idris2 instead. Here's a todo list demo app[1] and a hello world demo[2]. The advantage of Idris is that it compiles to e.g. Racket, a high level language with a concurrent runtime you can bind to from Idris.

It's also interesting how languages don't need their own hosting (e.g. Hackage) any more. Idris packages are just listed in a TOML file[3] (like Stackage) but still hosted on GitHub. No need for versions, just use git commit hashes. It's all experimental anyway.

[1]: https://janus.srht.site/docs/todolist.html [2]: https://git.sr.ht/~janus/web-server-racket-hello-world/tree/... [3]: https://github.com/stefan-hoeck/idris2-pack-db/blob/main/STA...

agentultra · 2 years ago
Lean can be used to write software [0]. I dare say that may even be the intended use for Lean 4. Work on porting mathlib to Lean 4 is far along, and the mathematicians using it will certainly continue to do so. However, there is more space for software written in Lean 4 as well.

However...

it's nowhere near ready for production use. They don't care about maintaining backwards compatibility. They are more focused on getting the language itself right than on helping people build and maintain software written in it. At least for the foreseeable future. If you do build things in it, you're working on shifting ground.

But it has a lot of potential. The C code generated by Lean 4 is good. Although, that's another trade-off: compiling to C is another source of "quirks."

[0] https://agentultra.github.io/lean-4-hackers/

pmarreck · 2 years ago
One reason I took interest in Idris (and lately Roc, although it's even less mature) is the promise of a functional but usable to solve problems today language with all the latest thinking on writing good code baked-in already, compiling to a single binary (something I always envied about Go, although unfortunately it's Go). There simply isn't a lot there yet in the space of "pure functional language with only immutable values and compile time type checking that builds a single fast binary (and has some neat developer-friendly features/ideas such as dependent types, Roc's "tags" or pattern-matching with destructuring)" (this rules out OCaml, for example, despite it being mature). You get a lot of that, but not all of it, with other options (OCaml, Elixir/Erlang, Haskell... but those 3 offer a far larger library of ready-to-import software at this point). Haskell did manage to teach everyone who cares about these things that managing side-effects and keeping track of "purity" is important.

But it's frankly still early-days and we're still far from nirvana; Rust is starting to show some warts (despite still being a massive improvement over C from a safety perspective), and people are looking around for what's next.

One barely-touched thing is that there are compiler optimizations made possible by pure functional/pure immutable languages (such as caching a guaranteed result of an operation where those guarantees simply can't be given elsewhere) that have simply been impossible until now. (Roc is trying to go there, from what I can tell, and I'm here for it! Presumably, Rust has already, as long as you stick with its functional constructs, which I hear is hard sometimes)

mebassett · 2 years ago
Lean definitely intends to be usable as a general-purpose language someday. But I think the bulk of the people involved are more focused on automated theorem proving. The Lean FRO [0] has funds to guide development of the language, and they are planning to carve out a niche for stuff that requires formal verification. I'd say in terms of general-purpose programming it fits into the category of being "relatively far from Haskell in terms of maturity".

[0] https://lean-fro.org/about/roadmap-y2/

saghm · 2 years ago
I took that course as well, and for me, the big takeaway wasn't that I specifically want to use Coq for anything practical, but the idea that you can actually do quite a lot with a non-Turing complete language. Realizing that constraints in a language can be an asset rather than a limitation is something that I think isn't as widely understood as it should be.

lemonwaterlime · 2 years ago
> why would you restrict yourself to Haskell? It's not bleeding edge any more.

I'm not using Haskell because it's bleeding edge.

I use it because it is advanced enough and practical enough. It's at a good balanced spot now to do practical things while tapping into some of the advances in programming language theory.

The compiler and the build system have gotten a lot more stable over the past several years. The libraries for most production-type activities have gotten a lot more mature.

And I get all of the above plus strong type safety and composability, which helps me maintain applications in a way that I find satisfactory. For someone who aims to be pragmatic with a hint of scholarliness, Haskell is great.

iso8859-1 · 2 years ago
> The compiler and the build system have gotten a lot more stable over the past several years.

GHC2021 promises backwards compatibility, but it includes ill-specified extensions like ScopedTypeVariables. TypeAbstractions were just added, and they do the same thing, but differently.[0] It hasn't even been decided yet which extensions are stable[1], yet GHC2021 still promises compatibility in future compiler versions. So either, you'll have GHC retain inferior semantics because of backwards compatibility, or multiple ways of doing the same thing.
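
To illustrate the overlap (a hypothetical sketch; the TypeAbstractions spelling is kept in a comment since it requires a recent GHC): both mechanisms exist to make the signature's `forall`-bound variable usable in the body, but they do it differently.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- With ScopedTypeVariables, the signature's forall brings `a` into
-- scope in the body, so the inner annotation refers to the same `a`.
asList :: forall a. a -> [a]
asList x = [x :: a]

-- With TypeAbstractions (recent GHCs), the binder is written
-- explicitly at the term level instead:
--
--   asList :: forall a. a -> [a]
--   asList @a x = [x :: a]
```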

GHC2024 goes even further and includes extensions that are even more unstable, like DataKinds.

Another sign of instability is the fact that GHC 9.4 is still the recommended[2] release even though there are three newer 'stable' GHCs. I don't know of other languages where the recommendation is so far behind! GHC 9.4.1 is from Aug 2022.

It was the same situation with Cabal, it took forever to move beyond Cabal 3.6 because the subsequent releases had bugs.[3]

[0]: https://serokell.io/blog/ghc-dependent-types-in-haskell-3 [1]: https://github.com/ghc-proposals/ghc-proposals/pull/669 [2]: https://github.com/haskell/ghcup-metadata/issues/220 [3]: https://github.com/haskell/ghcup-metadata/issues/40

HelloNurse · 2 years ago
> The libraries for most production-type activities have gotten a lot more mature.

Can you provide an example of a "mature" way to match a regular expression with Unicode character classes or download a file over HTTPS?

gtf21 · 2 years ago
> That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell?

In the essay, I didn't say "Haskell is the only thing you should use", what I said was:

> Many languages have bits of these features, but only a few have all of them, and, of those languages (others include Idris, Agda, and Lean), Haskell is the most mature, and therefore has the largest ecosystem.

On this:

> It's not bleeding edge any more.

"Bleeding edge" is certainly not something I've used as a benefit in this essay, so not really sure where this comes from (unless you're not actually responding to the linked essay itself, but rather to ... something else?).

tkz1312 · 2 years ago
> So what unique value proposition does GHC have left? Possibly the GHC runtime system, but it's not as sexy to pitch in a blog post like this.

The point is that programming in a pure language with typed side effects and immutable data dramatically reduces the size of the state space that must be reasoned about. This makes programming significantly easier (especially over the long term).

Of the languages that support this programming style Haskell remains the one with the largest library ecosystem, most comprehensive documentation, and most optimised compiler. I love lean and use it professionally, but it is nowhere near the usability of Haskell when it comes to being a production ready general purpose language.

User23 · 2 years ago
When you start mathematically characterizing state spaces, it quickly becomes apparent that pure functional languages' advantage over imperative ones is more a matter of the poor design of popular imperative languages than an intrinsic difference.

That (in)famous goto paper isn't really about spaghetti code, it's about how on Earth do you mathematically define the semantics of any statement in a language with unrestricted goto. If any continuation can follow literally anything then you're pretty much in no man's land. On the other hand imperative code is easy and natural to reason about when it uses a small set of well defined primitives.

If that sounds surprising, consider how mathematical logic itself, especially obviously in the calculational proof style, is essentially a series of assignment statements.

dataflow · 2 years ago
> dramatically reduces the size of the state space that must be reasoned about

True

> This makes programming significantly easier (especially over the long term)

Not true. (As in, the implication is not true.)

There are many, many factors that affect ease of programming, and the structure of the state space is just one of them.

giraffe_lady · 2 years ago
> As the typed FP ecosystem is moving towards dependent typing (Agda, Idris, Lean)

I'm not really sure where the borders of "the typed FP language ecosystem" would be, but I feel pretty certain that such a thing would also enclose F#, Haskell, and OCaml. Any one of which has more users and more successful "public facing" projects than the languages you mentioned combined. This is not a dig on those languages, but they are niche languages even by the standards of the niche we're talking about.

You could argue that they point to the future but I don't seriously believe a trend among them represents a shift in the main stream of functional programming.

xupybd · 2 years ago
This is the first time that I've seen F# contrasted as the more mainstream option and it warms my heart.

mightybyte · 2 years ago
> That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell? It's not bleeding edge any more.

Because it has a robust and mature ecosystem that is more viable for mainstream commercial use than any of the other "bleeding edge" languages.

js8 · 2 years ago
> As the typed FP ecosystem is moving towards dependent typing (Agda, Idris, Lean), this becomes an issue, because you don't want the type checker to run indefinitely.

First of all, is the ecosystem actually moving to dependent types? I think the practical value of Hindley-Milner is exactly in the fact that there is a nice boundary between types and terms.

Second, why would type checking running indefinitely be a practical problem? If I can't prove a theorem, I can't use it. A program that doesn't type-check in a practical amount of time is in practice identical to a non-type-checked program, i.e. no worse than the status quo.

DonaldPShimoda · 2 years ago
No, the FP community at large is definitely not moving toward dependent types. However, much more of the FP research community is now focused on dependent types, but a good chunk of that research is concerned with questions like "How do we make X benefit of dependent types work in a more limited fashion for languages without a fully dependent type system?"

I think we'll continue to see lots of work in this direction and, subsequently, a lot of more mainstream FP languages will adopt features derived from dependent types research, but it's not like everybody's going to be writing Agda or Coq or Idris in 10 years instead of, like, OCaml and Haskell.

throwthrow5643 · 2 years ago
>Haskell doesn't prevent endless recursion. (try e.g. `main = main`)

Do you mean to say Haskell hasn't solved the halting problem yet?

xigoi · 2 years ago
There are languages that don’t permit non-terminating programs (at the cost of not being Turing-complete), such as Agda.

aSanchezStern · 2 years ago
Uhh, endless recursion doesn't cause your typechecker to run indefinitely; all recursion is sort of "endless" from a type perspective, since the recursion only hits a base case based on values. The problem with non-well-founded recursion like `main = main` is that it prevents you from soundly using types as propositions, since you can trivially inhabit any type.
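
A minimal Haskell sketch of that last point (hypothetical name): under general recursion, every type is inhabited, so "types as propositions" collapses.

```haskell
-- A well-typed term of *any* type, built by non-well-founded recursion.
-- Read as a proposition, `anything` would be a "proof" of anything at
-- all, which is why proof assistants insist on totality checking.
anything :: a
anything = anything
```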
remexre · 2 years ago
The infinite loop case is:

    loopType : Int -> Type
    loopType x = loopType x

    foo : List (loopType 3) -> Int
    foo _ = 42

    bar : List (loopType 4)
    bar = []

    baz : Int
    baz = foo bar

Determining if baz type-checks requires evaluating loopType 3 and loopType 4 to determine if they're equal.

kreyenborgi · 2 years ago
I've used Haskell for a decade or so, and tooling has improved immensely, with ghcup and cabal sandboxing and HLS now being quite stable. Maybe I've been lucky, but I haven't found much missing in the library ecosystem, or maybe I just have a lower threshold for using other languages when I see something is easier to do with a library from Python or whatever (typically nlp stuff). The one thing I still find annoying about Haskell is compile times. For the project itself, one can do fast compiles during development, but say you want to experiment with different GHC versions and base libraries, then you have to wait forever for your whole set of dependencies to compile (or buy some harddrives to store /nix on if you go that route). And installing some random Haskell program from source also becomes a drag due to dependency compile times (I'm always happy when I see a short dependency tree). Still, when deps are all compiled, it really is a fun language to program in.
transpute · 2 years ago
> ghcup and cabal sandboxing

Would you recommend using cabal or stack to package Haskell components in a Yocto layer, for sandboxed, reproducible, cross-compiled builds that are independent of host toolchains?

runeks · a year ago
My advice: use Stack if you're new to Haskell, otherwise cabal. Stack has better UX but isn't as powerful as cabal.

louthy · 2 years ago
I love Haskell the language, but Haskell the ecosystem still has a way to go:

* The compiler is slower than most mainstream language compilers

* Its ability to effectively report errors is poorer

* It tends to have 'first error, breaks rest of compile' problems

* I don't mind the more verbose 'stack trace' of errors, but I know juniors/noobs can find that quite overwhelming.

* The tooling, although significantly better than it was, is still poor compared to some other functional languages, and really poor compared to mainstream languages like C#

* This ^ significantly steepens the learning curve for juniors and those new to Haskell and generally gets in the way for those more experienced.

* The library ecosystem for key capabilities in 'enterprise dev' is poor. Many unmaintained, substandard, or incomplete implementations. Often trying their best to be academically interesting, but not necessarily usable.

The library ecosystem is probably the biggest issue. Because it's not something you can easily overcome without a lot of effort.

I used to be very bullish on Haskell and brought it into my company for a greenfield project. The company had already been using pure-FP techniques (functors, monads, etc.), so it wasn't a stretch. We ran a weekly book club studying Haskell to help out the juniors and newbies. So, we really gave it its best chance.

After a year of running a team with it, I came to the conclusions above. Everything was much slower -- I kept telling myself that the code would be less brittle, so slower was OK -- but in reality it sapped momentum from the team.

I think Haskell's biggest contribution to the wider community is its ideas, which have influenced many other languages. I'm not sure it will ever have its moment in the sun unfortunately.

catgary · 2 years ago
I kind of agree that Haskell missed its window, and a big part of the problem is the academic-heavy ecosystem (everyone is doing great work, but there is a difference between academic and industrial code).

I’m personally quite interested in the Koka language. It has some novel ideas (functional-but-in-place algorithms, effect-handler-aware compilation, it uses reference counting rather than garbage collection) and is a Microsoft Research project. It’s starting to look more and more like an actual production-ready language. I can daydream about Microsoft throwing support behind it, along with some tooling to add some sort of Koka-Rust interoperability.

the_duke · 2 years ago
Koka is indeed incredibly cool, but:

It sees sporadic bursts of activity, probably when an intern is working on it, and otherwise remains mostly dormant. There is no package manager that could facilitate ecosystem growth. There is no effort to market and popularize it.

I believe it is fated to remain a research language indefinitely.

Deleted Comment

mhitza · 2 years ago
> * It tends to have 'first error, breaks rest of compile' problems

`-fdefer-type-errors` will report those errors as warnings and fail at runtime, which is good when writing/refactoring code. Even better the Haskell LSP does this out of the box.
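
For illustration, a minimal module (hypothetical) showing the behaviour: the ill-typed binding compiles with a warning, and the error surfaces only if that code path is actually evaluated.

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
module Main where

-- Ill-typed, but with -fdefer-type-errors GHC downgrades the type
-- error to a warning and inserts a runtime error in its place.
oops :: Int
oops = "this is not an Int"

main :: IO ()
main = putStrLn "oops is never forced, so this prints normally"
```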

> * The tooling, although significantly better than it was, is still poor compared to some other functional languages, and really poor compared to mainstream languages like C#

Which other functional programming languages do you think have better tooling? Experimenting lately with OCaml, feels like Haskell's tooling is more mature, though OCaml's LSP starts up faster, almost instantly.

louthy · 2 years ago
> Which other functional programming languages do you think have better tooling?

F#, Scala

> OCaml's LSP starts up faster

It was two years ago that I used Haskell last and the LSP was often crashing. But in general there were always lots of 'niggles' with all parts of the tool-chain that just killed developer flow.

As I state in a sibling comment, the tooling is on the right trajectory, it just isn't there yet. So, this isn't the main reason to not do Haskell.

lkmain · 2 years ago
I haven't yet felt the need for third party tooling in OCaml. OCaml has real abstractions, easily readable modules and one can keep the whole language in one's head.

Usually people do not use objects, and if they do, they don't create a tightly coupled object mess that can only be unraveled by an IDE.

Deleted Comment

mattpallissard · 2 years ago
> Experimenting lately with OCaml, feels like Haskell's tooling is more mature.

I feel like OCaml has been on a fast upward trajectory the past couple of years. Both in terms of language features and tooling. I expect the developer experience to surpass Haskell if it hasn't already.

I really like Merlin/ocaml-lsp. Sure, it doesn't have every LSP feature like a tool with a lot of eyes on it, such as clangd, but it handles nearly everything.

And yeah, dune is a little odd, but I haven't had any issues with it in a while. I even have some curve-ball projects that involve a fair amount of C/FFI work.

My only complaint with opam is how switches feel a little clunky. But still, I'll take it over pip, npm, all day.

I've been using OCaml for years now and compared to most other languages, the experience has been pretty pleasant.

innocentoldguy · 2 years ago
Elixir’s tooling is awesome, in my opinion.

moomin · 2 years ago
No lies detected. I love Haskell, but productivity is a function of the entire ecosystem, and it’s just not there compared to most mainstream languages.
pyrale · 2 years ago
Most of your comments boil down to two items:

- The Haskell ecosystem doesn't have the budget of languages like Java or C# to build its tooling.

- The Haskell ecosystem was innovative 20 years ago, but some newer languages like Rust or Elm have much better ergonomics due to learning from their forebears.

Yes, it's true. And it's true for almost any smaller language out there.

louthy · 2 years ago
If you boil down my comments, sure, you could say that. But that's why I didn't boil them down and used more words: ultimately, they don't say that.

The thread is "Why Haskell?", I'm offering a counterpoint based on experience. YMMV and that's fine.

troupo · 2 years ago
Counterpoint: Elixir. While it sits on top of industrial-grade Erlang VM, the language itself produced a huge ecosystem of pragmatic and useful tools and libraries.

RandomThoughts3 · 2 years ago
The Haskell community is also very opinionated when it comes to style, and some of the choices are not to everyone's taste. I'm mostly thinking of point-free style being seen as an ideal, and the liberal usage of operators.
ParetoOptimal · 2 years ago
Point-free and liberal use of operators are and have long been minority viewpoints in Haskell.

I say this as someone who prefers symbols, point free, and highly appreciates "notation as a tool of thought".
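
For readers unfamiliar with the term, a small illustrative pair (hypothetical names): the two definitions are equivalent; the point-free one simply omits the "point" (the named argument).

```haskell
-- Pointful: the argument `s` is named explicitly.
countWords :: String -> Int
countWords s = length (words s)

-- Point-free: the same function expressed as a composition,
-- with no named argument at all.
countWords' :: String -> Int
countWords' = length . words
```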

gtf21 · 2 years ago
> The library ecosystem is probably the biggest issue.

I'd love to know which things specifically you're thinking about. For what we've been building, the "integration" libraries for postgres, AWS, etc. have been fine for us, likewise HTTP libraries (e.g. Servant) have been great.

I haven't _yet_ encountered a library problem, so am just very curious.

crote · 2 years ago
A few years ago I tried to use Servant to make a CAS[0] implementation for an academic project.

One issue I ran into was that Servant had no proper way of overriding content negotiation: the CAS protocol specifies a "?format=json" / "?format=xml" parameter, but Servant's automatic content negotiation is baked deeply into its type system. I believe at the time I came across an ancient bug report which concluded that it was an "open research question" which would require "probably a complete rework".

Another issue was that Servant doesn't have proper integrated error handling. The library is designed around returning a 200 response, and provides a lot of tooling to make that easy and safe. However, I noticed that at the time its design essentially completely ignored failures! Your best option was basically a `Maybe SomeResponseType`, which in the `Nothing` case gave a 200 response with "{'status': 'error'}" content. There was a similar years-old bug report for this issue, which is quite worrying considering it's not exactly rocket science, and pretty much every single web developer is going to run into it.

All of this gave a feeling of a very rough and unfinished library, whose author was more concerned about writing a research paper than actually making useful software. Luckily those issues had no real-world implication for me, as I was only a student losing a few days on some minor project. But if I were to come across this during professional software development I'd be seriously pissed, and probably write off the entire ecosystem: if this is what I can expect from "great" libraries, what does the average ones look like - am I going to have to write every single trivial thing from scratch?

I really love the core language of Haskell, but after running into issues like these a few dozen times I unfortunately have trouble justifying using it to myself. Maybe Haskell will be great five or ten years from now, but in its current state I fear it is probably best to use something else.

[0]: https://en.wikipedia.org/wiki/Central_Authentication_Service

imoverclocked · 2 years ago
I tried building a couple small projects to get familiar with the language.

One project did a bunch of calculation based on geolocation and geometry. I needed to output graphs and after looking around, reached for gnuplot. Turns out, it’s a wrapper around a system call to launch gnuplot in a child process. There is no handle returned so you can never know when the plot is done. If you exit as soon as the call returns, you get to race gnuplot to the temp file that gets automatically cleaned up by your process. The only way to eliminate the race is by sleeping… so if you add more plots, make sure you increase your sleep time too. :-/

Another utility was a network oriented daemon. I needed to capture packets and then run commands based on them… so I reached for pcap. It uses old bindings (which is fine) and doesn’t expose the socket or any way to set options for the socket. Long story short, it never worked. I looked at the various other interfaces around pcap but there was always a significant deficiency of some kind for my use case.

Now, I’m not a seasoned Haskell programmer by any means and it’s possible I am just missing out on something fundamental. However, it really looks to me like someone did a quick hack that worked for a very specific use-case for both of these libraries.

The language is cool but I’ve definitely struggled with libraries.

louthy · 2 years ago
The project was a cloud agnostic platform-as-a-service for building healthcare applications. It needed graph-DBs, Postgres, all clouds, localisation, event-streams, UIs, etc. I won't list where the problems were, because I don't think it's helpful -- each project has its own needs, you may well be lucky where we were not. Certainly the project wasn't a standard enterprise app, it was much more complex, so we had some foundational things we needed that perhaps your average dev doesn't need. However, other ecosystems would usually have a decent off-the-shelf versions, because they're more mature/evolved.

You have to realise that none of the problems were insurmountable, I had a talented team who could overcome any of the issues, it just became like walking through treacle trying to get moving.

And yes, Servant was great, we used that also. Although we didn't get super far in testing its range.

chii · 2 years ago
Probably referring to something like Spring (for Java), which is a one-stop shop for everything, including things like integration with monitoring/analytics, rate-limiting, etc.
ants_everywhere · 2 years ago
I completely agree. I'm interested in making the Haskell tooling system better. I would welcome anyone with Haskell experience to let me know what you think would be the highest priority items here.

I'm also curious about the slowness of compilation and whether that's intrinsic to the design of GHC.

cptwunderlich · 2 years ago
The Haskell Language Server (LSP) always needs help: https://github.com/haskell/haskell-language-server/issues?q=...

As for GHC compile times... hard to say. The compiler does do a lot of things: type checking and inference for a complex type system, lots of optimizations, etc. I don't think it's just some bug or inefficient implementation, because resources have been poured into optimizations and still are. But there are certainly ways to improve speed. For single issues, check the bug tracker: https://gitlab.haskell.org/ghc/ghc/-/issues/?label_name%5B%5...

For the big picture, maybe ask on the Discourse[1] or the mailing list. If you want to contribute to the compiler, I can recommend asking for a GitLab account via the mailing list and introducing yourself and your interests. Start by picking easy tickets - GHC is a huge codebase, and it takes a while to get familiar.

Other than that, I'd say some of the tooling could use some IDE integration (e.g., VS Code plugins).

[1]: https://discourse.haskell.org/

drblue · 2 years ago
The highest priority is probably making real debugging tools. Right now, the only decent debugging tool is ghc-debug to connect to a live process, and doing anything over that connection is slow, slow, slow. ghc-debug was the only thing which was able to resolve a long standing thunk leak in one of my systems, and I know that unexplained thunk leak caused a previous startup I was at to throw away their Haskell code and rewrite it in Rust. In my case, it found the single place where I had said `Just $` instead of `Just $!` which I had missed the three times I had inspected the ~15k line program. ghc-debug still feels like a pre-alpha though, go compare it to VisualVM for what other languages have.
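
The `$` vs `$!` distinction mentioned above, in a minimal sketch (hypothetical names): the lazy version stores an unevaluated thunk inside the `Just`, which is exactly the kind of leak that stays invisible until a heap profiler or ghc-debug points at it.

```haskell
-- `$` is lazy application: the Just cell holds a thunk, retaining
-- whatever `sum [1 .. n]` closes over until something forces it.
leaky :: Int -> Maybe Int
leaky n = Just $ sum [1 .. n]

-- `$!` evaluates its argument to weak head normal form first,
-- so the Just holds a plain Int and nothing extra is retained.
tight :: Int -> Maybe Int
tight n = Just $! sum [1 .. n]
```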

Also, I have found very little use for the various runtime flags like `+RTS -p`. These flags aren't serious debugging tools; I couldn't find any way to even trigger them internally at runtime around a small section, which becomes a problem when it takes 10 minutes to load data from disk when the profiler is on.

The debugging situation with Haskell is really, really bad and it's enough that I try to steer people away from starting new projects with the language.

tome · 2 years ago
> I would welcome anyone with Haskell experience to let me know what you think would be the highest priority items here.

Simplifying cabal probably, though that's a system-level problem, not just a cabal codebase problem.

lemonwaterlime · 2 years ago
The Brittany code formatter needs a maintainer. The previous one had to step away. It has a unique approach to code formatting that the ormolu/fourmolu formatters don't. There's a lot about its philosophy and such in the docs.

I like it better than the ormolu family because it respects your placement of comments and just formats the code itself. But it isn’t maintained as of a few years ago.

https://hackage.haskell.org/package/brittany

tome · 2 years ago
> It tends to have 'first error, breaks rest of compile' problems

Sort of. It has a "failure at one stage prevents progress to the next stage" model, so a parse error means you won't type check (or indeed, continue parsing). See these proposals for some progress on the matter:

* https://github.com/haskellfoundation/tech-proposals/pull/63

* https://github.com/ghc-proposals/ghc-proposals/pull/333

louthy · 2 years ago
I understand why it happens, but it's an absolute killer for refactoring.

I didn't mention refactoring in my list because it may just be personal experience: my style of coding is to write fearlessly knowing that I will also refactor fearlessly. So less upfront thinking, more brute force writing (on instinct) & aggressive refactoring. I find I get my solution much faster and it ends up being more elegant.

Having a parse error or a type inference error in another module causing all other inference to fail kills the refactoring part of that process where there are syntax/semantic errors everywhere for a period of time whilst I fix them up.

It's good to see the issue acknowledged and hopefully resolved in the future.

Additionally, it would be good to see some proper refactoring tooling. Renaming, moving types/functions from one module to another, etc.

Vosporos · 2 years ago
If you are willing / able to report these pain points in detail to the Haskell Foundation, this is going to be valuable feedback that will help orient the investments in tooling in the near future.
adastra22 · 2 years ago
All bug reports are good. But is this not obvious? Do the Haskell developers not use other language ecosystems? This goes beyond “this edge case is difficult” and into “the whole tooling stack is infamously hard to work with.” I just assumed Haskell, like Emacs, attracted a certain kind of developer that embraced the warts.
louthy · 2 years ago
I think tooling is something that is clearly on a good trajectory. When I consider what the Haskell tooling was like when I first started using it, well, it was non-existent! (and Cabal didn't even understand what dependencies were, haha!)

So, it's much, much better than it was. It's still not comparable to mainstream languages, but it's going the right way. So, I wouldn't necessarily take that as the killer.

The biggest issue was the library ecosystem. We spent a not-small amount of time evaluating libraries, realising they were not up to scratch, trying to build our own, or interacting with the authors to understand their plans. When you're trying to get moving at the start of a project, this can be quite painful. It takes longer to get to an MVP. That's tough when there are eyes on its success or not.

Even though I'd been using Haskell for at least a decade before we embarked upon that path, I hadn't really ever built anything substantial. The greenfield project was a complex beast on a number of levels (which was one of the reasons I felt Haskell would excel, it would force us to be more robust with our architecture). But, we just couldn't find the libraries that were good enough.

My sense was there's a lot of academics writing libraries. I'm not implying that academics write poor code; just that their motivations aren't always aligned with what an industry dev might want. Usually this is around simplicity and ease-of-use. And, because quite a lot of libraries were either poorly documented or their intent was impenetrable, it would take longer to evaluate.

I think if the Haskell Foundation are going to do anything, then they should probably write down the top 50 needed packages in industry, and then put some funding/effort towards helping the authors of existing libraries to bring them up to scratch (or potentially, developing their own), perhaps even create a 'mainstream adoption style guide', that standardises the library surfaces -- there's far too much variability. It needs a keen eye on what your average industry dev needs though.

I realise there are plenty of companies using Haskell successfully, so this should only be one data point. But, it is a data point of someone who is a massive Haskell (language) fan.

Haskell has had a massive influence on me and how I write code. It's directly influenced a major open-source project I have developed [1]. But, unfortunately, I don't think I'll use it again for a pro project.

[1] https://github.com/louthy/language-ext

nh2 · 2 years ago
> The compiler is slower than most mainstream language compilers

Depends on which mainstream languages one compares with; there's always C++.

My project here has 50k lines Haskell, 10k C++, 50k lines TypeScript (code-only, not comments). Counting user CPU time (1 core, Xeon E5-1650 v3 3.50GHz):

    TypeScript 123 lines/s
    Haskell     33 lines/s
    C++          7 lines/s

Liquid_Fire · 2 years ago
Can you clarify what "7 lines/s" means? Surely you are not saying that your 10k lines of C++ take more than 23 minutes to compile on a single core? Is it 10k lines of template spaghetti?

For comparison, I just compiled a 25k line .cpp (probably upwards of 100k once you add all the headers it includes) from a large project, in 15s. Admittedly, on a slightly faster processor - let's call it 30s.

robocat · 2 years ago
It is a shame that the article almost completely ignores the issue of the tooling. I particularly find the attitude in the following paragraph offensively academically true:

  All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another. There is a computational equivalence between them. The main differences are instead in the expressiveness of the languages, the guardrails they give you, and their performance characteristics (although this is possibly more of a runtime/compiler implementation question).
I decided to have a go at learning the basics of Haskell, and the first error I got immediately fazed me because it reminded me of the unhelpful compilers of the 80s. I have bashed my head against different languages and poor tooling enough times to know I can learn, but I've also done it enough times that I am unwilling to masochistically force myself through that gauntlet unless I have a very good reason to do so. The "joy" of learning is absent with unfriendly tools.

The syntax summary in the article is really good. Short and clear.

samatman · 2 years ago
> All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another.

That stuck out to me as well, I said out loud "that is a very Haskell thing to say". It would be more accurate to say that Turing Completeness means that any programme you write in one language, may be run in another language by writing an emulator for the first programme's runtime, and executing the first programme in the second.

Because it is not "in fact" the case that a given developer can write a programme in language B just because that developer can write the program in language A. It isn't even "in principle" the case, computability and programming just aren't that closely related, it's like saying anything you can do with a chainsaw you can do with a pocketknife because they're both Sharp Complete.

I shook it off and enjoyed the rest of the article, though. Haskell will never be my jam but I like reading people sing the virtues of what they love.

gtf21 · 2 years ago
> It is a shame that the article almost completely ignores the issue of the tooling.

Mostly because, while I found the tooling occasionally difficult, I didn’t find Haskell particularly bad compared to other language ecosystems I’ve played with, with the exception of Rust, whose compiler errors are really good.

> The syntax summary in the article is really good

Thanks, I wasn’t so sure how to balance that bit.

devjab · 2 years ago
> compared to mainstream languages like C#

Out of curiosity does this also hold true for F#?

louthy · 2 years ago
F#’s tooling is worse than C# for sure, but it’s a big step-up from Haskell and has access to the .NET framework.

I listed C# because that’s the mainstream language I know the best, and arguably has best-in-class tooling.

Of course you have to be prepared to lose some of the creature comforts when using a more left-field language. But, you still need to be productive. The whole ecosystem has to be a net gain in productivity, or stability, or security, or maintainability — pick your poison depending on what matters to your situation.

I had hoped Haskell would pay dividends due to its purity, expressive type-system, battle tested-ness, etc. I expected us to be slower, just not as slow as it turned out.

Ultimately the trade off didn’t work for us.

Dead Comment

sesm · 2 years ago
Haskell is an experiment in having laziness at the language level. This experiment clearly shows that laziness at the language level is a bad idea. You can get all the benefits of laziness at the standard library level, as illustrated by Clojure and Twitter's Storm using it in production.

All the other FP stuff (union types, etc) existed before Haskell in non-lazy FP languages.

iainmerrick · 2 years ago
Right, I was surprised I had to scroll down here so far to see the first mention of laziness; it's the core feature of Haskell (copied from Miranda so researchers had a non-proprietary language to build their work on).

From everything I've read about people's experiences with using Haskell for large projects, it sounds like lazy evaluation unfortunately adds more problems than it removes.

agentultra · 2 years ago
There’s a strong case that laziness should be the default: https://m.youtube.com/watch?v=fSqE-HSh_NU

I’m not sure I’m experienced enough in PLT to have a strong opinion myself.

However, from experience, laziness has a lot of advantages both from a program construction and performance perspective.

whateveracct · 2 years ago
> This experiment clearly shows, that laziness on language level is a bad idea.

This is quite the claim. I know plenty of experienced and productive Haskellers who disagree with this (myself included)

kccqzy · 2 years ago
Laziness is but one mistake in Haskell. It should not prevent you from using other parts of the language that are wonderful. There's a reason Mu exists, which is to take Haskell and make it strict by default: there are plenty of good things about Haskell even if you consider laziness to be a mistake.

(Of course a small minority of people don't consider laziness as a mistake as it enables equational reasoning; let's not go there.)

tome · 2 years ago
Having used Mu I concluded that Haskell got function laziness correct. (Data type laziness is a different issue, but that can be solved by `StrictData`).
louthy · 2 years ago
I don't see laziness as a problem in Haskell, especially as you can opt out of laziness altogether, or partially. In practice I found that `StrictData` solves pretty much every issue.
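The `StrictData` opt-out mentioned in the two comments above can be sketched in a few lines (illustrative example, not from either commenter):

```haskell
{-# LANGUAGE StrictData #-}

-- With StrictData enabled, every field below behaves as if written
-- with a bang pattern (!Int), so constructing a Point forces both
-- coordinates instead of storing unevaluated thunks in the fields.
data Point = Point { px :: Int, py :: Int }
  deriving Show

main :: IO ()
main = print (Point (1 + 2) (3 + 4))  -- fields are evaluated on construction
```

This changes only the strictness of constructor fields; function application stays lazy, which matches the distinction tome draws between data-type laziness and function laziness.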
Coolbeanstoo · 2 years ago
I would like to use Haskell or another functional language professionally.

I try them out (OCaml, Haskell, Clojure, etc.) from time to time and think they're fairly interesting, but I struggle to figure out how to make bigger programs with them: I've never seen how you build up a codebase with the tools they provide, or had someone review the code I produce, and I've never had any luck with the jobs I've applied to.

On the flipside, I never had too much trouble figuring out how to make things with Go, as it has so little going on and because it was the first language I worked with professionally for an extended period. I think that also leads me to apply the same patterns because I know them, even if they don't really work in the world of functional languages.

Not sure what the point of this comment is, but I think I just want to experience the moment of mind-opening-ness that people talk about when it comes to working with these kinds of languages on a really good team.

bedman12345 · 2 years ago
I’ve been working with pure functional languages and custom lisp dialects professionally my whole tenure. You get a whole bag of problems for a very subjective upside. Teams fragment into those that know how to work with these fringe tools and those who don’t. The projects using them that I worked on all had trouble with getting/retaining people. They also all had performance issues and had bugs like all other software. You’re not missing out on anything.
zelphirkalt · 2 years ago
Many problems stem from people not being willing to learn another paradigm of computer programming. Of course teams will split if some people are not willing to learn, because then some will be unable to work on certain things, while others will be able to do so.

You mention performance. However, if we look at how many Python shops there are, this can hardly be a problem. I imagine ecosystems to be a much bigger issue than performance. Many implementations of functional languages have better performance than Python anyway.

There are many reasons why a company can have issues retaining people: a shitty uninteresting product, bad management, low wages, bad culture... Let's eliminate those and see whether they still have issues retaining devs. I suspect that an interesting tech stack could make people stay, because it is not so easy to find a job with such a tech stack.

However, many companies want easily replaceable cogs, which FP programmers are definitely not these days. So they would rather hire a low-skill, easily replaceable workforce than a highly skilled but more expensive one. They know they will not be able to retain the highly skilled, because they know their other factors are not in line for that.

MetaWhirledPeas · 2 years ago
> Teams fragment into those that know how to work with these fringe tools and those who don’t.

So the teams self-select to let you work with the people you want to work with? Tell me more!

rebeccaskinner · 2 years ago
I’ve been using Haskell professionally off and on, along with other languages, since 2008. Professional experience certainly will help you learn some patterns, but honestly my best advice for structuring programs is to not think too hard about it.

Use modules as your basic unit of abstraction. Don’t go out of your way to make their organization over-engineered, but each module should basically do one thing, and should define everything it needs to do that thing (types, classes, functions).

Use parametric polymorphism as much as you can, without making the code too hard to read. Prefer functions and records over type classes as much as possible. Type classes that only ever have a single instance, that don't have laws, or that are defined for unit data types are major code smells.

Don’t worry about avoiding IO, but as much as you can try to keep IO code separate from pure code. For example, if you need to read a value from the user, do some calculations, then print a message, it’s far better to factor the “do some calculations” part out into a pure function that takes the things you read in as arguments and returns a value to print. It’s really tempting to interleave logic with IO but you’ll save so much time, energy, and pain if you avoid this.
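The "keep IO separate from pure code" advice above can be sketched like this (function names are illustrative, not from the comment): all the logic lives in a pure, trivially testable function, and the IO shell only reads input and prints output.

```haskell
-- Pure core: no IO anywhere, easy to test and reason about.
classify :: Int -> String
classify n
  | n < 0     = "negative"
  | even n    = "even"
  | otherwise = "odd"

-- Impure shell: gather input, delegate to the pure function, print.
main :: IO ()
main = do
  let input = 42  -- stand-in for a value read from the user
  putStrLn (classify input)
```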

Essentially, keep things as simple as you can without getting belligerent about it. The type system will help you a lot with refactoring.

Start at the beginning. Write functions. When you see some piece of functionality that you need, use `undefined` to make a placeholder function. Then, go to your place holder and start implementing it. Use undefined to fill in bits that you need, and so on.
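The `undefined`-placeholder workflow described above looks roughly like this (hypothetical function names for illustration): stub every function you need so the whole program type-checks, then replace the stubs one at a time.

```haskell
import Text.Read (readMaybe)

-- Already implemented (this started life as `undefined` too).
parseConfig :: String -> Either String Int
parseConfig s = maybe (Left "not a number") Right (readMaybe s)

-- Still a placeholder: the program compiles, and the type signature
-- documents exactly what we have to fill in next.
validate :: Int -> Either String Int
validate = undefined

main :: IO ()
main = print (parseConfig "42")  -- exercises only the implemented part
```

Because Haskell is lazy, `undefined` only blows up if it is actually forced, so you can keep running the already-working parts of the program while the rest is stubbed out.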

Fancy types are neat but it’s easy to end up with a solution in search of a problem. Avoid them until you really have a concrete problem that they solve- then embrace them for that problem (and only that problem).

You’ll refactor a lot, and learn to have a better gut feeling for how to structure things, but that’s just the process of gaining experience. Leaning into the basics of FP (pure functions, composed together) will be the path of least resistance as you are getting there.

cosmic_quanta · 2 years ago
I have also initially struggled with structuring Haskell programs. Without knowing anything about what you want to do, here's my general approach:

1. Decide on an effect system

Remember, Haskell is pure, so any side-effect will be strictly explicit. What broad capabilities do you want? Usually, you need to access some program-level configuration (e.g. command-line options) and the ability to do IO (networking, reading/writing files, etc), so most people start with that.

https://tech.fpcomplete.com/blog/2017/06/readert-design-patt...

2. Encode your business logic in functions (purely if possible)

Your application does some processing of data. The details don't matter. Use pure functions as much as possible, and factor effectful computations (e.g. database accesses) out into their own functions.

3. Glue everything together in a monadic context

Once you have all your business logic, glue everything together in a context with your effect system (usually a monad stack using ReaderT). This is usually where concurrency comes in (e.g. launch 1 thread per request).
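A minimal sketch of the ReaderT pattern from the linked post, using only the mtl library that ships with GHC (the `Env` field here is made up for illustration):

```haskell
import Control.Monad.Reader

-- Program-level configuration, threaded implicitly through the app.
data Env = Env { envGreeting :: String }

-- The application monad: a ReaderT over IO.
type App a = ReaderT Env IO a

-- Business logic can ask for configuration and perform IO explicitly.
greet :: String -> App ()
greet name = do
  g <- asks envGreeting
  liftIO (putStrLn (g ++ ", " ++ name))

main :: IO ()
main = runReaderT (greet "Haskell") (Env "Hello")
```

Real applications typically put database handles, loggers, and metrics sinks in `Env`, but the shape of the stack stays the same.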

---

Beyond this, your application design will depend on your use-case.

If you are interested, I strongly suggest to read 'Production Haskell' by Matt Parsons, which has many chapters on 'Haskell application structure'.

solomonb · 2 years ago
> 1. Decide on an effect system

This shouldn't even be posed as a question to someone new to Haskell. They should learn how monad transformers work and just use them. 90% of developers playing around with effect systems would be just fine with MTL or even just concrete transformers. All Haskell effect systems should be considered experimental at this point, with unclear shelf lives.
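A concrete-transformer stack of the kind suggested above can be as simple as `StateT` over `IO` (illustrative example; `mtl` is a GHC boot library, so no effect-system dependency is needed):

```haskell
import Control.Monad.State

-- A concrete transformer stack: mutable-looking state over IO,
-- with no effect-system library involved.
counter :: StateT Int IO ()
counter = do
  modify (+ 1)
  n <- get
  lift (putStrLn ("count = " ++ show n))

main :: IO ()
main = evalStateT (counter >> counter) 0
```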

Everything else you said I agree with as solid advice!

jsbg · 2 years ago
This is excellent advice that unfortunately seems to get lost in a lot of Haskell teachings. I learned Haskell in school but until I had to use it professionally I would have never been able to wrap my head around effect systems. I still think that part of Haskell is unfortunate as it can get in the way of getting things done if you're not an expert, but being able to separate pure functions from effectful ones is a massive advantage.
mattgreenrocks · 2 years ago
I've used Haskell professionally for two years. It is the right pick for the project I'm working on (static analysis). I'm less sold on the overall Haskell ecosystem, tooling, and the overall Haskell culture.

There are still plenty of ways to do things wrong. Strong types don't prevent that. Laziness is a double-edged sword and can be difficult to reason about.

jerf · 2 years ago
People love to talk about the upsides and the fun and what you can learn from Haskell.

I am one of these people.

People are much more reluctant to share what it is that led them to the conclusion that Haskell isn't something they want to use professionally, or something they can't use professionally. It's a combination of things, such as it just generally being less energizing to talk about that, and also some degree of frankly-justified fear of being harassed by people who will argue loudly and insultingly that you just Don't Get It.

I am not one of those people.

I will share the three main reasons I don't even consider it professionally.

First, Hacker News has a stronger-than-average egalitarian streak and really wants to believe that everybody in the world is already a developer with 15 years of experience and expert-level knowledge in all they survey from atop their accomplished throne, but that's not how the real world works. In the real world I work with coworkers who I have to train why in my Go code, a "DomainName" is a type instead of just a string. Then, just as the light bulb goes off, they move on from the project and I get the next junior dev who I have to explain it to. I'm hardly going to get to the point where I have a team of people who are Haskell experts when I'm explaining this basic thing over and over.

And, to be 100% clear, this is not their "fault", because being a junior programmer in 2024 is facing a mountain of issues I didn't face at their age: https://news.ycombinator.com/item?id=33911633 I wasn't expected to know about how to do source control or write everything to be rollback-able or interact with QA, or, well, see linked post for more examples. Haskell is another stack of requirements on top of a modern junior dev that is a hell of an ask. There better be some damn good reasons for me to add this to my minimum-viable developer for a project. I am not expressing contempt for the junior programmers here from atop my own lofty perch; I am encouraging people to have sympathy with them, especially if you also came up in the 90s when it was really relatively easy, and to make sure you don't spec out projects where you're basically pulling the ladder up after yourself. You need to have an onboarding plan, and "spend a whole bunch of time learning Haskell" is spending a lot of your onboarding plan currency.

Second, while a Haskell program that has the chef's kiss perfect architecture is a joy to work with, it is much more difficult to get there for a real project. When I was playing with Haskell it was a frequent occurrence to discover I'd architected something wrong, and to essentially need to rewrite the whole program, because there is no intermediate functioning program between where I was and where I needed to be. The strength of the type system is a great benefit, but it does not put up with your crap. But "your crap" includes things like being able to rearchitect a system in phases, or partially, and still have a functioning system, and some other things that are harder to characterize but you do a lot of without even realizing it.

I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing. The strength of the end part is directly reflected in the metal resisting you working with it.

I expect at a true expert level you can get over this, but then as per my first point, demanding that all my fellow developers become true experts in this obscure language is taking it up another level past just being able to work in it at all.

Finally, a lot of programming requirements have changed over the years. 10-15 years ago I could feasibly break my program into a "functional core" and an external IO system. This has become a great deal less true, because the baseline requirement for logging, metrics, and visibility have gone up a lot, and suddenly that "pure core" becomes a lot less appealing. Yes, of course, our pure functions could all return logs and metrics and what have you, and sure, you can set up the syntax to the point that it's almost tolerable, but you're still going to face issues where basically everything is now in some sort of IO. If nothing else, those beautiful (Int -> Int -> Int) functions all become (Int -> Int -> LoggingMetrics Int) and now it isn't just that you "get" to use monadic interfaces but you're in the LoggingMetrics monad for everything and the advantages of Haskell, while they do not go away entirely, are somewhat mitigated, because it really wants purity. It puts me halfway to being in the "imperative monad" already, and makes the plan of just going ahead and being there and programming carefully a lot more appealing. Especially when you combine that with the junior devs being able to understand the resulting code.
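The effect described above can be sketched with the standard `Writer` monad standing in for the hypothetical `LoggingMetrics`: a function that was `Int -> Int -> Int` becomes monadic as soon as it needs to emit a log line.

```haskell
import Control.Monad.Writer

-- Once logging is required, the pure (Int -> Int -> Int) signature
-- is gone: Writer [String] here plays the role of "LoggingMetrics".
addLogged :: Int -> Int -> Writer [String] Int
addLogged x y = do
  tell ["adding " ++ show x ++ " and " ++ show y]
  pure (x + y)

main :: IO ()
main = do
  let (result, logs) = runWriter (addLogged 2 3)
  print result
  mapM_ putStrLn logs
```

The function is still pure in the technical sense (the log is part of the return value), but every caller now has to live in the monad, which is exactly the erosion of the "pure core" being described.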

In the end, while I still strongly recommend professional programmers spend some time in this world to glean some lessons from it that are much more challenging to learn anywhere else, it is better to take the lessons learned and learn how to apply them back into conventional languages than to try to insist on using the more pure functional languages in an engineering environment. This isn't even the complete list of issues, but they're sufficient to eliminate them from consideration for almost every engineering task. And in fact every case I have personally witnessed where someone pushed through anyhow and did it, it was ultimately a business failure.

cosmic_quanta · 2 years ago
> I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing.

What a beautiful, succinct analogy. I'm stealing this.

ninetyninenine · 2 years ago
> I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing. The strength of the end part is directly reflected in the metal resisting you working with it.

I’d say you missed one of the main points of Haskell and functional programming in general.

The combinator is the most modular and fundamental computational primitive available in programming. When you make a functional program, it should be constructed out of the composition of thousands of these primitives, with extremely strict separation from IO and multiple layers of abstraction. Each layer is simply composed functions from the layer below.

If you think of fp programming this way. It becomes the most modular most reconfigurable programming pattern in existence.

You have access to all layers of abstraction and within each layer are independent modules of composed combinators. Your titanium is given super powers where you can access the engine, the part, the molecule and the atom.

All the static safety and beauty Haskell provides is actually a side thing. What Haskell and functional programming in general provides is the most fundamental and foundational way to organize your program such that any architectural change only requires you replacing and changing the minimum amount of required modules. Literally the opposite of what you’re saying.

The key is to make your program just a bunch of combinators all the way down with an imperative io shell that is as thin as possible. This is nirvana of program organization and patterns.

robertlagrant · 2 years ago
I, like probably many people, like the idea of Haskell, but don't need a bottom-up language tutorial. Instead, I need:

- how easy is it to make a web application with a hello world endpoint?

- How easy is it to auth a JWT?

- Is there a good ORM that supports migrations?

- Do I have to remodel half my type system because a product owner told me about this weird business logic edge case we have to deal with?

- How do I do logging?

Etc.

gtf21 · 2 years ago
> - how easy is it to make a web application with a hello world endpoint?

If that's all you want it to do, it's very easy with Wai/Warp.

> - How easy is it to auth a JWT?

We don't use JWTs, but we did look at it, and Servant (which is a library for building HTTP APIs) has built-in functionality for them.

> - Is there a good ORM that supports migrations?

There are several with quite interesting properties. Some (like persistent) do automatic migrations based on your schema definitions. For others, you write migrations in SQL or another DSL.

> - Do I have to remodel half my type system because a product owner told me about this weird business logic edge case we have to deal with?

I think that's going to really depend on how you have structured your domain model, it's not a language question as much as a design question.

> - How do I do logging?

We use a library called Katip for logging, but there are others which are simpler. You can also just print to stdout if you want to.

robertlagrant · 2 years ago
Thank you! What I was more saying was that an article like this would do better to show some practical, simple examples that let people do things, rather than bemoaning how Haskell is viewed in 2024.
valenterry · 2 years ago
This doesn't work.

Imagine you talk to someone who has done assembly his whole life and now wants to write something in, let's say, Java.

What would you think if he asked the question the way you did?

Sometimes, when you learn a language that is so different you really really should NOT try to assume that your current knowledge just translates.

robertlagrant · 2 years ago
I'm not advocating for removing the existing articles that introduce people to Haskell.
kccqzy · 2 years ago
You can't do any of that without having first understood a bottom-up introduction. There are so many web frameworks from Yesod to Scotty to Servant (these are just the ones I've used personally) but you can't use any of them without at least an understanding of the language.

Deleted Comment

justinhj · 2 years ago
That sounds valuable too, but maybe it comes after the basic concepts; otherwise you may find people immediately dismiss it. There is all kinds of extra syntax and baggage that may seem pointless at first.

Deleted Comment

Ericson2314 · 2 years ago
https://haskell-beam.github.io/beam/ is fantastic, but good luck understanding it if you don't already know some Haskell
cpa · 2 years ago
Haskell has had a profound impact on the way I think about programming and how I architect my code and build services. The stateless nature of Haskell is something that many rediscover at different points in their careers. Eg in webdev, it's mostly about offloading state to the database and treating the application as "dumb nodes." That's what most K8s deployments do.

The type system in Haskell, particularly union types, is incredibly powerful, easy to understand for the most part (you don't need to understand monads that deeply to use them), and highly useful.
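The sum types praised above (often loosely called "union types") look like this in practice; the constructors and field choices here are made up for illustration. The key property is that the compiler forces every case to be handled:

```haskell
-- A sum type: a value is exactly one of these alternatives.
data PaymentStatus
  = Pending
  | Paid Double      -- amount paid
  | Failed String    -- error message

-- Pattern matching must cover every constructor, or GHC warns
-- about the missing case (with -Wall / -Wincomplete-patterns).
describe :: PaymentStatus -> String
describe Pending     = "awaiting payment"
describe (Paid amt)  = "paid " ++ show amt
describe (Failed e)  = "failed: " ++ e

main :: IO ()
main = putStrLn (describe (Paid 9.99))
```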

And I've had a lot of fun micro-optimizing Haskell code for Project Euler problems when I was studying.

Give it a try. Especially, if you don't know what to expect, I can guarantee that you'll be surprised!

Granted, the tooling is sh*t.

setopt · 2 years ago
> Haskell has had a profound impact on the way I think about programming and how I architect my code and build services.

> And I've had a lot of fun micro-optimizing Haskell code for Project Euler problems when I was studying.

Sounds a lot like my experience. I never really used Haskell for "real work", where I need support for high-performance numerical calculations that is simply better in other languages (Python, Julia, C/C++, Fortran).

But learning functional programming through Haskell – mostly by following the "Learn you a Haskell" book and then spending time working through Project Euler exercises using it – had a quite formative effect on how I write code.

I even ended up baking some functional programming concepts into my Fortran code later. For instance, I implemented the ability to "map" functions on my data structures, and made heavy use of "pure functions" which are supported by the modern Fortran standard (the compiler then checks for side effects).

It's however hard to go all the way on functional programming in HPC contexts, although I wish there were better libraries available to enable this.

nextos · 2 years ago
> But learning functional programming through Haskell [...] had a quite formative effect on how I write code.

I think it is a shame Haskell has gained a reputation of being hard, because it can be an enriching learning experience. Lots of its complexity is accidental, and comes from the myriad of language extensions that have been created for research purposes.

There was an initiative to define a simpler subset of the language, which IMHO would have been great, but it didn't take off: https://www.simplehaskell.org. Ultimately, one can stick to Haskell 98 or Haskell 2010 plus some newer cherry-picked extensions.

l5870uoo9y · 2 years ago
Pure functions are a crazy useful abstraction. Complex business logic? Extract it into a type-safe pure function. Still too "unsafe"? Testing pure functions is fast and simple. Unclear what a complex function does? Extract it into meaningful pure functions.
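A tiny sketch of what that extraction can look like in Haskell (the `Customer` type and discount rule are made-up examples, not from any real codebase):

```haskell
-- A hypothetical business rule extracted into a total, pure function:
-- same inputs always give the same output, no hidden state, so it is
-- trivially testable without mocks or setup.
data Customer = Regular | Premium

discountedTotal :: Customer -> Double -> Double
discountedTotal Premium total
  | total >= 100 = total * 0.9  -- 10% off large premium orders
  | otherwise    = total        -- no discount below the threshold
discountedTotal Regular total = total
```

The pattern match over `Customer` also means the compiler can warn if a new customer tier is added but the rule isn't updated.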
ayakang31415 · 2 years ago
Haskell sounds like a good language to hone your programming skills. What kind of projects is Haskell suited for to get started (besides Euler project)? I use Python primarily for scientific research (mostly numerical computation).
adastra22 · 2 years ago
Haskell also changed the way I think about programming. But I wonder if it would have as much of an impact on someone coming from a language like Rust or even modern C++ which has adopted many of haskell’s features?
mmoll · 2 years ago
True. I often think of Rust as a best-of compilation of Haskell and C++ (although I read somewhere that OCaml had a greater influence on it, but I don’t know that language well enough)

In real life, I find that Haskell suffers from trying too hard to use the most general concept that‘s applicable (no pun intended). Haskell programs happily use “Either Err Val” and “Left x” where other languages would use the more expressive but less general “Result Err Val” and “Error x”. Also, I don’t want to mentally parse nested liftM2s or learn the 5th effect system ;-)

setopt · 2 years ago
I think it does, actually. Python also has many of Haskell's features (list comprehensions, map/filter/reduce, itertools, functools, etc.). But I only started reaching for those features after learning about them in Haskell.

In Python, it's very easy to just write a for-loop to do these things, and you don't necessarily go looking for the functional equivalents unless you already know them. But in Haskell you're forced to do things this way, since there is no for-loop available. After learning that way of thinking, the result is more compact code with arguably less risk of bugs.
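A toy example of that loop-free style (my own, just to illustrate the point): summing the squares of the even numbers.

```haskell
-- Where Python invites a for-loop, Haskell composes filter, map, and a fold.
sumOfEvenSquares :: [Int] -> Int
sumOfEvenSquares = sum . map (^ 2) . filter even

-- Roughly the Python: sum(x**2 for x in xs if x % 2 == 0)
```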

Deleted Comment

TylerE · 2 years ago
Heck, even coming from Python (2) it felt very underwhelming and hugely oversold. (Edit: To be fair, I'd done a bit of Ocaml years earlier so algebraic data types weren't some huge revelation).

Laziness is mostly an anti-pattern.

odyssey7 · 2 years ago
Check out Swift, too!
gtf21 · 2 years ago
> Granted, the tooling is sh*t.

I hear this a lot, but am curious about two things: (a) which bit(s) of the toolchain are you thinking about specifically? I know HLS can be quite janky, but I haven't really been blocked by any tooling problems myself. (b) Have you done much Haskell in production recently? I.e., is this scar tissue from some time ago, or have you tried the toolchain recently and still found it lacking?

n_plus_1_acc · 2 years ago
Every time I use cabal and/or stack, it gives me a wall of errors and I just reinstall everything all the time.
jillesvangurp · 2 years ago
I think the tooling being not ideal is a reflection of how mature/serious the community is about non-academic usage. Haskell has been around for ages but it never really escaped its academic nature. I actually studied in Utrecht in the nineties, where there was a lot of activity around this topic at the time. Erik Meijer, who later created F# at MS, was a teacher there, and there was a lot of activity around doing stuff with Gofer, a Haskell predecessor, which I learned and used at the time. All our compiler courses were basically fiddling with compiler generator frameworks that came straight out of the graduate program. Awesome research group at the time.

My take on this is that this was all nice and interesting but a lot of this stuff was a bit academic. F# is probably the closest the community got to having a mature tooling and developer ecosystem.

I don't use Haskell myself and have no strong opinions on the topic. But usually a good community response to challenges like this is somebody stepping up and doing something about it. That starts with caring enough. If nobody cares, nothing happens.

Smalltalk kicked off a small tool revolution in the nineties with its refactoring browser. Smalltalk was famous for having its own IDE. That was no accident. Alan Kay, who was at Xerox PARC, famously said that the best way to predict the future was to invent it. And of course he was (and is) very active in the Smalltalk community and its early development. Smalltalk was a language community that was from day one focused on having great tools. Lots of good stuff came out of that community at IBM (Visual Age, Eclipse) and later Jetbrains and other IDE makers.

Rust is a good recent example of a community that's very passionate about having good tools as well. Down to the firmware and operating system and everything up. In terms of IDE support they could do better perhaps. But I think there are ongoing efforts on making the compiler more suitable for IDE features (which overlap with compiler features). And of course Cargo has a good reputation. That's a community that cares.

I use Kotlin myself. Made by Jetbrains and heavily used in their IDEs and toolchains. It shows. This is a language made by tool specialists. Which is why I love it. Not necessarily for functional purists. Even though some Scala users have reluctantly switched to it. And the rest is flirting with things like Haskell and Elixir.

pyrale · 2 years ago
> I think the tooling being not ideal is a reflection of how mature/serious the community is about non academic usage.

I'd say it's more of a reflection of how having a very big company funding the language is making a difference.

People like to link Haskell's situation to its academic origins, but in reality, most of the issues with the ecosystem are related to acute underfunding compared to mainstream languages.

odyssey7 · 2 years ago
“Greece, Rome’s captive, took Rome captive.”

The languages of engineering-aligned communities may appear to have won the race, though they have been adopting significant ideas from Haskell and related languages in their victories.

Nelkins · 2 years ago
Pretty sure F# was created by Don Syme, not Erik Meijer.
cies · 2 years ago
> Haskell has had a profound impact on the way I think about programming and how I architect my code and build services.

Exactly the same for me.

> Granted, the tooling is sh*t.

Stack and Stackage (one of the package managers and library distribution systems in Haskell-land) is the best I found in any language.

Other than that I also found some tools to be lacking.

dario_od · 2 years ago
What makes you say that stack is the best you found in any language? I use it daily, and in my experience I'd put it just a bit above PHP's composer
BoiledCabbage · 2 years ago
> Give it a try. Especially, if you don't know what to expect, I can guarantee that you'll be surprised!

And I will as strongly as possible emphasize the opposite: you should not.

If you are already experienced in functional programming, as well as in statically typed functional programming or something lovely in the ML family of languages, then and only then does Haskell make sense to learn.

If you are looking to learn about either FP in general or statically typed FP, Haskell is about the single worst language anyone can start with. More people have been discouraged from using FP because they started with Haskell than is probably appreciated. The effort-to-insight ratio for Haskell is incredibly high.

You can learn the majority of the concepts faster in another language with likely 1/10th the effort. For general FP, learn Clojure, Racket, or another Scheme. For statically typed FP, learn F#, Scala, OCaml, or even Elm.

In fact, if you really want to learn Haskell, it is faster to learn Elm and then Haskell than it is to just learn Haskell. The amount of weeds you have to navigate to get to the concepts in Haskell is so high that you can first learn the concepts and approach in a tiny language like Elm, and that will more than save the time it would take to absorb those approaches by trying to learn Haskell directly. It seems unbelievable, but I found it to be very true: you can learn two languages faster than just one because of how muddy Haskell is.

Now, that said, FP is valuable and in my opinion a cleaner design, which is why in general our industry keeps shifting that way. Monoids, Functors, and Applicatives are nice design patterns. Pushing side effects to the edge of your code (which is enforced by types) is a great practice. Monads are way overhyped; thinking in types is way undervalued. But you can get all of these concepts without learning Haskell.

So that's the end of my rant, as I've grown tired of watching people dismiss FP because they confuse the great concepts of FP with the horrible warts that come with Haskell.

Haskell is a great language, and I'm glad I learned it (and am in no way an expert at it), but it is the single worst language for an introduction to FP concepts. If you're already deep in FP it's an awesome addition to your toolbox of concepts, and for that specific purpose I highly recommend it.

And finally, LYAH is a terrible resource.

the_af · 2 years ago
> And finally, LYAH is a terrible resource.

Could you elaborate? I know LYAH doesn't teach enough to write real programs, and does not introduce necessary concepts such as monad transformers, but why is it so terrible as an introduction to Haskell and FP? (In my mind, incomplete/flawed != terrible... Terrible means "avoid at all costs").

As for your overall point, I remember articles posted here on HN about someone teaching Haskell to children (no prior exposure to any other prog lang) with great success.

dietlbomb · 2 years ago
Is it worth learning JavaScript before learning Elm?
0x3444ac53 · 2 years ago
Would you mind explaining what you mean by stateless?
jgwil2 · 2 years ago
Haskell functions are pure, like mathematical functions: the same input to a function produces the same output every time, regardless of the state of the application. That means the function cannot read or write any data that is not passed directly to it as an argument. So the program is "stateless" in that the behavior does not depend on anything other than its inputs.

This is valuable because you as the developer have a lot less stuff to think about when you're trying to reason about your program's behavior.
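A minimal illustration (my own toy example): instead of mutating a hidden variable, a pure function takes the old state as an argument and returns the new state as part of its result.

```haskell
-- Pure: the result depends only on the arguments, never on ambient state.
addTax :: Double -> Double -> Double
addTax rate price = price * (1 + rate)

-- "State" made explicit: the counter is an input and an output,
-- never a variable that gets overwritten somewhere else.
tick :: Int -> (Int, Int)  -- (observed value, next state)
tick n = (n, n + 1)
```

Calling `addTax 0.5 100` always gives the same answer, no matter what the rest of the program has done.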

Deleted Comment

moomin · 2 years ago
Sum types are finally coming to C#. That’ll make it the first “Mainstream” language to adopt them. Will it be as solid and simple as Haskell’s implementation? Of course not. Will having a backing ecosystem make up for that deficiency? Yes.
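For reference, the Haskell version being compared against is a couple of lines, and the compiler can warn when a pattern match misses a case (the `Shape` type here is a stock example of mine):

```haskell
-- A sum type: every Shape is exactly one of these alternatives.
data Shape
  = Circle Double            -- radius
  | Rectangle Double Double  -- width, height

-- Pattern matching must account for every alternative;
-- GHC's -Wincomplete-patterns flags any missing case.
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h
```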
SkiFire13 · 2 years ago
What counts as mainstream for you?

Java has recently added sealed classes/interfaces which offer the same features as sum types, and I would argue that Java is definitely mainstream.

Kotlin has a similar feature. It might be used less than Java, but it's the default language for Android.

Swift has `enum` for sum types and is the default language for iOS and MacOS.

Likewise for Rust, which is gaining traction recently.

TypeScript also has union/sum types and is gaining a lot of traction.

n_plus_1_acc · 2 years ago
Rust is mainstream, just not used in enterprise applications
pjmlp · 2 years ago
Not really, other mainstream languages got there first.
pid-1 · 2 years ago
Python has sum types

optional_int: int | None = None

axilmar · 2 years ago
My question for Haskellers is how to do updates of values on a large scale, let's say in a simulation.

In imperative languages, the program will have a list of entities, and there will be an update() function for each entity that updates its state (position, etc) inline, i.e. new values are overwritten onto old values in memory, invoked at each simulation step.

In Haskell, how is that handled? Do I have to recreate the list of entities with their changes at every simulation step? Does Haskell have a special construct that allows for values to be overwritten, just like in imperative languages?

Please don't respond with 'use the IO monad' or 'better use another language because Haskell is not up for the task'. I want an actual answer. I've asked this question in the past in this and some other forums and never got a straight answer.

If you reply with 'use the IO monad' or something similar, can you please say if whatever you propose allows for in-place update of values? It's important to know, for performance reasons. I wouldn't want to start simulations in a language that requires me to reconstruct every object at every simulation step.

I am asking for this because the answer to 'why Haskell' has always been for me 'why not Haskell: because I write simulations and performance is of concern to me'.

tome · 2 years ago
I'm not sure why you say not to respond with 'use the IO monad' because that's exactly how you'd do it! As an example, here's some code that updates elements of a vector.

    import Data.Vector.Unboxed.Mutable
    
    import Data.Foldable (for_)
    import Prelude hiding (foldr, read, replicate)
    
    -- ghci> main
    -- [0,0,0,0,0,0,0,0,0,0]
    -- [0,5,10,15,20,25,30,35,40,45]
    main = do
      v <- replicate 10 0
    
      printVector v
    
      for_ [1 .. 5] $ \_ -> do
        for_ [0 .. 9] $ \i -> do
          v_i <- read v i
          write v i (v_i + i)
    
      printVector v
    
    printVector :: (Show a, Unbox a) => MVector RealWorld a -> IO ()
    printVector v = do
      list <- foldr (:) [] v
      print list
It does roughly the same as this Python:

    # python /tmp/test28.py
    # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    # [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]
    def main():
        v = [0] * 10
    
        print(v)
    
        for _ in range(5):
            for i in range(10):
                v_i = v[i]
                v[i] = v_i + i
    
    
        print(v)
    
    if __name__ == '__main__': main()

contificate · 2 years ago
I have a rather niche theory that many Hindley-Milner type inference tutorials written by Haskellers insist on teaching the error-prone, slow, details of algorithm W because otherwise the authors would need to commit to a way to do destructive unification (as implied by algorithm J) that doesn't attract pedantic criticism from other Haskellers.

For me, I stopped trying to learn Haskell because I couldn't quite make the jump from writing trivial (but neat) little self-contained programs to writing larger, more involved, programs. You seem to need to buy into a contorted way of mentally modelling the problem domain that doesn't quite pay off in the ways advertised to you by Haskell's proponents (as arguments against contrary approaches tend to be hyperbolic). I'm all for persistent data structures, avoiding global state, monadic style, etc. but I find that OCaml is a simpler, pragmatic, vehicle for these ideas without being forced to bend over backwards at every hurdle for limited benefit.

rebeccaskinner · 2 years ago
> In imperative languages, the program will have a list of entities, and there will be an update() function for each entity that updates its state (position, etc) inline, i.e. new values are overwriten onto old values in memory, invoked at each simulation step.

> In Haskell, how is that handled? do I have to recreate the list of entities with their changes at every simulation step? does Haskell have a special construct that allows for values to be overwritten, just like in imperative languages?

You don't _have to_ recreate the list each time, but that's probably where I'd suggest starting. GHC is optimized for these kinds of patterns, and in many cases it'll compile your code to something that does in-place updates for you, while letting you write pure functions that return a new list. Even when it can't, the runtime is designed for these kinds of small allocations and updates, and the performance is much better than what you'd get with that kind of code in another language.

If you decided that you really did need in-place updates, then there are a few options. Instead of storing a vector of values (if you are thinking about performance you probably want vectors instead of lists), you can store a vector of references that can be updated. IO is one way to do that (with IORefs) but you can also get "internal mutability" using STRefs. ST is great because it lets you write a function that uses mutable memory but still looks like a pure function to the callers because it guarantees that the impure stuff is only visible inside of the pure function. If you need concurrency, you might use STM and store them as MVars. Ultimately all of these options are different variations on "Store a list of pointers, rather than a list of values".

There are various other optimizations you could do too. For example, you can use unboxed mutable vectors to avoid having to do a bunch of pointer chasing. You can use GHC primitives to eke out even better performance. In the best case scenario I've seen programs like this written in Haskell be competitive with Java (after the warmup period), and you can keep the memory utilization pretty low. You probably won't get something that's competitive with C unless you are writing extremely optimized code, and at that point most of the time I'd suggest just writing the critical bits in C and using the FFI to link that into your program.

lieks · 2 years ago
You... don't. You have to rely on compiler optimizations to get good performance.

Monads are more-or-less syntax sugar. They give you a structure that allows these optimizations more easily, and also make the code more readable sometimes.

But in your example, update returns a new copy of the state, and you map it over a list for each step. The compiler tries to optimize that into in-place mutation.

IMO, having to rely so much on optimization is one of the weak points of the language.

kreetx · 2 years ago
You do, and you'll have to do destructive updates within either the ST or IO monad, using their respective single-variable or array types. It looks roundabout, but it does do the thing you want, and it is fast.

ST and IO are "libraries" though, in the sense that they are not special parts of the language, but appear like any other types.

whateveracct · 2 years ago
Fast immutable data structures don't rely on compiler optimizations. They just exist lol.
bedman12345 · 2 years ago
An example of how to use the IO monad for simulations: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... It's one of the nicer-to-read ones I've seen. Still is terrible imo.
tikhonj · 2 years ago
I mean, Haskell has mutable vectors[1]. You can mutate them in place either in the IO monad or in the ST monad. They fundamentally work the same way as mutable data structures in any other garbage collected language.

When I worked on a relatively simple simulation in Haskell, that's exactly what I did: the individual entities were immutable, but the state of the system was stored in a mutable vector and updated in place. The actual "loop" of the simulation was a stream[2] of events, which is what managed the actual IO effect.

My favorite aspect of designing the system in Haskell was that I could separate out the core logic of the simulation which could mutate the state on each event from observers which could only read the state on events. This separation between logic and pure metrics made the code much easier to maintain, especially since most of the business needs and complexity ended up being in the metrics rather than the core simulation dynamics. (Not to say that this would always be the case, that's just what happened for this specific supply chain domain.)

Looking back, if I were going to write a more complex performance-sensitive simulation, I'd probably end up with state stored in a bunch of different mutable arrays, which sounds a lot like an ECS. Doing that with base Haskell would be really awkward, but luckily Haskell is expressive enough that you can build a legitimately nice interface on top of the low-level mutable code. I haven't used it but I imagine that's exactly what apecs[3] does and that's where I'd start if I were writing a similar sort of simulation today, but, who knows, sometimes it's straight-up faster to write your own abstractions instead...

[1]: https://hackage.haskell.org/package/vector-0.13.1.0/docs/Dat...

[2]: https://hackage.haskell.org/package/streaming

[3]: https://hackage.haskell.org/package/apecs

whateveracct · 2 years ago
apecs is really nice! it's not without its issues, but it really is a sweet library. and some of its issues are arguably just issues with ECS than apecs itself.
louthy · 2 years ago
In your imperative language, imagine this:

    World simulation(Stream<Event> events, World world) =>
       events.IsComplete
           ? world
           : simulation(applyEventToWorld(events.Head, world), events.Tail);

    World applyEventToWorld(Event event, World world) =>
       // .. create a new World using the immutable inputs
That takes the first event that arrives, transforms the World, then recursively calls itself with the remaining events and the transformed World. This is the most pure way of doing what you ask. Recursion is the best way to 'mutate', without using mutable structures.

However, there are real mutation constructs, like IORef [1] It will do actual in-place (atomic) mutation if you really want in-place updates. It requires the IO monad.

[1] https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-...
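A minimal example of the IORef route (a made-up accumulator, just to show the shape of the API):

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- The cell is updated in place; the whole computation lives in IO.
sumViaIORef :: [Int] -> IO Int
sumViaIORef xs = do
  ref <- newIORef 0
  mapM_ (\x -> modifyIORef' ref (+ x)) xs
  readIORef ref
```

For atomic in-place updates under concurrency, `atomicModifyIORef'` from the same module is the usual tool.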

icrbow · 2 years ago
> does Haskell have a special construct that allows for values to be overwritten

Yes and no.

No, the language doesn't have a special construct. Yes, there are all kinds of mutable values for different usage patterns and restrictions.

Most likely you end up with mutable containers with some space reserved for entity state.

You can start with putting `IORef EntityState` as a field and let the `update` write there. Or multiple fields for state sub-parts that mutate at different rates. The next step is putting all entity state into big blobs of data and let entities keep an index to their stuff inside that big blob. If your entities are a mishmash of data, then there's `apecs`, ECS library that will do it in AoS way. It even can do concurrent updates in STM if you need that.

Going further, there's `massiv` library with integrated task supervisor and `repa`/`accelerate` that can produce even faster kernels. Finally, you can have your happy Haskell glue code and offload all the difficult work to GPU with `vulkan` compute.

icrbow · 2 years ago
> ECS library that will do it in AoS way

TLAs aren't my forte. It's SoA of course.

mrkeen · 2 years ago
> My question for Haskellers is how to do updates of values on a large scale, let's say in a simulation.

The same way games do it. The whole world, one frame at a time. If you are simulating objects affected by gravity, you do not recalculate the position of each item in-place before moving on to the next item. You figure out all the new accelerations, velocities and positions, and then apply them all.
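Sketched in Haskell, one frame of that kind of simulation is just a pure function from the old world to the new one (the `Particle` type and numbers here are mine):

```haskell
data Particle = Particle { pos :: Double, vel :: Double }
  deriving (Eq, Show)

-- One simulation step: compute every particle's next position from
-- the *current* frame, then "apply them all" at once by returning
-- the new list. Nothing is updated mid-iteration.
step :: Double -> [Particle] -> [Particle]
step dt = map (\p -> p { pos = pos p + vel p * dt })
```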

kccqzy · 2 years ago
I don't understand why you hate the IO monad so much. I mean I've seen very large codebases doing web apps and almost everything is inside the IO monad. It's not as "clean" and not following best practices, but still gets the job done and is convenient. Having pervasive access to IO is just the norm in all other languages so it's not even a drawback.

But let's put that aside. You can instead use the ST monad (not to be confused with the State monad) and get the same performance benefit of in-place update of values.

gspr · 2 years ago
Use the ST monad? :)
throwaway81523 · 2 years ago
Well what kind of values and how many updates? You might have to call an external library to get decent performance, like you would use NumPy in Python. This might be of interest: https://www.acceleratehs.org/
whateveracct · 2 years ago
You can use apecs, a pretty-fast Haskell ECS for those sorts of things.
IceDane · 2 years ago
"Pretty fast".. relatively speaking, considering that it's in an immutable, garbage-collected language. Still woefully slow compared to anything else out there (say, Bevy, which incidentally works similarly to apecs) and mostly practically unusable if the goal is to actually create a real product.

Want to just have fun? Sure.