While I agree, the article doesn't really give concrete examples.
Elixir is layered, making it easy to learn and master. You can get pretty far with Phoenix without ever understanding (or even knowing about) the more fundamental building blocks or the runtime. In large part, this is because of its Ruby-inspired syntax. You'll have to adjust to immutability, but that's pretty much it.
Then one day you'll want to share state between requests and you'll realize that the immutability (which you're already comfortable with at this point) goes beyond just local variables: it's strictly enforced by these things called "processes". And you'll copy and paste a higher-level construct like an Agent or GenServer and add the one line of code to the root supervisor, which is just an auto-generated file in your project. But that'll get you a) introduced to the actor model and b) thinking about messaging while c) not ever worrying about or messing up concurrency.
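For illustration, a minimal sketch of that first step (the module and its names are made up):

    # A tiny Agent holding state shared between requests
    defmodule MyApp.Counter do
      use Agent

      def start_link(_opts), do: Agent.start_link(fn -> 0 end, name: __MODULE__)

      def increment, do: Agent.update(__MODULE__, &(&1 + 1))
      def value, do: Agent.get(__MODULE__, & &1)
    end

    # ...and the one line added to the generated supervisor's child list:
    children = [
      MyApp.Counter
    ]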
Then you'll want to do something with TCP or UDP and you'll see these same patterns cohesively expressed between the runtime, the standard library and the language.
Then you'll want to do something distributed, and everything you've learnt about single-node development becomes applicable to distributed systems.
Maybe the only part of Elixir which can get complicated is macros / metaprogramming. But you can get far without ever understanding this, and Phoenix is so full of magic (which isn't a good thing) that by the time you do need it, you'll certainly have peeked behind the covers once or twice.
The synergy between the runtime, standard library and language, backed by the actor model + immutability is a huge productivity win. It's significantly different (to a point where I think it's way more accurate to group Ruby with Go than with Elixir), but, as I've tried to explain, very approachable.
> Elixir is layered, making it easy to learn and master. You can get pretty far with Phoenix without ever understanding (or even knowing about) the more fundamental building blocks or the runtime. In large part, this is because of its Ruby-inspired syntax. You'll have to adjust to immutability, but that's pretty much it.
This is my main problem with Elixir. Even for senior developers it's hard to know how the building blocks and runtime work, and since all abstractions are leaky, you'll end up with mysterious issues that only make sense once you understand all the layers. And in the case of Elixir, there are lots.
Actually, Elixir and Erlang are at the same level: they both run on the BEAM. So you need to know the BEAM: it is a virtual machine that acts differently from what we are used to (JVM, .NET).
Once you learn the BEAM, everything becomes clear and easy to use.
IMHO process state isolation and message passing are the core, and that's not so difficult to learn and apply.
In my first 6 months of Elixir development, I got very far with just modules, structs, and functional programming.
A common pitfall for new Elixir developers is to reach for features that are overkill.
Metaprogramming, GenServers, protocols, etc., are powerful, but you don't need to know them to start building apps with Phoenix.
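As a tiny sketch of what that core amounts to (nothing here is Phoenix- or library-specific):

    # Processes share nothing; they only exchange messages
    parent = self()

    child = spawn(fn ->
      receive do
        {:ping, from} -> send(from, :pong)
      end
    end)

    send(child, {:ping, parent})

    receive do
      :pong -> IO.puts("got pong")
    end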
To be fair, Elixir also allows developers to tag functions, which can then trigger macros. Some of them (like @doc) do not transform code, but others (like @impl, or AppSignal's transaction decorators) will.
But annotations are just tags that do nothing, and god knows where the code that actually does anything as a result of them resides.
I'm not disagreeing with that sentiment, but I learned a considerable amount while implementing my own annotations. I do agree that they can be maddening... especially in the case of Spring Boot where you can forget to add a @Component annotation, and unit tests will pass... then you'll deploy your app and immediately see it blow up in prod.
> copy and paste a higher-level construct like an Agent or GenServer and add the one line of code to the root supervisor, which is just an auto-generated file in your project. But that'll get you a) introduced to the actor model and b) thinking about messaging while c) not ever worrying about or messing up concurrency.
Isn't it well known that GenServers can become severe bottlenecks unless you know the inner workings of everything to the point where you're an expert?
I'm not an Elixir expert and haven't even used a GenServer in practice, but I remember reading warnings about GenServer performance: they can only handle 1 request at a time, and it's super easy to bring down your whole system if you don't know what you're doing.
This blog post explains how that happens: https://www.cogini.com/blog/avoiding-genserver-bottlenecks/
And I remember seeing a lot of forum posts around the dangers of using GenServers (unless you know what you're doing).
It's not really as easy as just copy / pasting something, adding 1 line and you're done. You need to put in serious time and effort to understand the intricacies of a very complex system (BEAM, OTP) if you plan to leave the world of only caring about local function execution.
And as that blog post mentions, it recommends using ETS but Google says ETS isn't distributed. So now suddenly you're stuck only being able to work with 1 machine. This is a bit more limiting than using Python or Ruby and deciding to share your state in Redis. That really does typically require only adding a few lines of code, and then your state is saved in an external service and you're free to scale to as many web servers as you want until Redis becomes a bottleneck (which it likely never will). You can also freely restart your web servers without losing what's in Redis.
I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages. And it's especially more complicated / less pragmatic than other tech stacks, because almost every other tech stack uses the same tools to share external state, so it's a super well documented and well thought out problem.
A GenServer may become a concurrency bottleneck just as concurrent data access in any other language may become a bottleneck, depending on your abstraction of choice. This is nothing specific to Elixir.
What Erlang/Elixir bring to the table is a good vocabulary and introspection tools to observe these issues. For example, if you have a GenServer as a bottleneck, you can start Observer (or the Phoenix LiveDashboard or similar), order processes by message queue, and find which one is having trouble catching up with requests. So we end up talking about it quite frequently - it is easier to talk about what you see!
If you need distributed data, then by all means, use Redis or PostgreSQL or similar. ETS is not going to replace it. What ETS helps with is sharing data within the same node. For example, if you have a machine with 8 cores, you may start 8 Ruby/Python instances, one for each core. If the cache is stored in Redis, you will do a network roundtrip to Redis every time you need the data. Of course you can also cache inside each instance, but that can lead to large memory usage, given each instance has its own memory space. This may be accentuated depending on the data, such as caching geoip lookups, as mentioned in the post you linked.
In Elixir, if you have 8 cores, it is a single instance. Therefore, you could cache geoip lookups in ETS and it can be shared across all cores. This has three important benefits: lower memory usage, reduced latency, and increased cache-hit ratio in local storages. At this point, you may choose to not use Redis/DB at all and skip the additional operational complexity. Or, if you prefer, you can still fall back to Redis, which is something I would consider doing if the geoip lookups are expensive (either in terms of time or money).
In any case, ETS is completely optional. If you just want to go to Redis on every request, you can do that too! And for what it's worth, if I need distributed state, I just use the database too.
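To make the geoip example concrete, here is a minimal sketch (table, module, and function names are made up; in practice you'd create the table from a long-lived, supervised process):

    defmodule GeoCache do
      # One ETS table, readable by every process (and core) on the node
      def create_table do
        :ets.new(:geoip_cache, [:named_table, :set, :public, read_concurrency: true])
      end

      # `fallback` stands in for the real geoip (or Redis) lookup
      def lookup(ip, fallback) do
        case :ets.lookup(:geoip_cache, ip) do
          [{^ip, data}] ->
            data

          [] ->
            data = fallback.(ip)
            :ets.insert(:geoip_cache, {ip, data})
            data
        end
      end
    end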
Couple of other solid responses here, but going to add my own -
GenServer is immaterial - it's the single process that can be a bottleneck, just like a single thread can be in other languages. If you need multiple threads in other languages, you'll need multiple processes in Erlang. The nice thing here is that the cost of spinning up more (even one per request) is negligible, the syntax is trivial, and the error model in the event something goes wrong is powerful, whereas threads in other languages are so heavyweight you have to size the pool, they can be complicated to work with, and if things go wrong your error handling options tend to be limited.
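A sketch of how cheap that per-item concurrency is in practice (the workload here is made up):

    # One lightweight process per item; the BEAM schedulers spread them
    # across all cores. Each result arrives as an {:ok, value} tuple.
    results =
      1..1_000
      |> Task.async_stream(fn i -> i * i end, max_concurrency: System.schedulers_online())
      |> Enum.to_list()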
Use ETS in the places you'd use in-memory caching. That's it. That's what it's meant for. If you need distributed consistency, it's not the right answer. If you need a single source of truth, it's not the right answer. But if you need a local cache that does not require strong consistency (and that's very, very common in distributed systems), it works great.
> And I remember seeing a lot of forum posts around the dangers of using GenServers (unless you know what you're doing).
The danger is using a single process of the GenServer instead of multiple, so you can get a single-process bottleneck that won't use multiple cores. You don't have to know any intricacies of BEAM or OTP to know about and design around using multiple process instances of the GenServer.
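A sketch of one such design (module name and shard count are assumptions): run several instances and route by key instead of funneling everything through one process:

    defmodule Shard do
      use GenServer

      @shards 8

      # Start instance `i` under your supervisor, once per shard
      def start_link(i) do
        GenServer.start_link(__MODULE__, %{}, name: :"#{__MODULE__}_#{i}")
      end

      # Each key deterministically maps to one of the shards
      def get(key) do
        i = :erlang.phash2(key, @shards)
        GenServer.call(:"#{__MODULE__}_#{i}", {:get, key})
      end

      @impl true
      def init(state), do: {:ok, state}

      @impl true
      def handle_call({:get, key}, _from, state) do
        {:reply, Map.get(state, key), state}
      end
    end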
> I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages.
You can use Redis in Elixir as well. Saying that Elixir is worse at distribution than Python/Ruby because ETS isn’t distributed is a bit like saying Python is bad at distribution because objects are not distributed. It’s especially strange since Elixir ships with a distribution system (so you can access ETS from other machines) while your other example languages do not.
> And as that blog post mentions, it recommends using ETS but Google says ETS isn't distributed. So now suddenly you're stuck only being able to work with 1 machine.
There is a mostly API-compatible disk-based version of ETS in OTP, called DETS, and a higher-level database built on top of ETS/DETS called Mnesia (again, in OTP) which can be distributed. So, no, you aren't.
> I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages. And it's especially more complicated / less pragmatic than other tech stacks, because almost every other tech stack uses the same tools to share external state, so it's a super well documented and well thought out problem.
You can use the same external tools in Elixir as on platforms that don't have a full distributed database built in (as OTP does), so I don't see how the fact that those external tools are widely used on other platforms makes Elixir harder.
Can you explain something more about why you think this is not a good thing?
Is it like Laravel (PHP)? Lots of magic, so it is very easy to get started but hell to maintain?
I'd really like to get started with Elixir (for webdev) but I am not sure if I should start with Phoenix.
Whether or not this is a problem comes down to two factors:
1 - How OK are you with just accepting things as documented, like having to put `use HelloWeb, :controller` in each controller. Personally, I _had_ to understand what this was doing and how, but I imagine some people don't care so much.
2 - Do you need / want to do anything outside what's idiomatic Phoenix. As a simple example, I think Plug.Parsers is wrong to raise an exception when invalid JSON is given to it (shitty user data is hardly "exceptional" and I don't need error logs with poor signal to noise).
It makes for great demos but it basically means “this just works so you don’t need to know how it works”.
The problem is you always need to know how your tools work, often sooner rather than later, as you inevitably end up with edge cases and bugs in your application that simply can’t be resolved without that understanding.
I would say do at least one small project or tutorial without Phoenix, but in any case Phoenix is a web framework with routes, controllers, templates, etc.; you won't get lost there.
You'll lose some things (no built-in auth but there are libs) and win some (easy websockets and pub/sub).
Also LiveView is fun.
Unfortunately I had a bug I found today, in which an event_notifier GenServer was calling a function that checked the database for the state of an Event, to decide whether to send a notification, then updated the event to record that notifications were sent, and upon successful update sent the notifications. But in the query that constructed the list of users to notify, the User module was not imported at the top of the file, and the notifications were failing to send. Of course, if the path had ever been run, it would have obviously failed and been the simplest type of error to rectify. This is why it's generally not an issue, and is actually nice, as it gives you direct visibility into dependencies. But on rare exception, when you don't test the path because it's dependent on time-dependent state and you are overly confident in the suitability of the untested code you're deploying to production, you get bit by a compiler that only quietly warns and the lack of static analysis. (Oh nice, writing this comment just made me discover https://github.com/rrrene/credo)
Can you explain what you mean about module naming? I’ve done a good amount of Elixir, and I’m puzzled because I don’t know what you are referring to at all.
Most libraries and frameworks I’ve used enforce things on your modules via behaviors which have nothing to do with naming. Perhaps you are talking about naming functions for behaviors? Because I think it’s the same in most languages that “overridden” or “implementations” of functions for modules/classes that use some kind of interface must be named the same as the interface definition.
How is it for performance as compared to:
- hand-written single-threaded C
- Python code stringing numpy calls together?
(I know these are very different things.) Whenever I dip into concurrency for performance I always feel like things could be so much better.
Probably not appropriate for CPU-bound loads like that.
Erlang/Elixir would be fine as an orchestration language calling out to C code to actually do the number crunching, but you likely won't see much of the benefit of the language doing that unless you're doing something more complicated in that layer. If it's a super thin layer of glue, Python is better suited (not least because Numpy is familiar and meant to work with the language)
The greatest achievement of Elixir is making the Erlang platform and ecosystem accessible to everyone. And that's because of its "Ruby-ness".
I learned Ruby with Rails, so in the same spirit you could learn Elixir with Phoenix and I really think it's a bona-fide approach to "graduate" to the BEAM world.
But, caveat emptor, the BEAM world is like an alien wunder-weapon: everything we take for granted in the modern web development world was already invented --with flying colors too-- in Erlang/BEAM, so there is a lot of overlap in terms of architecture solutions. In a Kubernetes/Istio world, would you go for a full BEAM deployment? I don't say it's not an already-solved problem, but what's the perfect mix ratio? It depends.
The overlap between K8s and BEAM is a good question. Even amongst experienced BEAM (especially Erlang) programmers, there's a lot of conflicting information.
From my limited understanding, Kubernetes is comparatively complicated, and can hamstring BEAM instances with port restrictions.
On the other hand, there's a rarely documented soft limit on communication between BEAM nodes (informally, circa 70 units, IIRC). Above this limit, you have to make plans based on sub-clusters of nodes, though I have certainly not worked at that level of complexity.
Would be interesting to hear what other people think about this specific subject.
I have no idea where this limit came from. I worked at WhatsApp[1], and while we did split nodes into separate clusters, I think our big cluster had around 2000 nodes when I was working on it.
Everything was pretty ok, except for pg2, which needed a few tweaks (the new pg module in Erlang 23 I believe comes from work at WhatsApp).
The big issue with pg2 on large clusters is locking of the groups when lots of processes are trying to join simultaneously. global:set_lock is very slow when there's a lot of contention: when multiple nodes send out lock requests simultaneously, and some nodes receive a request from A before B while some receive B before A, both A and B will release and retry later; you only get progress when there's a full lock. Applying the Boss node algorithm from global:set_lock_known makes progress much faster (assuming the dist mesh is or becomes stable). The new pg, I believe, doesn't take these locks anymore.
The other problem with pg2 is a broadcast on node/process death that's for backwards compatibility with something like Erlang R13 [2]. These messages are ignored when received, but in a large cluster that experiences a large network event, the amount of sends can be enormous, which causes its own problems.
Other than those issues, a large number of nodes was never a problem. I would recommend building with fewer, larger nodes over a large number of smaller nodes though; BEAM scales pretty well with lots of cores and lots of RAM, so it's nicer to run 10 twenty-core nodes instead of 100 dual-core nodes.
[1] I no longer work for WhatsApp or Facebook. My opinions are my own, and don't represent either company. Etc.
[2] https://github.com/erlang/otp/blob/5f1ef352f971b2efad3ceb403...
FWIW, the soft limit of ~70 nodes is about using the global module, which provides consistent state across the cluster (i.e. when you make a change, it tries to change all nodes at once).
The default Erlang Distribution was shown to scale up to ~300 nodes. After that, using sub-clusters is the way to go and relatively easy to set up: it is a matter of setting the "-connect_all false" flag and calling Node.connect/1 based on the data being fed by a service discovery tool (etcd, k8s, AWS config, etc).
PS: this data came from a paper. I am struggling to find it right now but I will edit this once/if I do.
Also, there are all the hooks needed to adapt and change it as you grow.
I would argue that when you reach above a hundred nodes, you will need to optimise yourself in any tech, though.
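To make the "-connect_all false" route concrete, a sketch (node names are made up; the list would normally come from service discovery):

    # Started with: iex --name app@10.0.0.1 --erl "-connect_all false"
    for node <- [:"app@10.0.0.2", :"app@10.0.0.3"] do
      Node.connect(node)
    end

    Node.list()  # => the nodes in our hand-built sub-cluster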
With BEAM you get in a node what you would get in a k8s cluster. If you only look at supervisor trees and you grok the concept, you're streets ahead of the average developer and their "distributed" systems knowledge.
I find this quite funny because it's my first time hearing I was supposed to be thinking of Elixir as a Ruby thing. I actually learnt about it from a concurrent computing class and it was always an Erlang thing, and now I know it as the magic sauce behind Discord that I always want to try and never find a good reason to.
> I always want to try and never find a good reason to
As someone who learns best by doing, what are some practical projects that someone could do to learn Elixir? I know that Elixir is quite capable of solving certain kinds of problems very elegantly, but maybe my experience hasn’t presented these kinds of problems yet. Outside of building a Discord-like server or a Phoenix web app, what other good practical projects/applications are there for Elixir?
I'm probably the crazy one in the community who is using Elixir for the most super-strange things. For example:
- as a custom DHCP server to do multiple concurrent PXE booting (among other things)
- as a system for provisioning on-metal deployments (like ansible but less inscrutable).
- as a system for provisioning virtual machines over distributed datacenters.
I'll probably also wind up doing DNS and HTTP+websocket layer-7 load balancing too by the end of the year. Probably large-size (~> 1TB) broadband file transfer and maybe even an object storage gateway by next year. I've rolled most of these things out to prod in just about a year. I honestly can't imagine doing all of these things in Go without a team of like 20.
Elixir sucks at:
- platform-dependent things, like iOS, Android, or SDL games or something,
- number-crunchy things, like a shoot-em-up, or HPC,
- anything which requires mutable bitmaps (someone this past weekend brought up "minecraft server").
Actually even desktop might be okay, especially if you pair it up with electron.
If you need to do some batching operations, such as reading from a queue, doing some processing, and publishing to S3, I can recommend Broadway: https://github.com/dashbitco/broadway
I have seen lots of codebases in lots of languages do this type of task, but aside from maybe Spark on the high end, I haven't seen it done better.
The beauty of Erlang is that your code reads like synchronous code, so it's easy to read/maintain, but it has all the power of the parallelism of async code.
I wrote one in Scala+Akka and the actor model is great. It's even better when the runtime supports preemption, like Elixir.
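For anyone curious what the Broadway pipeline recommended above looks like, a skeletal sketch (the SQS producer and all options here are assumptions; any producer module works):

    defmodule MyPipeline do
      use Broadway

      def start_link(_opts) do
        Broadway.start_link(__MODULE__,
          name: __MODULE__,
          producer: [module: {BroadwaySQS.Producer, queue_url: "..."}],
          processors: [default: [concurrency: 10]]
        )
      end

      @impl true
      def handle_message(_processor, message, _context) do
        # transform the payload here; publishing to S3 would live in a
        # handle_batch/4 callback with a batcher configured
        message
      end
    end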
> I know it as the magic sauce behind Discord that I always want to try and never find a good reason to
Don't ever want to build a web app? That's pretty much Elixir's sweet spot, IMO. You'll get great productivity, great scalability, performance that's better than popular interpreted languages and a code base that's easy to reason about.
You'll also be able to do a lot from within the VM instead of relying on external services.
It will never be as popular though.
And it's not like other languages stay stagnant: JS and TS, Java with green threads, C# with actor and LiveView-like libraries.
Same. I had never even heard there was a relation to Ruby; I always thought it was just a functional language for the Erlang runtime. I had no idea the syntax was "Ruby-like".
The article says nothing about them switching to Rust. They implemented a specific data structure in Rust using Elixir NIFs (native implemented functions). This is equivalent to calling a C library from other languages.
Others have already repeatedly mentioned this was a performance optimization. Elixir/Erlang is not particularly good for heavily CPU-bound tasks; this is by design. For lack of a better source, here is a post on the Elixir forum from Robert Virding (one of the creators of Erlang) on the subject: https://elixirforum.com/t/on-why-elixir/34038/62
Still a very impressive use of Elixir IMHO.
They haven't switched; they just use Rust in some places for performance improvements.
The best things about Elixir are Mix and Phoenix. We can all talk about how well it behaves under load on multicore machines, but that is equally true of Erlang. What pushes Elixir beyond Erlang is an advanced macro language that allows for things like Ecto, Mix with modern Ruby-style gem+rake dependency management, and a really solid testing framework.
Elixir/Phoenix is really good. And the ecosystem is also pretty solid.
Pros:
* Functional language
* Multicore support built in
* Mix
* Phoenix
* REPL
* solid ecosystem of most needed tools
Cons are:
* Functional language
* Still niche adoption, not many talented people to pick from.
* If you are deploying via a release (as you should), Mix goes away in production
("Functional language" can be a plus or a minus depending on the people reading it, etc.)
Now things like LiveView are just cherry on top. In general Elixir/Phoenix is a full package.
Phoenix is absolutely necessary for the success of Elixir, because languages at this level of abstraction need a web framework to thrive.
I'm not sure I find Ecto and Phoenix as central to Elixir's value proposition in general. I'm looking a lot at Nerves (IoT) and Membrane (media serving). Having a strong web framework simplifies things and Phoenix is good there. But there are a lot of things to like about Elixir/Erlang/BEAM.
Tbh, I feel comfortable writing Elixir and dealing with OTP. But while I find Plug and Ecto to be very interesting projects, I've not touched Phoenix at all. I just didn't feel the need.
Why should I deploy with a release?
I haven't done a release with Elixir or Phoenix yet and the documentation about it is quite confusing. There are many different ways but I have no idea which one to choose.
I went through this a short while ago for a new side project. You can totally use Mix, and you'll be fine getting going with it initially, but you lose a lot of the benefits of BEAM. A proper BEAM "release" has several key advantages. Most of them are in the Elixir docs [1] but I'll point out the ones I like.
* You don't need to include the source code because you have the pre-compiled code in the release, and that includes packages as well. No need to `mix deps.get` on production. Releases don't even require Erlang / Elixir on the production box because that's also baked directly into the release. As long as your architecture is the same as your build machine's, you get a super lightweight artifact and a simplified production stack.
* It's very easy to configure the BEAM vm from releases. I think most of this is possible through mix with everything installed on the box but using a release you get it put into the release artifact and there's no fussing around after that.
* It also makes life easier when you start using umbrella applications. You can keep your code together and cut releases of individual applications under the umbrella. That lets you scale your application while keeping it together in a single unit (if that's your thing).
There are other benefits, but ultimately it's the way the erlang/elixir/beam community seems to prefer. For me this was the selling point, as I expect tooling will continue in that direction rather than supporting Mix in production.
[1] - https://elixir-lang.org/getting-started/mix-otp/config-and-r...
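For reference, a release is declared in your mix.exs and built with MIX_ENV=prod mix release; a minimal sketch (app name and version are made up):

    defmodule MyApp.MixProject do
      use Mix.Project

      def project do
        [
          app: :my_app,
          version: "0.1.0",
          # The release definition; `mix release` picks this up
          releases: [
            my_app: [
              include_executables_for: [:unix]
            ]
          ]
        ]
      end
    end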
An OTP/Mix release will produce an executable which bundles your code with the whole BEAM runtime, so for a start, it means you won't need to install Elixir on the host machine / server.
OTP based release executables also have an internal versioning system. When people talk about the Erlang/Elixir hot code swapping feature, OTP releases are the basis for it. But it needs a bit of extra work beyond just creating the release binary itself.
Yeah it's super confusing, especially since mix releases are now a thing, but weren't before.
I've heard Chris McCord (the author of Phoenix) say he doesn't use Elixir releases in production in most of his consulting company's client work. He talked about it in some podcast like 6 months ago. I think they just run the same mix command as you would in development but he wasn't 100% clear on that.
But yeah, it's not easy to reason about it, and also if you decide to use releases it's a bummer you lose all of your mix tasks. You can't migrate your database unless you implement a completely different strategy to handle migrations. But then if you ever wanted to do anything else besides migrations that were mix tasks, you'd have to port those over too.
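The usual workaround for migrations, roughly as documented in the Phoenix/EctoSQL guides (app and module names here are assumed), is to expose a function and run it with bin/my_app eval "MyApp.Release.migrate":

    defmodule MyApp.Release do
      @app :my_app

      # Runs all pending Ecto migrations from within the release,
      # no Mix required
      def migrate do
        Application.load(@app)

        for repo <- Application.fetch_env!(@app, :ecto_repos) do
          {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
        end
      end
    end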
"Not many talented people to choose from" applies to any language, though. And a functional language weeds out all the people who aren't willing to try something new.
As someone with a background in Objective-C, Swift, C++, C# and Java, and currently using Ruby, I'm looking for my next language for web development. Elixir sounds like a step up from Ruby, but I really miss static typing, and I find it hard to justify investing time in yet another language that doesn't have it.
But what are the alternatives? I'm looking for something with static typing, good editor support, mature community projects (e.g. testing on par with rspec), faster than Ruby (though most statically typed languages tend to be) and if it supports some functional paradigms that could be a plus (I dabbled in F# and SML previously).
- Scala is an option, but build times sound like an issue, and the JVM is a bit 'heavy' (e.g. startup times, min memory usage).
- Haskell sounds cool, but maybe a bit too esoteric? (i.e. is it worth the trouble, does it have all the middleware I'm used to)
- C# could be an option, but is the open source/community support there? (if you're not a corporate shop, doing corporate things).
- And then there's Rust, which I'm fascinated by, but I'm also worried that I'll be less productive worrying about lifetimes all the time, and that it's less mature (though growing fast, and seems to have attracted an amazing community of very talented people).
I'm also interested in ways to use a language like that in the frontend - Scalajs sounds pretty mature, C# has Blazor and Rust seems like one of the best ways to target WebAssembly.
So what is a boy to do? Stick to Ruby until the Rust web story is more mature? Try out Elixir in the meantime? Join the dark side and see what C# web dev is like these days? It can be really hard evaluating ecosystems like that from the outside.
Gleam[1] is a typed language on the BEAM. It's still in its early days, more so than Rust. May still be worth keeping an eye on. Nim[2] and Crystal[3] also exist. No idea what their web situation is like, but Nim has a JS compile target, that might be interesting.
[1] https://github.com/gleam-lang/gleam [2] https://nim-lang.org/ [3] https://crystal-lang.org/
https://github.com/wende/elchemy
> but [in Rust] I'm also worried that I'll be less productive worrying about lifetimes all the time
I'd avoid getting too concerned about this! Explicit lifetime annotations are nowhere near as prevalent in 'modern' (2015+) Rust as they were in the early days, and it's somewhat rare to need to think about them.
The most helpful advice I had on this was to avoid passing references in Rust beyond simple cases unless there is a real need - owning data is much simpler, and clones are generally fine until you get to optimising a program.
If you're interested in Rust for web dev, now is a great time to jump in - while not as mature as Ruby or C# in terms of ecosystem, Rocket is now usable with stable Rust, and Warp is (IMO) a close-to-best-in-class library for building HTTP servers.
I spent the last 8 months learning Elixir, Ecto & Phoenix etc... I am about to leave it and spend more time in C# as I really miss static typing, I also miss the way the documentation is written, the amount of material to read on .net and C#. The library, community and packages are richer for a start. Will be interesting to see if I miss Elixir at all.
If you're willing to be patient, I think that in the next year Elixir is going to get an add-on aggressive static typing library (better than Dialyzer) enabled by compiler hooks. It's going to happen, and it's going to be very good.
I've been learning F# and OCaml the last couple weeks and I really like the syntax. (I have been using Elixir for 4 years.)
The main thing I don't like about OCaml is that it doesn't have function overloading. I love the arity-based (number of arguments in a function) function signatures in Elixir, but that's not possible in OCaml since it curries functions.
E.g. in OCaml, calling a function with fewer arguments than expected will return a function that takes the remaining arguments:
    let add = fun x y -> x + y
    let add_four = add 4  (* returns a function expecting 1 arg *)
    let nine = add_four 5  (* returns 9 *)
Not a big deal, but I would rather lose this built-in currying in exchange for arity-based function overloading.
Otherwise, OCaml is a fast language, both compiling and runtime performance.
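For contrast, the Elixir behavior described above, in a made-up module: add/2 and add/3 are simply two different functions.

    defmodule Math do
      # Same name, different arities: both coexist, no currying involved
      def add(x, y), do: x + y
      def add(x, y, z), do: x + y + z
    end

    Math.add(4, 5)      # => 9
    Math.add(1, 2, 3)   # => 6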
I agree that the lack of polymorphism is annoying. A solution to this would be modular implicits. This would allow the built-in currying to work as well.
This. Although I would not use Reason, since the compiler layer, BuckleScript, changed its name to ReScript to rebrand as its own frontend language and left Reason holding the bag. There is no reference to OCaml in any documentation that was once under the BuckleScript project. It even created its own syntax, different from Reason and OCaml, that looks more like JavaScript.
It basically should have been a new project and have had nothing to do with BuckleScript.
The worst part is that the owner of BuckleScript even owned some properties that had the name "reasonml" in them (like reasonml.org and the reasonml Discord group, which weren't owned by the Reason team) and then he pointed all those things to ReScript. Just the confusion did some serious damage to Reason.
If JVM isn't a blocker try Kotlin. It compiles faster than Scala and has a great IDE experience. Runtime performance is very good of course with the usual JVM caveats (high memory usage, need to use recent JDK for latency optimised GC if you need that).
Could be worth your time to check out Vapor, the server-side Swift web framework. It obviously doesn't have the ecosystem of the more popular frameworks, but it has been under active development with corporate sponsorship for a few years now.
> Elixir sounds like a step up from Ruby, but I really miss static typing, and I find it hard to justify investing time in yet another language that doesn't have it.
Static typing serves basically two purposes: correctness through static type analysis and enabling compiler optimizations.
Like many languages that don’t mandate static typing, Elixir has available static type analysis solutions; in Elixir's case (as for Erlang) it's Dialyzer, which does more static analysis than just what is usually thought of as typechecking.
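For anyone who hasn't seen it, Dialyzer works off optional typespec annotations plus success typing; a trivial made-up example:

    defmodule Users do
      # Dialyzer reads @spec attributes and flags code paths that can
      # never satisfy them
      @spec fetch(integer()) :: {:ok, String.t()} | :error
      def fetch(1), do: {:ok, "alice"}
      def fetch(_), do: :error
    end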
I love Elixir, but I will never advocate Dialyxir for somebody looking for static typing. It's super slow, it's not as powerful, and rarely (but sometimes) it rejects programs that are valid.
Elixir seems to encourage simple data structures — everything is made up of basic data structures, and since there's no encapsulation, libraries seem to be built with an attitude of "developers are gonna inspect everything so we might as well make things clear and simple". I only noticed this in contrast to libraries in popular OO languages (most recently Python) where everything is done through objects that often have inscrutable instance variables and "missing" methods/methods that library authors simply haven't gotten around to implementing.
Having a small library of functions operating on a small number of data structures makes programming a lot more intuitive than a large number of classes, each with their bespoke set of things you can do to them.
Instead of "lack of encapsulation", it's more "lack of private state". In an object oriented language, you have private instance variables and methods to manipulate them. In a functional language, you have functions which manipulate data structures.
If possible, the data structure would have straightforward fields which are public and documented. If necessary, you might make it opaque, expecting only the library which manages the data structure to manipulate it.
One way to think about this is that in functional programming, the "verbs" (functions and manipulation patterns like map) are more generic and the "nouns" (data structures) are less important than in OO languages. See http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...
It encourages a functional way of thinking, IMO: since you can't have mutable objects, use immutable data structures and functions which alter them, such that where you may have had:
    x.f(y)
you now have something like:
    x = f(x, y)
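Concretely, in Elixir that pattern looks like this (a made-up map update):

    map = %{count: 1}
    map = Map.update!(map, :count, &(&1 + 1))  # rebind the name to a new value
    map.count  # => 2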
I've found that years of functional programming have mangled my brain. Now when I use something like Python, I only use dataclasses and accessor methods on them, or write methods which perform and return the result of some computation based on the values of the object.
I always joke with my team that the best thing I did for my Python was learn Erlang, and that was when I was stuck with NamedTuples and not DataClasses.
Every time I think about using Phoenix, I get scared off by warnings about how not knowing BEAM can result in serious problems. I’m not sure if that conclusion is justified but it’s where I end up every time. It’s odd and unfortunate that Elixir and especially Phoenix seem to have invested heavily in being approachable but the rest of the ecosystem seems to have warning signs posted all over the place.
Is this a fair impression? Or is it possible to run Phoenix in production and gradually learn more about the BEAM, leveling up as you encounter new challenges?
We work hard exactly so that you can run Phoenix in production and gradually learn more about the BEAM along the way!
One recent example is the built-in dashboard showing all the different data the VM and the framework provide: https://github.com/phoenixframework/phoenix_live_dashboard/ - at first it may be daunting but providing a web-based experience to help familiarize with the building blocks is hopefully a good first step. We also added tooltips along the way to provide information about new concepts.
The same applies to Elixir: learning all of functional programming, concurrent programming, and distributed programming would definitely be too much to do at once, so we do our best to present these ideas step by step.
For what is worth, this is also true for Erlang. It has tools like Observer (which was used as inspiration for the dashboard) and a great deal of learning materials.
One last piece of advice is to not drink the Kool-Aid too much. For example, you will hear people talking about not using databases, about distributed cluster state, etc., and while all of this is feasible, resist the temptation of playing with those things until later, unless the domain you want to work in requires explicitly tackling those problems.
As a daily user of Elixir since 2016, having come from a Python/Django background, I can assure you that you can know very little about OTP/BEAM and still be immensely productive. But once you learn the underlying concepts that gird Elixir, so many solutions that normally require queues/caching/back pressure/state machines/distributed architectures are all possible with Elixir itself. I can honestly say that Elixir is the most powerful I've ever felt as an engineer in any language.
I suppose it depends on how specialized or 'heavy' your needs are. I suspect that for a decent portion of web applications, Phoenix might last you longer than others before you would need to dive into the nitty gritty in either language.
In my use case, most of my work is rather straightforward web apps. When using Rails I have occasionally run into issues where I had to tweak things or dive a bit deeper. With Phoenix these same projects would've been fine for a while longer.
Of course, when you do need to dive in, perhaps an advantage of Rails/Django/Laravel/etc. is that the ecosystem is larger, so your problem might be solved without having to really figure out more.
No, it can't really. I'd even say that Phoenix largely uses BEAM and its GenServers under the hood "so that you don't have to", while still letting you reap their massive benefits.
You'd be perfectly fine (and it's not uncommon) writing a production app without writing a single GenServer (kinda the cornerstone of BEAM) yourself, and you'd still get a highly performant web app that can handle a ton of concurrent traffic out of the box.
Then you can take it from there, deepen your knowledge, and write your own GenServers and other BEAM goodies where and when you need them. Phoenix can guide you from the very basics, helping you write apps in simple fashion very much like in Rails, Laravel, Django, etc. It just has a larger range of possibilities on the upper end, letting you build massive high-traffic services if you need to.
It's totally possible to run Phoenix in prod. I started learning elixir and am using Phoenix for https://getlounge.app. I don't use any of the advanced BEAM features like GenServers etc.
It's really not that bad to learn, to deploy and to work with. You don't have to go all-in distributed system, shared state etc to get a lot of benefit from Ex and the smart decisions it has made.
> Erlang has some bad rep for "weird syntax", but it's completely unfounded, the syntax is actually quite simple and clean.
Anything that isn't Algol-flavored and either procedural or class-based OOP tends to be seen as weird syntax, and much more so if it ticks both boxes.
It's not as much a matter of simple and clean as familiar in a world where almost everything people will be exposed to first tends to come from a narrow design space.
Elixir is layered, making it easy to learn and master. You can get pretty far with Phoenix without ever understanding (or even knowing about) the more fundamental building blocks or the runtime. In large part, this is because of its ruby-inspired syntax. You'll have to adjust to immutability, but that's pretty much it.
Then one day you'll want to share state between requests and you'll realize that the immutability (which you're already comfortable with at this point) goes beyond just local variables: it's strictly enforced by these things called "processes". And you'll copy and paste a higher-level construct like an Agent or Genserver and add the 1 line of code to this root supervisor that was just a file auto-generated in your project. But that'll get you a) introduced to the actor model and b) thinking about messaging while c) not ever worrying about or messing up concurrency.
Then you'll want to do something with TCP or UDP and you'll see these same patterns cohesively expressed between the runtime, the standard library and the language.
Then you'll wan to do something distributed, and everything you've learnt about single-node development becomes applicable to distributed systems.
Maybe the only part of Elixir which can get complicated are Macros / metaprogramming. But you can get far without ever understanding this, and Phoenix is so full of magic (which isn't a good thing), that by the time you do need it, you'll certainly have peaked behind the covers once or twice.
The synergy between the runtime, standard library and language, backed by the actor model + immutability is a huge productivity win. It's significantly different (to a point where I think it's way more accurate to group Ruby with Go than with Elixir), but, as I've tried to explain, very approachable.
This is my main problem with elixir. Even for senior developers it's hard to know how the building blocks and runtime works and since all abstractions are leaky, you'll end up with mysterious issues that only make sense once you understand all the layers. And in case of elixir there are lots.
In my first 6 months of Elixir development, I got very far with just modules, structs, and functional programming.
A common pitfall for new Elixir developers is to reach features that are overkill.
Metaprogramming, gen servers, protocols, etc, are powerful, but you don't need to know them to start building apps with Phoenix.
But annotations are just tags that do nothing, and god knows where the code that actually does anything as a result of them resides.
Isn't it well known that GenServers can become severe bottlenecks unless you know the inner workings of everything to the point where you're an expert?
I'm not an Elixir expert or even used a GenServer in practice but I remember reading some warnings about using GenServers around performance because they can only handle 1 request at a time and it's super easy to bring down your whole system if you don't know what you're doing.
This blog post explains how that happens: https://www.cogini.com/blog/avoiding-genserver-bottlenecks/
And I remember seeing a lot of forum posts around the dangers of using GenServers (unless you know what you're doing).
It's not really as easy as just copy / pasting something, adding 1 line and you're done. You need to put in serious time and effort to understand the intricacies of a very complex system (BEAM, OTP) if you plan to leave the world of only caring about local function execution.
And as that blog post mentions, it recommends using ETS but Google says ETS isn't distributed. So now suddenly you're stuck only being able to work with 1 machine. This is a bit more limiting than using Python or Ruby and deciding to share your state in Redis. This really does typically require adding a few lines of code and now your state is saved in an external service and now you're free to scale to as many web servers you want until Redis becomes a bottleneck (which it likely never will). You can also freely restart your web servers without losing what's in Redis.
I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages. And it's especially more complicated / less pragmatic than other tech stacks because almost every other tech stack all use the same tools to share external state so it's a super documented and well thought out problem.
What Erlang/Elixir bring to the table is that there is a good vocabulary and introspection tools to observe these issues. For example, if you have a GenServer as a bottleneck, you can start Observer (or the Phoenix LiveDashboard or similar), order processes by message queue, and find which one is having trouble to catch-up with requests. So we end-up talking about it quite frequently - it is easier to talk about what you see!
If you need distributed data, then by all means, use Redis or PostgreSQL or similar. ETS is not going to replace it. What ETS helps is with sharing data within the same node. For example, if you have a machine with 8 cores, you may start 8 Ruby/Python instances, one for each core. If the cache is stored on Redis, you will do a network roundtrip to Redis every time you need the data. Of course you can also cache inside each instance but that can lead to large memory usage, given each instance has its own memory space. This may be accentuated depending on the data, such as caching geoip lookups, as mentioned in the post you linked.
In Elixir, if you have 8 cores, it is a single instance. Therefore, you could cache geoip lookups in ETS and it can be shared across all cores. This has three important benefits: lower memory usage, reduced latency, and increased cache-hit ratio in local storages. At this point, you may choose to not use Redis/DB at all and skip the additional operational complexity. Or, if you prefer, you can still fallback to Redis, which is something I would consider doing if the geoip lookups are expensive (either in terms of time or money).
In any case, ETS is completely optional. If you just want to go to Redis on every request, you can just do that too! And for what is worth, if I need distributed state, I just use to the database too.
GenServer is immaterial - it's the single process that can be a bottleneck. Just like a single thread can be in other languages. If you need multiple threads in other languages, you'll need multiple processes in Erlang. The nice thing here being the cost of spinning up more (even one per request) is negligible, the syntax is trivial, and the error model in the event something goes wrong is powerful, whereas threads in other languages are so heavy weight you have to size the pool, they can be complicated to work with, and if things go wrong your error handling options tend to be limited.
Use ETS in the places you'd use in memory caching. That's it. That's what it's meant for. If you need distributed consistency, it's not the right answer. If you need a single source of truth, it's not the right answer. But if you need a local cache, that does not require strong consistency(and that's very, very common in distributed systems), it works great.
The danger is using a single process of the GenServer instead of multiple, so you can get single process bottleneck that won’t use multiple cores. You don’t have to know any intricacies of BEAM or OTP to know and design around using multiple process instances of the GenServer.
> I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages.
You can use Redis in Elixir as well. Saying that Elixir is worse at distribution than Python/Ruby because ETS isn’t distributed is a bit like saying Python is bad at distribution because objects are not distributed. It’s especially strange since Elixir ships with a distribution system (so you can access ETS from other machines) while your other example languages do not.
There is a mostly API-compatible distributed version of ETS in OTP, called DETS. And a higher-level distributed database built on top of ETS/DETS called Mnesia, again, in OTP. So, no, you aren't.
> I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages. And it's especially more complicated / less pragmatic than other tech stacks because almost every other tech stack all use the same tools to share external state so it's a super documented and well thought out problem.
You can use the same external tools in Elixir as on platforms that don't have a full distributed database built in as it is in the OTP, so, I don't see how the fact that those external tools are widely used on other platforms makes Elixir harder.
Can you explain something more about why you think this is not a good thing?
Is it like Laravel (PHP)? Lots of magic so it is very easy to get started but a hell to maintain?
I really like to get started with Elixir (for webdev) but I am not sure if I should start with Phoenix.
1 - How ok are you with just accepting things as documenting, like having to put `use HelloWeb, :controller` in each controller. Personally, I _had_ to understand what this was doing and how, but I imagine some people don't care so much.
2 - Do you need / want to do anything outside what's an idiomatic Phoenix. As a simple example, I think Plug.Parser is wrong to raise an exception when invalid json is given to it (shitty user data is hardly "exceptional" and I don't need error logs with poor signal to noise).
It makes for great demos but it basically means “this just works so you don’t need to know how it works”.
The problem is you always need to know how your tools work, often sooner rather than later, as you inevitably end up with edge cases and bugs in your application that simply can’t be resolved without that understanding.
You'll lose some things (no built-in auth but there's libs) and win some (easy websockets and pub/sub).
Also LiveView is fun.
See below (jswny) for more clarification.
Most libraries and frameworks I’ve used enforce things on your modules via behaviors which have nothing to do with naming. Perhaps you are talking about naming functions for behaviors? Because I think it’s the same in most languages that “overridden” or “implementations” of functions for modules/classes that use some kind of interface must be named the same as the interface definition.
How is it for performance as compared to
- Hand written single-threaded C - Python code stringing numpy call together.
(I know these are very different things). Whenever I dip into concurrency for performance I always feel like things could be so much better.
Erlang/Elixir would be fine as an orchestration language calling out to C code to actually do the number crunching, but you likely won't see much of the benefit of the language doing that unless you're doing something more complicated in that layer. If it's a super thin layer of glue, Python is better suited (not least because Numpy is familiar and meant to work with the language)
Dead Comment
I learned Ruby with Rails, so in the same spirit you could learn Elixir with Phoenix and I really think it's a bona-fide approach to "graduate" to the BEAM world.
But, caveat emptor, the BEAM world is like an alien wunder-weapon: everything we take for granted in the modern web development world was already invented --with flying colors too-- in Erlang/BEAM so there is a lot of overlapping in terms of architecture solutions. In a Kubernetes/Istio world, would you go for a full BEAM deployment? I don't say it's not an already solved problem but what's the perfect mix-ratio? It depends.
On the other hand, there's a rarely documented soft limit on communication between BEAM nodes (informally, circa 70 units, IIRC). Above this limit, you have to make plans based on sub-clusters of nodes, though I have certainly not worked at that level of complexity.
Would be interesting to hear what other people think about this specific subject.
Everything was pretty ok, except for pg2, which needed a few tweaks (the new pg module in Erlang 23 I believe comes from work at WhatsApp).
The big issue with pg2 on large clusters, is locking of the groups when lots of processes are trying to join simultaneously. global:set_lock is very slow when there's a lot of contention because when multiple nodes send out lock requests simultaneously and some nodes receive a request from A before B and some receive B before A, both A and B will release and retry later, you only get progress when there's a full lock; applying the Boss node algorithm from global:set_lock_known makes progress much faster (assuming the dist mesh is or becomes stable). The new pg I believe doesn't take these locks anymore.
The other problem with pg2 is a broadcast on node/process death that's for backwards compatibility with something like Erlang R13 [2]. These messages are ignored when received, but in a large cluster that experiences a large network event, the amount of sends can be enormous, which causes its own problems.
Other than those issues, a large number of nodes was never a problem. I would recommend building with fewer, larger nodes over a large number of smaller nodes though; BEAM scales pretty well with lots of cores and lots of ram, so it's nicer to run 10 twenty core nodes instead of 100 dual core nodes.
[1] I no longer work for WhatsApp or Facebook. My opinions are my own, and don't represent either company. Etc.
[2] https://github.com/erlang/otp/blob/5f1ef352f971b2efad3ceb403...
The default Erlang Distribution was shown to scale up to ~300 nodes. After that, using sub-clusters is the way to go and relatively easy to setup: it is a matter of setting the "-connect_all false" flag and calling Node.connect/2 based on the data being fed by a service discovery tool (etcd, k8s, aws config, etc).
PS: this data came from a paper. I am struggling to find it right now but I will edit this once/if I do.
Also there all the hooks needed to adapt and change it as you grow.
I would argue that when you reach above a hundred nodes, you will need to optimise yourself in any tech though
with BEAM you get in a node what you would get in a k8s cluster. if you only look at supervisor trees and you grok the concept you’re streets ahead the average developer and their “distributed” systems knowledge.
As someone who learns best by doing, what are some practical projects that someone could do to learn Elixir? I know that Elixir is quite capable of solving certain kinds of problems very elegantly, but maybe my experience hasn’t presented these kinds of problems yet. Outside of building a Discord-like server or a Phoenix web app, what other good practical projects/applications are there for Elixir?
- as a custom DHCP server to do multiple concurrent PXE booting (among other things)
- as a system for provisioning on-metal deployments (like ansible but less inscrutable).
- as a system for provisioning virtual machines over distributed datacenters.
I'll probably also wind up doing DNS and HTTP+websocket layer-7 load balancing too by the end of the year. Probably large-size (~> 1TB) broadband file transfer and maybe even object storage gateway by next year. I've rolled most of these things out to prod in just about year. I honestly can't imagine doing all of these things in Go, without a team of like 20.
Elixir sucks at:
- platform-dependent, like IOS, android, or like SDL games or something,
- number-crunchy, like a shoot em up, or HPC.
- something which requires mutable bitmaps (someone this past weekend brought up "minecraft server").
Actually even desktop might be okay, especially if you pair it up with electron.
I have seen lots of codebases in lots of languages do this type of task, but aside from maybe spark on the high end, i haven't see it done better.
The beauty of erlang is your code reads like synchronous code so it's easy to read/maintain, but it has all the power of the parallelism of async code.
I wrote one in Scala+Akka and the actor model is great. It's even better when the runtime supports preemption, like Elixer.
Don't ever want to build a web app? That's pretty much Elixir's sweet spot, IMO. You'll get great productivity, great scalability, performance that's better than popular interpreted languages and a code base that's easy to reason about.
You'll also be able to do a lot from within the VM instead of relying on external services.
It will never be as popular though.
And it's not like other languages stay stagnant. JS and TS, Java with green threads, C# with actor and LiveView like libraries.
Dead Comment
Still a very impressive use of elixir IMHO.
They haven't switched, just use Rust at some places for performance improvements.
Elixir/Phoenix is really good. And the ecosystem is also pretty solid.
Pros:
* Functional language * Multicore support built in * Mix * Phoenix * REPL * solid ecosystem of most needed tools
Cons are:
* Functional language * Still niche adoption, not many talented people to pick from. * If you are deploying via release ( as you should ) mix is going away in production
Can be plus or minus depending on people reading it etc.
Now things like LiveView are just cherry on top. In general Elixir/Phoenix is a full package.
I'm not sure I find Ecto and Phoenix as central to Elixirs value proposition in general. I'm looking a lot at Nerves (IoT) and Membrane (media serving). Having a strong web framework simplifies things and Phoenix is good there. But there are a lot of things to like about Elixir/Erlang/BEAM.
Why should I deploy with release?
* You don't need to include the source code because you have the pre-compiled code in the release, that includes packages as well. No need to `mix deps.get` on production. They don't even require Erlang / Elixir on the production box because that's also baked directly into the release. As long as your architecture is the same as your build machine, you get a super light weight artifact and a simplified production stack.
* It's very easy to configure the BEAM vm from releases. I think most of this is possible through mix with everything installed on the box but using a release you get it put into the release artifact and there's no fussing around after that.
* It also makes life easier when you start using umbrella applications. You can keep your code together and cut releases of individual applications under the umbrella. That lets you scale your application while keeping it together in a single unit (if that's your thing).
There are other benefits, but ultimately it's the way the Erlang/Elixir/BEAM community seems to prefer. For me this was the selling point, as I expect tooling will continue in that direction rather than supporting Mix in production.
[1] - https://elixir-lang.org/getting-started/mix-otp/config-and-r...
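For reference, building and running a release looks roughly like this (assuming Elixir 1.9+, where `mix release` is built in; my_app is a placeholder):

    # build a self-contained release for production
    MIX_ENV=prod mix deps.get --only prod
    MIX_ENV=prod mix compile
    MIX_ENV=prod mix release

    # the artifact under _build/prod/rel/my_app bundles your compiled
    # code, your dependencies, and the Erlang runtime itself
    _build/prod/rel/my_app/bin/my_app start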
OTP-based release executables also have an internal versioning system. When people talk about the Erlang/Elixir hot code swapping feature, OTP releases are the basis for it. But it needs a bit of extra work beyond just creating the release binary itself.
I've heard Chris McCord (the author of Phoenix) say he doesn't use Elixir releases in production in most of his consulting company's client work. He talked about it in some podcast like 6 months ago. I think they just run the same mix command as you would in development but he wasn't 100% clear on that.
But yeah, it's not easy to reason about, and if you decide to use releases it's a bummer that you lose all of your Mix tasks. You can't migrate your database unless you implement a completely different strategy for migrations. And if you ever wanted to do anything else that was a Mix task, you'd have to port that over too.
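The usual workaround for migrations is a small module you call through the release's eval command; a minimal sketch, assuming ecto_sql 3.1+ (MyApp and :my_app are placeholders):

    defmodule MyApp.Release do
      @app :my_app

      # run all pending Ecto migrations from within a release,
      # where Mix (and `mix ecto.migrate`) is unavailable
      def migrate do
        Application.load(@app)

        for repo <- Application.fetch_env!(@app, :ecto_repos) do
          {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
        end
      end
    end

You'd then run it after deploy with something like `bin/my_app eval "MyApp.Release.migrate()"`.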
But what are the alternatives? I'm looking for something with static typing, good editor support, mature community projects (e.g. testing on par with RSpec), faster than Ruby (though most statically typed languages tend to be), and if it supports some functional paradigms that could be a plus (I dabbled in F# and SML previously).
- Scala is an option, but build times sound like an issue, and the JVM is a bit 'heavy' (e.g. startup times, min memory usage).
- Haskell sounds cool, but maybe a bit too esoteric? (i.e. is it worth the trouble, does it have all the middleware I'm used to)
- C# could be an option, but is the open source/community support there? (if you're not a corporate shop doing corporate things).
- And then there's Rust, which I'm fascinated by, but I'm also worried that I'll be less productive worrying about lifetimes all the time, and that it's less mature (though growing fast, and seems to have attracted an amazing community of very talented people).
I'm also interested in ways to use a language like that on the frontend: Scala.js sounds pretty mature, C# has Blazor, and Rust seems like one of the best ways to target WebAssembly.
So what is a boy to do? Stick to Ruby until the Rust web story is more mature? Try out Elixir in the meantime? Join the dark side and see what C# web dev is like these days? It can be really hard evaluating ecosystems like that from the outside.
[1] https://github.com/gleam-lang/gleam
[2] https://nim-lang.org/
[3] https://crystal-lang.org/
https://github.com/wende/elchemy
I'd avoid getting too concerned about this! Explicit lifetime annotations are nowhere near as prevalent in 'modern' (2015+) Rust as they were in the early days, and it's somewhat rare to need to think about them.
The most helpful advice I had on this was to avoid passing references in Rust beyond simple cases unless there is a real need - owning data is much simpler, and clones are generally fine until you get to optimising a program.
If you're interested in Rust for web dev, now is a great time to jump in - while not as mature as Ruby or C# in terms of ecosystem, Rocket is now usable with stable Rust, and Warp is (IMO) a close-to-best-in-class library for building HTTP servers.
Speaking about Elixir/Erlang alone, it's an interesting experience, because learning it properly is very different from learning almost any other language.
So if you're keen on moving to a new language for the sake of intellectual curiosity, then you will probably enjoy the trip.
I've been learning F# and OCaml for the last couple of weeks and I really like the syntax. (I have been using Elixir for 4 years.)
The main thing I don't like about OCaml is that it doesn't have function overloading. I love the arity-based function signatures in Elixir (functions with the same name but different numbers of arguments), but that's not possible in OCaml since its functions are curried.
E.g. in OCaml, calling a function with fewer arguments than expected returns a function that takes the remaining arguments; a minimal illustration (the add function here is hypothetical):
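    (* partial application: add 2 supplies only the first argument *)
    let add x y = x + y      (* val add : int -> int -> int *)
    let add2 = add 2         (* val add2 : int -> int *)
    let five = add2 3        (* five = 5 *)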
Not a big deal, but I would rather lose this built-in currying in exchange for arity-based function overloading. Otherwise, OCaml is a fast language, in both compile and runtime performance.
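For contrast, a minimal sketch of the Elixir-style arity overloading being described (Greeter is a hypothetical module):

    defmodule Greeter do
      # greet/1 and greet/2 are distinct functions, distinguished by arity
      def greet(name), do: "Hello, #{name}!"
      def greet(name, greeting), do: "#{greeting}, #{name}!"
    end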
It basically should have been a new project with nothing to do with BuckleScript.
The worst part is that the owner of BuckleScript even owned some properties with "reasonml" in the name (like reasonml.org and the ReasonML Discord group, which weren't owned by the Reason team), and then he pointed all of them at ReScript. The confusion alone did serious damage to Reason.
My advice is to stick to OCaml.
Static typing serves basically two purposes: correctness through static type analysis and enabling compiler optimizations.
Like many languages that don't mandate static typing, Elixir has static type analysis available; in Elixir's case (as for Erlang) it's Dialyzer, which does more static analysis than just what is usually thought of as typechecking.
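A minimal sketch of what that looks like, assuming the community dialyxir package for running Dialyzer from Mix (Math is a hypothetical module):

    defmodule Math do
      # a typespec that Dialyzer can check callers against
      @spec double(integer()) :: integer()
      def double(n), do: n * 2
    end

    # Dialyzer would flag a call like Math.double("two"),
    # since "two" can never satisfy integer()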
Can be written functionally, everything is typed, very active community
Now objects feel very weird.
Having a small library of functions operating on a small number of data structures makes programming a lot more intuitive than a large number of classes, each with its own bespoke set of things you can do to it.
If possible, the data structure would have straightforward fields which are public and documented. If necessary, you might make it opaque, expecting only the library which manages the data structure to manipulate it.
One way to think about this is that in functional programming, the "verbs" (functions and manipulation patterns like map) are more generic and the "nouns" (data structures) are less important than in OO languages. See http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...
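A minimal Elixir illustration of the generic "verbs" point:

    # the same generic verb works across lists, maps, and ranges
    Enum.map([1, 2, 3], &(&1 * 2))                        #=> [2, 4, 6]
    Enum.map(%{a: 1, b: 2}, fn {k, v} -> {k, v * 2} end)  #=> [a: 2, b: 4]
    Enum.map(1..3, &(&1 * 2))                             #=> [2, 4, 6]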
I find myself writing static methods that pass data around, but it's not the same.
I also really miss pattern matching when I don’t have it. But more languages have been adding it.
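A minimal sketch of the kind of pattern matching being missed (fetch_user/1 is a hypothetical function returning a tagged tuple):

    case fetch_user(id) do
      {:ok, %{name: name}} -> "Hello, #{name}"
      {:error, :not_found} -> "No such user"
    end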
Is this a fair impression? Or is it possible to run Phoenix in production and gradually learn more about the BEAM, leveling up as you encounter new challenges?
One recent example is the built-in dashboard showing all the different data the VM and the framework provide: https://github.com/phoenixframework/phoenix_live_dashboard/ - at first it may be daunting, but providing a web-based experience to help people get familiar with the building blocks is hopefully a good first step. We also added tooltips along the way to provide information about new concepts.
The same applies to Elixir: learning all of functional programming, concurrent programming, and distributed programming would definitely be too much to do at once, so we do our best to present these ideas step by step.
For what it's worth, this is also true for Erlang. It has tools like Observer (which was used as inspiration for the dashboard) and a great deal of learning materials.
One last piece of advice: don't drink the Kool-Aid too much. For example, you will hear people talking about not using databases, about distributed cluster state, etc., and while all of this is feasible, resist the temptation to play with those things until later, unless your domain explicitly requires tackling those problems.
I hope this helps and have fun!
In my use case, most of my work is rather straightforward web apps. When using Rails I have occasionally run into issues where I had to tweak things or dive a bit deeper. With Phoenix these same projects would've been fine for a while longer.
Of course, when you do need to dive in, perhaps an advantage of Rails/Django/Laravel/etc. is that the ecosystem is larger, so your problem might be solved without having to really figure out more.
> not knowing BEAM can result in serious problems
No, it can't really. I'd even say that Phoenix largely uses the BEAM and its GenServers under the hood "so that you don't have to", while still letting you reap their massive benefits.
You'd be perfectly fine (and it's not uncommon) writing a production app without writing a single GenServer (kind of the cornerstone of the BEAM) yourself, and you'd still get a highly performant web app that can handle a ton of concurrent traffic out of the box.
Then you can take it from there, deepen your knowledge, and write your own GenServers and other BEAM goodies where and when you need them. Phoenix can guide you from the very basics, helping you write apps in a simple fashion, very much like in Rails, Laravel, or Django. It just has a larger range of possibilities on the upper end, letting you build massive high-traffic services if you need to.
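To give a flavor, a minimal sketch of the kind of GenServer you'd eventually write once you need shared state (Counter is a hypothetical module):

    defmodule Counter do
      use GenServer

      # client API
      def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
      def increment, do: GenServer.call(__MODULE__, :increment)

      # server callbacks: state lives inside the process, isolated from callers
      @impl true
      def init(initial), do: {:ok, initial}

      @impl true
      def handle_call(:increment, _from, count), do: {:reply, count + 1, count + 1}
    end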
Erlang has some bad rep for "weird syntax", but it's completely unfounded: the syntax is actually quite simple and clean.
Elixir opens doors mainly for Rubyists, but there are some other paths to discover BEAM. Check out:
- Luerl, Lua on Erlang: https://github.com/rvirding/luerl
- LFE, Lisp Flavoured Erlang: https://github.com/rvirding/lfe
Anything that isn't Algol-flavored and either procedural or class-based OOP tends to be seen as weird syntax, and much more so if it misses on both counts.
It's not so much a matter of simple and clean as of familiarity, in a world where almost everything people are exposed to first comes from a narrow design space.