You can also see what it looks like from the control room on Hamish Hamilton's YouTube channel, with the AD calling out shots and all: https://m.youtube.com/watch?v=gfjWjkTP4p8. (Hamish Hamilton has directed every Super Bowl halftime show since 2010.)
John DeMarsico directs the SNY broadcasts for the NY Mets and sometimes posts behind-the-scenes looks at how all the cameras come together into a production. I think they are pretty interesting to watch.
> Without any marketing, it earned a reputation among seasoned professionals and became a staple at the world’s top live events.
Sounds like the entertainment industry. Everyone truly knows everyone, especially when you're working on the same show with the same crew year after year.
I don't think it was meant to be taken literally (we didn't write the article). We'd actually love to do more marketing; we barely have time for it, though. We don't have a storefront website, just a basic site with outdated product info, but we dedicate all our efforts to the support section. We post on LinkedIn a couple of times a year to reassure everyone that we're still alive, but that's hardly a real marketing strategy. Currently our sales come from word of mouth and industry connections, not from marketing. Hopefully we'll find the time to step it up in the future!
Great to see Elixir gaining traction in mission-critical broadcast systems! I wonder, how much of Cyanview's reliability comes from Elixir specifically versus just a good implementation of MQTT? And are there any specific Elixir features that were essential and couldn't be replicated in other languages?
We use MQTT a lot, it is really a central piece of our architecture, but Elixir brings a lot of benefits regarding the handling of many processes which are often loosely coupled.
The BEAM and OTP offer a sane approach to concurrency, and Elixir is a nice language on top. Here are what I find to be the most important benefits:
- good process isolation: even the heap is per process. This allows us to have robust, mature code running alongside more experimental features without the fear of everything going down. And you still have easy communication between processes
- supervision trees allow easy process management. I also created a special supervisor with different restart strategies; the language allows this, and it then integrates like any other supervisor. With network connections being broken and later reconnected, the resilience of our system is tested regularly, like a physical chaos monkey
- immutability as implemented by the BEAM greatly simplifies writing concurrent code. Inside a process, you don't need to worry about data changing under you; no other process can change your state. So no more mutexes/critical sections (or very little need for them). You can still have deadlocks though, so it is not a silver bullet
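To make the supervision point concrete, here is a minimal sketch (module names are invented for illustration, not Cyanview's actual code) of a worker that a supervisor restarts automatically whenever it crashes, e.g. when a network connection process dies:

```elixir
# Hypothetical example: a connection worker under a one_for_one supervisor.
defmodule Camera.Connection do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts), do: {:ok, opts}
end

defmodule Camera.Supervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # :one_for_one restarts only the crashed child; process isolation
    # means the rest of the tree keeps running untouched.
    Supervisor.init([{Camera.Connection, []}], strategy: :one_for_one)
  end
end
```

If the connection process is killed, the supervisor transparently starts a fresh one, which is exactly the "chaos monkey" resilience described above.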
Hey it's nice to see a very successful business in Belgium in this space!
I work at a university where we build acquisition systems with exotic cameras and screens. Do you think we could meet some time to discuss possible (commercial and research) projects?
This is Elixir/Erlang/BEAM's core use case, the thing it was designed to do: coordinating and routing a large number of realtime feeds, with failover and fallbacks. The original use case was phone calls, but aside from the fact that these video streams are much, much larger per second, most of the principles carry over.
As much as I am a critic of the system, if this is your use case, this is out-of-the-box a very strong foundation for what you need to get done.
Yes, this was one of our initial considerations when we first started, and the telecom analogy of the original Erlang development application was one of the main reasons we took this approach. Now, we only "stream" metadata, control data, and status. Even though we manage video pipelines and color correctors, the video stream itself is always handled separately.
For anyone interested in the video stream itself, here's a summary. On-site, everything is still SDI (HD-SDI, 3G-SDI, or 12G-SDI), which is a serial stream ranging from 1.5Gbps (HD) to 12Gbps (UHD) over coax or fiber, with no delay. Wireless transmission is typically managed via COFDM with ultra-low latency H.264/H.265 encoders/decoders, achieving less than 20ms glass-to-glass latency and converting from/to SDI at both ends, making it seamless.
SMPTE 2110 is gaining traction as a new standard for transmitting SDI data over IP, uncompressed, with timing comparable to SDI, except that video and audio are transmitted as separate independent streams. To work with HD, you need at least 10G network ports, and for UHD, 25G is required. Currently, only a few companies can handle this using off-the-shelf IT servers.
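A quick back-of-the-envelope check on those port sizes (using rounded nominal SDI link rates, which are my own approximation):

```elixir
# Nominal SDI link rates in Gbps (rounded): HD-SDI ~1.5, 12G-SDI (UHD) ~12.
# An uncompressed HD stream fits a 10G port with headroom for the separate
# audio and ancillary streams; a UHD stream does not, hence 25G for UHD.
hd_gbps = 1.5
uhd_gbps = 12.0
port_10g = 10.0
port_25g = 25.0

true = hd_gbps < port_10g
true = uhd_gbps > port_10g and uhd_gbps < port_25g
```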
Anything streamed over the public internet is compressed below 10 Mbps and comes with multiple seconds of latency. Most cameras output SDI, though some now offer direct streaming. However, SDI is still widely used at the end of the chain for integration with video mixers, replay servers, and other production equipment.
Any language can, for any finite, computable task, as long as it has access to hardware that can perform the task in practical time, and assuming the language doesn't present compilation or memory issues that prevent it from taking advantage of said hardware in practical time for the task to be worth computing.
> and is there any specific Elixir features were essential that couldn't be replicated in other languages?
From the article:
> “Yes. We’ve seen what the Erlang VM can do, and it has been very well-suited to our needs. You don’t appreciate all the things Elixir offers out of the box until you have to try to implement them yourself.
I have used Elixir in critical financial applications, B2B growth-intelligence applications, fraud detection applications, scan-and-go shopping applications, and several others.
In every case, as the engineering team in this article demonstrates, the developer experience and end results have exceeded expectations. If you haven't used Elixir, you should give it a try.
Elixir and Erlang have always garnered a lot of respect and praise; I'm always curious why they're not more widely used. (I'm no exception: despite hearing great things for literal decades, I've never actually picked either up to try for a project.)
I've thought about this a lot, and I think that part of what hurts Erlang/Elixir adoption is the scale of the OTP. It brings a ton of fantastic tools, like supervision trees, process linking, ETS, application environments & config management, releases, and more. In some ways it's closer to adopting a new OS than a new programming language.
That's what I love about Elixir, but it means that selling it is more like convincing a developer who knows and uses CSV to switch to Postgres. There's a ton of advantages to storing data in a relational DB instead of flat files, but now you have to define a schema up front, deal with table and row locking, figure out that VACUUM thing, etc.
When you're just setting out to learn a new language, trying to understand a new OS on top hurts adoption.
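To give one concrete taste of that "OS-like" toolbox: ETS is an in-memory key-value store that ships with the runtime itself, no dependency and no external server (the table and key names here are made up):

```elixir
# Create an ETS table, write a tuple, and read it back.
# :set means unique keys; :public allows access from any process.
table = :ets.new(:settings, [:set, :public])
:ets.insert(table, {:theme, :dark})
[{:theme, :dark}] = :ets.lookup(table, :theme)
# Lookups of absent keys return an empty list rather than raising.
[] = :ets.lookup(table, :missing)
```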
I think most people tend to stick with what they learn first or hop to very similar languages. Schools generally taught Java and then more recently Python and JS, all of which are relatively similar.
Unless someone who knows those three languages is curious or encounters a particular problem that motivates them to explore, they're unlikely to pick up an immutable, functional language.
We use it in our robotics startup, and I wholeheartedly agree.
As an example, we just rolled out a feature in our cloud offering that allows a user to remotely call a robot to a specified waypoint inside a facility, and show real-time updates of the robot's position on its map of the world as it navigates there. We did this with just MQTT, LiveView, Phoenix PubSub, and a very small amount of JS for map controls. The cloud portion of this feature was built in roughly 2-3 weeks by one person (minus some pre-existing code for handling display of raw map PNGs from S3, existing MQTT ingress handling, etc.).
Of course you _can_ do things like this with other languages. However, the core language features are just so good that, for our use cases, it blows the other choices out of the water.
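The wiring for that kind of real-time fan-out is pleasantly small. As a dependency-free sketch of the same publish/subscribe pattern (Phoenix.PubSub provides a richer, distributed version; the topic and payload here are invented), Elixir's built-in Registry can dispatch messages to subscribers:

```elixir
# Start a registry that allows many subscribers per topic key.
{:ok, _} = Registry.start_link(keys: :duplicate, name: RobotPubSub)

# Subscribe the current process to a robot's position topic.
{:ok, _} = Registry.register(RobotPubSub, "robot:42/position", nil)

# Publish: send the update to every process registered on the topic.
Registry.dispatch(RobotPubSub, "robot:42/position", fn entries ->
  for {pid, _value} <- entries, do: send(pid, {:position, %{x: 1.0, y: 2.5}})
end)
```

A LiveView subscribed this way would receive the `{:position, ...}` message in `handle_info/2` and push the new coordinates to the browser.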
Would Gleam be practical for a similar application aside from the OTP/BEAM runtime? I am guessing you'd have to leverage Elixir libraries that are not present for Gleam yet, and you might have slower compile times due to static typing, but you'd catch runtime errors sooner. Would it be more of a debugging vs. fast dynamic iteration trade-off? I am looking to settle on either Gleam or Elixir. I liked Gleam's original ML syntax before, but I like static typing. Thoughts? I am replacing C with Zig, and I am brushing up on my assembly by adding ARM to my x64 skill set.
I don’t think there’s any evidence whatsoever that you would catch runtime bugs sooner with Gleam than with Elixir (or Erlang). Erlang’s record for reliability is stronger than many statically typed languages, including even Java.
There is a certain class of errors static types can prevent but there’s a much larger set of those it can’t. To make the case for a language like TS/Java/Swift/Golang or Gleam actually resulting in fewer runtime defects than Erlang or Elixir, I’d want to see some real world data.
It depends on what “sooner” means to you. Gleam catches more before the code runs; Elixir catches them when they happen but recovers gracefully. If you’re paranoid about bugs reaching users, I would think Gleam’s your pick, no? If you trust your tests and love dynamic freedom, Elixir should be fine. I don't have much experience with either language. I did more in Erlang 8 years ago, but not much. I am on the edge of choosing Gleam over Elixir. It's mainly subjective: I prefer the syntax in Gleam, although I liked the original ML-like syntax when it first came out.
> There is a certain class of errors static types can prevent but there’s a much larger set of those it can’t
Maybe you can go into this more, but I don't really understand what that means, what is this larger set of runtime errors that can't be prevented by static typing?
I use a bit of Elixir, and I'd say most of the errors I'm facing at runtime are things like "(FunctionClauseError) no function clause matching", which is not only avoidable in Gleam, but actually impossible to write without dipping into FFI.
I'm excited for more static typing to come into Elixir, as it stands I'm only really confident about my Elixir code when it has good test coverage, and even then I feel uneasy when refactoring. Still a fun language to use though.
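For readers unfamiliar with that error class, here is a toy example (module name invented) of how a missed function clause surfaces at runtime in Elixir, where a statically typed language like Gleam would reject the call at compile time:

```elixir
defmodule Shape do
  # Pattern-matched function heads: each clause handles one shape.
  def area({:circle, r}), do: 3.14159 * r * r
  def area({:rect, w, h}), do: w * h
  # A call with an unhandled shape, e.g. Shape.area({:triangle, 3, 4}),
  # raises "(FunctionClauseError) no function clause matching" at runtime.
end
```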
One criticism I have of Elixir is the lack of typing (they are working on it now, but I have yet to use it). So yes, I think Gleam would be nice. But when we started, it was not even at version 0.1 (and I had not heard of it).
I suppose we could have a mixed-language project with Erlang, Elixir, and Gleam. Not sure about the practicality of it, though.
Amazing work, and certainly for such a tentacled project good enough is good enough. I only brought up Gleam vs. Elixir because I am going to pick one to learn this year. I've played with LFE too, and as I wrote earlier, I played with Erlang for a bit.
Gleam has a subset of OTP functionality already [1]. It also compiles extremely quickly. I haven't made any huge projects yet, but I've used some fairly chunky libraries and everything still compiles super quickly.
It’s always surprised me how the world of digital video is a cousin of IT yet is impenetrable to people outside the video industry. How they refer to resolutions, colors, networking, storage is (almost deliberately?) different.
This gives an idea of the parameters we cover for the roughly 200 different models of broadcast camera we support so far. These only tweak the image quality, which is the job of the video engineer (vision engineer in the UK). We usually don't cover all the other functions a camera has, which are more intended for the camera operator. The difficulty is bringing some consistency to so many different cameras and protocols.
Do you "normalize" the parameters to some intermediate config so that everything behind that just needs to work with that uniform intermediate config? What about settings that are unique to a given device?
People who only ever work with 'consumer' video equipment need extra training and a back-to-basics set of reading material to understand things like the difference between 4:2:0 and 4:2:2 chroma subsampling, or why serious cinema cameras record video ungraded, or what the color grading process in a post-production workflow looks like (and the different aesthetic choices of grading that might be possible). That's before even getting into things like raw YUV/Y4M uncompressed video, very-high-bitrate barely-compressed video, or generating proxy footage to work with in an editor because the raw footage is too much of a firehose of data to handle even on a serious workstation.
I would say that unless you have a professional reason, there's very little benefit to the average end-user to do a deep dive into it. If your intention is to spend $7000 on a RED camera and then $13,000 on lenses, gimbal, cage, follow focus, matte box, memory cards etc to make a small and cost effective single camera production package, then by all means, dig into it.
30-odd years ago, part of my role was to colour balance cameras in a studio environment. We didn't need computers, but then again, at most there were only 5 cameras :)
I absolutely love reading about hard problems that are invisible to most people.
https://x.com/SNYtv/status/1832250958258036871
It's definitely a family of sorts.
All programming languages can do any task. It's about how easy they make that task for you.
For instance, Elixir supports compilation targeting GPUs through its Nx numerical-computing library (within exactly the same language, not a fork).
Most languages do not allow that (and for most it would be fairly hard to implement).
[1] https://github.com/gleam-lang/otp
https://pastebin.com/cgeG2r0k