I love these text-based languages for music composition. They're chipping away at a gap between composing music in real life and composing on a computer. In real life you can tell your bandmates to "just play a I V IV in C" and they get it. But we are still not quite at a place where we can tell a computer that exact phrase and get something useful. I love how close these text-based languages are getting, though!
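For what it's worth, expanding a phrase like "I V IV in C" into actual notes is mechanical enough that a few lines of code get you there — a toy Python sketch (note spelling simplified to sharps only), which is exactly the kind of thing these languages could wrap:

```python
# Toy expansion of "play a I V IV in C": map Roman numerals to diatonic triads.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets of the major scale
DEGREES = {"I": 0, "II": 1, "III": 2, "IV": 3, "V": 4, "VI": 5, "VII": 6}

def triad(key: str, numeral: str) -> list[str]:
    """Build the diatonic triad on the given scale degree of a major key."""
    root = NOTES.index(key)
    scale = [(root + step) % 12 for step in MAJOR_SCALE]
    d = DEGREES[numeral]
    return [NOTES[scale[(d + i) % 7]] for i in (0, 2, 4)]  # root, third, fifth

def progression(key: str, numerals: str) -> list[list[str]]:
    return [triad(key, n) for n in numerals.split()]
```

Here `progression("C", "I V IV")` gives `[['C', 'E', 'G'], ['G', 'B', 'D'], ['F', 'A', 'C']]` — C major, G major, F major.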
I've actually made my own musical language too - called miti [1], which is just one of many others, including textbeat [2], foxdot [3], sonic-pi [4], chuck [5], and melrose [6]. Each has its own goals and capabilities.
No list of text-based music systems is complete without the granddaddy of them all: [LilyPond](http://lilypond.org/)
Alda seems greatly inspired by the syntax of LilyPond, and in fact I think the musical rendering is done via LilyPond. I'm surprised the docs don't seem to mention this heritage at all.
Indeed. And if you use LilyPond, check out Frescobaldi -- it makes typesetting and editing scores much easier, with features like syntax-staff co-highlighting.
I struggle to get LilyPond running on macOS - is it easier on Linux? It would be great for us, but my mates all use Windows or macOS, so we resort to graphical notation programs, where version control and portability are hard.
My current favorite is OpusModus https://www.opusmodus.com. It is a commercial app, which will disappoint some, but is worth the price to me for the level of quality and capabilities. As an added bonus, it is written in Lisp, can be extended using Lisp, and uses an S-expression DSL for composition. The development environment has a REPL and a variety of interfaces for composing and exploring music.
> My current favorite is OpusModus https://www.opusmodus.com. It is a commercial app, which will disappoint some, but is worth the price to me for the level of quality and capabilities.
Personally I love seeing this kind of niche (I assume?) commercial software. It's great that people can make a living working on this kind of thing.
Fantastic work on miti! Looks like it could be a game changer for modular systems especially, but in general just having a single "brain" control all your synths in a way that's as readable as this would be great.
Out of interest, are you aware of any language, framework or environment that includes sound synthesis as well, essentially making it an all-in-one?
I do wonder if I'm asking too much, since SuperCollider already exists, but I'm not much of a fan of using it because of how verbose it is, and I'm a little bit upset it uses its own language (so I couldn't import more generic modules) - but it's worth asking anyway. My current plan is to use a non-livecoding module for an existing language, since the ability to change things on the fly isn't important to me.
I think SuperCollider is the best out there. Its syntax is weird, yeah, but you can use OSC to talk to anything else (this could easily be added to miti as well). Really, I don't think a good all-in-one exists; there are lots of things that do everything, but none do it all as well as something that just does one thing.
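OSC is also a simple enough wire format that you don't even need a library to speak it — a minimal sketch of the encoding (int32 arguments only; the `/note` address in the usage comment is a placeholder, not an actual SuperCollider command name):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: int) -> bytes:
    """Encode an OSC message with int32 arguments only (a deliberate subset)."""
    msg = osc_pad(address.encode())                       # padded address string
    msg += osc_pad(("," + "i" * len(args)).encode())      # type-tag string, e.g. ",ii"
    for a in args:
        msg += struct.pack(">i", a)                       # big-endian int32
    return msg

# Usage sketch: fire it at a UDP port some synth listens on, e.g.
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(osc_message("/note", 60, 100), ("127.0.0.1", 57110))
```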
> "just play a I V IV in C" and they get it. But we are still not quite at a place where we can tell a computer that exact phrase and get something useful.
We have been at this place for at least three decades. How come you've never heard of Band-in-a-Box? [0]
> I love how close these text-based languages are getting though!
Compared to what BIAB can do now, after three decades of development, all these text-based languages are sooooo amateurish and ridiculous. They are not even reinventing the wheel. I'd say they're reinventing the circle.
> In real-life you can tell your bandmates to "just play a I V IV in C" and they get it. But we are still not quite at a place where we can tell a computer that exact phrase and get something useful.
You can definitely do this exact thing in Band-in-a-Box - it's as simple as typing in the chord progression as a lead sheet.
What do you think of Hookpad? It's something along those lines too, right? It's not text-based, but it's easy to get a chord progression playing, which I really like.
Trackers are a very different beast; however, I've found them a good intermediary whilst you're looking for the live coding language that suits you.
I use Renoise (not free, but very cheap IMO and very powerful), and have a sample pack called AKWF (I think - something like that) which is just a load of single-cycle waveforms, and I use that to build rudimentary subtractive synths.
Sorry to be a downer here, but can someone explain to me what the appeal of these projects is? In the past year or so I've seen a number of text-based music notation projects hit the front page of Hacker News, and I don't really understand the purpose they are trying to achieve.
If I want to transport music notation between software programs, I have MIDI for that, which does a good job of capturing all elements of a performance (keys, timing, velocity, pedals, aftertouch, etc.). These types of notation formats almost always fail to achieve MIDI's precision across all the aspects of a performance.
If I want to compose music, I need a format that makes it easy to visualize and manipulate notes in the context of the other notes playing across all instruments. DAW piano rolls do a fantastic job of overlaying note information, and traditional combined scores do the same. Again, text notation usually falls flat here - for example, if you had a large ensemble and wanted to know which chord was being played on the 2nd beat of the 5th measure, how easily can you do that? How do you determine whether it's an open spacing or closed spacing? How do you determine the root without walking back through all the octave shifts?
I can explain why I use tooling like this. I use Lilypond though, not Alda.
For me, it is very much just a markup for music notation that can be tracked in version control. Just like one would write Markdown and then generate HTML pages, I write LilyPond to generate scores.
Example workflow: I write a song and use Ableton to capture it. I'm performing with real instruments and creating arrangements without writing anything down.
Now I want to perform it with other musicians. I have a few options.
I can notate what I created by hand. Or I can use a proprietary graphical notation tool; those are expensive and generally have sub-par UX.
Or I can use a text-based system like LilyPond and generate it. I can create reusable blocks and code snippets, look at a diff when I make changes, print the whole arrangement or just the parts for individual instruments, and adjust arrangements as easily as you adjust prose in a text editor.
And I don't have to think much about the presentation layer - mature tools in this category like LilyPond output great-looking scores.
Not the author, but a big fan of these kinds of projects, which fall into a much larger community known as "live coding" (though it also applies to other creative pursuits; music is probably the most common).
If the piano roll works for you, chances are these kinds of things won't. For me personally, the piano roll has never worked. Whilst I'm still trying to find my place within live coding, I've found trackers (especially Renoise) to be the best for me for the time being.
Where these projects really excel though is in algorithmic composition and generative music (modular synthesis is also quite well known for this), so rather than telling the computer exactly what to play and when, you define the rules and tweak them to change the results.
Of course they can be used for a more traditional style of music as well; it's simply another form of writing music for computers. Some people (most people) find the piano roll works for them, others prefer trackers and sequencers, others prefer code. It's purely preference at that point.
> Sorry to be a downer here, but can someone explain to me what the appeal of these projects is?
I think for some people, the appeal is immediately obvious, and for others it isn't. If the appeal isn't there for you, that's OK. It just means it's probably not the right tool for you.
One of the things I find most fascinating about music-making is how varied and personal the workflows are, and how deeply those workflows affect the resulting music.
You aren't being a "downer" by not liking the sound of a tuba. It's just not your jam. But it certainly is for others and that's OK too.
Many of us are very fast with text editors and very expressive with algorithms but struggle with DAW interfaces. Text is just flexible enough to work as a replacement and the already existing tools to manipulate and manage it make it very attractive. Many of the things you pointed out could be handled with various editor macros or possibly even regex. In the case of programming languages, the high level information could be directly encoded by the author of the piece and the low level information could be generated dynamically.
It's a very different perspective on computer UI design and one I personally prefer, although I get why it's not for everyone.
> If I want to transport music notation between software programs, I have MIDI for that, which does a good job of capturing all elements of a performance (keys, timing, velocity, pedals, aftertouch, etc.). These types of notation formats almost always fail to achieve MIDI's precision across all the aspects of a performance.
Well put. A lot of these feel like the wrong tool for the job, and I have a really hard time understanding why you'd want to track pitch data in a format like this when, as you say, DAWs exist and MIDI tracks can be moved around between them.
The only time these types of projects make more sense to me is when they present opportunities for discoverability or non-linear sequencing that can result in new or interesting ideas. For example, the Orca sequencer shown in this [demo video](https://www.youtube.com/watch?v=Pe8wE0sx31Q) is a tool folks use as a synth-agnostic sequencer that takes advantage of a non-linear format to create music you might not in a DAW/piano roll. Or at the very least, it makes you take a wholly different route to get there.
Projects like Alda, however, make less sense to me, since they are squarely pointed at linear-workflow music tracking - such a saturated space that it's hard to come in with a new format that doesn't just feel like a more painful way to do what a DAW already lets you do.
For me the (more hypothetical than real) appeal is that if I ever get around to playing with Markov chains and music, having an output target that is text-based (and well-documented) is a WHOLE lot easier than having an output that is one of "I programmatically drive a mouse to click around" or "an undocumented binary format".
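For concreteness, here's roughly what I mean — a first-order Markov chain whose output is just a line of note names you could adapt for any of these text-based targets (the transition table here is made up):

```python
import random

# Made-up transition table over a C-major-ish note set:
# each note maps to the notes allowed to follow it.
TRANSITIONS = {
    "c": ["d", "e", "g"],
    "d": ["c", "e"],
    "e": ["d", "f", "g"],
    "f": ["e", "g"],
    "g": ["c", "e", "a"],
    "a": ["g", "f"],
}

def markov_line(start: str, length: int, seed: int = 0) -> str:
    """Walk the chain and emit a space-separated line of note names."""
    rng = random.Random(seed)  # seeded, so runs are reproducible
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(TRANSITIONS[note])
        out.append(note)
    return " ".join(out)
```

The point being: a text-based language only has to accept a line like `markov_line("c", 8)` produces, and the generative side stays in whatever language you like.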
But, then, I also do close to 100% of my 3D-modelling in what looks like code.
For me, the question isn't "why do people like text-based description languages"; it's "why do people dislike them".
And I guess the answer is "it doesn't matter, we are all different, and at the end of the day, what works for me may not work for you, and vice versa". Heck, I even use both vi and emacs.
These projects appeal to those who think everything should look/work like simple code - basically hobby coding/tinkering with a bit of a musical excuse.
It's such a busy space, partly because these projects rarely bring anything new to the table. And when they do, you end up with something that takes a long time to learn and has almost as much complexity as a real instrument.
So it's rare for either approach to get traction without some kind of championing by academia or industry.
> If I want to transport music notation between software programs, I have MIDI for that, which does a good job of capturing all elements of a performance (keys, timing, velocity, pedals, aftertouch, etc.). These types of notation formats almost always fail to achieve MIDI's precision across all the aspects of a performance.
Music notation works at a higher level of abstraction than that, and is intended to be intelligible by humans first and foremost.
I misunderstood the point of this comment before, thinking you were arguing that MIDI fails because it is not, itself, a clear human-legible format like the text-notation languages being discussed.
I understand now that you likely meant that MIDI does not capture the score itself, so it does not work as a notation file format, whereas these text-notation languages do describe the score.
Alright, it was wrong of me to call MIDI a notation format, since it's not.
MIDI is a protocol meant to capture the performance of music, and much of that performance data isn't human-intelligible, so MIDI doesn't work as notation.
However, the case I was making is that MIDI can already capture all the data present in a score and transport it between applications. No, you would never read MIDI itself from a file and try to play it on a piano, but I also doubt you could easily read Alda straight from a text file without rendering it either.
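To be clear about how unreadable raw MIDI is: a note-on event is just three bytes — a status byte carrying the channel, then a key number and a velocity. A sketch:

```python
def note_on(channel: int, key: int, velocity: int) -> bytes:
    """Encode a MIDI note-on: status byte 0x90 | channel, then key and velocity.

    Channel is 0-15; key and velocity are 7-bit values (0-127).
    Middle C at a healthy volume on channel 1 is note_on(0, 60, 100).
    """
    assert 0 <= channel < 16 and 0 <= key < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, key, velocity])
```

Three opaque bytes per event is great for precision and portability, and useless as something a human sight-reads — which is the trade being discussed.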
"If I want to compose music, I need a format that makes it easy to visualize and manipulate notes in context of the other notes playing across all instruments."
Most composers compose using a piano. Orchestration is something that generally happens later, after the song has already been worked out on one instrument.
Lilypond is quite good for transcribing music. It's compact and quick to write, so you can get the job done quickly, and the layout engine is actually good out of the box.
However, I once tried a (small) composition project in Lilypond, and it really slowed to a crawl when I started to make major changes to what I'd already written. The ability to "refactor" is just so much easier in a visual editor like Musescore or Sibelius.
Perhaps for more experienced composers, with less of a need for complete redos, tools like these are better at "getting out of your way".
For me, I think this is one of the coolest projects I've seen in a long time, and definitely the first musical programming language I've ever seen. I was a musician in high school and haven't picked up an instrument for a long time, but I still remember how sheet music works. As a drummer, I was never much good at reading notation or playing other instruments, so this would be a really accessible way for me to write music and see what it sounds like, as well as debugging it the way I would with code.
What I think this is trying to do is put music composition within reach of people who are used to REPLs and programming languages in general. But I agree with you that it's questionable whether any of that makes sense, since we already have fairly sophisticated software for composing music using notations that are much easier to handle (traditional staff, piano, and guitar interfaces).
> If I want to compose music, I need a format that makes it easy to visualize and manipulate notes in context of the other notes playing across all instruments. DAW piano rolls do a fantastic job of overlaying note information, and traditional combined scores do the same too. Again, text notation usually falls flat here
Text notation is keyboard entry friendly, and version control friendly. Systems that support it will often render it to something that is more convenient for viewing, often as you edit or on save (e.g., there are VSCode extensions that render scores from LilyPond and ABCNotation; for LilyPond, there’s also support for navigating into the source from the rendered score.)
Dude, to me it sounds like you are missing the point of music, which is that there isn't a point. It is just about experimenting and doing whatever feels good to you, which can be quite different things for different people.
ABC is very well optimized for single voice traditional instrumental music. In that domain (and with a bit of experience at it), it's very easy to write and read ABC:
The header L: field defines the default length of a note. In the body of the tune, uppercase letters A-G are notes in the octave that starts with middle C, lowercase letters a-g are notes in the octave above that. A number following a note name is a multiplier for the note length. Bar lines are pipes.
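Those rules are regular enough to parse mechanically — a toy sketch covering only the subset just described (single notes and one-digit length multipliers; nothing else from the ABC standard):

```python
import re

def parse_abc_bar(body: str, default_len: float = 1 / 8) -> list[tuple[str, float]]:
    """Return (pitch, duration) pairs for a tiny single-voice ABC fragment.

    Uppercase A-G: the octave starting at middle C (octave 4); lowercase a-g:
    the octave above (5). A trailing digit multiplies the default note length
    (the L: field). Bar lines and anything else are simply skipped.
    """
    out = []
    for note, mult in re.findall(r"([A-Ga-g])(\d?)", body):
        octave = 5 if note.islower() else 4
        dur = default_len * (int(mult) if mult else 1)
        out.append((note.upper() + str(octave), dur))
    return out
```

So with the default `L:1/8`, the fragment `C2de` comes out as a quarter-note middle C followed by two eighth notes an octave apart from where you might guess by eye — which is exactly why the compact syntax takes some getting used to.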
The drawback to ABC is that as you wander away from that original use case, it gets harder and harder to use. Here's a snippet of an arrangement of Sleigh Ride I was working on a few years ago:
z +mf+ (+accent+e .^d/) z/ z|(+accent+e .d/) z/ z +p+ .^F/.=G/||[K:G bass octave=-1] +crescendo(+ .A .A .A .A|.A .A .A .A|+f+ +crescendo)+ +accent+ d zz +sfz+ G|
Studying it again, I can mostly figure out what it is trying to do -- z is a rest, things surrounded in +s are extra commands, so +p+ and +f+ are dynamics, +accent+ puts an accent over the next note, etc. It's still pretty easy to understand, but I'd have a hell of a time playing the music from that notation. And I coded up that ABC; if someone else had done it, there's a decent chance I'd have to stop and look up some of their notation in the ABC standard.
The main drawback of ABC notation is that it doesn't capture polyphonic music well, and by the looks of it, Alda's notation doesn't either. You can write polyphonic music in ABC, but it is clumsy. Another drawback is that the ABC language has warts and has accumulated some cruft - which is to be expected, since the notation was invented in the 1980s and has been incrementally improved since. The main advantage of ABC is that tens of thousands of songs have been notated in the format and that it is well supported by lots of programs.
I've played with ABC / ABC.js to render sheet music in the browser before, and I like it for transporting notation, but I don't think it fulfills the same use case that Alda is attempting to hit.
Without commenting on Alda specifically, people should understand that it's just one member of this list of highly overlapping (but also interestingly distinct) tools:
This looks useful and fun, but without programmatic ways of specifying notes (e.g. '$v1 = $v2 + 12') it's not much of a "programming language" and mostly just a "text-based notation system".
Looks like a previous version was mostly a Clojure DSL, but the latest major version no longer is. There are variables and other useful features we know from other programming languages that aren't mentioned on the landing page.
Of course there are also varying definitions of what a programming language is. For instance, I consider CSS to be a programming language, but I know many people disagree with that position (and that's okay). I personally don't think that a "programming language" must be a general-purpose, Turing-complete language. Alda seems to be a non-general-purpose, Turing-incomplete language. At this point, though, we're maybe getting into semantics a bit.
This is what I've wondered about -- with this and with the other few "music programming languages" on HN in the past year.
They almost always seem to be simply "music notation", not music programming. At the end of the day, it's not that different from sheet music, just written slightly differently.
If I wanted to actually program music based on this, could I? Could I write a loop that plays random notes on the pentatonic scale? Or that systematically plays through every scale, without writing out all the notes by hand?
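A loop like that is only a few lines in any general-purpose language, which is arguably the bar these tools should clear — e.g. the major pentatonic in every key, as MIDI note numbers (middle C = 60):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
PENTATONIC = [0, 2, 4, 7, 9]  # major pentatonic, in semitones from the root

def pentatonic_scale(root_midi: int) -> list[int]:
    """The five major-pentatonic pitches starting from a MIDI root note."""
    return [root_midi + step for step in PENTATONIC]

def all_pentatonic_scales(base: int = 60) -> dict[str, list[int]]:
    """Systematically enumerate one major pentatonic per chromatic root."""
    return {NOTE_NAMES[i]: pentatonic_scale(base + i) for i in range(12)}
```

From there, "play random notes on the pentatonic" is just `random.choice(pentatonic_scale(60))` in a loop feeding whatever playback backend you've got.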
It sounds like there's a layer built on top of this for writing Clojure that interacts with Alda, and you could definitely do that with it. There are also a bunch of other options: Orca (a visual sound programming language) and Sonic Pi would be where I would start, for two very different approaches to the concept.
I think something Alda looks like it lacks right now is working in terms of intervals rather than notes; transpositions and key changes will be very tedious without embedded knowledge of intervals.
My project is based around a Python library called Mingus, which I think gives you most of what you'd ever need to build a music programming project.
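Interval work of the sort mentioned above is mostly pitch-number arithmetic once you leave note names behind — a rough sketch in plain Python (illustrative helper names, not Mingus's actual API):

```python
def transpose(notes: list[int], semitones: int) -> list[int]:
    """Shift every MIDI pitch by a fixed interval (e.g. +7 = up a perfect fifth)."""
    return [n + semitones for n in notes]

def change_key(notes: list[int], old_root: int, new_root: int) -> list[int]:
    """A key change is just transposition by the difference between the roots."""
    return transpose(notes, new_root - old_root)
```

A notation language that only stores absolute note names has to rewrite every token to do this; one that knows about intervals gets it for free.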
See https://www.youtube.com/watch?v=2L7edwef-5k - Scheme seems like a much more powerful language for music composition, with the ability to deal with fractions and to make your own tuning system.
"LilyPond is a music engraving program" - great for writing sheet music. Alda seems to be a music composition tool; e.g. you can use the REPL to tweak your performance.
- [1] https://github.com/schollz/miti
- [2] https://github.com/flipcoder/textbeat
- [3] https://foxdot.org/
- [4] https://sonic-pi.net/
- [5] https://chuck.cs.princeton.edu/
- [6] https://github.com/emicklei/melrose
https://hwiegman.home.xs4all.nl/gw-man/PLAY.html
https://www.frescobaldi.org/
https://csound.com/
Not that there's anything wrong with that, but if $400 is the price of ending my suffering, I'll pay up in a heartbeat ;)
- [0] https://en.wikipedia.org/wiki/Band-in-a-Box
There's an entire generation of people who mainly compose through Ableton Live's piano roll, though.
> Most composers compose using a piano. Orchestration is something that generally happens later, after the song has already been worked out on one instrument.
Most of the time mountaineers use a mountaineering jacket to run errands in town when it's raining.
I'm not sure that measuring "most" has much explanatory power in either case.
https://youtube.com/watch?v=tCRPUv8V22o
Maybe /s, maybe not…
https://github.com/toplap/awesome-livecoding
"All things live coding : A curated list of live coding languages and tools"
Syntax change: https://github.com/alda-lang/alda/blob/master/doc/alda-2-mig...
Out of curiosity, on what basis do you consider CSS to be a programming language? And does that apply to versions prior to CSS3, too?
https://bspaans.github.io/python-mingus/
https://guix.gnu.org/en/blog/2020/music-production-on-guix-s...