Oh man, I get that as an author you have to choose a path to introduce the new learner to...but it bums me out to see that the material completely avoids tracking as one of the preeminent ways to make music on computers.
Instead it goes down the MIDI path, which of course is ultimately the dominant commercial technology today. But I've always thought that the complexity and expense of a good MIDI setup make it more of a prosumer-type thing.
Tracking gets you quick entry from chiptunes through extraordinarily expressive sampling to VSTs, and even into MIDI at the edges, and there are trackers for pretty much every kind of computer that can make music.
You can explore the main musical concepts presented here, from synthesis to digital audio, very cheaply (or for free) and easily.
Bonus, most classical tracker files are a kind of "open source" music in that you can see all the note data, the techniques the composers used, and have access to all of their instruments. You get to "see" both composition and performance details down to the note.
I really wish that the academic computer arts educators would catch on to these core pieces of the demoscene -- which by now has been recognized in six countries as UNESCO intangible cultural heritage for all of humanity -- pieces that were developed both to challenge and wow the audience and to make production possible for literally penniless children.
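For readers who have never opened a tracker, here is a rough sketch of the storage difference being described. This is my own simplified illustration, not any real file format: the sample names, byte strings, and effect codes are made up for the example.

```python
# A simplified sketch of the storage difference (not a real file format):
# a tracker module bundles the audio samples and its pattern rows reference them,
# while a MIDI file stores only events and leaves the timbre to the playback device.

# MIDI-style: a note-on event = (status byte, note number, velocity).
# 0x90 = note-on on channel 1; 48 = C3; what it *sounds* like is up to the synth.
midi_note_on = (0x90, 48, 100)

# Tracker-style: the module carries the raw sample data itself...
samples = {
    1: {"name": "acoustic_gtr_C2", "pcm": b"...8-bit PCM data...", "loop": None},
    2: {"name": "electric_gtr_C2", "pcm": b"...8-bit PCM data...", "loop": (100, 4100)},
}

# ...and each pattern row says: play *this* sample at *this* note, with an effect.
# One (note, sample number, effect command, effect parameter) tuple per channel per row.
pattern_row = [
    ("C-2", 1, "A", 0x08),   # channel 1: sample 1 at C-2, plus an effect command
    ("C-2", 2, "0", 0x00),   # channel 2: sample 2 at the same note
    (None,  0, "0", 0x00),   # channel 3: nothing this row
    (None,  0, "0", 0x00),   # channel 4: nothing this row
]
```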
I'm trying to understand this topic more, so I'm curious what you mean by the word tracking. I searched around, and wanted to check if my understanding is roughly correct:
Is tracking when the frequency data of the instrument, and the frequency and tempo changes in the music track, are stored? And does MIDI just say "guitar" or "piano" and leave it up to the software to decide what those instruments sound like? So tracking would always reproduce the same sound, while MIDI can vary, even if it's making the same tones?
For example, an acoustic guitar and an electric guitar might both produce a note at a particular base frequency (e.g. C2), but the overtones, and the amplitudes of those overtones, would be completely different, giving each instrument its particular sound. So does tracking record that overtone distribution for each instrument, to ensure an acoustic guitar sounds like the acoustic guitar the composer wanted?
This is the source that I was reading - https://scalibq.wordpress.com/2017/03/29/trackers-vs-midi/
If not, I'd really appreciate any other reading material to understand what you mean by tracking, thanks!
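On the overtone point: a tracker module ships the actual recorded samples, so the overtone content comes along with the recording rather than being stored as explicit "frequency data". The difference in timbre is easy to hear for yourself; below is a minimal sketch of my own (not from the linked article) that renders the same C2 fundamental with two different harmonic recipes and writes each to a WAV file.

```python
# Two tones share the same fundamental (C2 ~ 65.41 Hz) but get different
# harmonic amplitudes, which is what makes them sound like different instruments.
import math, wave, struct

SR = 44100          # sample rate in Hz
F0 = 65.41          # C2 fundamental
DUR = 2.0           # seconds

def render(harmonic_amps, path):
    """Sum sine harmonics of F0 weighted by harmonic_amps and write a mono 16-bit WAV."""
    n = int(SR * DUR)
    samples = []
    for i in range(n):
        t = i / SR
        s = sum(a * math.sin(2 * math.pi * F0 * (k + 1) * t)
                for k, a in enumerate(harmonic_amps))
        samples.append(s)
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(b"".join(
            struct.pack("<h", int(32767 * 0.8 * s / peak)) for s in samples))

# Same note, different overtone recipes -> clearly different timbres.
render([1.0, 0.5, 0.33, 0.25, 0.2], "brightish.wav")   # gently decaying harmonics
render([1.0, 0.0, 0.3, 0.0, 0.1],  "hollowish.wav")    # mostly odd harmonics
```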
She wrote a "treatise" on electronic music called An Individual Note of Music, Sound and Electronics. From the back cover:
"[...] a fascinating glimpse into the creative mind behind the Oramics machine. In this engaging account of the possibilities of electronic sound, Oram touches on acoustics, mathematics, cybernetics and esoteric thought, but always returns to the human, urging us to 'see whether we can break open watertight compartments and glance anew' at the world around us."
Birds of Parallax from 9:45 onwards is my favorite.
They had this on repeat in an electronic music history exhibition I attended in a London museum some ... counting... 12 years ago.
Just skimmed some chapters, but this looks like a great resource! If someone wants to dive more deeply into digital synthesis, I can recommend "The Theory and Technique of Electronic Music" by Miller Puckette (creator of Max and Pure Data): https://msp.ucsd.edu/techniques/latest/book.pdf. All examples are actually Pure Data patches that you can try out and experiment with.
The second edition is almost a complete rewrite - much more up to date, but loses some of the core nerdiness of the original.
[0] https://mitpress.mit.edu/9780262044912/the-computer-music-tu...
It's worth mentioning that "computer music" in the original sense was more about generative compositions and experiments with synthesis and DSP, all controlled and generated by hand-written software.
DAWs are much more emulations of a traditional recording studio that happen to run on a computer. So although a computer is involved, they're not "computer music" in the traditional sense.
The difference is that you can do far more with languages like SuperCollider, Max, PD, and Csound, especially when controlled with custom code.
But they're much harder to work with. Unlike DAWs and VSTs, they're not optimised for commercial production values. This makes them more experimental and more of a niche interest.
There isn't a lot of notable pure computer music around outside of academia. The biggest success was probably the THX Deep Note. BT made some albums with (mostly) Csound. Autechre used Max quite heavily. Holly Herndon is another name.
So commercially, DAWs are everywhere, but there's no huge commercial computer music fan scene in its own right.
https://mitpress.mit.edu/9780262044912/the-computer-music-tu...
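To make the "generative compositions ... controlled and generated by hand-written software" idea above a bit more concrete, here is a toy sketch of my own. It is not representative of SuperCollider, Max, or Csound; it just shows the general flavour: the program, not a performer, decides the notes, here via a constrained random walk over a pentatonic scale.

```python
# A toy generative composition: a constrained random walk over an A-minor
# pentatonic scale, emitted as (note, beats) pairs that a synth or sequencer
# could then render.
import random

SCALE = ["A3", "C4", "D4", "E4", "G4", "A4", "C5", "D5", "E5"]  # A minor pentatonic
DURATIONS = [0.5, 0.5, 1.0, 2.0]                                # in beats

def generate(bars=4, beats_per_bar=4, seed=None):
    rng = random.Random(seed)
    idx = len(SCALE) // 2            # start mid-scale
    remaining = bars * beats_per_bar
    phrase = []
    while remaining > 0:
        # step up or down the scale, biased toward small moves
        idx = max(0, min(len(SCALE) - 1, idx + rng.choice([-2, -1, -1, 1, 1, 2])))
        dur = min(rng.choice(DURATIONS), remaining)
        phrase.append((SCALE[idx], dur))
        remaining -= dur
    return phrase

if __name__ == "__main__":
    for note, beats in generate(seed=42):
        print(f"{note:>3}  {beats} beat(s)")
```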
I suggest having some kind of sequencer and synthesizers (one subtractive, one FM) available to play with while reading. Free VSTs in the free Reaper DAW are a fine starting point.
Reaper can be downloaded without paying anything for it. But in a wink-wink strategy, continued use of it after a certain period of time is supposed to be accompanied by paying for a license. This is not enforced. You can call this free if you wish, but it's all a bit wobbly.
Also, if you want to play with synthesis, then VCV Rack, which is truly free (but also comes in a paid version with a few more features), is likely the right place to start, or its even freer cousin/fork Cardinal (which can even be run in your browser).
https://cardinal.kx.studio/
https://vcvrack.com/
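As an aside on the "one subtractive, one FM" suggestion above: if you want to see what FM does before opening a plugin, here is a small sketch of my own (plain phase modulation, the DX7-style flavour of "FM", not any particular VST): a carrier sine whose phase is modulated by another sine, with the modulation index decaying over the note so the brightness changes over time.

```python
# Minimal FM (phase-modulation) note rendered to a mono 16-bit WAV.
import math, wave, struct

SR = 44100
DUR = 3.0
CARRIER = 220.0          # carrier frequency in Hz
RATIO = 2.0              # modulator/carrier frequency ratio

n = int(SR * DUR)
samples = []
for i in range(n):
    t = i / SR
    index = 5.0 * (1.0 - t / DUR)                      # modulation index decays over the note
    mod = math.sin(2 * math.pi * CARRIER * RATIO * t)  # modulator
    samples.append(math.sin(2 * math.pi * CARRIER * t + index * mod))

with wave.open("fm_note.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(b"".join(struct.pack("<h", int(32767 * 0.8 * s)) for s in samples))
```

Subtractive synthesis goes the other way: start from a harmonically rich waveform (saw or square) and filter harmonics away, which is why the two are often taught side by side.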
"Computer Music" is a very broad term (no surprises here) so, like many here, I can point out topics that are not covered. In particular, computer music (aka algorithmic) composition [1], or very recent AI techniques like the Google seq2seq example at [2], or the (unpublished, but probably a form of generative adversarial networks) techniques used by SunoAI and Udio.
"Computer Music" is also a fairly conventional academic musical genre exploring elements of electro-acoustic, acousmatic, musique concrete, synthesis, algorithmic and serial composition techniques.
Indeed, or a PDF. When one studies these topics it is much easier to do so via PDFs: you bookmark the page and continue tomorrow. If it's HTML you need to bookmark too, but this is a hassle, since bookmarking creates an entry in your browser bookmarks (that you need to clean up later), and if the HTML page is too long and has no anchors, good luck remembering what part you read last (not to mention that a PDF can be read offline and archived easily). Also, knowing how many pages there are to read and how many you have read so far is very helpful; in contrast, reading a website is rather tiring since you don't know how far along you are or how much is left.
- Bytebeat: https://dollchan.net/bytebeat/ (https://greggman.com/downloads/examples/html5bytebeat/html5b... !Warning: loud!) - see the sketch after this list
- Cardinal: https://cardinal.kx.studio/live
- Glicol: https://glicol.org
- Kabelsalat: https://kabel.salat.dev
- NoiseCraft: https://noisecraft.app
- Strudel: https://strudel.cc (https://github.com/terryds/awesome-strudel)
- Tidal Cycles: https://tidalcycles.org
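Since bytebeat is the most code-like entry in that list, here is a tiny sketch of the idea. The formula and file handling are my own generic example, not taken from the linked sites: audio is generated sample by sample from a short integer expression over the sample index t.

```python
# Render a few seconds of bytebeat to a raw unsigned 8-bit, 8 kHz mono file.
import struct

SAMPLE_RATE = 8000           # bytebeat is traditionally 8 kHz, 8-bit, mono
SECONDS = 10

with open("bytebeat.raw", "wb") as f:
    for t in range(SAMPLE_RATE * SECONDS):
        value = (t * (42 & (t >> 10))) & 0xFF   # keep the low 8 bits as the sample
        f.write(struct.pack("B", value))

# Play the raw file with e.g. `aplay -f U8 -r 8000 bytebeat.raw` on Linux,
# or import it as raw 8-bit unsigned mono data in an audio editor.
```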
You're really going to dump a total newbie into simulated rack synths, computer music languages, and whatnot? In order to "save time" over learning a DAW?
I'm sympathetic to some of what you're plugging. Really. I love VCVRack. But have mercy!
If you want to make "normal" electronic music (and never tried before), use GarageBand on an iPad. It's easier to learn than Ableton et al. because GarageBand has reasonable settings built in. I.e. it will make sounds right away, without endless screwing around. (You might even try GarageBand on a phone, if the screen is large enough.)
If you want to make "experimental" music then ... you'll have to experiment. Most of the recommendations in these comments are aimed at experimental music.
Most things labeled "computer music" belong to a very specific retro experimental music aesthetic, literally dating back to the era when you could barely make music on a computer at all. Much of this music was heavily influenced by academic workers. That may be exactly what you're looking for! On the other hand if you're not quite sure what I'm talking about, then be aware that "computer music" is not the only, or even the sanest, way to make music on your computer.
It's like asking whether you can do serious photography without Photoshop/Lightroom or create games without Unreal/Unity. The answer is you can, but do you really want to? Your most important goal is to use a tool to get the job done. The tool is a method to get there, not something you want to fight with.
It's more like when kids start taking music lessons. Most parents aren't going to spend more than $100-200 on an instrument, in case the kid decides they want to quit. But the entrypoint for virtually any instrument that you could call "playable" is usually north of $500 (which also competes against a massive supply of used instruments from people that spent $500+ and then quit).
There's nothing wrong with playing around with Reaper, Garageband, BandLab, or any of the more entry level "instruments" in this analogy. Preferable even, if you don't want to blow hundreds of bucks on a program.
I have been seeing a few DJs livestream while composing with Strudel. It's a live-coding approach based on a web REPL. I don't think it necessarily scales to professional use, but it's a reasonable intro to the core concepts.
I've gone through the tutorial and it was honestly the most fun I've had on the web in a while.
https://strudel.cc/workshop/getting-started/
Learning the app is not the difficult part. It is honing your style within the toolset you're comfortable with. Every DAW has its pain points and learning curve. Spend a few hours a week with each and see which one works for you, is my advice. Same as any other tool, you can't create effectively until you've become comfortable with it.
There are literally hundreds or even thousands of ways: physical instruments such as sequencers/samplers, other DAWs. It's not about learning a commercial app, it's about understanding the principles of music production, irrespective of your platform. Just pick one and go: your ears won't know any difference.
Honorable mention: FruityLoops. I remember it from high school, 2006; we had a hand-me-down 486 with maybe 32 MB of RAM? The boys made some great loops, I brought a guitar, we ran a freakin' live hip hop show, standing ovations, FL delivered.
If you bought FL back then, you should still have a license for the latest FL Studio! They offer lifetime updates, which is a pretty good offer if you like the software. (I use Bitwig which doesn't, but I find it worth the tradeoff.)
Learning a bit of Ableton is the least hard part of making compelling electronic music. Bitwig is fine as well. There is such a deluge of people eager to teach you via YouTube or Udemy etc.
You'll have to spend time learning whatever tool you are going to employ. If commercial is the issue... have a look at SuperCollider. It has a learning curve, new programming language and all that. But the flexibility and actual software architecture is pretty unmatched in its own niche IMO.
Ableton Live is very intuitive and there is a lite version that is bundled with some interfaces (https://www.ableton.com/en/products/live-lite/features/?pk_v...). It has been years so I don't remember which interface / version I started with but I quickly fell in love and upgraded to the full version. The time I have spent learning it has been fun and worthwhile, so maybe give it a try.
Try LMMS, Pure Data, VCV Rack, or SunVox - all powerful free/open-source alternatives that can produce professional-quality electronic music without the Ableton learning curve or cost.
As opposed to what? Spending time learning any of the alternative tools out there? Everything you do is going to have a learning curve, so you might as well start learning the tool that does what you want.
If you don't want to use a computer, you could write and perform exclusively using hardware. Like a modular synthesizer, or a standalone synth, or an Elektron box (Digitakt, Digitone, etc).
But it won't save you time.
Or money.
Sure, but what will work for you will depend on what you consider "compelling electronic music"; it is a big and diverse field, and different corners of it have different tools that suit them. Without having some idea about your interests and direction in electronic music, you will just get a massive list of random applications which may or may not work for your goals.
Pure Data or SuperCollider - although I would honestly recommend Max/MSP over either (but it is commercial). Ableton is great, and most DAWs in general are useful and quite similar, so the skills are transferable, but they do lend themselves to specific orthodox kinds of composition - dance music and sound collage, basically.