The thing that people complain about most when it comes to using DAWs on Linux is that 90% of the plugins in the world are not available (at least, not without some relatively technical magic such as yabridge).
It is therefore quite curious to see people get all excited about a DAW on another "platform" where at least 90% of the plugins in the world are not available, and in all likelihood are even less likely to ever become available than they are on Linux.
There's certainly a role for a tool like this in education and for people who haven't yet realized that they really need Pigments or FabFilter for their project. And yes, people do exaggerate the extent to which a specific plugin is needed. Nevertheless, the inability to run essentially any of the existing 3rd-party plugins would, were this a native DAW, be viewed as completely crippling.
The WebAudio Modules "standard" offers some hope here, and I suspect that within 2-5 years plugin toolkits like JUCE will let you build not just "Windows/VST3", "Linux/LV2", or "macOS/AudioUnit" formats, but also "wasm/webaudiomodule" (or something like it). However, given how easy the various Linux options already are with JUCE, and how few plugin developers choose to use them, I have to wonder whether even the massively larger size of a "browser platform market" would be enough to get them to add another platform.
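Whatever form a "wasm/webaudiomodule" export ends up taking, it ultimately has to supply the per-block callback that the browser's AudioWorklet runs on each 128-frame render quantum. A minimal sketch of such a DSP kernel, written here as a plain function (names hypothetical) so the math stands on its own rather than inside a real AudioWorkletProcessor:

```javascript
// Hypothetical per-block DSP kernel of the kind a "wasm/webaudiomodule"
// build target would have to emit. In a browser this loop would live
// inside AudioWorkletProcessor.process(); here it is a plain function.
function processBlock(input, gainDb) {
  const gain = Math.pow(10, gainDb / 20); // dB -> linear amplitude
  const output = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] * gain;
  }
  return output;
}

// 128 frames is the fixed render quantum size in AudioWorklet.
const block = new Float32Array(128).fill(0.5);
const out = processBlock(block, -6); // attenuate by 6 dB
```

The real WAM API adds parameter handling and a GUI contract on top, but the hot path is exactly this kind of tight per-block loop, which is why wasm is a plausible compile target for it.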
I was a longtime JUCE user and won't hold my breath for them to support the web. They skate strictly to where the puck was two years ago, not where it's going. I also wouldn't call their Linux support "easy" - it's not surprising to me that very few JUCE developers even consider using Linux in CI, let alone as a supported target.
That said, I think there's something interesting about building out an audio platform with "no VSTs" as a constraint. About 6 years ago, when I saw Bandlab at NAMM, I was convinced that the web was a dead end for even middling-complexity audio projects, and I was very wrong. It seems the value of a DAW that you can fire up in a browser, with instant access to all your projects and easy sharing with friends, outweighs the downsides of having no plugins and crashing after hitting browser tab memory limits. And looking down the road, it frees you from the serious problems with native plugins and current plugin APIs.
I think you make an interesting point about the implications of the no-VST constraint. In the earlier days of Reason (before they had VST support or even Rack Extensions) it was great because I could work on a song on any system that had just Reason installed, and anyone who also had the current version could open it as is. No installing plugins and no plugin compatibility issues between users. Away from the studio for a weekend? Just install on a laptop and use the dongle, no problem!
Creatively it was very freeing. Naturally, plugin envy eventually crept in and I was glad when they did add VST support, but I miss the ease of use and portability. And you got to know the stock effects inside and out which offered some streamlining in workflow.
I recently used Bandlab when I was at my girlfriend's and the only thing I had with me was my company's laptop, which I don't install any audio software on.
I wrote and recorded a little song and published it within three hours or so, just as an experiment.
I had to reload the UI a few times after moving too quickly and because of a janky internet connection, but other than that I thought it was a well designed tool. I think it's liberating to be shielded from the many choices you make when working in a "real" DAW, and when I don't have REAPER or Studio One around, I'll happily work with a tool like this to simply stay in the habit of producing music when on the road.
Drivers and control panels for audio interfaces are also an issue on platforms other than macOS and Windows. I don't see hardware vendors changing that any time soon.
Don't control panels and surfaces send MIDI messages that could be processed in some standard way? Or do they (predominantly) run proprietary protocols over raw USB data pipes?
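For control surfaces, the answer is often yes: a knob or fader typically arrives as a three-byte MIDI Control Change message that any host can decode in a standard way, while interface control panels (routing, clocking, mixer settings) more often use proprietary protocols, as the question suspects. A sketch of decoding the short CC form (function name hypothetical):

```javascript
// Decode a 3-byte MIDI Control Change message: [status, controller, value].
// Status bytes 0xB0-0xBF are CC on channels 1-16; data bytes are 0-127.
function decodeControlChange(bytes) {
  const [status, controller, value] = bytes;
  if ((status & 0xf0) !== 0xb0) return null; // not a Control Change
  return {
    channel: (status & 0x0f) + 1, // 1-based channel number
    controller,                   // e.g. 7 = channel volume
    value,                        // 0..127
  };
}

// A fader move: CC #7 (volume) on channel 1, value 100.
const msg = decodeControlChange([0xb0, 7, 100]);
```

Anything beyond generic knobs and faders (motorized fader feedback, scribble strips, DSP mixer state) tends to be vendor-specific SysEx or raw USB, which is exactly the part browsers can't standardize.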
Yes. Reaper does too. What makes plugins incompatible on Linux is simply that they aren't compiled for Linux. You can run some Windows plugins through WINE via tools like yabridge, but it's more bug-prone than natively compiled Linux LV2 or VST builds.
This is deeply impressive, and of course the first thing I'm doing is sharing it with some mates. The technical completeness of this, the fact that you can do this sort of thing in the browser at all - that to me is mindblowing (I'm 52 and remember when marquee tags were a bit of a stretch...).
But... like other commenters - there it stops, and I'm just not quite sure why.
The audience is probably me. I'm an avid Ableton user - I pay a bloody fortune for it, I upgrade it every year, I am happy to support their development because it's an insanely - insanely - good piece of software that does everything I need it to do. I'm also now completely embedded in the clip view, so going back to a linear view just isn't a possibility for me.
More to the point though - this clearly isn't aimed at people who know nothing about what they're doing. It's very non-amateur and clearly very, very powerful. But at the same time it isn't aimed at me, either - as someone who does know what they're doing, I'm thinking "um, VSTs?" or "clip view?" or "live performance / latency issues?" or whatever.
So... who is the audience? Maybe there is a middle ground of people who don't have the means to fork out for a good desktop DAW. Maybe teenagers who are wanting to learn the principles without the spend. Maybe because it'd be very cool for collaborating? I just don't know.
Nonetheless, it's an insane demonstration of what can be done in a browser these days and for that I massively doff my cap - amazing work!
Can't agree more. It might be the craziest thing I have ever seen done in the browser.
It actually demotivates me to work on music and motivates me to work on some web app ideas.
I am not sure who the audience is though either. Reaper works wonderfully on Linux.
The issue is that any DAW, or really any musical instrument, is a massive investment in time to get good at. Money isn't really the bottleneck. I can easily get a reasonably priced flute on eBay. The reason I don't play the flute is the amount of time involved in learning to play it.
My question is: why not just have a desktop DAW implement RPC on its daemon to work collaboratively natively, instead of being sandboxed in a browser? Or does this already exist?
I would be down for a browser Ableton suite that had all the stock devices and didn't have VST support. I think over time people learn that you can do 90% of what you need with just stock devices (although Max for Live support would be amazing).
There is no such thing as a DAW inside a browser. A DAW is mostly about the lowest possible latency, unless it's used for the sole purpose of sound creation (synthesis/sampling), which would allow it to bear the name Digital Audio Rework Station (DARwS). In all other cases, the lowest latency, ASIO drivers, etc. are a must-have.
A lot of people who have apparently never set foot in a recording studio are replying to you. The pre-plugin era was exactly about this: drive the DAC and manage writes to the disk without introducing (much) latency so tracking can get done. Perhaps no one who uses this will intend to set up more than one microphone.
There's a huge divide between people who might play with this at home as a toy and those who would be able to work with professional musicians with it.
The latter group will have some very strict requirements around performance, latency, workflow, and reliability.
Minimal latency is only really needed for live performance and monitoring, though these do tend to be crucial demands in most cases. A major problem for browsers is also poor support for multichannel devices.
They do plan to have a "native wrapper like Tauri" in the future. I've played around with node-web-audio-api (https://github.com/ircam-ismm/node-web-audio-api) for low-latency multichannel output in Electron, but it wasn't a great success, mostly because Rust audio backends (and almost all audio backends in general) aren't very good at that kind of usage.
If you're just picking up samples from Splice or whatever and arranging them, sure, latency means nothing, but it becomes pretty crucial when you're recording an instrument.
No, low latency is not needed for non-realtime things. Even then, you are looking at latency compensation: just add one VST plugin to an audio track and you will instantly add latency.
The crux is you want everything to play in sync when doing recording and overdubbing, e.g. "hit record and what I hear live is in sync with what I have recorded already". Almost all DAWs solve this by just starting things a bit later (latency compensation). Some audio cards solve this by allowing direct hardware monitoring. But even then you will have some samples of latency.
Most plug-ins don't add latency, though?
Most audio interfaces support direct monitor, so there is essentially zero latency between your source and the monitor when recording tracks. DAWs then allow you to set the latency as a "take nudge" where the take is immediately nudged by the known latency amount, so that when you play back your tracks they are in sync.
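The "take nudge" described above is simple buffer arithmetic: the interface reports its round-trip latency, and the DAW shifts the freshly recorded take earlier by that many samples before laying it against the existing tracks. A minimal sketch (function name hypothetical):

```javascript
// Latency compensation ("take nudge"): the recorded buffer lags the
// project timeline by the known round-trip latency, so drop that many
// leading frames to realign it with what was playing back.
function nudgeTake(recorded, roundTripFrames) {
  return recorded.slice(roundTripFrames);
}

// e.g. an interface reporting 256 frames of round-trip latency;
// sample values here just encode each frame's original index.
const recorded = Float32Array.from({ length: 1024 }, (_, i) => i);
const aligned = nudgeTake(recorded, 256);
```

The hard part in practice is not the slice but knowing `roundTripFrames` accurately, which is exactly what browsers don't report reliably and dedicated drivers do.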
Lastly, some fancy interfaces have built-in DSP, so you can load your effects right in the interface for when you want effects in your recording monitor feed...
So I don't think latency is that critical anymore; with a decent interface it's mostly sorted out.
Second this. You can even get down to exactly 0 ms of latency if you use an analog mixer with multiple simultaneous output routes in front of your digital interface: one output route for live monitoring, the other for live recording. The DAW just needs to latency-compensate the "record head", which almost every professional DAW does, of course. _Every_ studio works like that. OK, a lot of studios have digital mixers nowadays, which add maybe 2-4 ms of latency, but that's still OK.
There are three places where latency accumulates: the input from the instrument to the computer, the processing of filters, and the output to the monitor (headphones/speakers) or recording. Sometimes you can get away with 2 or 3 ms of latency, but anything over 5 ms is super frustrating. Remember, you're fighting the latency between plucking a string or hitting a key and the computer acknowledging the data and sending it back out to the monitor you're using as your guide. Best case, you go and "massage" the new track to line up with the existing tracks; worst case, it sounds like an out-of-sync high school marching band.
EDIT: The concerns here are primarily with input latency. Between plucking a string and hearing it in your monitor, the signal has to go through your input hardware, the USB interface, the OS, the browser (which has no explicit low-latency capabilities), and JS. On Windows, pro audio software uses ASIO, a low-level driver for reading audio data from devices, about as close as you get to reading the ADCs yourself. Without a low-latency driver working with the OS, there's so much latency overhead that it's audible.
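The budget above is easy to put numbers on: each buffer of N frames at sample rate f costs N/f seconds, and input and output each pay at least one buffer, with converter, driver, and scheduling overhead on top. A quick sketch of the minimum round trip (overhead ignored):

```javascript
// Latency of one audio buffer, in milliseconds.
function bufferLatencyMs(frames, sampleRate) {
  return (frames / sampleRate) * 1000;
}

// Minimum round trip = one input buffer + one output buffer.
// Real chains add converter, driver and scheduling overhead on top.
function roundTripMs(frames, sampleRate) {
  return 2 * bufferLatencyMs(frames, sampleRate);
}

const tight = roundTripMs(128, 48000);  // ~5.3 ms: already at the edge of comfort
const loose = roundTripMs(1024, 48000); // ~42.7 ms: unusable for tracking
```

This is why driver-level buffer sizes matter so much: a typical browser-sized buffer blows the ~5 ms comfort threshold by nearly an order of magnitude before any plugin processing happens.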
Because when you want to record your instrument along with whatever else is in your project, timing is critical and everything needs to line up.
You cannot perform to audio that you are hearing with any delay, especially if the live audio monitoring is also routed through software.
Past a certain point, latency introduces delays that badly affect how you perform. In some circumstances it makes performance actually impossible.
There are ways around this: if the software knows exactly what the input and output latency is, then playback and recording can be compensated. For live monitoring, though, you really need that done in the audio hardware itself, in hard real time.
Music is very latency sensitive. If you are recording any source you generally want to have overall latency < 5ms. Input and monitoring latency is usually either handled by using fancy DSP systems or a "hack" where input audio bypasses any internal processing and gets routed directly back for monitoring.
I think the other people got a little too technical.
The reasons are things like, if you want to play in time with a previously recorded track, or if you are using digital effects and need to be able to hear their effect on your instrument as you play it.
Both of these are less of an issue today than they were 15 years ago, when a USB 2.0 audio interface added significant delay to the audio and made it harder to get what you wanted out of the system. Otherwise, latency is not really a problem.
I think it's time for a truly open Digital Audio Workstation: one that's actually usable, well designed, and functional, free from logins, cloud dependencies, and mandatory subscriptions. Something you can simply download, or access from any platform that supports a browser.
That's why I really like the idea of building a DAW in the browser: it has huge potential for all kinds of users, especially in education, whether for kids, older people, or just anyone who wants to make music on the go, no matter what device they're using.
I see a lot of promise in this project and fully support André, who has already contributed to developing great audio tools.
This is impressive, but why would I use this over other DAWs? And why name it openDAW if it is not open source?
edit: I like the idea of the "Discoverable Toys" and can see how this could develop into something new. But why not just concentrate on that and bring it to other DAWs in the form of a plugin, instead of writing a whole new DAW in the browser?
Good question. I read the about page[0], which says:
> Will the DAW be open source from the start or only become open source later?
> To make the most of being open-source, we believe that there should be an appropriate infrastructure for documentation and quality assessment of code contributions. Our current focus is to lay the foundation for an MVP and release a public standalone version 1 by the end of the year.
So it seems that they intend to open source it later. Still a bit of a strange move, but fair enough, I guess.

[0] https://opendaw.org/
> instead of writing a whole new DAW in the browser?
Ease of use. Log in without installing anything, work on a project on the tablet on the train, later continue the project without syncing it somehow, all while collaborating with a friend.
I would welcome a new open source DAW. Ardour is the go-to option now, and is very capable, but it's starting to show its age and the UI is quite clunky (or an acquired taste at least).
Also the open source audio editor situation is quite dismal. Audacity is really the only game in town. It's showing its age too and its trajectory doesn't look great. A more editing-focused DAW, which OpenDAW seems to be, would be very welcome.
And I've been trying to use it for 15 years, and each time I think "ok, next PC upgrade, this is gonna be sick!" ...and it's still molasses. Not even criticizing it. I've been pumped about it from the start, but it was asking a lot of the web as a platform 15 years ago.
This is featherweight in comparison, and a lot closer to a traditional DAW than audiotool's skeuomorphic virtual analogue approach.
It is an audio/video/MIDI plugin standard for the web, and it is rather mature.
During covid I worked on a collaborative browser-based DAW, https://sequencer.party. I definitely bit off more than I could chew, but you can wire up plugin chains at least.
I would strongly suggest you consider adding webaudiomodule support and instantly get ~50 plugins supported in the DAW. I also packaged up a bunch of them ready for consumption here: https://github.com/boourns/wam-community