On back navigation, that's a really bad bug! I'll repro it and fix. Thanks for finding that!
There are a few small areas where the library hiccups, like with bigger chords. I've been able to work around some of those by reframing the chord as an inversion of N relative to the root of whatever scale I'm in. I haven't bothered to debug why it does that, but my little workaround has been sufficient.
During creation of a new track:
- Why do I need a description?
- Why do I need a name, actually?
- Why is there only Latin and Empty available?
- Edit: Just realised that the Latin track is hardcoded?
During editing:
- I was not sure whether / how my actions affect the tracks / track items. (One remedy could be showing the actual waveform of the created sounds in the editor instead of a placeholder waveform.)
- Creating a new instrument — I found the "Instrument Sound Pack" dropdown menu only by accident after some clicking. It would be great to see what type of instrument I'm dealing with without having to click on the instrument itself. (Maybe map it directly to the displayed name? I'd rather have Acoustic Bass 1 to 4 instead of a bunch of "Unnamed Instrum...")
- Some actions have no effect until you restart the playback (e.g. changing speed of a track item).
- Some actions stop the playback (e.g. changing the instrument type).
This is really good feedback, I'll try to address some of these tonight.
I've added name and description because I made a feature where you could create an account to save your work. During development, I got tired of constantly recreating test scenarios, so I integrated AWS Cognito and started saving compositions.
Latin is hardcoded. To be honest, I wasn't really sure what to do there. The Latin template really just bootstraps the UI to save some time and act as a demo. My plan is to make 3-5 premade compositions for each major genre, and have the create flow let you pick between them.
On making changes visible in the UI: this is something I'm still really struggling to find the right balance for. I'll prioritize it higher based on what you've said!
The idea is that you have a "framework" that you can change for what you want the music generator to produce. You can download the MIDI files it produces as well.
When the pandemic started, I got really serious about learning guitar. I started guitar lessons and eventually hit a wall trying to grok music theory. I was involuntarily given the opportunity to work on a side project (thanks Google), so I picked something that I felt would keep me interested for a while. That kicked off this: https://app.bars.ai
It's very, very far from being done, let alone useful. It originally started with little scripts I had been writing, and it slowly evolved into a website, fancier CSS, APIs, etc. I've been using it as my little test bed for experimenting with new stuff.
Text to audio is too limiting. I’d rather input a melody or a drum beat and have the AI compose around it.
Their paper says that they trained it on the Lakh MIDI dataset, and they have a section on potential copyright issues as a result.
Assuming you don't care about the legal issues, theoretically you could do: raw signal -> something like Spotify Basic Pitch (outputs MIDI) -> Anticipatory (outputs composition) -> Logic Pro/Ableton/etc + Native Instruments plugin suite for a full song.
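To make the first hop of that pipeline concrete, here's a minimal sketch of how you'd drive the Basic Pitch CLI (installed via `pip install basic-pitch`) from Python. The file names and output directory are made up for illustration; the CLI's documented usage is `basic-pitch <output-dir> <audio-file>`, and it writes the transcribed MIDI into the output directory.

```python
def transcription_command(audio_path: str, out_dir: str) -> list[str]:
    """Build the basic-pitch CLI invocation that turns raw audio into MIDI."""
    return ["basic-pitch", out_dir, audio_path]

# Hypothetical input file; running this with subprocess.run(cmd) would
# write a *_basic_pitch.mid file into midi_out/, which you could then
# feed to Anticipatory and finally import into Logic/Ableton.
cmd = transcription_command("riff.wav", "midi_out")
```

The later stages (Anticipatory, the DAW) each consume the previous stage's MIDI file, so the whole chain is just file handoffs rather than a single integrated tool.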