btown · 7 years ago
This is an awesome project, but it seems it was done without reference to academic literature on source separation. In fact, people have been doing audio source separation for years with neural networks.

For instance, Eric Humphrey at Spotify Music Understanding Group describes using a U-Net architecture here: https://medium.com/this-week-in-machine-learning-ai/separati... - paper at http://openaccess.city.ac.uk/19289/1/7bb8d1600fba70dd7940877...

They compare their performance to the widely-cited state of the art Chimera model (Luo 2017): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5791533/#R24 with examples at http://naplab.ee.columbia.edu/ivs.html - from the examples, there's significantly less distortion than OP.

Not to discourage OP from doing first-principles research at all! But it's often useful to engage with the larger community and know what's succeeded and failed in the past. This is a problem domain where progress could change the entire creative landscape around derivative works ("mashups" and the like), and interested researchers could do well to look towards collaboration rather than reinventing each others' wheels.

EDIT: The SANE conference has talks by Humphrey and many others available online: https://www.youtube.com/channel/UCsdxfneC1EdPorDUq9_XUJA/vid...

musicale · 7 years ago
People have also been doing audio source separation effectively for years without neural networks.
calf · 7 years ago
It's interesting because I have a recording of human voices plus a background TV show that was too loud; I've looked around for something that could separate the two, but I haven't found a straightforward solution.

For example, if you Google it, FASST is one of the tools that comes up, but it's a whole framework, and in order to use it you'd have to learn the research yourself; much of this software is not geared toward end users.

wodenokoto · 7 years ago
> but it seems it was done without reference to academic literature on source separation

Sums up Towards Data Science pretty well.


emcq · 7 years ago
What motivates people to invent phrases like "perceptual binarization" when googling "audio binary mask" literally gives you citations in the field that have been doing this for years?

For example, "Musical sound separation based on binary time-frequency masking" (2009).

Or more recent stuff using deep learning. Also the field generally prefers ratio masks because they lead to better sounding output.
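
For the curious, here's a minimal numpy/scipy sketch of the two mask types (not from the article; the signals below are random placeholders, and in practice a network has to estimate the mask from the mixture alone):

    import numpy as np
    from scipy.signal import stft, istft

    fs = 22050
    vocals = np.random.randn(3 * fs)   # placeholder "vocal" signal
    accomp = np.random.randn(3 * fs)   # placeholder accompaniment
    mix = vocals + accomp

    def spec(x):
        return stft(x, fs=fs, nperseg=1024)[2]   # complex STFT

    V, A, M = np.abs(spec(vocals)), np.abs(spec(accomp)), spec(mix)

    binary_mask = (V > A).astype(float)   # ideal binary mask ("binarization")
    ratio_mask = V / (V + A + 1e-8)       # ideal ratio mask, usually smoother sounding

    est_from_binary = istft(binary_mask * M, fs=fs, nperseg=1024)[1]
    est_from_ratio = istft(ratio_mask * M, fs=fs, nperseg=1024)[1]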

avian · 7 years ago
I know from my own experience that it's possible to dig yourself quite deep into some niche research field without realizing that there's an existing body of knowledge about it. If neither you nor anyone else in your research circle knows the right keywords to enter into search fields, it's really easy to overlook piles of published papers.

I want to say things were different back when we relied more on human librarians in searching for literature, but unfortunately history is full of cases where people independently discovered the same things as well.

brianberns · 7 years ago
One extremely amusing/alarming example of this: “A Mathematical Model for the Determination of Total Area Under Glucose Tolerance and Other Metabolic Curves”

https://academia.stackexchange.com/questions/9602/rediscover...

ska · 7 years ago
My approach to avoid this is always to try and find a recent, well-written Master's or Ph.D. thesis in the area. You can't always find them of course, but if you do, they tend to have pretty good context and a more detailed bibliography than you'll find elsewhere.

That said, if you are still at the point of inventing new terms for things people have been doing for decades, you are probably being fairly superficial in the area as well.

Research areas like CNNs are especially prone to this because it is so much easier to apply the techniques than to understand the problem domain, which generates a lot of low-quality research papers. See also "when all you have is a hammer".

evanweaver · 7 years ago
Early on building https://faunadb.com/, it took us about a year to discover that the literature referred to "historical" or "time travel" queries as temporality. Other startups in our space were also using homemade jargon.

We figured somebody else had for sure done this and kept searching for different keywords until we picked up a bunch of papers with the correct terminology.

sorryforthethro · 7 years ago
Also, the time investment to learn a congruent field's jargon is often much greater than just making up words and letting the internet peanut gallery sort out the synonyms for you.
coredog64 · 7 years ago
Maybe this is a Cunningham’s Law type attempt at finding prior art for a new patent.


zxcvvcxz · 7 years ago
If you look smart and inventive you appear to have higher social status. Helps nerds impress women I reckon.
GistNoesis · 7 years ago
Hello, a little self-promotion: you can see our experiment with some deep neural networks doing real-time audio processing in the browser, using TensorFlow.js:

http://gistnoesis.github.io/

If you want to see how it's done it's shared source : https://github.com/GistNoesis/Wisteria/

Thanks

SyneRyder · 7 years ago
Does anyone know if this is related to the new iZotope RX 7 vocal isolation & stemming tools? It does seem to be talking about something similar, especially when it mentions using the same technique to split a song into instrument stems.

(Or to put it another way - there is commercial music software released in the last year that lets you do this yourself now.)

https://www.youtube.com/watch?v=kEauVQv2Quc

https://www.izotope.com/en/products/repair-and-edit/rx/music...

pizza · 7 years ago
Going back further, X-Tracks did this ~5 years ago https://vimeo.com/107971872

That said, I think a deep learning approach will likely do a lot better (and be a lot easier to develop, imo)

Also, check out Google's Magenta project; it aims to use ML in various music / creativity projects.

I personally plan on doing a project that will involve audio source separation as well as sample classification. A good trick for analyzing audio data is to convert it into images (maybe with some additional pre-transformations applied, such as passing it through an audio filter that exaggerates human-perceived properties of sound) and then just use your run-of-the-mill, bog-standard, state-of-the-art image classifiers on the resulting audio spectrograms with some well-chosen training/validation sets.
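
To make the "audio as images" trick concrete, a rough sketch might look like this, with log-mel spectrograms fed to a small Keras CNN (file names, clip length and the number of classes here are made up):

    import numpy as np
    import librosa
    import tensorflow as tf

    def clip_to_image(path, sr=22050, n_mels=128, duration=3.0):
        y, _ = librosa.load(path, sr=sr, duration=duration)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        db = librosa.power_to_db(mel, ref=np.max)   # log scaling roughly matches perception
        return db[..., np.newaxis]                  # (n_mels, frames, 1): a 1-channel image

    # images = np.stack([clip_to_image(p) for p in paths])  # hypothetical list of clip paths

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, None, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),   # e.g. 10 hypothetical sample classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])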

Eli_P · 7 years ago
That's interesting, are you going to make something like LabelImg[1]? I've been looking for something like that for audio, yet I'm not sure about treating audio as images. I've heard of this trick, but neural networks for audio are usually RNNs, GRUs[2], maybe LSTMs, while images are processed with CNNs.

[1] https://github.com/tzutalin/labelImg [2] https://en.wikipedia.org/wiki/Gated_recurrent_unit

ambicapter · 7 years ago
Do you have to convert it into an image? What is it about classifiers that requires image input? I've always found it very cool that audio compression and image compression end up using similar frequency-space techniques sometimes.
bsaul · 7 years ago
I used to work in an audio processing research center back in 2003, and colleagues next to me were able to isolate each instrument in a stereo mix, live, using the fact that each was "placed" at a different spot in the stereo plane.

Don't ask me how they did that, it was close to magic to me at the time, but I'm sure it wasn't neural networks. Although it probably involved convolution, as that is the main tool for producing audio filters.

If anyone has more info on the fundamental differences between the neural network approach and the "traditional" one, I'd be thankful.

derekp7 · 7 years ago
Back when I was a teen, I used to strip out the vocal tracks on stereo music by disconnecting the speaker ground wires from the back of the amplifier and connecting them to each other (so each speaker would ground against the other one). Since the vocal was "center", it had the same waveform on both speakers, so the speakers couldn't make a sound from it, having nowhere to dump that signal. And the instrumentals weren't distorted too much. At least it worked well enough for a cheap karaoke setup.
robbrown451 · 7 years ago
I discovered this in the 80s when the wires to my car stereo speakers came loose and suddenly the vocal or lead instrument was missing from the music. Really puzzled me for a bit 'til I found the problem.
redsky17 · 7 years ago
You can generally do this pretty closely digitally by taking the left and right channels of a stereo track and inverting the phase on one of them. Since the vocals are usually panned center (as you said), the inverted phase ends up destructively cancelling them. Depending on how the instrumentation was mixed, the instruments are usually left pretty much intact.
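
If you want to try it digitally, a few lines of Python will do it (the file names are just placeholders):

    import soundfile as sf

    stereo, sr = sf.read("song.wav")        # hypothetical stereo file, shape (samples, 2)
    karaoke = stereo[:, 0] - stereo[:, 1]   # center-panned content cancels out
    sf.write("karaoke.wav", karaoke, sr)
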
heywire · 7 years ago
I discovered something similar by accident, only with the line audio cables between my tape/CD player and the amplifier. I would pull the plugs out just enough so the tip made connection but not the ring. It would give the same effect.
tomc1985 · 7 years ago
There's a trick you can do to isolate vocals from some music by essentially flipping one of the stereo channels and combining their waveforms. Everything panned dead center cancels out and you're left with anything not panned hard center. Recombine that with the original stereo file converted to mono and you then get the vocals, usually a bunch of cruft from the reverb, and anything else panned hard center.
urvader · 7 years ago
Most smartphones shift phases on the left/right channels to make it sound better in headphones (simulating a bigger room). This means you can combine the two channels (by connecting the positive wires, or just wiggling the plug a little when it's halfway into the headphone jack) to extract the vocal track. Not as fancy as a CNN, but I'm guessing it would be better to do something with the stereo information in preprocessing before training.
tobr · 7 years ago
Removing the center is possible, but how exactly would you "recombine" it with the original to keep the center? Correct me if I'm wrong, but I don't think the math works out like that.

Say we have mixed three sources, let's call them L (panned hard left), C (center) and R (hard right). Then the left channel has +L+C, and the right channel has +R+C.

Now we phase invert one of them, say the right channel, and combine them. The new mono file is +L+C-R-C. +C-C cancels out and we're left with +L-R.

Since +R and -R sound essentially the same, it sounds as if we had originally done a mono mix of L and R (+L+R).

But we can't combine this with a straight mono conversion (+L+C+R+C) in any way that will remove both L and R. All we can do is reproduce +L+C or +R+C.
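
A quick numpy sanity check of the algebra above, with random stand-ins for L, C and R:

    import numpy as np

    L, C, R = np.random.randn(3, 1000)
    left, right = L + C, R + C

    assert np.allclose(left - right, L - R)   # the center cancels
    mono = left + right                       # = L + R + 2C
    # any blend a*(left - right) + b*mono = (a+b)L + (b-a)R + 2bC,
    # so the only way to drop C is b = 0, which leaves a*(L - R), never L + R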

skykooler · 7 years ago
Interestingly, with a Nexus 5 and a certain pair of earbuds, this seems to happen automatically (minus the last step) - with most songs, I can hear the backing tracks, but no vocals. The earbuds are passive components, so I assume there's something going on electrically that's combining the stereo channels somehow.
bsmith · 7 years ago
This works especially well for pop tunes, where the vocal track is by far the most prominent and is almost universally panned dead-center in the mix.
cannam · 7 years ago
Almost certainly something like "Real-time Sound Source Separation: Azimuth Discrimination and Resynthesis" (Barry et al, https://arrow.dit.ie/argcon/35/)

This is a more sophisticated generalisation of the idea of inverting one channel and averaging in order to isolate the centre-panned vocal (or remove it for karaoke purposes).

It works well with mixes in which stereo placement is entirely by pan pot, adjusting the left/right volume levels of each instrument individually in order to place it on the stereo image. It doesn't work so well with real room recordings, where stereo placement is determined also by timing information (read e.g. https://www.audiocheck.net/audiotests_stereophonicsound.php)

This is a better-specified problem than monaural source separation, which I think is what the original article here is about.

bsmith · 7 years ago
I'm a fairly n00b-level hobbyist music producer, but I can take a stab.

I like thinking of the "stereo image" of a track as having 3 dimensions in physical space. If you imagine looking forward at a stereo system, left to right is the "pan" (how much of a signal is originating from the left vs. right speakers, with 50% each being dead center), volume of each track (or instrument) is how distant (or close) the sound is (you can imagine an individual instrument moving forwards toward you if louder, and vice versa), and the frequency (or pitch) is how high the track is (with lower pitch or frequency being at the floor).

When a mixing engineer "mixes" a track, each track (or instrument) tends to be: 1) adjusted to an appropriate volume relative to other tracks (forward/backward), 2) panned or spread left/right (usually with more stereo "width" on the higher-frequency sounds), and 3) "equalized" to narrow and manipulate the band of frequencies coming through to the master mix from that track (it's common to cut out some of the harmonics on either side of the fundamental of one instrument to leave "space" for another instrument competing for the same frequency range).

So now, even with a final, mastered track, we could apply various filters to de-mix fairly easily if the mixing engineer has left a good amount of "space" in between the elements in this stereo image. If our bass is always under 150 Hz and every other element is above 200 Hz, a simple low pass will grab the bass (and likely also the kick drum, if there is one). But we could also use a "gate" to only allow a signal of a certain magnitude (volume) through to isolate that kick hit, or the inverse to exclude the kick and just get the bass sound. A band-pass could do a decent job on a guitar or vocal track, but will also grab background noise from other tracks sharing the same frequency range. More complicated techniques could be used to isolate things that have been panned left or right of center based on comparison between the left and right stereo channels of the mix.
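
For example, the low-pass part could look something like this in scipy (the 150 Hz cutoff is just the figure above, and "mix.wav" is a placeholder):

    import soundfile as sf
    from scipy.signal import butter, sosfiltfilt

    mix, sr = sf.read("mix.wav")                       # placeholder mixed track
    sos = butter(4, 150, btype="lowpass", fs=sr, output="sos")
    lows = sosfiltfilt(sos, mix, axis=0)               # bass (and probably the kick)
    sf.write("bass_and_kick.wav", lows, sr)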

These techniques won't be as universally useful as the approach in this article, but for certain tracks or sections of tracks, can give very good results very quickly. More tracks and effects like reverb and distortion make all of this more difficult to do with simpler techniques.

vonseel · 7 years ago
I think you're overestimating how independent each instrument's section of the frequency spectrum is, even after mixing engineers cut EQ to make space like you said.

The reality is, even instruments like kick drum or bass guitar have significant tonal content above 200 Hz, much of it overlapping with the guitar and vocal ranges.

nabla9 · 7 years ago
>using the fact that they were "placed" on different spot in the stereo plane.

From the description it looks like they used FastICA. https://en.wikipedia.org/wiki/FastICA

The traditional approach is faster and less resource-intensive. This approach is just showing that it can be done, at this point.

hooloovoo_zoo · 7 years ago
Historically, one approach has been independent component analysis (https://en.wikipedia.org/wiki/Independent_component_analysis).
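
As a rough sketch of what that looks like in practice, scikit-learn's FastICA can be pointed at a stereo file; note that two channels can separate at most two sources, and it assumes an instantaneous mix with little reverb (the file name is a placeholder):

    import numpy as np
    import soundfile as sf
    from sklearn.decomposition import FastICA

    stereo, sr = sf.read("stereo_mix.wav")   # placeholder file, shape (samples, 2)
    ica = FastICA(n_components=2)
    sources = ica.fit_transform(stereo)      # columns are the estimated sources

    for i in range(sources.shape[1]):
        s = sources[:, i]
        sf.write(f"source_{i}.wav", s / np.abs(s).max(), sr)   # rescale; ICA output level is arbitrary
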
meatsock · 7 years ago
you can compare the L and R channels to each other to separate signals out of a stereo field. https://cycling74.com/forums/separating-stereo-segments-poss...
tasty_freeze · 7 years ago
Trivia: Avery Wang, the guy who invented the Shazam algorithm and was their CTO, did his PhD thesis on this topic:

https://ccrma.stanford.edu/~jos/EE201/More_Recent_PhD_EEs_CC...

"Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation" (1994)

sytelus · 7 years ago
Please stop publishing on Medium. I'm getting the error "You read a lot. We like that. You’ve reached the end of your free member preview for this month. Become a member now for $5/month to read this story".

Not gonna do that.

computerex · 7 years ago
Share this sparingly lest it gets "fixed" ;) https://outline.com/SauXTY
En_gr_Student · 7 years ago
There is a lagged autoregressive technique used in forensic analysis that allows 3D reconstruction from 1D (mic) sound.

A CNN should be able to back that out too, and do other things like regenerate a 3D space. In the right high-fidelity acoustic tracks, there could be enough spatial information to reconstruct a stage and a performance. It would be neat/beautiful/(possibly very powerful) to back video out of audio in that way.