I'd be more excited if this weren't so tame. The Nord Modular had genetic algorithm patch mutation well over a decade ago, details starting on page 99.
The Hartmann Neuron took a similar approach with neural networks in 2003: https://www.soundonsound.com/reviews/hartmann-neuron
I mean, well done and everything, it's a good project, but Synthesize Brand New Sounds In Ways Never Before Possible!!! is a pitch that synth users hear year after year (pun intended). It turns out that musicians don't like black-box patching all that much; they prefer morphing things in parameter space because, being musicians, they want to interact with their instruments, whether that's timbrally, melodically, or harmonically.
Electronic musicians in particular don't need More Sounds, or even More Oscillators and More Filters and More FX - sure, those are interesting, but honestly people are already spoiled for choice. What people like most is an instrument whose timbral range may be limited but which has a strong center - secondary characteristics remain largely consistent as primary variables are manipulated, so oscillators don't thin out at higher or lower ranges, filter resonance (Q) isn't damped so aggressively that raising it changes the gain structure, and so forth. The nicest thing an electronic musician can say about an instrument is not 'it can make so many sounds' but 'you can't get a bad sound out of it.'
Neat! Like a DIY Kaoss-pad-style 2D sample crossfader running on an RPi 3.
This is two parts: a high-end computer that analyses some source waves (with ML and neural magic!) and outputs blended samples you can place on a 2D grid, and a simple sample player (made with openFrameworks) running on an RPi 3 that mixes those generated waves depending on your x/y position.
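The player side of that is conceptually simple. Here's a minimal sketch (not the actual openFrameworks player, just an illustration, with hypothetical names) of bilinearly weighting four corner samples by the touch position on a unit square:

```python
import numpy as np

def bilinear_mix(corners, x, y):
    """Blend four equal-length mono sample buffers placed on the
    corners of a unit square, weighted by the (x, y) position.

    corners: dict with keys 'bl', 'br', 'tl', 'tr' (bottom-left,
             bottom-right, top-left, top-right), each a 1-D array.
    x, y:    position in [0, 1].
    """
    # Standard bilinear weights; they always sum to 1, so the
    # overall gain stays constant as you move around the pad.
    w_bl = (1 - x) * (1 - y)
    w_br = x * (1 - y)
    w_tl = (1 - x) * y
    w_tr = x * y
    return (w_bl * corners['bl'] + w_br * corners['br'] +
            w_tl * corners['tl'] + w_tr * corners['tr'])
```

At (0, 0) you hear the bottom-left sample untouched; at the center you get an equal four-way mix. The interesting part of NSynth Super is that the corner (and intermediate grid) samples are neurally generated hybrids rather than plain recordings, so the interpolation happens in the model's latent space offline, not just in amplitude at playback time.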
However, it doesn't sound interesting or good in what they show; they probably need a better demo without any Roland classics. Their bass/piano mix sounds mushy and is essentially the most boring average synth sound I could imagine. The most interesting thing is the flute/snare crossover, which is buried in the overlong promotional fluff video.
Would be nice to hear a demo that really puts out the 'new' neural sounds.
edit: the essential 15 seconds of the video here: https://www.youtube.com/watch?time_continue=100&v=iTXU9Z0NYo...
"You will also need the following Open NSynth Super-specific items" super specific indeed.
If someone is interested in machine learning and music, I'd send them to http://wekinator.org instead, which is actually a research project rather than a marketing-campaign one-off, and can be set up, run, and played with in a matter of minutes.
I wish I were a little more excited, but the results honestly sound rather like what a Yamaha TG-33 outputs, or any other wavetable synth where you can crossfade between two sounds.
I love the idea of using neural networks to find new sounds and possibilities, but for some reason the NSynth project just doesn't hit it for me. Would love to be convinced otherwise.
Neat. Really needs someone to go ahead and mass-produce it. I assume Google realized the market is too small for them to worry about, but if someone could build them in bulk, I'm sure they would find an audience of people willing to pay a decent price.
My guess is there are not a lot of people who could both a) build this in a short amount of time and b) find practical uses for it.
I seem to recall that NSynth and WaveNet only operate at 16 kHz mono, or perhaps it was even lower. Are we now able to generate full 44.1 kHz sound?
Aphex Twin did it better:
Not that I mind having this sort of technology promoted by the likes of Google (casts glance at two 19" racks full of synthesisers), but I think I'd prefer to go with what Mr. D. James comes up with over the corporate bread-maker path.
Distortion was "discovered" by misuse of technology; I'm sure it would never have been invented or discovered on purpose. Digital technology can't be abused the same way analog technology can.
Too lazy to log into GitHub: where is the nsynth-generate that is referenced in the repo in audio/README.md? It's not in the repo in any case, and there's no link either ... but a hyperlink to tmux is given. Mixed-up priorities!