Although I've toyed with various synthesis techniques and have working experience with Faust, I must admit most of the project description is way over my head.
To hijack the thread a bit, I'm interested in synthesizing sound samples from scratch: things like footsteps, squeaking doors, wind, birds, percussion, and collision sounds. I'd like a synth of sorts that can be configured through ~30 parameters to produce a wide range of such effects. People often use SFXR and its clones [0] for this purpose. Is there a model that would produce more convincing results while keeping such a low number of parameters?
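For a sense of the parameter space I have in mind, here's a minimal sketch of an SFXR-style generator. The parameter names and the "laser zap" preset are invented for illustration; SFXR's real knob set is similar in spirit but larger:

    # Hypothetical sketch of an SFXR-style parametric effect generator.
    # Parameter names and ranges are invented; real SFXR exposes a
    # similar but larger set of knobs (~20-30).
    import numpy as np

    SR = 44100  # sample rate in Hz

    def render(params):
        """Render a short effect from a small dict of parameters."""
        n = int(params["duration"] * SR)
        t = np.arange(n) / SR
        # Pitch envelope: base frequency with a linear slide over time.
        freq = params["base_freq"] + params["freq_slide"] * t
        phase = 2 * np.pi * np.cumsum(freq) / SR
        # Waveform selection.
        if params["wave"] == "square":
            osc = np.sign(np.sin(phase))
        elif params["wave"] == "noise":
            osc = np.random.uniform(-1, 1, n)
        else:
            osc = np.sin(phase)
        # Simple attack/decay amplitude envelope.
        attack = int(params["attack"] * SR)
        env = np.ones(n)
        env[:attack] = np.linspace(0, 1, attack)
        env[attack:] = np.linspace(1, 0, n - attack)
        return osc * env

    # A "laser zap": high start pitch sliding down fast.
    zap = render({"wave": "square", "base_freq": 1200.0,
                  "freq_slide": -3000.0, "duration": 0.3, "attack": 0.01})

Even this toy version shows why a few well-chosen parameters go a long way: most retro effects are just a waveform, a pitch trajectory, and an envelope.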
Wow, this is really neat. I was wondering whether it is GPU-optimized, and was not disappointed:
"Fragment is a collaborative cross-platform audiovisual live coding environment with pixels based real-time image-synth approach to sound synthesis, the sound synthesis is powered by pixels data produced on the graphics card by live GLSL code, everything is based on pixels!"
Now, if only there were some way to transpile existing VST/AU plugins into this platform. I've been dreaming of GPU-accelerated legacy instruments and FX for a while!
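For anyone unfamiliar with the pixels-to-sound idea, here's a rough sketch of how such image synths generally work: each pixel row drives one sine partial, brightness sets its amplitude, and columns are scanned left to right over time. This is only an illustration of the additive-synthesis principle, not Fragment's actual GLSL pipeline:

    # Minimal image-to-sound sketch: each row of the image is a sine
    # partial, pixel brightness is its amplitude, and columns are
    # scanned left to right over time. Illustrative only.
    import numpy as np

    SR = 44100

    def image_to_audio(img, duration=2.0, f_lo=55.0, f_hi=7040.0):
        """img: 2D array (rows x cols) of brightness values in [0, 1]."""
        rows, cols = img.shape
        n = int(duration * SR)
        t = np.arange(n) / SR
        # Map rows to log-spaced frequencies, low pitches at the bottom.
        freqs = np.geomspace(f_lo, f_hi, rows)[::-1]
        # Which image column is "playing" at each output sample.
        col = np.minimum((t / duration * cols).astype(int), cols - 1)
        out = np.zeros(n)
        for r in range(rows):
            amp = img[r, col]          # per-sample amplitude from pixels
            out += amp * np.sin(2 * np.pi * freqs[r] * t)
        return out / rows              # rough normalization

    # A single bright row produces a steady sine tone.
    img = np.zeros((64, 256)); img[32, :] = 1.0
    audio = image_to_audio(img)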
Hmmm, seems very cool, but I'd need a tutorial for this.
Gotta say, the docs alone are impressive.
very impressive work!
This idea seems to strike many different people at different times. One of my favorites is the ANS synthesizer [0], built with optical tone wheels. There's an Android app simulating it [1] (my only relation to it is as a user). The synth presented on this GitHub page really goes a step beyond.
[0] https://en.m.wikipedia.org/wiki/ANS_synthesizer
[1] https://play.google.com/store/apps/details?id=nightradio.vir...