Image-based sound synthesizer?

Programming applications for making music on Linux.


unfa
Established Member
Posts: 129
Joined: Tue May 17, 2011 10:43 am
Location: Warsaw, Poland

Image-based sound synthesizer?

Post by unfa »

I have an idea for an LV2 plugin that'll synthesize sounds based on input images, treating them as spectrograms.
Basically, the reverse of generating a spectrogram, done in a DAW-friendly way that'd also allow the user to play musical notes (pitching the synthesized result according to MIDI input).

That'd be an image-based synthesizer. I'd rather think of this as a tool for sound effects design, but it could also be used in music.
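To make the idea concrete, here's a minimal sketch of that core resynthesis step in C, assuming the image is already loaded as a grayscale buffer (the 20 Hz to 20 kHz log-frequency mapping, the frame length and the crude amplitude scaling are arbitrary choices, not taken from any existing tool): each row becomes one sine partial, each column one time frame, and pixel brightness drives the partial's amplitude.

/* Sketch: additive resynthesis of a grayscale image treated as a spectrogram.
 * Assumptions: row 0 = top of the image = highest frequency, partials are
 * log-spaced between FMIN and FMAX, one image column lasts FRAME samples. */
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SR    48000.0  /* sample rate */
#define FMIN  20.0     /* frequency of the bottom row */
#define FMAX  20000.0  /* frequency of the top row */
#define FRAME 512      /* output samples per image column */

/* pixels: width*height grayscale values in [0,1]. Returns width*FRAME samples. */
double *render_image(const double *pixels, int width, int height)
{
    double *out   = calloc((size_t)width * FRAME, sizeof(double));
    double *phase = calloc((size_t)height, sizeof(double));
    int rows = height > 1 ? height - 1 : 1;

    if (!out || !phase) { free(out); free(phase); return NULL; }

    for (int col = 0; col < width; col++) {
        for (int row = 0; row < height; row++) {
            /* Map the row index to a log-spaced partial frequency. */
            double pos  = (double)(height - 1 - row) / rows;
            double freq = FMIN * pow(FMAX / FMIN, pos);
            double amp  = pixels[row * width + col] / height; /* crude scaling */
            double inc  = 2.0 * M_PI * freq / SR;

            if (amp <= 0.0) { phase[row] += inc * FRAME; continue; } /* keep phase running */
            for (int n = 0; n < FRAME; n++) {
                out[col * FRAME + n] += amp * sin(phase[row]);
                phase[row] += inc;
            }
        }
    }
    free(phase);
    return out;
}

MIDI pitching would then just mean scaling every partial frequency by the same ratio before rendering, or resampling the rendered result.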

Right now the only tool I've found in the Linux audio world that can synthesize sound from images is ARSS, and it's a bit of a pain in the ARSS to use ;)

To synthesize a sound from an image you have to:

1. Download the ARSS binary (https://sourceforge.net/projects/arss/)
2. Prepare the image
3. Save your image in a specific BMP format, because it's the only one ARSS supports
4. Compose a shell command to synthesize a WAV file from your BMP file (see the sketch after this list)
5. Run ARSS
6. Listen
7. Tweak the command-line parameters and repeat from step 4, or tweak the image and repeat from step 2
8. Use a sampler plugin or some other way of getting that WAV file into your DAW session.
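For reference, steps 2 to 5 usually boil down to something like the lines below. The ImageMagick convert call is standard; the ARSS option names are written from memory, so treat them as an assumption and check arss --help for the real ones:

# Convert the prepared image to the 8-bit grayscale BMP that ARSS expects
convert drawing.png -colorspace Gray -depth 8 BMP3:drawing.bmp

# Resynthesize it into a WAV file in sine mode (option names may differ)
arss drawing.bmp output.wav --sine --min-freq 27.5 --max-freq 20000 --pps 100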

I wish there was a DAW-friendly way of doing this, like an LV2 plugin that can just load an image file and has some knobs I can tweak live. It'd be fantastic if the sound synthesis could happen in realtime, because that'd make it possible to automate some parameters and have that actually reflected in the sound.
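For what it's worth, the plugin shell itself isn't the hard part. Here's a rough, hypothetical skeleton in C of what such a plugin could look like, with one automatable gain control and a run() callback that loops a sample already rendered from the image. The URI, the port layout and the idea of rendering in instantiate() are made up for illustration; a real plugin also needs the usual .ttl manifest plus an Atom port for the MIDI part, and older LV2 installs use the lv2/lv2plug.in/ns/lv2core/lv2.h include path instead.

/* Hypothetical LV2 plugin skeleton: one control port (gain), one audio output,
 * and a run() callback that loops a wavetable rendered from the image. */
#include <stdlib.h>
#include <lv2/core/lv2.h>

#define IMG_SYNTH_URI "urn:example:imgsynth"  /* hypothetical plugin URI */

typedef struct {
    const float *gain;    /* control port 0: output gain, automatable by the host */
    float       *out;     /* audio port 1: output buffer */
    float       *table;   /* sample rendered from the image (rendering not shown) */
    size_t       length;  /* length of that sample in frames */
    size_t       pos;     /* playback position */
} ImgSynth;

static LV2_Handle
instantiate(const LV2_Descriptor *d, double rate, const char *path,
            const LV2_Feature *const *features)
{
    /* A real plugin would load and render the image here, ideally in a
       worker thread so run() stays realtime-safe. */
    return (LV2_Handle)calloc(1, sizeof(ImgSynth));
}

static void connect_port(LV2_Handle h, uint32_t port, void *data)
{
    ImgSynth *self = (ImgSynth *)h;
    if (port == 0) self->gain = (const float *)data;
    if (port == 1) self->out  = (float *)data;
}

static void run(LV2_Handle h, uint32_t n_samples)
{
    ImgSynth *self = (ImgSynth *)h;
    float gain = self->gain ? *self->gain : 1.0f;
    for (uint32_t i = 0; i < n_samples; i++) {
        float s = 0.0f;
        if (self->table && self->length) {
            s = self->table[self->pos];
            self->pos = (self->pos + 1) % self->length;  /* loop the rendered sample */
        }
        self->out[i] = gain * s;
    }
}

static void cleanup(LV2_Handle h) { free(h); }

static const LV2_Descriptor descriptor = {
    IMG_SYNTH_URI, instantiate, connect_port,
    NULL, run, NULL, cleanup, NULL
};

LV2_SYMBOL_EXPORT const LV2_Descriptor *lv2_descriptor(uint32_t index)
{
    return index == 0 ? &descriptor : NULL;
}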

ZynAddSubFX's PADsynth generates samples under the hood (and it works as an LV2 plugin), so I guess that's one approach. I think that Image-Line Harmor synthesizes sound from images on the fly (and does a lot of other additive synthesis awesomeness, but that's not what I'm looking for).
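The PADsynth-style route also makes the MIDI pitching cheap: render the image into a looping wavetable once, then play that table back at a rate scaled per note. A small sketch, assuming the table was rendered to correspond to A4 (the BASE_NOTE constant and the helper name are arbitrary):

#include <math.h>
#include <stddef.h>

#define BASE_NOTE 69  /* assume the rendered table corresponds to A4 = 440 Hz */

/* Play `frames` samples from a looping wavetable, transposed to `midi_note`
 * by varying the playback rate, with linear interpolation between samples. */
void play_note(const float *table, size_t length, int midi_note,
               float *out, size_t frames, double *pos)
{
    double ratio = pow(2.0, (midi_note - BASE_NOTE) / 12.0); /* equal temperament */
    for (size_t i = 0; i < frames; i++) {
        size_t i0 = (size_t)*pos;
        size_t i1 = (i0 + 1) % length;
        double frac = *pos - (double)i0;
        out[i] = (float)((1.0 - frac) * table[i0] + frac * table[i1]);
        *pos += ratio;
        if (*pos >= (double)length)
            *pos -= (double)length;  /* wrap around the loop */
    }
}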

What do you think?
autostatic
Established Member
Posts: 1994
Joined: Wed Dec 09, 2009 5:26 pm
Location: Beverwijk, The Netherlands

Re: Image-based sound synthesizer?

Post by autostatic »

Sounds like a great project! You might want to look at Virtual ANS: http://www.warmplace.ru/soft/ans/ or PixiVisor from the same developer: http://warmplace.ru/soft/pixivisor/
I don't think these projects are open source though, but maybe you could drop the dev a line ;)
davephillips
Established Member
Posts: 592
Joined: Sat Aug 15, 2015 1:05 pm

Re: Image-based sound synthesizer?

Post by davephillips »

unfa wrote:...Right now the only tool I've found in the Linux audio world that can synthesize sound from images is ARSS...
Ceres, Csound, OpenMusic, and Pure Data (aka Pd) all do image-to-sound conversion.

Best,

dp
Onirom
Established Member
Posts: 16
Joined: Tue Jan 24, 2017 11:17 pm

Re: Image-based sound synthesizer?

Post by Onirom »

There is the Fragment synthesizer, which uses pixel data as the basis for everything: https://www.fsynth.com

It is real-time and fully open source (a JavaScript client and server side, plus an advanced sound server in C). You can import (drag & drop) images or sounds and display them however you want with a few lines of live GLSL code, then play them forward, backward, or any way you like from a MIDI controller (MPE-ready) or from code. The played pixel data can also be recorded, imported back, and played again. Since it uses the GPU, you can do whatever you want to the pixel data in real-time. I suggest watching all the videos to see the possibilities. :)

The audio-to-image conversion is not as good as ARSS's, though, and the sound engine may show some limits as well, because it uses a heavily optimized (albeit very large and basically unlimited) bank of oscillators. The phase data is accessible but not used; using FFT/IFFT is planned.

There is also an audio server written in C which is quite fast and can add band-limited noise. It was made as a performance band-aid and because I wanted to output the sound to my DAW of choice :P You can feed it pixel data and it will output sound synthesized with additive or granular synthesis. The program is quite generic, has a simple protocol, is easy to compile, and can be found here: https://github.com/grz0zrg/fas

One advantage of the sound server, besides the fact that it has many settings and supports granular synthesis, is that it supports distributed sound synthesis out of the box: many cores can be used (memory hungry), or even multiple machines over the network. Someone could literally build a beast of a synth with it, with an almost unlimited partials range.

This can be used in a DAW (it supports multiple output channels), although it is not as DAW-friendly as many would want: basically, you also need a client (like Fragment) to feed it the pixel data over the network via WebSocket.

Anyway, I am actually writing DAW-friendly software with a bit of a unique twist not yet seen in the software that already does this. I don't have any knowledge of plugins, though, so I am rather slow at making it, but if anyone wants to team up... :)