Go-DSP-Guitar multichannel multi-effects processor



Re: Go-DSP-Guitar multichannel multi-effects processor

Post by tramp »

andrepxx wrote: Fri Jan 31, 2020 6:13 pm
Also note that go-dsp-guitar is NOT primarily about performance. There are plenty of real-time audio applications / plugins for Linux and guitar that achieve good performance. (I'm thinking of Guitarix and rakarrack.) This project is about exactness and customizability. The code originates from a "circuit simulation" approach. It "models" the way guitar amplifiers work much more closely than other "guitar plugins", potentially sacrificing performance along the way.
Hear hear :?:
andrepxx wrote: Mon Feb 24, 2020 8:13 am That looks a lot like what I was trying to achieve with go-dsp-guitar. However, I didn't go all the way down to the component level, as it was too much work, and I suspected it wouldn't compute in real-time anyway. General-purpose circuit simulation like SPICE doesn't work in real-time since the step size is dynamic and can become arbitrarily small, so one has to cut some corners. But huge congrats for making that work!
I wonder what you mean by "The code originates from a "circuit simulation" approach", followed later by "I didn't go all the way down to the component level"
andrepxx wrote: Fri Jan 31, 2020 6:13 pm Finally ...
many instances won't work well on the same machine because they'll eat all 100% CPU and ask for more.
The application is not designed to have you running multiple instances of it on the same machine. (That won't even work because the communication between the UI and the "processing engine" occurs over a Socket, and if that is already created, a second instance of the application won't be able to spawn.) You can simply have it create more "channels" (inputs and outputs), and it will internally process them all concurrently within the same process (and therefore address space).

Regards.
"That won't even work because the communication between the UI and the "processing engine" occurs over a Socket, and if that is already created, a second instance of the application won't be able to spawn."
Oh, wrong, that could work very well when done the right way. All you need is to separate the processing space (no globals) and to be able to create and connect sockets on free ports. By the way, we've done it, but, I must admit, it may lead to a bigger code base than one might expect. :(
On the road again.

Re: Go-DSP-Guitar multichannel multi-effects processor

Post by andrepxx »

tramp wrote: Hear hear :?:
tramp wrote: I wonder what you mean by "The code originates from a "circuit simulation" approach", followed later by "I didn't go all the way down to the component level"
First of all, mind that English is not my first language, so I hope I can explain what I mean without it getting too wordy / complicated. :mrgreen:

If you take a look at how, for example, the "bandpass" effects unit works, you will see that it calculates a discretized version of the differential equations (difference equations) for an N-th order RC circuit.

When the software models, say, a high-pass or low-pass filter, it will not model the capacitor and the resistor individually and have them interact. It will rather go like: "If the cap voltage is U_cap(t) at time t and the input voltage to the entire RC circuit is U_in(t), then the cap's voltage will change by this and that amount over time (= dU / dt)." - And then it will integrate that over the time step in question and add it to the "old" voltage that the capacitor was charged to.
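To illustrate the general technique (this is a minimal sketch in Go, not our actual code), a first-order RC low-pass discretized with forward Euler could look like this:

```go
package main

import "fmt"

// rcLowpass runs a first-order RC low-pass over the input samples.
// Each step integrates dU_cap/dt = (U_in - U_cap) / (R*C) over one
// sample period (forward Euler), so uCap tracks the capacitor voltage.
func rcLowpass(in []float64, r, c, sampleRate float64) []float64 {
	out := make([]float64, len(in))
	dt := 1.0 / sampleRate
	uCap := 0.0
	for i, uIn := range in {
		uCap += dt * (uIn - uCap) / (r * c)
		out[i] = uCap
	}
	return out
}

func main() {
	// Step input: watch the "capacitor" charge toward 1 V.
	in := make([]float64, 8)
	for i := range in {
		in[i] = 1.0
	}
	// Example values: R = 10 kOhm, C = 100 nF, 48 kHz sample rate.
	fmt.Println(rcLowpass(in, 10e3, 100e-9, 48000))
}
```

For a high-pass you'd output U_in - U_cap (the resistor voltage) instead, and a higher-order filter could chain several such stages.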

However, on a component level, the capacitor is a proportional relation between voltage and charge, and the resistor is a proportional relation between voltage and current, where current is charge over time. But the current through the resistor (for example) is not constant while the capacitor charges / discharges. Therefore, if one were to simulate resistor and capacitor as individual components, one would have to choose a very small step size and do many iterations to model the charging / discharging capacitor with sufficient precision. If we model the entire RC circuit one abstraction level higher, we can go much "coarser" and still get results that describe the behaviour of the overall circuit very precisely.

The advantage of the "component-level" approach is, obviously, that the user could draw an arbitrary circuit instead of just parametrizing pre-fabricated "blocks" (like, say, a bandpass filter) and "plugging them together", which is what go-dsp-guitar does now. However, for component-level simulation you'd probably need some sort of computer algebra system to derive "simplified" equations suitable for the actual computation from the netlist that describes the circuit. You'd then need to evaluate these dynamically derived equations at run-time, so you'd basically need a built-in interpreter. Not only does that add a lot of code / complexity, it is also horribly slow compared to just running something that's pre-compiled.

The worst thing, however, is that when you simulate at this low level, you need a very small time step and / or many iterations for the solution to converge, which is basically what SPICE does when it simulates a circuit. That approach is not suitable for real-time simulation. It's also pretty useless even for the "batch mode", since SPICE already exists, and as far as I remember it can even use the WAV file format to store stimuli and simulation results (even though those WAV files are normally not sampled at the usual audio rates).

Therefore, I had to place restrictions on this, which allowed me to go up by basically one level of abstraction. We still have a physically motivated model, but only for the behaviour of the circuit as a whole (say, an RC circuit or a rectifier) and not for the individual components that make it up (say, a resistor and a capacitor, or a set of diodes). That can then be made fast enough to simulate in "real-time".

In go-dsp-guitar, you can say: "I need an 8th-order bandpass filter." - And you can then combine it with other "building blocks" (that go-dsp-guitar calls "units") such as linear or non-linear amplifiers, other filters (e.g. comb filters), LFOs, etc. and build useful (or useless but funny) stuff out of it. All these units can be parametrized. (For example, a bandpass filter can be parametrized by its lower and upper cutoff frequencies and the desired filter order.) However, you cannot lay out a filter (or another circuit) with an arbitrary design on the component level. We still simulate the behaviour of a circuit in a physically motivated way to process audio, but it is not a general-purpose circuit simulation working on the component level.
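In code, the "units in a chain" idea could be captured by something like this (the names are made up for illustration, not our actual API):

```go
package main

import "fmt"

// Unit is one building block in a signal chain. The interface and
// names here are hypothetical, for illustration only.
type Unit interface {
	Process(buf []float64, sampleRate float64)
}

// Gain is a trivial example unit that scales the signal.
type Gain struct{ Factor float64 }

func (g Gain) Process(buf []float64, _ float64) {
	for i := range buf {
		buf[i] *= g.Factor
	}
}

// Chain applies each unit to the buffer in order.
type Chain []Unit

func (ch Chain) Process(buf []float64, sampleRate float64) {
	for _, u := range ch {
		u.Process(buf, sampleRate)
	}
}

func main() {
	buf := []float64{0.1, -0.2, 0.3}
	ch := Chain{Gain{2.0}, Gain{0.5}} // in practice: bandpass, overdrive, ...
	ch.Process(buf, 48000)
	fmt.Println(buf)
}
```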

Note that "component-level simulation" is not required for "physically motivated models" or what we call a "circuit simulation approach". If we did not aim for "physical motivation", we could also use a biquad as a filter. But it would not be physically motivated. (Or perhaps I'm wrong in this assumption and there are, in fact, "physical (analog) implementations" of a biquad filter. I don't want to bet on it. But I hope you get what I mean.) We started off with a (very simple) "circuit simulation" which initially could only simulate very few "blocks". (Originally, we only supported non-linear amplifiers and filters, and only one or perhaps two variants of them.) Only then did we say: "Okay, let's see if we can get this fast enough so that we can process a guitar signal with it in real-time." - So in some sense, this was already a (at this stage very crude) "circuit simulator" which then became suitable for real-time use and turned into an audio "plugin". Therefore, the way we approach certain problems is often different from what audio applications "normally" do, and that's exactly what distinguishes us from "most other" audio applications.

Similar approximations / simplifications are used for other circuits / units (or parts thereof). An "envelope follower" is modeled as a rectifier and a low-pass filter, as is (often) done in the real world. The output voltage of that low-pass filter is then used to parametrize something else, for example the resonance frequency of a narrow bandpass filter, to create an "auto-wah" effect. Non-linear amplification / clipping is modeled using waveshapers and filters, which are at least loosely based on the linear or non-linear behaviour of actual analog components and / or circuits.
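As a rough sketch of that rectifier-plus-low-pass structure (again illustrative, not our actual code):

```go
package main

import (
	"fmt"
	"math"
)

// envelopeFollower rectifies the signal and smooths it with a one-pole
// low-pass, mirroring the analog rectifier + RC approach.
func envelopeFollower(in []float64, cutoffHz, sampleRate float64) []float64 {
	out := make([]float64, len(in))
	// One-pole smoothing coefficient for the chosen cutoff frequency.
	coeff := 1.0 - math.Exp(-2.0*math.Pi*cutoffHz/sampleRate)
	env := 0.0
	for i, x := range in {
		rectified := math.Abs(x)         // full-wave rectifier
		env += coeff * (rectified - env) // RC-style smoothing
		out[i] = env
	}
	return out
}

func main() {
	// Burst followed by silence: the envelope rises, then decays.
	in := []float64{0.9, -0.8, 0.7, -0.9, 0, 0, 0, 0}
	fmt.Println(envelopeFollower(in, 50, 48000))
}
```

The resulting envelope can then drive, say, the center frequency of a narrow bandpass to get the "auto-wah" behaviour.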

It's all physically motivated and based on actual (analog) circuits (often incorporating values that were actually measured from those circuits - which of course includes measurement errors), but it does not go down to individual components. For example, we measured lots of impulse / step / frequency / phase / ... responses of actual (analog) audio gear while developing go-dsp-guitar, and at least the impulse responses we measured are included with go-dsp-guitar.

We also assume that this slightly higher level of abstraction in our simulation (compared to an actual component-level representation) makes the application much more useful for audio engineers (and musicians) in the end, since an audio engineer might say: "I need a filter with a passband from 300 Hz to 3 kHz." - But he / she would probably not lay out a schematic for that. So this higher abstraction level not only makes the implementation simpler and allows the computation to be faster but is also often desirable from a user perspective. Of course, there might be situations where I'd be tempted to draw a circuit. However, most of the time, I'd probably just like to have "some filter" with some values that I can enter and / or "drag around".

More details about the project history can be found in the Q & A section of the project's README. Among other things, it explains how the project originally came about, how we came to the idea of creating more "physically motivated" audio effects, how we came to use Go as our programming language, JACK as our audio API, etc. The project has been around for a while (obviously) and, as with probably any non-trivial software project, there were technical details in the relatively early stages that led to particular design and / or technology choices. If we were to choose today, perhaps we'd choose differently - or perhaps not. In the end, I think we built a relatively solid "product" with several characteristics that distinguish it from existing (Linux) audio projects, that make it particularly useful for us, and that we hope make it useful for some others out there as well. And that's basically what matters.

In fact, the main reason for developing go-dsp-guitar was that I once heard someone play through a particularly rare and expensive distortion pedal. Since I am very interested in audio engineering (though not a professional audio engineer), I had some sort of intuition about how this pedal might work internally, but I could not really verify it, let alone "play my guitar through it", without building the actual circuit. (I did not own the pedal. I just had an audio file of someone playing through it.) However, I'm not an electrical engineer either, so I couldn't just design and build the electrical circuit to try it out. So what should I do? Well ... I started creating software that could simulate said circuit on a computer and apply it to my guitar signal, ideally in real-time. That's basically how the project came about.
tramp wrote: Oh, wrong, that could work very well when done the right way.
Well, we run multiple channels within the same process, on concurrent threads. We thought this was a "better" approach than allowing the application to be started multiple times.

In fact, allowing the application to be started multiple times would not be that hard. The socket is configured in a configuration file, which is looked up at a fixed location relative to the application executable. (The configuration file itself then contains relative paths to other files.) If we wanted to allow multiple socket addresses, the simplest way would probably be to add a command-line parameter that lets the user specify the path to another configuration file for the second, third, ... process, where other sockets are then specified. (Alternatively, we might just let the user override the socket with a command-line parameter.) But we thought the better way was to simply create multiple signal chains within the same process and run them on separate threads. If we wanted to allow multiple processes, we'd probably also have to let the user change the JACK client name, since I think it has to be unique (not sure about that though) - currently it's fixed to "go-dsp-guitar".
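Such an override could be as small as this (the flag is hypothetical; today the path is fixed):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag: let a second or third instance point at a
	// different configuration file (and thus a different socket).
	configPath := flag.String("config", "config/config.json",
		"path to an alternative configuration file")
	flag.Parse()
	fmt.Println("loading configuration from", *configPath)
}
```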

There are lots of good reasons for doing it all in the same process though. Since threads within the same process share memory, we can easily share resources among them. For example, if a "root of unity" for the FFT was already calculated for "channel 0", why calculate it again for "channel 1"? (... and hold it twice in memory, effectively leading to more cache misses down the road.) It also allows us to "mix down" all channels into a "master output" used for monitoring and apply simple "spatialization" to them, so that intensity and signal delay differ between the left and right channels depending on the audio source's position in the room. Last but not least, remember that go-dsp-guitar also has a "batch processing mode" that operates on WAV files. By processing all channels inside the same application process, we can easily process a multi-channel WAV file in one go. (You still have the option to load the individual channels from different files though.)
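The twiddle-factor sharing could look roughly like this (a sketch of the idea, not our actual code; it assumes a single fixed FFT size for simplicity):

```go
package main

import (
	"math"
	"math/cmplx"
	"sync"
)

// Twiddle factors ("roots of unity") for one fixed FFT size, computed
// once and then shared read-only by all channel goroutines.
var (
	twiddleOnce sync.Once
	twiddle     []complex128
)

func twiddleFactors(n int) []complex128 {
	twiddleOnce.Do(func() {
		twiddle = make([]complex128, n)
		for k := range twiddle {
			angle := -2.0 * math.Pi * float64(k) / float64(n)
			twiddle[k] = cmplx.Exp(complex(0, angle))
		}
	})
	return twiddle
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = twiddleFactors(4096) // every channel reads the same table
		}()
	}
	wg.Wait()
}
```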

I don't think it makes much sense to discuss lots of implementation-level details here. (At least I don't think there's much to be gained from it.) I would rather spend that time advancing the application itself. After all, the application is open-source and has quite extensive and (I hope) useful documentation.

Re: Go-DSP-Guitar multichannel multi-effects processor

Post by CrocoDuck »

That's interesting.
andrepxx wrote: Tue Mar 03, 2020 11:08 pm If you take a look at how, for example, the "bandpass" effects unit works, you will see that it calculates a discretized version of the differential equations (difference equations) for an N-th order RC circuit.
Looks like you use the finite difference method (or a variant thereof), as done (for example) here: https://onlinelibrary.wiley.com/doi/boo ... 0470749012. That's cool.
andrepxx wrote: Tue Mar 03, 2020 11:08 pm In go-dsp-guitar, you can say: "I need an 8th-order bandpass filter." - And you can then combine it with other "building blocks" (that go-dsp-guitar calls "units") such as linear or non-linear amplifiers, other filters (e.g. comb filters), LFOs, etc. and build useful (or useless but funny) stuff out of it.
That's cool too. However, the response of a chain of analog systems, even if linear, is not the product of the responses of the individual blocks, unless the analog systems are buffered, or naturally have high input impedance and low output impedance. This is because they mutually load each other. So, if I understand correctly, perhaps some chains will be fairly dissimilar from what they intend to model. But I reckon they will still sound good. A "SPICE" approach (as you call it) can model the mutual loading of the various circuit blocks.
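To make the loading effect concrete with a standard textbook example: two identical first-order RC low-pass sections give

H(s) = 1 / (1 + 2sRC + (sRC)^2) when a buffer sits between them, but
H(s) = 1 / (1 + 3sRC + (sRC)^2) when cascaded directly,

because the second section draws current through the first one's resistor. The extra sRC in the middle term is exactly the mutual loading I mean.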
andrepxx wrote: Tue Mar 03, 2020 11:08 pm (Or perhaps I'm wrong with this assumption and there are, in fact, "physical (analog) implementations" of a biquad filter. I don't want to bet on it. But I hope you get what I mean.)
Indeed there are: https://en.wikipedia.org/wiki/Electroni ... uad_filter

Re: Go-DSP-Guitar multichannel multi-effects processor

Post by andrepxx »

CrocoDuck wrote: That's cool too. However, the response of a chain of analog systems, even if linear, is not the product of the responses of the individual blocks, unless the analog systems are buffered, or naturally have high input impedance and low output impedance. This is because they mutually load each other. So, if I understand correctly, perhaps some chains will be fairly dissimilar from what they intend to model.
Yes, that is true, and I am aware of that fact. However, in the end, when you "disregard" it, what you end up doing is "idealizing" the models relative to their "real" counterparts (which is what we often do in simulation anyway). I suspect that, even though we're potentially simplifying things a lot, in the end it will still be more useful. (And, of course, it saves both a lot of code and a lot of CPU cycles.)

Usually, in an "ideal world", you'd want inputs to have infinite impedance and outputs to have zero impedance. The "loading" of outputs is normally not really desired. One particularly bad example: when you use certain wah pedals in conjunction with certain fuzz pedals, the combination can become self-oscillating, which is certainly not what you want. :mrgreen:

The advantage of regarding everything as "buffered" is that the effects of the units on your sound become "more orthogonal to each other" than they would otherwise be. When things are "more orthogonal", it will become easier to predict what is going to happen when you combine them, and also what changes you will have to make to each of them in order to arrive at a particular result (= sound in this case).

Similarly, when you select one of the "amplitude-based" effects (overdrive, distortion, fuzz, excess), it will not include the "tone-shaping filters" that most "real" OD pedals contain (normally before and after the clipping stage). (Some units do include filters for DC removal and anti-aliasing though.) This is because you can easily add a filter before / after the clipping stage explicitly if you really want to. Remember that we're not modeling any particular overdrive effect anyway. We're just modeling "overdrive" (soft clipping). Choosing a filter would imply a particular "voicing". The effects provide (mainly) the clipping stage. (The fuzz unit includes a lot more machinery to derive a slowly-changing "bias" signal from the input to achieve asymmetry in the clipping.) You can then add filtering exactly where and how you want it.
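A generic soft-clipping waveshaper of the kind meant here can be as simple as a tanh curve (a sketch, not necessarily our actual transfer function):

```go
package main

import (
	"fmt"
	"math"
)

// softClip applies a tanh waveshaper: nearly linear for small inputs,
// smoothly saturating toward +/-1 for large ones. drive controls how
// hard the signal is pushed into the curve.
func softClip(in []float64, drive float64) []float64 {
	out := make([]float64, len(in))
	for i, x := range in {
		out[i] = math.Tanh(drive * x)
	}
	return out
}

func main() {
	fmt.Println(softClip([]float64{0.1, 0.5, 1.0, 2.0}, 3.0))
}
```

Any "voicing" filters are then separate units placed before or after it in the chain.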