The original announcement from the Google dev is here: https://groups.google.com/forum/#!msg/g ... aL0E8fAwAJ

I did not follow the whole conversation [..] but this caught my attention. In the jargon I am used to, sub-millisecond means "up to 1 ms worst-case scenario".
I don't think the worst-case behaviour of the garbage collector is well-defined. I even doubt that the worst-case behaviour of the operating system's memory management (malloc, free, ...) is well-defined, which is why you're not supposed to call those functions from a "real-time" thread.
However, if you need "sub-millisecond" timing, you probably won't be able to run it on a general-purpose operating system anyway, including one based on the Linux kernel, since the scheduler's time slices are already larger than a millisecond. Linux is not an RTOS. (And even a PC, from a hardware architecture perspective, is not really built for real-time. You know, it has all sorts of things that make timing unpredictable, like interrupts, caching, pipelining, speculative execution, multiple cores / hardware threads doing things concurrently, power-management techniques like dynamic frequency scaling, ...) Even with the real-time patches that give you the "fully preemptible kernel", the best you can hope for is being able to run code with hard (?) real-time guarantees in kernel space. So a device driver might be able to run in hard (?) real-time, for example to toggle an I/O pin in software and use that as a clock signal for some serial interface or to do some PWM. A user-level (POSIX) program (and all common audio applications that I know of are user-level programs) will run under the preemptible Linux kernel and will therefore definitely not have hard real-time guarantees.
I somehow doubt this is possible on a PC. I can barely run JACK at a buffer size of 64 frames at 96 kHz, and even then I can do no processing, only use it as a "virtual wire". Any lower and it doesn't even start. I don't know of any "real" client I could attach to it at such a small buffer size, though. C code that just does a "memcpy(...)" from input to output might be fine. I haven't tried. But any real calculation: nope.

To put things into perspective, assume we are operating at a 48 kHz sample rate with a buffer size of 32 frames. That means the audio callback of any application has only about 0.67 ms to process all the samples in the buffer.
RasPi will very likely be too slow for go-dsp-guitar in real-time mode. We do distribute AArch64 binaries, but those are rather intended for "batch mode". In fact, I never got JACK to work properly on a Raspberry Pi (3B), but I have to admit that I didn't try too hard.

Like: maybe I will not be able to run go-dsp-guitar on a Raspberry Pi at 16 samples per frame and 96 kHz with 8 channels.
I can do a 512-sample buffer size at 96 kHz (which is ~5.3 ms) with 2 channels on an Intel Core i3-2310M with pre-amp and power-amp simulation using a filter order of 2048 (with 4096 being just "slightly too large" - we will probably optimize that bit out soon), and personally, I consider that "fast enough".
With Guitarix, I can do 256 samples stable (so ~2.7 ms), and 128 samples with only rare xruns. However, their default cabinet simulation does not use convolution, so it's a bit of an apples-to-oranges comparison.
If I don't use convolution, I can go to 256 samples (~2.7 ms) stable in go-dsp-guitar as well, and 128 samples with only rare xruns. Therefore, the performance of the two applications appears to be very similar. However, I'd claim that go-dsp-guitar "does more". It tends to have more sophisticated models, since we started off with a circuit-simulation approach and have always put model accuracy and "physical motivation" over pure performance. Therefore, it is supposed to be quite a bit more computationally intensive. It has more inputs and outputs, and it applies a "room simulation" to the signals and creates a stereo mixdown in the end. All calculations are done in "double precision" floating-point arithmetic. (Not sure if Guitarix uses single or double precision - not that it would matter a lot, but of course, double precision will use more cache and memory bandwidth.) And it is implemented in a higher-level language.
I share your opinion that, from a user perspective, one should not really care what language some project uses to get its job done, as long as the results are fine. It was just one of the reasons for me to start my own project instead of contributing to, say, Guitarix. I wanted to take a different route: use something more "mainstream" that I already had experience with, that I would not have to learn from scratch specifically for this project (and then probably produce really bad code in - at least in the beginning), and that I could hope more people would be familiar with (or, even if they're not familiar with it, at least it's more similar to something they already know - folks coming from a Java, C or Python background tend to pick up Golang pretty easily).

I do not really care, nor am I pretending that your comment means that you consider Faust totally useless, but my feeling about that attitude in general, independently of the degree to which it is manifested, is that it looks more like cultural resistance than something technically motivated.
The other thing (and that's probably the more important differentiation) is that the projects have a very different focus as well. You can remote-control go-dsp-guitar over a web interface (or from any application that can send JSON over HTTPS) and therefore run the actual signal processing on a "headless" machine. Guitarix has an optional web UI as an add-on, but you cannot control everything from it. Guitarix obviously has way more plugins than go-dsp-guitar. (It's also been around for quite a while and is therefore probably "more mature", has way more wrinkles ironed out, etc.) However, the main differentiation is, like I already mentioned, that go-dsp-guitar is built from the ground up following a rigorous simulation approach, and in fact this was the number one reason for starting a new project.
In the end, I don't like to compare go-dsp-guitar and Guitarix too much though. They're definitely different projects, which are supposed to have their own place and neither is supposed to replace the other, even though they obviously will have quite some overlap in functionality for the user.
By the way, our first prototype was not implemented in Golang at all (and of course, it didn't have that project name back then), but rather used a mix of Python and C. (We tried to do the actual signal processing in C and the "control" in Python, and then do cross-language calls between them.) We did not manage to get any audio API working properly back then, though. We tried PortAudio and RtAudio, but both of them gave us very obscure errors. We also tried "raw" ALSA, but that didn't work for real-time at all (latency would increase with the runtime of the application), and it also had some bugs. Two years later, we started over in Golang using JACK, and then it all suddenly worked pretty well. Of course, we still had a long way to go from there, but at least the I/O was working fine. There is actually still a repository called "audio-tools" on my GitHub, which is basically the leftover from the go-dsp-guitar Python + C "legacy". It contains some tools I used to measure impulse responses. I never ported these to Golang, but they're not end-user relevant anyway.
Exactly this!

After all, when we need to do mission-critical DSP, we do not use computers at all, but only dedicated chips. As far as DSP goes, audio is actually pretty relaxed in terms of requirements. I have worked with DSP systems that were allowed a worst-case latency of 20 microseconds, for example. Computers cannot keep up.
Regards.