Michael Willis wrote:
paulfd wrote:
while still preallocating the required space to hold the LFO states for each voice somewhere
Why is it necessary to preallocate space to hold LFO states? Isn't it just a function of time? Or do you mean having audio buffer space to store the modulation effects influenced by the LFOs?
Assume that an LFO is basically just a frequency and a phase that you have to store between callbacks to compute the "next chunk" of the sine or cosine, possibly modulating the frequency during rendering. These values need to be stored somewhere, but given the way LFOs are supposed to work in SFZ v2, it does not make much sense to store them within the voice object that renders the sample, because depending on the region parameters you may have an arbitrary number of LFOs running in parallel. You would rather have a pool of LFOs that voices can borrow when they start a specific region, and give back when the sample ends. During the voice's lifecycle it can track which LFO targets which parameter, and so on.
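To make the "frequency and phase stored between callbacks" idea concrete, here is a minimal sketch. The names and the plain-sine waveform are illustrative assumptions, not sfizz's actual implementation; the point is only that the phase member is the state that must survive from one audio callback to the next.

```cpp
#include <cmath>

// Hypothetical minimal LFO: the only state kept between callbacks is the
// frequency and the current phase, so the next chunk continues where the
// previous one ended.
struct LFO {
    float frequency = 1.0f; // Hz; could be modulated between or during chunks
    float phase = 0.0f;     // in cycles, kept in [0, 1)

    // Render the next chunk of a sine LFO at the given sample rate.
    void renderNextChunk(float* out, int numFrames, float sampleRate) {
        const float twoPi = 6.28318530717958647692f;
        for (int i = 0; i < numFrames; ++i) {
            out[i] = std::sin(twoPi * phase);
            phase += frequency / sampleRate;
            if (phase >= 1.0f)
                phase -= 1.0f; // wrap so the phase never grows unbounded
        }
    }
};
```

Calling `renderNextChunk` twice with small buffers produces the same samples as one call with a large buffer, which is exactly the property the stored phase buys you.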
It's this borrowing and parameter-dispatching mechanism that I have not implemented yet. The good news is that the SFZ v2 envelopes, and probably even the EQs, can work in a similar way, so this distribution pattern can be used for all of the more advanced parts of the SFZ spec.
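The borrow/give-back pattern described above could be sketched roughly like this. Everything here (the `LFOPool`, `borrow`, `giveBack` names and the linear free-slot scan) is a hypothetical illustration, not the planned API; the key property is that the pool is preallocated once, so no allocation happens on the audio thread when a voice starts or ends.

```cpp
#include <cstddef>
#include <vector>

// Illustrative LFO state; a real one would also carry waveform, delay, etc.
struct LFOState {
    float frequency = 1.0f;
    float phase = 0.0f;
    bool inUse = false;
};

// Hypothetical fixed-size pool: voices borrow an LFO when a region starts
// and give it back when the sample ends.
class LFOPool {
public:
    explicit LFOPool(std::size_t size) : lfos_(size) {}

    // Hand out a free LFO, or nullptr if the pool is exhausted.
    // No allocation: just a scan over the preallocated storage.
    LFOState* borrow() {
        for (auto& lfo : lfos_) {
            if (!lfo.inUse) {
                lfo.inUse = true;
                return &lfo;
            }
        }
        return nullptr;
    }

    void giveBack(LFOState* lfo) {
        lfo->phase = 0.0f; // reset state for the next borrower
        lfo->inUse = false;
    }

private:
    std::vector<LFOState> lfos_; // sized up front, never resized while rendering
};
```

A voice would hold the pointers it borrowed alongside a note of which target parameter each one modulates, and return them all in its release path; envelopes and EQs could share the same pool-and-dispatch shape.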