While reading another thread about analysing guitar MIDI notes I made a small comment and decided it actually deserves its own thread. I'm kind of intrigued by the EHX HOG, HOG2, and the smaller versions of it like the organ pedal, the mellotron one, etc. It seems to me that they all use variations of a single algorithm with a lot of harmonic processing and filtering. I've found a few posts on the Muff Wiggler forum about the possible workings:
I can't speak for EHX, but my guess is it's some variation of a Fast Fourier Transform. From a mathematical standpoint, there's a series of formulas (or really just one big, ugly formula) that describe the relationship between the input signal and the output signal. What they're doing with the entire 9 series is taking an input signal (they make a lot of interesting assumptions about what the input signal looks like, but that's beside the point of this post) and applying an extreme degree of filtering/waveshaping/signal processing (defined by that formula) to that input signal, generating an output signal that we hear as a variation on a Mellotron/organ/rhodes/whatever.
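To make that FFT idea concrete, here's a rough sketch of frame-by-frame spectral reshaping. This is my own guess at the general technique, not EHX's actual code; target_envelope is a hypothetical per-bin gain curve standing in for whatever shape they derived from the target instrument's samples.

```python
import numpy as np

def spectral_reshape(frame, target_envelope):
    """Toy FFT-domain reshaping of one audio frame.

    Keeps the input's phases (so pitch and playing follow the guitar)
    but imposes a target instrument's magnitude envelope. Pure
    illustration, not EHX's algorithm.
    """
    windowed = frame * np.hanning(len(frame))
    spectrum = np.fft.rfft(windowed)
    mags = np.abs(spectrum)
    phases = np.angle(spectrum)
    shaped = target_envelope * mags * np.exp(1j * phases)
    return np.fft.irfft(shaped, n=len(frame))

# Example: one 1024-sample frame of a 220 Hz tone at 44.1 kHz,
# reshaped with an arbitrary "darken the highs" envelope.
sr, n = 44100, 1024
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 220 * t)
envelope = np.exp(-np.arange(n // 2 + 1) / 40.0)
out = spectral_reshape(frame, envelope)
```

In a real pedal this would run on overlapping frames with overlap-add, and the envelope would probably be pitch-tracked rather than fixed per bin.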
All of this is an extension of the technology present in the HOG/POG line. One of the interesting features on the HOG is the Spectral Gate, which as I understand it tries to isolate the fundamental harmonic of whatever note(s) are present on the input to improve the "tracking" of the various shifts. One of the ways they could be doing this is applying the Spectral Gate technology to anything/everything coming in and then performing various distortion/waveshaping/filtering/EQ operations on that signal. A Mellotron flute, for example, has a consistent, well-defined transient portion of every note; it is effectively the same whether you are playing the lowest G or the highest C, so you can get by with applying one transient to everything that comes in on the input. After that initial transient attack (which has, historically, been the big sticking point with respect to tracking live instruments and doing pitch-to-synth conversion), the pedal can apply the real-time processing necessary to turn a boring electric guitar sound into a goofy clarinet (as shown in the Mel9 demo video). As for the formant filtering they do to get the various choir sounds, there's the Talking Machine from EHX's past, which was an envelope formant filter that they could have either modeled in software or taken the code from (if it's digital) and put into a new box.
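That Spectral Gate could be as crude as keeping only the strongest FFT peak and throwing everything else away. A toy sketch of that idea follows; this is pure guesswork on my part, and keep_bins (how many neighbouring bins survive) is an invented parameter:

```python
import numpy as np

def spectral_gate(frame, keep_bins=3):
    """Crude guess at a 'Spectral Gate': keep only the strongest
    spectral peak (plus a few neighbouring bins) and discard the
    rest, so the downstream chain sees something close to the
    fundamental. Illustration only."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    mags = np.abs(spectrum)
    peak = int(np.argmax(mags))
    mask = np.zeros_like(mags)
    lo = max(peak - keep_bins, 0)
    hi = min(peak + keep_bins + 1, len(mags))
    mask[lo:hi] = 1.0
    return np.fft.irfft(spectrum * mask, n=len(frame))

# Example: a fake "guitar" note at 110 Hz with strong overtones;
# the gate should leave mostly the 110 Hz fundamental.
sr, n = 44100, 2048
t = np.arange(n) / sr
guitar = (np.sin(2 * np.pi * 110 * t)
          + 0.5 * np.sin(2 * np.pi * 220 * t)
          + 0.4 * np.sin(2 * np.pi * 330 * t))
gated = spectral_gate(guitar)
```

A real implementation would need to handle polyphony (multiple peaks) and do this per overlapping frame, but the principle is the same.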
TL;DR: basically I think EHX is using fancy technology to mask the part of the note that doesn't need to be "tracked" (it isn't really tracking, but you get the idea), and then instead of "tracking" a note they're applying a ton of filtering to turn the guitar sound into something as close to a sine wave as possible, then processing the shit out of that until it sounds like whatever instrument they're looking to emulate.
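That TL;DR recipe (filter hard towards a sine, then waveshape the result into a new harmonic series) is easy to sketch. All the numbers here (the 500 Hz cutoff, the tanh drive of 3) are arbitrary illustrations, not anything EHX actually does:

```python
import numpy as np

def sine_ify_and_shape(frame, sr=44100):
    """Sketch of the TL;DR idea: brutal low-pass towards a near-sine,
    then a waveshaper to reintroduce harmonics. tanh saturation adds
    mostly odd harmonics, which is roughly the clarinet flavour."""
    # 1) brutal low-pass via FFT: zero everything above ~500 Hz
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    spec[freqs > 500.0] = 0.0
    near_sine = np.fft.irfft(spec, n=len(frame))
    near_sine /= (np.max(np.abs(near_sine)) + 1e-12)
    # 2) waveshape the normalised near-sine
    return np.tanh(3.0 * near_sine)

# Example: a noisy 196 Hz (guitar G string) frame.
sr, n = 44100, 2048
t = np.arange(n) / sr
frame = (np.sin(2 * np.pi * 196 * t)
         + 0.3 * np.random.default_rng(0).standard_normal(n))
shaped = sine_ify_and_shape(frame, sr)
```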
I know I'm probably not the first one to ask this, but maybe someone here is pretty deep into DSP coding? Anyhow, maybe it can spark a discussion about ways to do FX coding. I'm just curious about some opinions/ideas.

Another post from that thread:

Basically the way moogboy described: resynthesizing the sound in real time. They have analyzed the original samples and made an algorithm that describes that sound and applies it to the incoming audio. They don't need to detect the frequency exactly or do any sort of pitch-to-anything conversion; they just add spectral noise and apply a VERY dense multi-filter transform to make the sound. I imagine there is a nice fast DSP in that box that is working really hard.
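One plausible reading of that "dense multi-filter transform" is a big bank of resonant band-pass filters. Here's a tiny three-band version using the standard RBJ-cookbook band-pass biquad; the centre frequencies, Q and gains are completely made up, and a real instrument or formant model would use many more, carefully tuned bands:

```python
import numpy as np

def resonator_bank(x, sr=44100, centers=(500, 1500, 2500),
                   q=8.0, gains=(1.0, 0.6, 0.3)):
    """Guess at a 'dense multi-filter transform': run the input
    through parallel resonant band-pass filters and sum the outputs.
    Band layout here is arbitrary illustration."""
    y = np.zeros_like(x, dtype=float)
    for fc, g in zip(centers, gains):
        # RBJ cookbook band-pass (constant 0 dB peak gain) biquad
        w0 = 2 * np.pi * fc / sr
        alpha = np.sin(w0) / (2 * q)
        b0, b1, b2 = alpha, 0.0, -alpha
        a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
        out = np.zeros_like(x, dtype=float)
        x1 = x2 = y1 = y2 = 0.0
        for i in range(len(x)):
            out[i] = (b0 * x[i] + b1 * x1 + b2 * x2
                      - a1 * y1 - a2 * y2) / a0
            x2, x1 = x1, x[i]
            y2, y1 = y1, out[i]
        y += g * out
    return y

# Example: a 500 Hz sine should pass the first band almost unchanged.
x = np.sin(2 * np.pi * 500 * np.arange(4096) / 44100)
y = resonator_bank(x)
```

In practice you'd vectorise this (e.g. scipy.signal.lfilter per band) rather than loop per sample, but the structure is the point.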