RT MIDI

Optimize your system for ultimate performance.

Moderators: MattKingUSA, khz

khz
Established Member
Posts: 877
Joined: Thu Apr 17, 2008 6:29 am
Location: Germany

Re: RT MIDI

Postby khz » Wed Dec 05, 2018 5:31 pm

j_e_f_f_g wrote:Use a midi controller that connects via usb (directly -- no hub).

Through how many layers does USB pass the MIDI data (reliably?)? USB always sends data blockwise rather than bytewise. For example (OK, that's audio): RME tunnels the data of its USB sound cards for a reason, because if they used the standard USB protocol they would not get stable, fast audio processing.
My experience is that 5-pin DIN MIDI is much more reliable than USB. The best option would be CV.
j_e_f_f_g wrote:Yes, but that's an irrelevant point. Jitter is a problem only if/when a human detects it.

If you make music with several hardware devices plus the computer (ALSA/jack), and each device has a different jitter, then what comes out in the end is a matter of luck, because each one reacts differently. I don't like a groove that is randomized by the devices and software in use.
tramp wrote:However, it is also well known that jack-midi was never designed for interacting with "hardware synthesizers". For hardware MIDI handling you are indeed better off using raw alsa.

Thanks for the information.
tramp wrote:That changed when interaction with software MIDI devices entered the game.
jack-midi is handled in the realtime audio thread of jack.

That sounds very good, and that is how I and others "imagined" it (from a non-developer's point of view).
tramp wrote:The main issue is indeed the "conversion", or "bridge", from alsa to jack. That's where jitter may be introduced.

For the ALSA "bridge" (a2j) and external hardware, could adjusting the timers and/or rtmidi compensate for the jitter?
FZ - Does Humor Belong in Music?
GNU/LINUX@AUDIO ~ /Wiki $ Howto.Info && GNU/Linux Debian installing >> Linux Audio Workstation LAW
    I don't care about the freedom of speech because I have nothing to say.

tramp
Established Member
Posts: 1246
Joined: Mon Jul 01, 2013 8:13 am

Re: RT MIDI

Postby tramp » Wed Dec 05, 2018 6:10 pm

khz wrote:For ALSA "bridge"(a2j) and external hardware, could the adjustment of the timers and/or the rtmidi compensate the jitter?


When you think about it, you'll see that your question already includes the answer.
khz wrote:adjustment of the timers

What does that mean?
Adjustment during operation == jitter
khz wrote:rtmidi compensate

compensate == jitter

That is one of the main points jeff tries to explain: the harder you try to "compensate" or "adjust", the more jitter you introduce.
Jack tries it, and in my opinion jack does a really good job of it!
But, and I really don't like to admit that jeff is right here, it is simply true: the more layers you use, the more algorithms try to fetch the right time in microseconds, each based on a different time base (even if they nominally all use the same one), be it jack, be it rtmidi, be it "insert a name here", the result will be jitter.
I don't like to say it, but it's true: using raw alsa will give you the best results on modern Linux systems. :(

khz
Established Member
Posts: 877
Joined: Thu Apr 17, 2008 6:29 am
Location: Germany

Re: RT MIDI

Postby khz » Wed Dec 05, 2018 6:43 pm

Thank you for the helpful information. I'm a user and I don't know the technical details very well. Someone who programs and understands the theory can argue much more soundly.
What I noticed: less (adaptation) is more. KISS principle. :D

folderol
Established Member
Posts: 745
Joined: Mon Sep 28, 2015 8:06 pm
Location: Here, of course!
Contact:

Re: RT MIDI

Postby folderol » Wed Dec 05, 2018 9:57 pm

This whole subject is really losing contact with what it is all about. Let's go back to basics. Forget the technology for a moment.
Sound travels approximately 34 cm in one millisecond (343 m/s at room temperature). Agreed?
The average gigging group has 4-5 members spread 2-3 metres apart.
How do they manage to stay in time?

In the 1960s the high-tech weapon of choice was the 4-track open-reel tape recorder. Groups would play onto a couple of tracks, then bounce those across to the other two tracks while also adding two more - rinse and repeat.
How did they manage to sync these multiple layers without precision sample-accurate technology?

A cathedral organ has pipes that can take anything from about 10 ms to 500 ms to develop their sound. The pipe ranks will also be metres long and possibly as much as 10 metres from the organist.
How do they manage to keep that lot in time?

People vastly underestimate the brain's ability to latency/jitter compensate on the fly. That applies to both the performer and the listener. Unless the errors are really extreme it will sound OK.

That's only half of it.

The other factor is the type of sound being produced. Choral sounds or massed strings (translates as synth pads) have such a slow attack and decay there is enormous flexibility. Pure percussive sounds have the least. So simply build up your ideas starting with the percussion then adding layers of less and less time critical parts - Oh, and always listen to your direct output, not the recording return from the track you are adding.

I did an interesting test some time back with live-recorded multi-track MIDI. I made a copy of it, but with all the tracks quantised. I then asked friends what they thought of the different 'arrangements'. Without exception they preferred the original, so what was the extra precision taking away?

I never quantise tracks now.

CrocoDuck
Established Member
Posts: 962
Joined: Sat May 05, 2012 6:12 pm
Contact:

Re: RT MIDI

Postby CrocoDuck » Wed Dec 05, 2018 11:09 pm

folderol wrote:A cathedral organ has pipes that can take anything from about 10 ms to 500 ms to develop their sound. The pipe ranks will also be metres long and possibly as much as 10 metres from the organist.
How do they manage to keep that lot in time?

People vastly underestimate the brain's ability to latency/jitter compensate on the fly. That applies to both the performer and the listener. Unless the errors are really extreme it will sound OK.

In this regard there is one thing which is pretty important: the degree of sensory coupling, that is, through how many different sensory channels an instrument tells you that a note has been "activated". Evidence suggests that players of keyboard-based instruments are able to tolerate much larger latencies than wind instrument players, for example, who get loads of sensory feedback through the bones of the head. The reason might be that the brain has a hard time making sense of contradictory sensory inputs: the bone-conducted vibration tells it something is going on, but the sound arrives late! This is only a proposed theory, as far as I know. The statistical significance of different instruments having different thresholds is there, though.

folderol wrote:The other factor is the type of sound being produced. Choral sounds or massed strings (translates as synth pads) have such a slow attack and decay there is enormous flexibility. Pure percussive sounds have the least. So simply build up your ideas starting with the percussion then adding layers of less and less time critical parts - Oh, and always listen to your direct output, not the recording return from the track you are adding.

That's crucial too. If you can hear a direct, non-latency-affected sound (beyond propagation time), then the brain follows the "law of the first wavefront", by which the direct sound carries most of the information and any delayed copies are integrated to add detail to the perception. This is why we perceive the correct number of sources even in a very reverberant room, by the way, despite delayed copies of the original sound arriving from all angles. If you monitor solely from the return output, as you rightly suggest avoiding, the audibility of latency, and its annoyance factor, goes up. This is also key to the fact that latency is much more audible with in-ear monitors than with loudspeakers, as in the latter case there is a little room reverberation making things messier.
Check my Linux audio experiments on my SoundCloud.
Browse my AUR packages.
Fancying a swim in the pond?

raboof
Established Member
Posts: 1531
Joined: Tue Apr 08, 2008 11:58 am
Location: Deventer, NL
Contact:

Re: RT MIDI

Postby raboof » Thu Dec 06, 2018 9:46 am

tramp wrote:
raboof wrote:Actually the timestamp/frame of the incoming MIDI message is recorded (http://jackaudio.org/api/struct__jack__midi__event.html, https://github.com/jackaudio/jack2/blob ... r.cpp#L187), but indeed it is up to you as the library user to decide whether you want to ignore it (more jitter) or take it into account (more latency).


Your interpretation is wrong here: this isn't a timestamp, this is the "stamp" in relation to the audio buffer (the single sample position within an audio frame).


Right, I'm calling this a 'timestamp' though indeed it is measured in frames, not in 'wall clock time'.

tramp wrote:So all you have to do, for MIDI just as for audio, is work on the incoming data on a per-sample basis, e.g.:

Code: Select all

for (i = 0; i < event_count; i++) {
    process_midi_event(i);
}


to process incoming events sample-accurately, without jitter.


When using Jack MIDI, you would process these events in the Jack callback. Here you have the choice to either take into account the time of the event (introducing extra latency) or apply each event to the beginning of the current audio buffer (introducing extra jitter). Or am I missing something here?

raboof
Established Member
Posts: 1531
Joined: Tue Apr 08, 2008 11:58 am
Location: Deventer, NL
Contact:

Re: RT MIDI

Postby raboof » Thu Dec 06, 2018 9:56 am

folderol wrote:People vastly underestimate the brain's ability to latency/jitter compensate on the fly. That applies to both the performer and the listener. Unless the errors are really extreme it will sound OK.


I agree with you wholeheartedly with respect to latency.

With respect to jitter, I agree it depends on the type of sound, and it might not matter much for chorals, strings and synth pads. But for percussive or other highly articulated instruments, my experience is that jitter can make those almost unplayable.

folderol wrote:I did an interesting test some time back with live-recorded multi-track MIDI. I made a copy of it, but with all the tracks quantised. I then asked friends what they thought of the different 'arrangements'. Without exception they preferred the original, so what was the extra precision taking away?

I never quantise tracks now.


I'm not surprised - I would argue that the quantization is not adding extra precision, but introducing jitter :D.

tramp
Established Member
Posts: 1246
Joined: Mon Jul 01, 2013 8:13 am

Re: RT MIDI

Postby tramp » Thu Dec 06, 2018 10:25 am

raboof wrote:When using Jack MIDI, you would process these events in the Jack callback. Here you have the choice to either take into account the time of the event (introducing extra latency) or apply each event to the beginning of the current audio buffer (introducing extra jitter). Or am I missing something here?


For example, you could process the incoming MIDI buffer by writing it to your own internal buffer, in a format which suits your needs, and then process that in your audio callback sample-accurately. No extra latency, no jitter.

raboof
Established Member
Posts: 1531
Joined: Tue Apr 08, 2008 11:58 am
Location: Deventer, NL
Contact:

Re: RT MIDI

Postby raboof » Thu Dec 06, 2018 10:42 am

tramp wrote:
raboof wrote:When using Jack MIDI, you would process these events in the Jack callback. Here you have the choice to either take into account the time of the event (introducing extra latency) or apply each event to the beginning of the current audio buffer (introducing extra jitter). Or am I missing something here?


For example, you could process the incoming MIDI buffer by writing it to your own internal buffer, in a format which suits your needs, and then process that in your audio callback


The Jack docs are a bit sparse, but my understanding was that you need to call jack_midi_event_get from within your audio callback anyway, right?

tramp wrote:sample-accurately. No extra latency, no jitter.


But in your audio callback, you either ignore the time from the MIDI event or you don't, right? If you ignore it you introduce jitter, if you don't you introduce latency?

tramp
Established Member
Posts: 1246
Joined: Mon Jul 01, 2013 8:13 am

Re: RT MIDI

Postby tramp » Thu Dec 06, 2018 10:59 am

Sure, in the audio callback.
Say I receive a midi_buffer and an audio_buffer, both of the same size. First I iterate over the midi_buffer, one event at a time, and write it into my own buffer, which again has the same size as the audio buffer. Then, when I process the audio_buffer, I also process my own_midi_buffer, so I receive the midi_events (or whatever I've made out of them) at the sample-accurate position within the frame. If it is, for example, a note_on event, you are able to open the gate at the sample-accurate position in the frame.

Of course, you could even process both buffers at the same time.

khz
Established Member
Posts: 877
Joined: Thu Apr 17, 2008 6:29 am
Location: Germany

Re: RT MIDI

Postby khz » Thu Dec 06, 2018 5:56 pm

j_e_f_f_g wrote:No, but it was done in the 1970's so never published on the internet.

https://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol1/aps2/ wrote:MIDI is the acronym for Musical Instrument Digital Interface. It was defined by the MIDI 1.0 specification, which was agreed upon in August 1982.

? So the Roland study referred to the MIDI protocol invented by Dave Smith (which, by the way, was only his second choice at the time; since MIDI was cheaper to implement, it became the norm, IMHO)? Amazing. https://en.wikipedia.org/wiki/MIDI
folderol wrote:The average gigging group has 4-5 members spread 2-3 metres apart.

MIDI carries control signals, not audio. For audio, what you wrote applies, yes.
But this is about MIDI controlling hardware and software instruments.
With 5-20 hardware/software instruments/sequencers, controlled via USB, MIDI (sound card), ALSA, jack, ..., it doesn't groove anymore if each one has a different, random processing time. In the worst case it turns muddy.
Good musicians playing real (acoustic) instruments are also tight, for example. The groove comes from specific "errors"/inaccuracies, but those are placed deliberately, not at random.

By the way, it does groove with Linux MIDI, and even when I get more xruns I don't hear any degradation in the audio signal.
So the result is tight and reliable, and the audio quality is free of interference.
The xruns are definitely down to my (unplanned) 100% RT madness. :-D

folderol
Established Member
Posts: 745
Joined: Mon Sep 28, 2015 8:06 pm
Location: Here, of course!
Contact:

Re: RT MIDI

Postby folderol » Thu Dec 06, 2018 6:16 pm

khz wrote:MIDI carries control signals, not audio. For audio, what you wrote applies, yes.
But this is about MIDI controlling hardware and software instruments.

What about groups that are using dumb keyboards to control soft-synths, MIDI guitar effects, or those using MIDI wind controllers... or even MIDI drum pad controllers?

P.S.
I don't think anyone has produced a MIDI vocalist yet - more's the shame :lol:

khz
Established Member
Posts: 877
Joined: Thu Apr 17, 2008 6:29 am
Location: Germany

Re: RT MIDI

Postby khz » Thu Dec 06, 2018 6:36 pm

folderol wrote:What about groups that are using dumb keyboards to control soft-synths, MIDI guitar effects, or those using MIDI wind controllers... or even MIDI drum pad controllers?

They also need reliable control signals. With a drum pad controller, for example, you can hear random/inaccurate control signals very clearly, in a negative sense.
folderol wrote:I don't think anyone has produced a MIDI vocalist yet - more's the shame :lol:

Yamaha FS1R (Frequency modulation synthesis) https://www.youtube.com/watch?v=IvKTYFWN2Uk or acoustic https://www.youtube.com/watch?v=-6e2c0v4sBM ;-)

sysrqer
Established Member
Posts: 1521
Joined: Thu Nov 14, 2013 11:47 pm
Contact:

Re: RT MIDI

Postby sysrqer » Thu Dec 06, 2018 6:54 pm

folderol wrote:I don't think anyone has produced a MIDI vocalist yet - more's the shame :lol:

https://www.kvraudio.com/product/delay_ ... audionerdz
Greatest plugin of all time, if only for the facial movements

sysrqer
Established Member
Posts: 1521
Joined: Thu Nov 14, 2013 11:47 pm
Contact:

Re: RT MIDI

Postby sysrqer » Thu Dec 06, 2018 6:56 pm

I read recently that Autechre got frustrated with the limitations of MIDI and OSC and decided to create their own protocol. I would love to know more about it but I think that's about all the information that exists about it.

edit:
marf, on 03 Nov 2013 - 3:39 PM, said:

are you sending CV/Gate out of your computers to external synths with these new computer CV boxes like Expert Sleepers? or is it MIDI or native computer synthesis on the computer in your last couple releases?


we have a motu interface that can send cv but most of my stuff's internal

we don't use midi we made a better protocol. midi is stone age really


could you elaborate on that? what did you improve exactly?

baud rate, resolution, the type of messages

it's not really a good way to ask the question tbh cos it wasn't designed as an improvement to midi, it's a different kind of protocol

but it makes midi redundant apart from for talking to old gear


Will it be published for free or commercially in the future, or is it already available somewhere? How does old hardware recognize this better protocol? Does it behave significantly better when syncing software (a DAW, or MAX) with external hardware?

nah we're only using it with our own stuff. it's incompatible with anything external, if we need midi we have to convert and we use sysex to do that usually, it works well enough

i'm not using any midi gear atm at all, rob still uses a bit but not much these days

