Now hear me out: I've been messing with this Linux audio business for ages, but this is the first time I've got down to seriously trying to flesh out an entire song. Let me summarise my trauma:
- First of all, session handling -- JACK sessions, Qtractor sessions, apps that support sessions, apps that don't, augh augh augh! How does everyone deal with this, other than keeping everything open all the time and never using the computer for anything else?
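  The closest thing to a workaround I've found is dumping the current JACK graph as a list of `jack_connect` commands and replaying it after relaunching everything by hand. A rough sketch (this relies on `jack_lsp -c` printing each port on its own line with its connections indented underneath, which is what it does on my system):

  ```shell
  #!/bin/sh
  # Dump the current JACK connection graph as replayable jack_connect
  # commands: save the output to a file, relaunch your apps, then run it.
  jack_lsp -c | awk '
    /^[^ ]/ { src = $0; next }          # unindented line = a port name
    { sub(/^ +/, "");                   # indented line = one of its connections
      printf "jack_connect \"%s\" \"%s\"\n", src, $0 }
  '
  ```

  Each connection shows up twice (once from each end), and `jack_connect` complains about links that already exist, so I replay the saved file with stderr sent to `/dev/null`. Crude, but it beats re-patching by mouse.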
- MIDI vs. audio -- if I'm sequencing synths (e.g. Yoshimi, VSTs), do I create a MIDI track and an audio track for each instrument? When should I record to audio? If you're playing an external keyboard, do you record everything to MIDI, or just the audio output? At the moment I have a MIDI and an audio track for each instrument, because then I can apply effects on the audio track, which leads me on to...
- Mono vs. stereo -- should all my software synths etc. be recorded into mono tracks or stereo tracks? What's the policy? Which brings me to...
- Buses, buses, buses -- in Qtractor, do I create an input/output bus for each track? How does one handle the massive spaghetti mess you end up with once you start adding sends and the like into the mix? What should get sent to the Master bus? A mouse and keyboard are massively limiting compared with a physical, hands-on patchbay.
- Plugin instruments -- this is just an interesting little aside, but how do people deal with instruments that run as plugins (e.g. the Calf Monosynth)? How do you capture the audio? With a send and a separate track, I presume.
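  For what it's worth, when the plugin is hosted in something that exposes JACK ports, the only approach I've tried is wiring its outputs into a dedicated Qtractor input bus and arming a normal audio track on that bus. A wiring sketch with made-up client/port names (run `jack_lsp` to see the real ones on your system):

  ```shell
  # Hypothetical port names -- check `jack_lsp` for the actual ones.
  # Route the hosted synth into a dedicated Qtractor input bus,
  # then record it on an audio track assigned to that bus.
  jack_connect "PluginHost:monosynth-out_1" "Qtractor:Synths/in_1"
  jack_connect "PluginHost:monosynth-out_2" "Qtractor:Synths/in_2"
  ```

  No idea if that's the sanctioned way, hence the question.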
Thank you if you took the time to read this!