The Music Industry is 10 to 15 years behind
Posted: Sat Feb 24, 2018 8:43 pm
Hi guys!
Recently I found a very interesting YouTube channel, Adam Neely's:
https://www.youtube.com/channel/UCnkp4x ... 7sSM3xdUiQ
It is full of good information about bass, composition and harmony. Today I saw this video:
https://www.youtube.com/watch?v=YfyNhfpLLZ4
I think that the most interesting part is here: https://youtu.be/YfyNhfpLLZ4?t=344
Adam reports that, according to various sources, the technology at the heart of music software is actually pretty old, significantly older than the bleeding-edge technology being implemented in many other consumer devices and software these days.
I must confess that this aligns with my own impression. Think about reverberation algorithms: most of them are decades old, yet branded as bleeding edge in new plugin releases.
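To give an idea of just how old: the Schroeder reverberator, built from parallel feedback comb filters feeding series allpass filters, dates back to the early 1960s and is still recognizable as the skeleton of many modern reverbs. Here is a minimal sketch in Python (the delay lengths and gains are illustrative placeholders, not tuned production values, and the pure-Python loops are slow but clear):

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    # Parallel combs build up the dense echo pattern...
    wet = sum(comb(x, d, 0.8) for d in (1557, 1617, 1491, 1422)) / 4.0
    # ...and series allpasses smear it without coloring the spectrum.
    for d in (225, 556):
        wet = allpass(wet, d, 0.7)
    return wet

# Impulse response: feed a single click through the reverberator.
impulse = np.zeros(44100)
impulse[0] = 1.0
ir = schroeder_reverb(impulse)
```

Much of what shipped in reverb plugins afterwards refines this same structure; the parameter tuning has moved far more than the underlying mathematics.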
I believe the reason behind this is a core of superstition ingrained in audio consumers' brains. There was a period, early in computer music history, when digital audio was inferior to analog. Those times are long gone, and honestly I believe the opposite is true nowadays, but the attitude of many consumers is still heavily biased towards the analog style.
As a result, computers are often used as a means of emulating analog devices, and plugin GUIs are almost always skeuomorphic. I have nothing against simulation (which can actually involve a serious amount of research and bleeding-edge technology) or skeuomorphic design (although knobs on computer GUIs are beyond awkward), but it seems very weird to me that in 2018 we have yet to unleash the true potential of computers by creating tools of musical expression that are native to the computer, without necessarily needing to simulate something else.
Maybe our world is an exception, though. We have excellent analog modelling software (Guitarix, for example (*)) but also fairly experimental concepts (like fastbreeder: http://www.pawfal.org/Software/fastbreeder/). I think the real bleeding edge is being explored by people into algorithmic composition, making use of tools like Faust, Csound and PureData, for which Linux is, in my opinion, the best possible environment.
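To make the algorithmic composition idea concrete without tying it to any one of those tools, here is a toy sketch in plain Python: a constrained random walk over a pentatonic scale generates the melody instead of a human. The scale, step size and seed are arbitrary choices for illustration:

```python
import random

# A minor pentatonic over two octaves, as MIDI note numbers from A3.
SCALE = [57, 60, 62, 64, 67, 69, 72, 74, 76, 79]

def random_walk_melody(length=16, start=4, max_step=2, seed=2018):
    """Generate a melody as a constrained random walk over the scale."""
    random.seed(seed)
    idx, melody = start, []
    for _ in range(length):
        # Take a bounded step up or down, clamped to the scale range.
        idx = min(max(idx + random.randint(-max_step, max_step), 0),
                  len(SCALE) - 1)
        melody.append(SCALE[idx])
    return melody

print(random_walk_melody())
```

Tools like Csound and PureData let you take this kind of rule and drive actual synthesis with it in real time.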
As for finding new means of expression and human/computer interaction, I have always been fascinated by Onyx Ashanti's work, which at some point made use of a Gentoo-optimized laptop (if I recall correctly):
http://onyx-ashanti.com/
Adam mentioned Machine Learning, and that brought me back to this thread:
viewtopic.php?f=48&t=18078
What if, instead of moving knobs as on 70s radios, we had synths and effects that learn our body language? Or that understand spoken instructions? What about visualizing sound generation and propagation models in a meaningful way and letting the user manipulate them? Or what about a Machine Learning algorithm that silently watches you while you mix and master songs, learns how you do it, and then pre-mixes and pre-masters songs for you, supporting your professional work by processing whole batches of projects and leaving you to just finalize the result? I am becoming more and more intrigued by these concepts.
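To be clear about what I mean by that last idea, here is a deliberately naive sketch: log a couple of features per track (RMS level, spectral centroid) together with the fader gain the engineer chose, then fit a model that suggests gains for new tracks. Plain least squares stands in for real machine learning here, and all the numbers are made up for illustration:

```python
import numpy as np

# Hypothetical training data logged while the engineer mixes:
# each row = (track RMS in dB, spectral centroid in kHz),
# target = the fader gain in dB the engineer actually chose.
X = np.array([[-18.0, 0.2],   # bass
              [-12.0, 2.5],   # vocals
              [-20.0, 5.0],   # hi-hats
              [-15.0, 1.0]])  # guitar
y = np.array([-3.0, 0.0, -6.0, -4.5])

# Fit gain ~ w0 + w1*rms + w2*centroid by least squares.
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def suggest_gain(rms_db, centroid_khz):
    """Pre-mix suggestion for a new track; the engineer finalizes."""
    return w @ np.array([1.0, rms_db, centroid_khz])

print(round(suggest_gain(-16.0, 1.5), 2))
```

A real system would need far richer features and models, but the workflow is exactly this shape: silently collect (features, decision) pairs, learn the mapping, and pre-apply it to new material.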
What do you think? Do you feel like modern-day software is not as modern-day as it should be?
(*) Guitarix may seem to belong to the "analog simulator goodies" category, and it does for sure, but to me it feels more like sci-fi tech because of how amazingly and brilliantly the electronics-modelling process works within its framework. I really think of Guitarix as a true gem.