Disadvantages of Creating Procedural Audio and Music

Intro

In a previous article we discussed what procedural audio is, how it is being used in the music industry, and Andrew Lowell's work on generating procedural audio in 3D space inside Houdini. Here we are going to discuss the downsides of creating procedural audio. Just because something is easier doesn't mean it is the more creative way of making something. We are going to find out how true that statement is here.

Why Does All Music Sound The Same?

Because it does. In the past few decades, popular songs and bands have started using new tools that make the music recording process easier and faster, while relying more on the patterns their audience expects from them. We also use the internet to our advantage to download music samples faster, and can set up a recording booth in our basement if we feel like it. Music has become a very easily spread luxury.

In 2012 a group of researchers, Joan Serrà, Álvaro Corral, Marián Boguñá, Martín Haro and Josep Ll. Arcos, analyzed patterns in popular music recorded between 1955 and 2010. They looked at pitch, timbre, rhythm, vocal ranges, and other important musical aspects. Using the online Million Song Dataset, they were able to determine several interesting facts about our musical evolution. They found that since the 1960s the timbral variety of popular music has declined, and that pitch transitions such as chord progressions have become more restricted.

However, they also found that music is becoming louder, at a rate of about one decibel every eight years. This increase in loudness is also eroding the dynamic range of music: our ears no longer register the softer parts of a song, simply because the entire song is just plain loud.

Video games have also pushed procedural audio forward in more ways than it appears. Most games require the same sound effects to be played repeatedly, yet those sounds should vary slightly depending on the character's actions. For example, if a character strikes another with his sword, it should sound a bit different than if he hit a tree or a rock with the same weapon. Sometimes the same tune has to be repeated across different character encounters, varying slightly based on the character's choices, as in Undertale. These sounds and rhythms would be hard to produce without software that analyzes and edits the sounds we create on the fly. Plus, for video games on a budget, hiring a composer to score every variation would be incredibly expensive.
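The sword-hit idea can be sketched as simple parameter randomization per material. This is a toy Python illustration, not any particular engine's API; the material names and numeric ranges here are invented for the example:

```python
import random

# Hypothetical material profiles: pitch-shift range (in semitones) and
# volume range. These names and numbers are illustrative only.
MATERIAL_PROFILES = {
    "flesh": {"pitch_range": (-2.0, 1.0), "volume_range": (0.7, 1.0)},
    "wood":  {"pitch_range": (-1.0, 3.0), "volume_range": (0.5, 0.9)},
    "rock":  {"pitch_range": (2.0, 5.0),  "volume_range": (0.8, 1.0)},
}

def sword_hit_params(material, rng=random):
    """Pick randomized playback parameters for one sword impact,
    so repeated hits on the same material never sound identical."""
    profile = MATERIAL_PROFILES[material]
    semitones = rng.uniform(*profile["pitch_range"])
    return {
        "pitch_ratio": 2 ** (semitones / 12),  # playback-rate multiplier
        "volume": rng.uniform(*profile["volume_range"]),
    }
```

A game would call this once per impact and feed the resulting pitch ratio and volume into whatever audio engine actually plays the sample.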

Another interesting study, published in 2013 by researchers Héctor P. Martínez and Stefania Serafin, examined whether gamers using motion controllers could sense the difference between procedurally generated audio and traditionally recorded audio, and whether it changed their reactions in the game. They concluded that gamers either could not tell the difference or did not care either way; it made no measurable difference to game performance. This suggests that we don't really care how the music is made, we just like listening to it. However, there was one outlier in the experiment: when the test subjects played a skiing game, they preferred the procedural audio to the traditionally recorded version. One takeaway might be that these players already preferred electronic music to begin with. This may show how accustomed we have grown to modernized music, and how content we are to stay within our current musical genre.

Are We Losing Creativity in the Music Industry?

With all the new ways to generate music, and how easy it has become to distribute it, one could argue that we don't get as much reward out of creating music as we used to, so our effort in creating it has become lax. When music could only be replicated through sheet music, sharing it was expensive. Now, with the internet, the price of purchasing a song has dropped so low that little profit is made from selling music. Putting the same amount of time into a song as you did 20 years ago will no longer earn you the profits it generated 20 years ago.

One could also argue that creativity in music is shifting toward the brand development of the artist rather than the music itself. As we have previously mentioned, musical ranges and chord progressions are narrowing, and we have grown accustomed to procedurally generated audio. When everything starts to sound the same, how do you still make an artist stand out?

Branding is the overall answer. If artists can look different, understand social media, and be their own character, then they no longer have to worry about their music being compared to others'. Plus, they will be recognized more readily.

The music industry today relies more on the other activities its artists complete on the side than on selling music. Overall, it does not care about the traditional artist's sound and creativity; it just wants to find a way to make more money as its main revenue sources dry up.

The Future of Music 

This need to create audio quickly has led to the development of tools such as oscillators and Magenta. An oscillator simply generates a repeating waveform, such as a sine wave. Since a waveform played at a given frequency produces a tone, switching between waveforms and frequencies lets us create alternating tones and rhythms.
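As a sketch of that idea, the following Python snippet, using only the standard library and assuming 16-bit mono WAV output, generates sine tones and alternates between two pitches to form a simple rhythm:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_wave(freq_hz, duration_s, amplitude=0.5):
    """Generate one sine tone as a list of 16-bit sample values."""
    n = int(SAMPLE_RATE * duration_s)
    return [int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(n)]

# Alternate between two pitches to create a simple rhythm,
# just as an oscillator switching frequencies would.
samples = []
for freq in [440.0, 330.0, 440.0, 330.0]:   # pitches chosen arbitrarily
    samples.extend(sine_wave(freq, 0.25))    # quarter-second tones

with wave.open("tones.wav", "wb") as wav:
    wav.setnchannels(1)           # mono
    wav.setsampwidth(2)           # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

In a browser, the Web Audio API's OscillatorNode does the same job in real time instead of writing a file.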

Magenta is something a bit more complex. It is an open-source project designed to bring machine learning into creative processes, and it is currently used to generate music and images with Python. From the Magenta website you can try demos that make your browser sing, whether with instruments or computer-generated vocals. Much of this software translates what you see and do on screen, such as mouse movements or images, into sound waves. It's worth a listen.
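To give a flavor of how melodies can be generated from examples, here is a hand-rolled first-order Markov chain over MIDI pitches. This illustrates the general idea behind sequence models such as Magenta's Melody RNN, but it is not Magenta's actual API; the seed melody is an invented C major fragment:

```python
import random

def train(melody):
    """Record which pitch follows which in the example melody."""
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, rng=random):
    """Walk the transition table to produce a new melody."""
    notes = [start]
    while len(notes) < length:
        choices = transitions.get(notes[-1])
        if not choices:            # dead end: restart from the seed note
            choices = [start]
        notes.append(rng.choice(choices))
    return notes

seed_melody = [60, 62, 64, 62, 60, 64, 65, 67]  # MIDI note numbers
model = train(seed_melody)
print(generate(model, 60, 16))
```

Real tools replace the transition table with a trained neural network, but the loop is the same: predict the next note from what came before.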

Interactive music in the video game industry has also taken music to otherworldly places. At the GDC 2017 conference, several musical systems were presented that are already shaping how we understand music in video games. These systems focus on interactive music: they aim to reduce the repetition of sounds in games, while also controlling where and how theme songs repeat so that they become more memorable. Making the music interactive can also give the player a more enjoyable experience, for example by timing music with cut scenes, basing player interactions on where music is added, and selecting music based on the player's choices.

However, interactive music also has its downfalls. If the character interacts with the music too much, the music can cut in and out at the wrong time, making the sounds seem staggered. And if you use interactive music to remix a popular song, copyright issues may arise.

For a recording artist, the music industry does look bleak. Producers are less interested in a talented musician than in a person they can easily market. Artists' profits also no longer come from selling records, but from touring, streaming licenses, and brand deals. Unless music distribution changes, the outlook for making a living in the recording world is dire.

References

Procedural Audio On the Web: Part One: https://medium.com/@berraknil/procedural-audio-on-the-web-part-one-166462e7be1e

Procedural Audio in Computer Games Using Motion Controllers: An Evaluation on the Effect and Perception: https://www.hindawi.com/journals/ijcgt/2013/371374/

Audio Implementation Greats #8: Procedural Audio Now: http://designingsound.org/2010/09/24/audio-implementation-greats-8-procedural-audio-now/

Is anyone familiar with procedural audio?: https://www.reddit.com/r/audioengineering/comments/2hss0i/is_anyone_familiar_with_procedural_audio/

Composer in your Pocket: Procedural Music in Mobile Devices: https://www.musicologyresearch.co.uk/publications/eliseplans-composerinyourpocket

Procedural Audio for Game using GAF: https://cedric.cnam.fr/fichiers/RC1568.pdf

An introduction to procedural audio and its application in computer games (2007): http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.531.2707

Sound Synthesis Research in the Centre for Digital Music: https://c4dm.eecs.qmul.ac.uk/soundsynthesis.html

Sonic Mechanics: Audio as Gameplay: http://gamestudies.org/1301/articles/oldenburg_sonic_mechanics

Behind the Sound of 'No Man's Sky': A Q&A with Paul Weir on Procedural Audio: https://www.asoundeffect.com/no-mans-sky-sound-procedural-audio/

Video game music systems at GDC 2017: pros and cons for composers: https://winifredphillips.wordpress.com/2017/07/17/video-game-music-systems-at-gdc-2017-pros-and-cons-for-composers/comment-page-1/

Procedural Audio and Music in Games: https://danielgamesound.wordpress.com/2014/10/29/procedural-audio-and-music-in-games/

System for generation of musical audio composition: https://patents.justia.com/patent/10446126

Serrà, J., Corral, Á., Boguñá, M. et al. Measuring the Evolution of Contemporary Western Popular Music. Sci Rep 2, 521 (2012). https://doi.org/10.1038/srep00521

Is Pop Music Evolving, or Is It Just Getting Louder?: https://blogs.scientificamerican.com/observations/is-pop-music-evolving-or-is-it-just-getting-louder/

Oscillator Information: https://developer.mozilla.org/en-US/docs/Web/API/OscillatorNode

Hello Magenta: https://colab.research.google.com/notebooks/magenta/hello_magenta/hello_magenta.ipynb

Capitalism, creativity and the crisis in the music industry: https://www.opendemocracy.net/en/opendemocracyuk/capitalism-creativity-and-crisis-in-music-industry/

Balancing Creativity Against Business in the Music Industry: https://www.huffpost.com/entry/the-struggle-of-balancing_b_5607865

How The Music Industry Is Putting Itself Out Of Business: https://www.forbes.com/sites/greatspeculations/2017/05/03/how-the-music-industry-is-putting-itself-out-of-business/#5e17dcf2e57a