An In-Depth Look at Andrew Lowell's Work, and Further Analysis


Coming from an analytical and musically inclined family, something about Houdini bothered me when I was a beginner: L-systems. They were generated from letters instead of numbers, which in my crazy mind correlated to music. So I began to ask myself whether going from music to VFX was possible. It turns out it was.


Who is Andrew Lowell?

Andrew Lowell is a SideFX consultant and Professor of Visual Effects at VIA University College. Originally a sound engineer, he moved into visual effects and created a fusion between sound design and the visual effects industry. This is a brief summary of his work, and I recommend reading more about his achievements.

In 2018, he presented the work that will be the focus of this article. Professor Lowell found a way to create music inside Houdini's 3D space, as well as ways to influence dynamic simulations and animations through Houdini's CHOP network. Because Houdini works natively with audio data and has waveform visualization, Lowell was able to build sound-based creations with just a few nodes. Some of his main discoveries were:

  • You can create music and sounds in Houdini and export them.

  • You can use musical notes to make a character move automatically, including its facial expressions.

  • You can create particles using music and sounds, and vice versa, then create further music based on how the particles interact.

  • You can control the volume and timbre of sounds based on impact collisions.

  • You can create a musical song and its score in Houdini.

  • You can record sound in 3D space.

  • You can influence dynamic simulations with sound.

  • You can animate color correction and textures using sound.
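The thread running through several of these discoveries is treating audio amplitude as just another animation channel. As a rough illustration of the principle (not Lowell's actual setup, and deliberately independent of Houdini's own CHOP audio import), here is a minimal Python sketch that turns a mono 16-bit WAV file into a normalized envelope that could drive any animatable parameter:

```python
import math
import struct
import wave

def rms_envelope(path, window=1024):
    """Return per-window RMS amplitudes (0.0 to 1.0) for a mono 16-bit WAV.

    Values like these, imported as a channel, could scale particle birth
    rates, light intensity, or blend shapes in time with the sound.
    """
    with wave.open(path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    # Unpack the raw bytes as signed 16-bit little-endian samples.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    env = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        env.append(rms / 32768.0)  # normalize the 16-bit range to 0..1
    return env
```

Inside Houdini the same role is played by CHOP channels; the sketch just makes explicit what "sound as animation data" means at the sample level.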

Professor Lowell does note that since the mainstream music industry is built around human performance, this kind of procedural music might not flourish as much as it could. But with the way electronic genres are growing, who knows where it will go.

What Can We Learn from This?

Something that Andrew mentions under his discoveries is: "Things that only you can think of....". So let's think of some things.

In a later chapter I will talk about the AI capabilities of Houdini. (You'll be able to find that page HERE when it is available.) Houdini is capable of creating AI systems that can drive animation and textures inside of it. Theoretically, if Houdini can handle music and sound as well as artificial intelligence, what's stopping us from creating intelligent beings and programs inside Houdini? What would happen if we created an AI that grew through the music we fed it in 3D space?

Music has been shown to support brain development, particularly speech, reading, listening, and memory. Something similar could conceivably apply to a machine. Music also affects our mood and how we choose to act, and Andrew Lowell has shown we can likewise influence the behavior of characters with sound in Houdini.

Houdini would be an ideal environment for monitoring the AI's growth. The parameter spreadsheet could be used to communicate with the AI and watch which parts of it were growing, as well as whether it expressed itself through different shader sets, attributes, or geometric size. We could watch it move through the animation editor, and render a 3D representation of it based on what it chose to look like. Whether or not we let the AI perceive its surroundings, we could add lights in 3D space to let it explore further. With the CHOP network we could export the AI's sound and speech, and communicate back to it in audio.

How this AI would react is open to speculation.


  • Would the AI react to a sad- or happy-sounding song the same way we do?

  • Would it try to change its form or appearance based on the sound in 3D space?

  • How would it choose to communicate back to us, and how would we do that in a safe manner?

  • Would it take on one of Houdini's test geometries for shapes, switch between them, or develop into something completely different?

  • Would we end up forming something like an infinitely growing holodeck, or maybe something more sinister, such as the AI in The X-Files' "Twilight Time" episode?

Who knows. But every AI needs a baseline of instructions to follow, or at least to be programmed with a purpose. Here are some baseline functions we could program into such an AI:

  • Recognize whether the music is in a minor or major key, make a character act happy or sad based on the musical notation, and change its expression as the song progresses.

  • If your film or project has a musical score, make sure the attributes of your effect are in sync with the music; if not, flag the keyframes on which the effect needs to be triggered.

  • Automatically shorten effects based on the length of a sound effect.

  • Create procedural terrain based on the volume, timbre, key, or tempo of a musical score.

  • Base an exported CHOP response on the imported song or music.

  • Raise or lower attributes based on the fade in and out of a sound effect or song.

  • Interpret the script for a shot, and activate created effects in time with the characters' voices.

  • Automatically develop LUTs and color correction for a shot based on the playback of its sound, and make any required changes.

  • Many more ideas.
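The first idea on the list, mapping major or minor tonality to a happy or sad performance, can be sketched without any audio analysis at all if note data is already available (for instance, from a MIDI-derived channel). The function below is a hypothetical helper, not part of Houdini or Lowell's toolset; it simply checks whether a set of pitch classes contains a major or minor triad above a given root:

```python
def chord_quality(pitch_classes, root):
    """Classify a set of pitch classes (0-11, C=0) as 'major', 'minor',
    or 'other' relative to a given root note.

    A toy sketch of the "act happy or sad based on the key" idea: a
    character rig could evaluate this per bar and blend toward a happy
    or sad expression pose accordingly.
    """
    # Shift every pitch class so the root sits at 0.
    intervals = {(pc - root) % 12 for pc in pitch_classes}
    if {0, 4, 7} <= intervals:   # root, major third, perfect fifth
        return "major"
    if {0, 3, 7} <= intervals:   # root, minor third, perfect fifth
        return "minor"
    return "other"
```

A real system would need proper key detection across a whole passage rather than a single triad check, but the mapping from musical structure to a discrete animation state is the same.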


Houdini is an incredibly powerful piece of software, and its procedural toolset is evolving rapidly. At this point we can create more than animation and visual effects: we can also create music, 3D representations of emotions, data sets for use outside the film industry, and much more.

Professor Andrew Lowell has done an excellent job of taking a step toward integrating two different industries, and it is now our job to explore what he has discovered further. AI in Houdini may be the next step in applying his discoveries. It's an area not yet fully researched in the visual effects field, as scary as that may be. We could use it to simplify animation and effects in the future, or purely for research purposes.

References Regarding Andrew Lowell.