He creates custom tools using Cycling ’74’s Max for Live and JUCE, pulls apart plug-ins to make them better, and builds physical ‘contraptions’ to produce specific, customizable sounds. Hear about his unique and inspiring approach to working with sound below:
Written by Jennifer Walden
Darren Blondin, senior sound designer at Raven Software in Madison, WI, has worked on eight Call of Duty titles so far, earning several awards and nominations for the sound work on Call of Duty: WWII (2017), Call of Duty: Infinite Warfare (2016), Advanced Warfare Zombies (2015), Call of Duty: Advanced Warfare (2014), and Call of Duty: Modern Warfare 3 (2011). On the latest release, Call of Duty: WWII, Blondin and other members of the Raven team worked in close partnership with the Sledgehammer Games audio team. Raven handled the sound on four single-player missions: “Liberation,” “Ambush,” “The Rhine,” and “Battle of the Bulge,” for which Blondin was personally responsible for the sound design, implementation, and mix. He and the Raven team also contributed additional sound work for the game’s multiplayer maps.
Blondin takes a unique, in-depth approach to sound design. And by that I mean he literally pulls apart the plug-ins that help him to manipulate sounds in order to create his own versions of those tools. This gives him the deepest possible understanding of what’s happening to the sound, and therefore ultimate control over how that processing can be modified for better results. Here, Blondin talks about how he got started down this extraordinary path and the impact that it’s had on his particular method of designing sound. Plus, he talks about using Cycling ’74’s Max for Live and JUCE to create his own custom audio tools.
Can you give a general overview of your approach to sound design? How did this approach come about?
Darren Blondin (DB): I was able to design my own studies at Goddard College’s Individualized Bachelor of Arts program. It’s self-directed learning, so students decide what they want to research and work with advisors to get there. I titled my study “Interactive Sound Design,” focusing mainly on the different aspects of game audio.
To fulfill the academic requirements, I had to incorporate disciplines like math and science. I always had to identify and explore the relationships between sound and other areas of study. So it became natural to consider how the things I’m doing with sound in the computer relate to acoustics, resonance, and our perceptions.
If someone designed a plug-in that’s helping to make my sound better in some way, but I don’t know why, I’ll break down what’s actually going on and re-create it on my own with the hope of gaining more focus and understanding.
Sometimes it’s hard to unravel all these fine details with DSP tools that are designed to take the complexity away and make decisions for us. I guess it’s like cooking — you can buy something that comes in a box and get it on the table in a half hour or take your time and grow your vegetables and prepare the individual ingredients. Both approaches are valid and I’m certainly not opposed to taking shortcuts, but I strive to do things from scratch. So if someone designed a plug-in that’s helping to make my sound better in some way, but I don’t know why, I’ll break down what’s actually going on and re-create it on my own with the hope of gaining more focus and understanding. And I think that leads to better results, even though it can be time consuming to “roll your own” all the time.
I also try to front-load my effort so that most of the time is spent creating the signal chain or tool that will efficiently make the sounds. Sometimes there is no need to make a tool; it’s just the way the task is approached. For example, the ground battle ambience for “The Bulge” in Call of Duty: WWII was created by recording a WWII re-enactment in Illinois. I imported the ambience into Ableton Live and converted the audio to a MIDI drum pattern, with the tank firing sounds representing the kick, close machine guns being the snares, distant guns as hi-hats, etc. Then I used the MIDI data to replace and/or enhance the battle sounds with Live’s samplers. This is similar to the way one might approach music made with modular synthesizers: putting most of the effort into the patch, and the least amount into actually playing it back to record it.
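Blondin does this conversion inside Ableton Live, but the underlying idea is easy to show in code. Below is a rough, hypothetical sketch (not his actual settings) of how transients in a battle-ambience buffer could be detected and bucketed into kick, snare, and hi-hat MIDI notes by loudness, on the assumption that louder hits come from bigger, closer sources:

```cpp
// Hypothetical sketch of the "ambience to MIDI drum pattern" idea: detect
// transients in a mono recording and classify each hit into kick / snare /
// hi-hat notes by level (louder = closer/bigger source). Thresholds and
// General MIDI note numbers are illustrative assumptions.
#include <cmath>
#include <vector>

struct MidiHit { double timeSec; int note; int velocity; };

std::vector<MidiHit> ambienceToDrumPattern(const std::vector<float>& samples,
                                           double sampleRate)
{
    std::vector<MidiHit> hits;
    const int window = 512;          // analysis hop (~12 ms at 44.1 kHz)
    float prevRms = 0.0f;
    size_t lastHit = 0;

    for (size_t start = 0; start + window <= samples.size(); start += window)
    {
        // Short-term energy of this window
        float sum = 0.0f;
        for (int i = 0; i < window; ++i)
            sum += samples[start + i] * samples[start + i];
        const float rms = std::sqrt(sum / window);

        // A transient = a sudden jump in energy, with a 100 ms refractory period
        const bool onset = rms > 0.05f && rms > prevRms * 2.0f
                           && (hits.empty() || start - lastHit > sampleRate * 0.1);
        if (onset)
        {
            // Loudest events -> kick (36), medium -> snare (38), quiet/distant -> hi-hat (42)
            const int note = rms > 0.4f ? 36 : (rms > 0.15f ? 38 : 42);
            const int velocity = (int) std::fmin(127.0f, rms * 300.0f);
            hits.push_back({ start / sampleRate, note, velocity });
            lastHit = start;
        }
        prevRms = rms;
    }
    return hits;
}
```

The resulting note list is the “drum pattern” a sampler can then retrigger with cleaner, closer-recorded sources.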
When starting on a new project, how do you like to prepare for the sound work ahead? Do you like to research your subject before diving in?
DB: After a project is complete it’s helpful to pause and analyze how it went — the successes and the struggles. At Raven, we have had the great opportunity to work on a AAA title every year, with a couple of months between to do postmortems and prep for the next. This downtime is our chance to course correct, which includes fleshing out lacking areas of our sound library and evaluating our latest work.
When the fist is pulling back to hit the next project, we assess our last shot to improve our aim.
Recently, we each took a section of Call of Duty: WWII (which we just finished) and completely redesigned the audio. We pushed ourselves in bold and risky ways to help get a better understanding of what was possible if we had more control and time. The most valuable part of this exercise was the discussion and critique, which lasted days. Collaborative thinking on this scale can’t happen mid-project, as we have our heads down in the tasks and can’t be overly distracted, considering the tight schedule. But when the fist is pulling back to hit the next project, we assess our last shot to improve our aim. There are always areas we can do better, and oftentimes we have to look behind us to understand where we need to go next.
Research is constant. It might be as simple as grabbing a reference, like a recording of an old submarine Klaxon alarm, and trying to create that sound in whatever way I can because we don’t have one on hand. A picture of the inside of the alarm reveals that the teeth of a gear are pressed against a metal diaphragm. So that leads me to try to get that character using a drill and hole saw vibrating against an aluminum travel mug. Having to fake things is part of the fun, and oftentimes the unofficial version sounds more interesting. References keep us from going too far off track.
I think sci-fi and fantasy genres are somewhat forgiving, as the listener won’t be caught off guard as much by unusual sound effects. Historical accuracy and realism are a different thing. We’ve had a lifetime of exposure to some of these sounds, so there are expectations of what they should sound like. A little research will help keep your sounds from attracting attention because they’re slightly off.
Your DAW of choice is Ableton Live. What do you like about this program? Why is it a good fit for you as a sound designer?
DB: Ableton Live is amazing for game sound design. It certainly hasn’t been marketed as such; it’s aimed more at live performance and loop-based music production. But a number of game audio designers are using it. Matt Piersall at Gl33k has a good video demonstrating his approach to game sound with Live.
Hopefully, Live will get more traction in this field as it continues to expand its capability. With the addition of Max/MSP integration, it can do things I have not seen in any other workstation. It’s an odd duck for sure and might rub some people the wrong way — you can’t customize the interface much, like Reaper allows, for example. And the content browser is quite lacking. But the creative possibilities it provides far outweigh the cons.
Because it’s been honed as a live performance tool, there is very little to distract creative flow.
Because it’s been honed as a live performance tool, there is very little to distract creative flow. It’s easy to stay ‘in the zone’ while you work. You never have to stop the sound or dig through menus. It’s all right there. You can even design or rewire plug-ins spontaneously and interactively while your session is playing. Cycling ’74 has done a great job keeping the interface clean and intuitive too. Rarely is it necessary to dig into the manual. If you want to dig in and approach sound at a code level, the capability is there, and if you just need to get something done quickly, nothing will get in your way. It’s perfect!
How did you get started with Cycling ’74’s Max for Live? How did it help you with your sound design?
DB: I had fiddled around with Max/MSP a bit back in 2005, enough to get a sense of its flexibility and depth. But it seemed too removed from my workflow with Ableton to be useful, and the complexity was a deterrent. Many years later, when the Max for Live extension arrived, that was a game changer — it would supposedly open up the hood so we could tinker with Ableton’s innards. So I just had to learn it.
What was the first ‘tool’ you created with Max?
DB: My first original M4L device was a surround panner, as Ableton didn’t provide a solution for this initially. I wanted a quick way to create surround ambience in Ableton by dropping sounds in and positioning and moving them. I added the ability for sounds to drop off with distance, then added filtering and reverb processing.
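As an illustration of the kind of device he describes (not the actual M4L patch), the core of such a panner is usually just a pair of constant-power crossfades plus a distance term:

```cpp
// Minimal sketch of the math behind a quad (4.0) X/Y panner with distance
// drop-off. x/y are in [-1, 1]; y = +1 is front, x = +1 is right.
// The falloff constant is an illustrative assumption.
#include <cmath>
#include <cstdio>

struct QuadGains { float frontL, frontR, rearL, rearR; };

QuadGains panQuad(float x, float y, float falloff = 1.0f)
{
    const float halfPi = 1.57079632679f;

    // Constant-power crossfades on each axis
    float px = (x + 1.0f) * 0.5f;                 // 0 = left, 1 = right
    float py = (y + 1.0f) * 0.5f;                 // 0 = rear, 1 = front
    float left  = std::cos(px * halfPi), right = std::sin(px * halfPi);
    float rear  = std::cos(py * halfPi), front = std::sin(py * halfPi);

    // Simple drop-off with distance from the listener at the origin
    float distance = std::sqrt(x * x + y * y);
    float att = 1.0f / (1.0f + falloff * distance);

    return { left * front * att, right * front * att,
             left * rear  * att, right * rear  * att };
}

int main()
{
    QuadGains g = panQuad(0.5f, 0.8f);   // a source front-right of the listener
    std::printf("FL %.2f FR %.2f RL %.2f RR %.2f\n", g.frontL, g.frontR, g.rearL, g.rearR);
}
```

In a Max patch the same math is just a handful of objects driving four gain stages.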
That’s where Max for Live comes in — there are no limits and you can seal it all up behind a simple UI.
Ultimately, the surround panning ended up being the less interesting part, and I moved on to exploring propagation and reflection. I’ve never found an outdoor DSP solution I’ve particularly liked. Exterior convolution reverbs sound like rooms to me, for example; delay taps alone are not enough; and getting convincing EQ and volume curves that simulate sound traveling over distance, along with Doppler and phase cancellation, can be tedious. There are a lot of parameters to manipulate and they all need to interact. Nearly any DAW has the ability to alter all these things, but having to tweak everything individually is slow. And it might be hard, later on, to reproduce that magical combination of processes that took so long to dial in a few days ago. It’s possible to get this sort of system working using macro mappings within Ableton’s Audio Effects Rack, but things can get very large and inefficient; it’s hard to see at a glance where all the parameters are sitting, and sometimes you need to incorporate complex logic to handle how things interact. That’s where Max for Live comes in — there are no limits and you can seal it all up behind a simple UI. So, to put sounds in an outdoor environment, it can be as simple as selecting a starting point with a button (e.g. forest, mountains, parking lot), then placing some reflections around with a simple X,Y visualization.
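To make that concrete, here is a minimal, hypothetical parameter model for a single distant source, of the sort such a device might compute internally; the constants are assumptions, not values from his tool:

```cpp
// Illustrative sketch: derive the linked parameters for one sound source at a
// given distance and radial velocity -- gain, air-absorption lowpass cutoff,
// propagation delay, and Doppler ratio. Constants are rough assumptions.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct DistanceParams {
    float gain;           // linear amplitude
    float lowpassHz;      // cutoff simulating high-frequency air absorption
    float delaySec;       // propagation delay
    float dopplerRatio;   // playback-rate multiplier
};

DistanceParams paramsForSource(float distanceMeters, float radialVelocityMps)
{
    const float speedOfSound = 343.0f;   // m/s at roughly 20 C
    DistanceParams p;
    p.gain         = 1.0f / std::max(1.0f, distanceMeters);                       // inverse-distance law
    p.lowpassHz    = std::max(800.0f, 20000.0f * std::exp(-distanceMeters / 200.0f));
    p.delaySec     = distanceMeters / speedOfSound;
    p.dopplerRatio = speedOfSound / (speedOfSound + radialVelocityMps);            // receding source drops pitch
    return p;
}

int main()
{
    // e.g. a tank firing 400 m away, moving away at 10 m/s
    DistanceParams p = paramsForSource(400.0f, 10.0f);
    std::printf("gain %.3f  lowpass %.0f Hz  delay %.2f s  doppler %.3f\n",
                p.gain, p.lowpassHz, p.delaySec, p.dopplerRatio);
}
```

The point of wrapping this in one device is that all four parameters stay linked to a single distance control instead of being tweaked individually.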
It can be a bit tough getting ramped up with Max, but over the years they have incorporated loads of starting points and it’s getting easier as time goes on (now surround panner code is provided, for example). And the Max programming community is long established and super active. People will take time out of their day to answer your questions and even code some things for you if you get stuck. Sharing ideas is as simple as selecting and copying portions of your device from Max into a blog entry. Someone else can then copy that text and paste it back into Max.
What was the most complex one you created?
DB: I’m not sure if it’s the most complex Max thing I’ve explored, but it’s certainly one of the most rewarding — batch processing within Live. At Raven, we sometimes have to process hundreds, even thousands of sound files very quickly. And I’d rather do this in Live than in an editor to take advantage of Max processing. There are ways to cheat this by freezing, flattening, and cropping tracks, and using a batch renamer, but it’s multi-step and not super fast.
In a production environment, automation can have a big impact on the quality of games because we are not being held hostage by repetitive processes.
Now, I’ll just automate the process in Max, including normalization, trimming/extending file lengths, naming, etc. It’s even possible to approach such automation in a generative way. For example, Max can wait for a sound or MIDI note, record some sound, then save the file out and wait for the next. In a production environment, automation can have a big impact on the quality of games because we are not being held hostage by repetitive processes.
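His batch tool lives inside Live as a Max device; purely to illustrate the same idea in code, here is a hedged sketch of an offline batch normalizer and renamer using the JUCE framework discussed below. The folder layout, the -1 dB target, and the "norm_" prefix are assumptions for the example:

```cpp
// Illustrative JUCE sketch (not Blondin's Max device): peak-normalize every
// WAV in a folder and write renamed 24-bit copies to an output folder.
#include <JuceHeader.h>

static void normalizeFolder (const juce::File& inputDir, const juce::File& outputDir)
{
    juce::AudioFormatManager formats;
    formats.registerBasicFormats();                       // WAV, AIFF, etc.
    outputDir.createDirectory();

    for (const auto& f : inputDir.findChildFiles (juce::File::findFiles, false, "*.wav"))
    {
        std::unique_ptr<juce::AudioFormatReader> reader (formats.createReaderFor (f));
        if (reader == nullptr)
            continue;                                      // skip unreadable files

        juce::AudioBuffer<float> buffer ((int) reader->numChannels,
                                         (int) reader->lengthInSamples);
        reader->read (&buffer, 0, buffer.getNumSamples(), 0, true, true);

        // Scale so the loudest sample sits at -1 dBFS
        const float peak = buffer.getMagnitude (0, buffer.getNumSamples());
        if (peak > 0.0f)
            buffer.applyGain (juce::Decibels::decibelsToGain (-1.0f) / peak);

        juce::File out = outputDir.getChildFile ("norm_" + f.getFileName());
        out.deleteFile();

        juce::WavAudioFormat wav;
        std::unique_ptr<juce::AudioFormatWriter> writer (
            wav.createWriterFor (new juce::FileOutputStream (out), reader->sampleRate,
                                 reader->numChannels, 24, {}, 0));
        if (writer != nullptr)                             // writer takes ownership of the stream
            writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());
    }
}
```

Trimming, fade handling, and project-specific naming rules would bolt onto the same loop.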
Another tool you use is JUCE. What is JUCE and how does it differ from Max?
DB: JUCE is a C++ application framework that can be used for coding audio software. Max, which started as a Mac-only program, has been rewritten with JUCE. The intention of Max is to take away the complexity of coding while offering users the power and flexibility of programming things from the ground up. I got interested in exploring JUCE for its cross-platform capability, as my co-workers use different audio workstations and I’d like to share my work with them more easily.
Would you recommend these tools for other sound designers?
DB: As you can imagine, while more powerful than Max, working directly with JUCE is better suited to someone with coding experience. The JUCE documentation assumes you have a good grasp of C++. It’s a good option for a programmer planning to market their audio tools, but definitely not the right choice if you need to get something complex done in a short amount of time or if you think more visually.
Max is super-fast — once you get the basics down, it’s possible to build a simple audio tool in minutes by simply connecting some rudimentary objects. Anyone who is comfortable designing Reaktor ensembles might get up to speed quickly.
For anyone using Ableton Live Standard, I suggest adding on Max for Live — it’s already included in Live Suite. The standalone version of Max offers smoother performance and more extensive video features not available in Max for Live.
What advice would you give other sound designers who may be interested in getting into Max or JUCE?
An overly complex first project may bring disappointment, as it will be impossible to plan it out properly.
DB: Start simple, creating something like a tool that performs a volume change, for example. An overly complex first project may bring disappointment, as it will be impossible to plan it out properly. When you bring in new features you initially didn’t think of, you may discover that you have to throw out existing work and start over to accommodate them.
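As a concrete version of that advice (a hypothetical example, not from the interview), a first “tool that performs a volume change” in JUCE can be a single function:

```cpp
// Hypothetical first JUCE tool: a plain volume change. In a real plug-in
// project this one line would sit inside AudioProcessor::processBlock();
// parameters, smoothing, and a UI can all be layered on later.
#include <JuceHeader.h>

void applyVolumeChange (juce::AudioBuffer<float>& buffer, float gainDb)
{
    // Convert decibels to a linear factor and scale every channel of the buffer
    buffer.applyGain (juce::Decibels::decibelsToGain (gainDb));
}
```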
Also, try to use other people’s work as a starting point. The Max documentation includes examples of how the different objects work, and you can copy and paste these examples directly into your own patches.
So you’ve created a few amazing contraptions (like a ‘rain machine’) to generate specific sounds. Can you share some examples of these contraptions? What went into their design, and ultimately, what sounds were you able to get out of them?
Water collects on the tips of the wires and drips steadily, raining on whatever is underneath it — tarps, metal, wood, a cement floor, puddles, foliage, etc.
DB: Haha. The ‘rain machine’ is nothing more than a large plastic container with dozens of holes in the bottom and small lengths of wire protruding from each hole. The container is secured to the ceiling in the recording space and filled with water. Water collects on the tips of the wires and drips steadily, raining on whatever is underneath it — tarps, metal, wood, a cement floor, puddles, foliage, etc.
As simple as this contraption is, though, it makes it possible to create very customizable rainstorm loops. By simply positioning the sounds on the objects in our games, the rainstorm convincingly feels like it’s hitting those objects and sounds different depending on where the player or camera is.
A couple of weeks ago I designed a device for safely dumping large amounts of broken glass from high up over an extended time. It looks like a long rectangular box that rests horizontally with an opening at one end. The inside of the box is covered with sound absorber. The glass is loaded by climbing a ladder and pouring it into an opening at the top. The box can swivel forward, dumping the glass. The sound of it sliding towards the opening is largely inaudible because of the sound absorber. And you can dump a lot at once. This will be for the sound of glass getting thrown after an explosion. We’ve probably dumped it about 30 times at this point with no injuries, so I’m calling it a success. It should also work well with rocks and dirt, which I’ll try next.
Since we’re only concerned with the sounds they make, such devices are not much to look at, and oftentimes are made from discarded items like recycled wood. Other times there are some very specific items needed and it can get pricier. If we have to spend money to make a sound, it will likely be for a large library of sounds. My next project is to attach a recording device, an Instamic Pro, to an arrow and launch it past speakers to accurately process movement sounds for projectiles and fast-moving vehicles. We’ll likely get a lot of mileage out of that stuff.
What are you working on now? What’s new or unique in your approach to your current project?
DB: We’re all working on fleshing out our library sounds right now. This is probably the most recording preparation we’ve done before a project. I can’t be specific about what we’re gearing up for, but it might be the most creatively demanding project yet.
A big thanks to Darren Blondin for giving us a look at his interesting sound processing workflows – and to Jennifer Walden for the interview!