They cover everything from creating realistic sci-fi weapon sounds, building robot languages, improving their audio tools, and refreshing existing sounds, to recording new sounds (like massive shipping container impacts), giving Titan BT a voice in story mode, crafting rich backgrounds, and much (much!) more.
Written by Jennifer Walden, images courtesy of Respawn Entertainment.
Respawn Entertainment’s nearly perfect online multiplayer FPS Titanfall (2014) had it all: addictive gameplay, amazing visuals, and stellar sound. But it was missing one thing — single player mode. Problem solved in Titanfall 2. They added a single-player story mode where man and mech form a meaningful friendship. Really, though, this mode is the cherry on top of an already award-winning game franchise. Recently at The Game Awards in December, Titanfall 2 was nominated for Game of the Year, Best Action Game, Best Multiplayer, and Respawn was nominated for Best Game Direction.
Respawn’s award-winning Senior Audio Director Erik Kraber and his sound team of Senior Sound Designers Tyler Parsons and Rick Hernandez, Sound Designer Brad Snyder, and Dialogue Lead Joshua Nelson share their sound story on Titanfall 2. Here they discuss their improved audio tools, refreshing existing sounds, creating realistic sci-fi weapon sounds, building robot languages, recording new sounds (like shipping container impacts), sounding Titan BT in story mode, crafting rich backgrounds, and much more.
The Titanfall 2 sound team
What is your background and how did you get involved with the Titanfall franchise?
Erik Kraber (EK): After spending 15 years as the Audio Director on the Medal of Honor franchise I came to Respawn as the Audio Director on Titanfall about a year before it shipped, back in early 2013, joining Brad [Snyder]. Tyler [Parsons] and Joshua [Nelson] started with us shortly after that and Rick [Hernandez] started working with us on Titanfall 2 back in late 2015.
Brad Snyder (BS): I was hired during the development of the first Titanfall in 2012. Back then, it was only a two-man audio team. As the project progressed, it became very clear that it was going to be a much bigger game than two people could finish with any sort of competitive quality, so that’s when the team expanded. At that point, the team had been working on the game for about two years.
Tyler Parsons (TP): I had been at EA for nine years working primarily on Medal of Honor titles — lots of gritty, realistic sound design set in World War II or present day. Erik and I actually worked on a lot of those titles together! During that period, he and I also collaborated on a sci-fi tactical shooter called Tiberium which was unfortunately canceled. We both had a blast working on it, though, and it really whetted our appetites for sci-fi sound work. In early 2013, I was hired by Erik at Respawn.
Joshua Nelson (JN): As a teenager I was really into music production and synth programming. My first game dev job was for EA, working on dialogue and vocal processing for a couple of projects in the LA area. The first Titanfall reached a point in production where they needed a dialogue supervisor, and Erik, whom I had known for a number of years from other projects, contacted me to see if I’d be interested.
Rick Hernandez (RH): After graduating from Berklee College of Music, I moved to LA with my wife and started working for Lionsgate Films doing sound design for feature films. In 2006, I started working full time in video game sound design and shipped six titles before earning an Audio Director credit on the Lost Planet franchise, followed by a one-year contract with Visceral Games working on Battlefield: Hardline. My former Audio Director at Visceral informed me that Erik and the Respawn team were looking for a Senior Sound Designer. Joining Respawn has been amazing, and I truly look forward to coming in every morning and working with these amazingly talented guys.
Is Titanfall 2 using the same modified version of the Source game engine as the first title? Have there been any updates to the audio system for this release? Any new capabilities available for the audio team?
BS: Titanfall 2 is still using the Source engine, which has been heavily modified by our game team. We shipped Titanfall (2014) with a very basic implementation of the XAudio system within the Source engine, and it was incredibly limited. We had no DSP, which meant no reverb, no filter-based occlusion, no compression, and basically little to no dynamic control over the mix.
EK: From an audio tech perspective, we were about a decade behind what we had been used to working with on other projects prior to Titanfall. So we began investigating middleware solutions and after comparing a number of packages, we decided to work with Rad Tools to develop a new version of Miles – Miles 10. The performance of the audio engine was fast — exactly what we needed, and Dan Thompson at Rad Tools was incredibly dedicated to making a version of Miles with features that unlocked our creativity and solved a number of technical issues.
Occlusion consisting of attenuation plus filtering was a huge help in improving situational awareness
TP: Miles gave us access to numerous filters and reverb and allowed us to drive many sound parameters via various game data on the fly. We could use variables like speed, angle, distance, or health at runtime to create a much more dynamic mix. This provided the basis for our procedural mixing system, which prioritizes important sounds based on various factors, like whether a friendly or enemy is making the sound, whether that weapon is aimed at or away from the player, how injured the player is, etc., and makes the most important sounds more prominent via changes in level, EQ, and distance modeling. We used reverb and asset swapping to model different room and environment types, especially for elements like weapon tails and explosions. Occlusion consisting of attenuation plus filtering was a huge help in improving situational awareness — it becomes much easier to sort through the chaos of combat when sounds occurring through walls or behind cover are muffled, leaving the closer and potentially more threatening sounds crisp and noticeable. Most of these “new” features have been in use in games for over a decade, so it was a relief being able to add them to Titanfall 2 and sculpt a more informative, cleaner mix.
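To make the parameter-driven mixing Parsons describes a little more concrete, here is a minimal Python sketch of the general idea. It is not Respawn’s code; every name, weighting, and threshold below is a hypothetical stand-in for the kinds of factors he mentions (friend vs. foe, aim, distance, player health, occlusion).

```python
# A minimal sketch of parameter-driven mix prioritization. All weightings
# and thresholds here are illustrative assumptions, not Respawn's system.

from dataclasses import dataclass

@dataclass
class SoundEvent:
    is_enemy: bool          # enemy sounds generally matter more to the player
    aimed_at_player: bool   # a weapon pointed at you is an important "tell"
    distance: float         # meters from the listener
    occluded: bool          # is there geometry between source and listener?

def mix_params(evt: SoundEvent, player_health: float) -> dict:
    """Return per-voice gain (dB) and low-pass cutoff (Hz) for one sound."""
    priority = 0.0
    if evt.is_enemy:
        priority += 2.0
    if evt.aimed_at_player:
        priority += 3.0
    # Low player health makes threat sounds even more prominent.
    priority += (1.0 - player_health) * 2.0
    # Closer sounds win; contribution falls off with distance.
    priority += max(0.0, 4.0 - evt.distance / 25.0)

    gain_db = min(0.0, -12.0 + priority * 2.0)  # duck low-priority voices
    cutoff_hz = 20000.0
    if evt.occluded:
        gain_db -= 9.0      # attenuation component of occlusion
        cutoff_hz = 1200.0  # filtering component: muffle sounds through walls
    return {"gain_db": gain_db, "lowpass_hz": cutoff_hz}
```

In a real engine, values like these would feed each voice’s gain and filter every frame, so priorities shift smoothly as the fight moves.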
What were some of the main sounds that carried over from Titanfall to Titanfall 2?
EK: The objective on Titanfall 2 was to keep what was familiar in Titanfall but refresh and upgrade everything. With the new audio tech we were able to do much more with modeling the weapons and abilities from multiple perspectives and distances, so we reworked and sweetened nearly all of the original material.
TP: The main sounds we left alone were a few critical “tells,” like the warmup for a Titan’s nuclear eject explosion. These stayed the same because players have such a deep association with their meaning in game. Also, many of the bullet impact sounds against targets like Titans, robots, energy shields, and humans stayed the same, along with a few others.
BS: Sometimes it’s hard to go back and make changes to something so many players have become so accustomed to. The sounds have become iconic and even if you make an objectively better change to them, players will react negatively. We have to balance our wanting to improve the audio with the players’ wanting to feel at home in the game they love.
What have been some of the most challenging sounds to get right for the game?
EK: Overall, the biggest challenge for me was how to make science fiction weaponry sound real in the game world. Ballistic weapons have a report that excites the environment and even though it is a simple “pop” sound, the character of that sound varies dramatically depending on what environment it was fired in, what it passes through to the listener, how far away it is, and what environment the listener is in. We tried to apply the same rules of physics to many of the completely artificial energy-based weapons, both through the sound design and via real-time in-game processing and mixing.
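As a rough illustration of applying those rules of physics to a weapon report, the sketch below swaps tail assets per environment and rolls off high frequencies with distance. The asset names and curves are invented for the example; they are not from the game.

```python
# A hedged sketch: the dry shot stays constant, while the tail asset and
# filtering change with environment and distance. All values are illustrative.

import math

TAILS = {  # reverb tails swapped per environment type
    "canyon":   "wpn_tail_canyon.wav",
    "interior": "wpn_tail_room.wav",
    "urban":    "wpn_tail_street.wav",
}

def weapon_report(env: str, distance_m: float) -> dict:
    """Choose a tail and approximate air absorption for one gunshot."""
    # High frequencies die off faster over distance (simple exponential model).
    cutoff_hz = 20000.0 * math.exp(-distance_m / 400.0)
    # Past a few hundred meters, the transient "crack" softens into a "pop."
    transient_gain = max(0.2, 1.0 - distance_m / 600.0)
    return {
        "tail": TAILS.get(env, TAILS["urban"]),
        "lowpass_hz": max(500.0, cutoff_hz),
        "transient_gain": transient_gain,
    }
```

The point is that the environment-dependent layers do the storytelling while the shot itself remains familiar.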
I actually layered in heavily processed ‘bulldog mastiff barks’ into the weapon shot to give it a slight sci-fi characteristic
BS: I personally worked on the Mastiff Shotgun, and when you see it operate, it launches a bunch of fiery energy projectiles out in a Contra Spread Gun pattern. It was challenging for me to make the gun sound futuristic without compromising on the punch and impact of the ballistic nature of the weapon. I actually layered in heavily processed ‘bulldog mastiff barks’ into the weapon shot to give it a slight sci-fi characteristic. It was fun to take an organic sound that doesn’t really belong with the rest of the layers of the weapon and finesse it into a unique asset that enhances the rest of the weapon.
TP: Creating distinctive robot vocals that feel right for each robot type has been a challenge for me. With the exception of the Titans, Titanfall 2‘s robots don’t speak English, and coming up with a unique “machine language” that feels right for each robot took some experimentation. We have agile “Spectre” infantry units, plodding Terminator-like “Stalkers,” and hulking, devastating “Reapers,” as well as a decayed zombie-like version of Stalkers in the “Effect and Cause” campaign mission. Their vocals all need to sound like they’re in the same family because they’re related models, but they need to reflect the individual character designs. Back on the first Titanfall, our art director Joel Emslie liked the idea of the Spectre speech sounding like an evil fax machine. I used Prosoniq Morph (now Zynaptiq) to convolve interesting drones, scrapes, and metal hits with things like teletype machines and radioized dialogue, then layered the most interesting results and arranged them into barks and chatters which had a sinister vibe to them. On Titanfall 2, Reapers got a similar approach, but with animal and massively distorted sine wave elements added for a much larger, bestial sound. I made Stalkers sound Spectre-esque but a bit more saturated and less staccato, and finally zombie Stalkers, created from heavily processed, time-stretched animal snarls and layered with creepy, rusty-sounding synth elements, have a sort of mechanical zombie-moan character.
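Parsons used Prosoniq Morph for the convolution work; as a simpler stand-in, here is a plain FFT convolution of two recordings in Python, which yields a related “one sound excites the other” hybrid. The file names are placeholders, and both files are assumed to be mono WAVs at the same sample rate.

```python
# Plain FFT convolution as a simple stand-in for a dedicated morphing tool.
# File names are placeholders; assumes mono WAVs at matching sample rates.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_a, drone = wavfile.read("metal_drone.wav")   # tonal/textural source
rate_b, teletype = wavfile.read("teletype.wav")   # rhythmic "speech" source
assert rate_a == rate_b, "resample one file first if the rates differ"

drone = drone.astype(np.float64)
teletype = teletype.astype(np.float64)

# Convolution smears each teletype click through the drone's resonance,
# giving a chattering machine-voice texture to edit into "barks."
hybrid = fftconvolve(drone, teletype, mode="full")
hybrid /= np.max(np.abs(hybrid))  # normalize to prevent clipping

wavfile.write("spectre_bark_raw.wav", rate_a, (hybrid * 32767).astype(np.int16))
```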
What were some new sounds you needed to design, like new weapons, Titans, or modes of transportation? How about defensive features, like the holographic pilot?
EK: We had to design thousands of new sounds for this game. The scope was daunting, with nearly triple the number of sounds compared to Titanfall — more than double the number of weapons, abilities, and Titans, and the addition of an entire single-player campaign.
BS: I worked on some of the new AI types: the flying enemy drones and friendly Marvin drones, and the Prowlers — enemy wildlife that’s like a cross between wolves and lizards.
TP: I did the weaponfire sounds and Foley for new weapons like the L-STAR energy LMG, the updated magnetic grenade launcher, the gravity star, and the Reapers’ plasma cannons. I also made sounds for new pilot abilities like Phase Shift and the grapple, as well as player low health and death, Foley for the pilot’s slide mechanic, movement and vocals for all the new non-Titan robot units, and others. Steve Johnson worked on a number of Scorch’s flame ability sounds as well as the Firestar thermite grenade. Dave Nazario created a number of UI sounds, nearly all of our grunt soldiers’ Foley, the holopilot, and pulse blade ability sounds, and sentry turret activation sounds.
What’s the key to great robotic/mech sound design, weapon sound design, and sci-fi sound in general?
RH: I created the design for the majority of the Titans in the game. I used a few key processes to create believable, signature robotic sounds. One of the principles of my approach was to record lots of metal-on-metal layers and avoid using library sounds. I used sheet metal, anvils, bolts, thin metal, and thick metal, and tried to think outside the box when I went out to record them. I would hit them against each other, scrape them, hammer them…recording as much of it as possible in as many different ways as I could think of.
Getting good servo sounds was also important for the Titans. Good servo sounds are hard to create, and again, I wanted to avoid library sounds because they have been used by everyone, so the result would not be signature but rather generic. As an alternative to stereo miking or digital plug-ins, I used an Arduino microcontroller. It let me easily build a servo rig that I could automate using software such as Ableton, and create stereo servo sounds by recording multiple takes and panning them hard left/right. These could then be pitched down and manipulated with digital plug-ins. Mashing sounds together is also a great technique for creating more complex robotic sounds. After I record my metal layers, debris, and servos, I can mash them together using software such as TimeFlux and Wave Warper by SoundMorph, or I can layer them traditionally in a DAW.
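For readers curious about the Arduino approach, here is a hypothetical sketch of driving such a servo rig from Python over serial. It assumes the Arduino is running a trivial firmware that reads one byte (0-180) and writes it to a Servo object; the port name, protocol, and timing are all placeholders, not details from Hernandez’s actual rig.

```python
# Hypothetical stand-in for an automated servo-recording rig: Python sweeping
# a hobby servo over serial. Assumes a trivial Arduino firmware that reads
# one byte (0-180) and writes it to a Servo object; the port is a placeholder.

import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # e.g. "COM3" on Windows

with serial.Serial(PORT, 9600, timeout=1) as board:
    time.sleep(2.0)  # the Arduino resets when the port opens; let it boot
    # Slow sweep up, fast snap back down: varied motion records better.
    for angle in range(0, 181, 2):
        board.write(bytes([angle]))
        time.sleep(0.03)
    for angle in range(180, -1, -20):
        board.write(bytes([angle]))
        time.sleep(0.005)
```

Varying the sweep speed between takes, then pitching and panning them as Hernandez describes, is what turns one hobby servo into a whole palette.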
To work in the sci-fi world of the game, sounds need to be evocative and believable even though they belong to things that don’t exist
TP: Getting the right character into each sound is crucial. To work in the sci-fi world of the game, sounds need to be evocative and believable even though they belong to things that don’t exist. Titans should be massive, imposing, sometimes threatening; weaponfire and explosions need to sound punchy and powerful, “realistic” but also more interesting than present-day weaponry; sci-fi elements in the world should be cool and abstract but have some tie to relatable, real-world sounds that help the player connect with them and instantly believe “Yeah, it does sound like that.”
I think the best approach to designing any of these is just to keep an ear out for interesting, inspiring sounds and pay close attention to how each one makes me feel. Throughout the process, it’s important to keep the overall aesthetic of the game universe in mind. Titanfall is a fairly gritty future world, so few if any sound effects should sound too musical or whimsical. Eventually, our team developed a sort of language for the sounds of the world we were creating, and started naturally building our designs in support of that. Also, as Randy Thom has very wisely recommended, it helps to start experimenting as early in the project as possible and to make lots of mistakes then, to figure out what works and what doesn’t.
Any fun field recordings for the game?
TP: We worked with an incredibly talented team of veterans at Warner Bros. including Bryan Watkins and Mitch Osias, and occasionally the amazing John Fasal, recording the sounds of huge metal cargo containers being dropped onto one another from 50 feet up.
To capture gigantic impact sounds, we would hoist each container with a forklift and drop it again and again, onto the ground or onto other cargo containers, until it was nearly destroyed. We then had the forklift push the wreckage across the ground or rock it back and forth so we could capture epic creaks, groans, and shudders. I used a lot of this later to create and sweeten metal platform and dish-top ambience for the “Beacon” platform and sections of the “Eden” map. For the drops, we had stationary mic setups and stayed a ways back from the impact zone, as there was a chance a container door or other piece of debris might fly off in our direction. For the pushes and rocking, some of us went mobile with our Sound Devices 702Ts and 744Ts, recording from right next to the containers, which resulted in a couple of scary moments when the containers nearly tipped over onto us.
We also dropped huge concrete “K-rail” freeway dividers onto one another from different heights to generate massive concrete crunches and smashes. Dragging them around across concrete and dirt debris gave us plenty of nice explosion debris sweeteners and rock destruction sounds.
We also visited a shelter for orphaned pet pigs called “Lil’ Orphan Hammies” and recorded pigs of various sizes and temperaments. I used a lot of the predator and pig recordings in creating the Reapers’ roaring vocals.
For new creature elements, we spent some time with Randy Miller of “Predators In Action,” who let us record various animals including a bear, a panther, a raccoon, a mountain lion, a lion, and a tiger. (Predators in Action specializes in supplying location and studio-trained exotic animals for print and film projects). We also visited a shelter for orphaned pet pigs called “Lil’ Orphan Hammies” and recorded pigs of various sizes and temperaments. I used a lot of the predator and pig recordings in creating the Reapers’ roaring vocals.
One other useful shoot had us capturing various factory machines and forklift servos. We close-miked and contact-miked all sorts of machines and vehicles to capture a variety of whines, buzzes, ratchets, clanks, and clunks. I found the contact mic tracks worked really well for adding realism and weight to certain robotic or big-mechanical sounds, like Reaper footsteps or large doors and devices — they provided an interior-esque heaviness that could be used to suggest chassis resonance.
Recording Foley for Titanfall 2
We spent a day with the team from Drones Plus recording drones of all shapes and sizes. The in-flight sounds gave us a palette of interesting tones and textures to play with, and recording them up close with the drone wranglers holding them resulted in cool whirs and purrs, details we could layer into other sounds to give them more sci-fi flavor. Brad pitched and processed a lot of these when creating Titanfall 2’s drone and big machinery sounds.
We got lots of useful material from the Warner Bros. stages, too. Foley artists John Roesch, Alyson Dee Moore, and Mary Jo Lang showed their awesome skill at creating sounds that we used to build things like future weapon Foley, Prowler footsteps, and wing flaps.
In terms of sound, what was required for the single-player story campaign?
BS: Each level was assigned to a single sound designer who was responsible for all the ambient sound design and the scripted moments in the level. The one exception was “The Beacon,” which was the first campaign level to be finished — that was more of a group effort. Each single-player campaign level has unique assets created for all of the ambiences and scripted moments. We try to make as much original content as possible to keep the experience fresh and interesting.
It was a massive undertaking for audio. Not only did we need to make sure all of the weaponry for the player and the AI units translated in the single-player context, but there were tons of unique environments and scripted moments that needed a lot of custom assets.
One of the most challenging aspects of the campaign for me was my work on “Into The Abyss.” In that level, there’s a giant factory assembly line with dozens of giant robot arms, all with custom scripted movements, arranging and placing giant pieces of metal.
The assembly line part of the level alone has somewhere around 600-700 unique sound assets just for the machinery
Since the script had custom movements for every single factory arm in that level, Bryan [Watkins], Mitch [Osias] and I had to design unique assets for every single motion — we couldn’t take a universal design approach. And because you could stand near any of the machinery at close range or be far away from it, I had to design close and distant variations for each movement. This was a massive undertaking. One section of that level took almost a month of design work. The assembly line part of the level alone has somewhere around 600-700 unique sound assets just for the machinery.
TP: There was constant communication within the audio team as the missions shaped up, a flow of feedback as we heard one another’s missions and would discuss what we loved, liked, or didn’t agree with in each.
In the “Effect and Cause” mission, I really enjoyed creating the contrast between the rainy, fiery environments of the destroyed present-day timeline and the pristine past. One of the little details I had fun with was the sound of “where an enemy used to be” when the player uses the timeshift device to switch timelines. There’s a little blue visual effect that looks like a miniature portal closing up which happens wherever an enemy unit was standing when the player time-travels. I created an energy sound of the “rift” closing as well as a warped version of that enemy unit’s sounds to play along with it. So, for example, if the player is next to a grunt and timeshifts, he’ll hear the rift closing along with the grunt shouting about how he’s trying to locate the vanished player, garbled by the energies of the time travel. (Or Prowler snarls if it was a Prowler, a Stalker reporting lost contact if it was a Stalker, etc.) It’s subtle to the point of being almost subliminal at times, but it was a neat addition and I’m glad we were able to get it in there.
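The structure of that effect might look something like the following sketch. The callback, unit types, and asset names are invented for illustration; this is not Respawn’s scripting API.

```python
# A guessed-at structure for the "where an enemy used to be" effect. The
# callback, unit types, and asset names are invented for illustration.

RIFT_VOCALS = {  # warped vocal layer matching the unit left behind
    "grunt":   "rift_vo_grunt_searching.wav",
    "prowler": "rift_vo_prowler_snarl.wav",
    "stalker": "rift_vo_stalker_lost_contact.wav",
}

def on_player_timeshift(enemies_in_departed_timeline, play_sound_at):
    """Spawn a closing-rift emitter wherever an enemy was standing."""
    for enemy in enemies_in_departed_timeline:
        # The energy "rift" closing up at the enemy's last position...
        play_sound_at("rift_close_energy.wav", enemy.position)
        # ...plus a time-warped version of that unit's own vocals.
        vocal = RIFT_VOCALS.get(enemy.unit_type)
        if vocal:
            play_sound_at(vocal, enemy.position)
```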
The single-player campaign also called for a huge amount of dialogue, which Joshua [Nelson] can address much better than I could, and the creation and implementation of a terrific musical score, which composer Stephen Barton and our music supervisor Nick Laviers knocked out of the park.
In-game footage from Titanfall 2 multiplayer
You can talk to your Titan during the campaign. Can you share some details about the process of deciding to have a voice for BT and how you went about auditioning for and casting it?
JN: The story team decided early in development that they wanted BT to be a prominent central element throughout the gameplay experience; a driving force that helped keep the story moving forward. We really wanted to create opportunities to bond with BT, to have the player work with a Titan who has real personality and seems more advanced, more intelligent, and less disposable than the run-of-the-mill Titans they’ve encountered previously.
We really wanted to create opportunities to bond with BT
So that really made a case for him being capable of conversation and having a voice that is pleasant to listen to often, since the player would be hearing it throughout the gameplay experience. It was important that he seem confident, calm, authoritative, and warm towards the player.
The voice talent auditions for BT took several months to complete. We asked talent to try a few different directions for him that ranged from friendly and very humanlike, to colder and more computer-like with less personality. We tested in-game dialogue interactions that were military and forceful vs. reassuring and helpful. We eventually got down to a few performances that we really liked from a short list of voice actors, including Glenn Steinbaum, who joined us in the studio to read a bit more for BT, and we thought he hit the mark just right for BT’s personality. It was great working with him in the studio; there were multiple sessions over a couple months. After getting the first big chunk of dialogue into the game, it started feeling like BT shouldn’t sound any other way. So we were pretty happy with Glenn’s work.
We experimented a lot with how and when to use BT’s dialogue to reveal plot and objectives during gameplay. Sometimes lore and information come just from the player interacting with the surrounding environment; other times the player might be eavesdropping on enemy communications. Ultimately we wanted BT around to tie together these moments, and occasionally provide his own thoughts on the situation to the player, with a bit of humour added in.
The conversation system with BT is based around short, interactive exchanges. Players can choose whether they want to ignore or respond to the conversational prompts; there is some choice depending on how much an individual likes story and lore. If you aren’t into it or are just busy blowing up Reapers, BT won’t take it personally if you just ignore his conversational prompts…probably.
How did you approach BT’s vocal processing?
JN: BT’s vocal processing evolved over the course of development. There were a number of aesthetic and technical considerations: BT’s dialogue is usually conveying important tactical or story information, so whatever we did with processing, he needed to remain highly intelligible during all gameplay. I didn’t want to alter Glenn’s voice too aggressively; a lot of the feel of BT and inflections were already there in his performances.
Whenever I work on voice processing for characters, especially voices that are supposed to sound synthetic, I think in terms of at least two processing stages. The first stage is the essence of the voice — what it sounds like in its purest form, omitting environmental reverb, echo, or amplified-speaker effects. There might still be some processing there, but the environment is not a factor yet. The second stage is how the essence of that voice is colored by the environment and by the source of whatever is producing the sound (the AI must physically have an audio speaker on it, big or small and of a certain transducer quality, which actually broadcasts its synthesized speech; that speech then hits the environment in the form of sound waves). All of this contributes to making the vocals fit: coming from a particular character, in a 3D environment, where the distance from the player to the character can vary at any time.
For the first stage, I consider the AI’s core characteristics: how smart is it, does it attempt to sound human, is it programmed with a personality or is it cold and stiff? Should the core processing effect reflect that its physical mass is of a certain size (coffee-maker AI or huge robot AI)? There’s going to be a choice for how much delay, flange, or other effect is applied here to the core effect depending upon how synthesized the voice should sound. BT needed to sound clear, large, friendly, and pleasant to listen to, but also needed just enough grit.
During gameplay, we can’t always control whether the player chooses to stay with BT, go out on foot, or get very far away from him, so all of BT’s voice processes need to sound like they belong together and transition smoothly
The second stage was more challenging to achieve. We wanted BT to have a strong presence and realism in shifting environments. When the player is inside and piloting BT, the Titan cockpit fictionally has its own set of interior speakers, so that dialogue has a process like a small speaker-amp with a tight reverb as though you are in a small, confined space. It sounds fairly big, with his voice reverberating off the surfaces of that small cockpit area. If the player exits BT but stays nearby, BT’s voice is now emitting from his external PA system on his chassis — it’s loud, has some extra bass to it, is a bit more distorted, and has some slap delay on it that changes in real-time depending how far away the player is from BT. Since the PA is coming from BT’s position, the player can tell how far away BT is, and in which direction. Finally, as the player moves further away from BT, the PA version crossfades into a radioized version of BT’s voice emitting from the player’s helmet comms system. During gameplay, we can’t always control whether the player chooses to stay with BT, go out on foot, or get very far away from him, so all of BT’s voice processes need to sound like they belong together and transition smoothly. All this requires three sets or layers of .wavs for each line of dialogue. In the sound engine for the game, all three layers of BT’s processing are always triggering together in sync when he speaks. The player only hears one layer at a time or a crossfade of two, because the game’s environmental conditions determine the volume and EQ curve of each layer in real time.
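A simplified model of that three-layer system is sketched below. The crossfade distances and curve shape are assumptions; the real system also drives EQ, distortion, and slap delay per layer, which is omitted here.

```python
# A simplified model of the three always-synced BT dialogue layers Nelson
# describes. The crossfade band and equal-power curve are assumptions.

import math

def bt_layer_gains(player_in_cockpit: bool, distance_m: float) -> dict:
    """Return linear gains for the cockpit, PA, and helmet-radio layers.

    All three .wav layers trigger in sync; the player only hears one layer
    (or a crossfade of two) depending on where they are relative to BT.
    """
    if player_in_cockpit:
        return {"cockpit": 1.0, "pa": 0.0, "radio": 0.0}

    PA_FULL, RADIO_FULL = 15.0, 40.0  # hypothetical crossfade band, in meters
    if distance_m <= PA_FULL:
        return {"cockpit": 0.0, "pa": 1.0, "radio": 0.0}
    if distance_m >= RADIO_FULL:
        return {"cockpit": 0.0, "pa": 0.0, "radio": 1.0}

    # Equal-power crossfade from the external PA to the helmet comms.
    t = (distance_m - PA_FULL) / (RADIO_FULL - PA_FULL)
    return {
        "cockpit": 0.0,
        "pa": math.cos(t * math.pi / 2.0),
        "radio": math.sin(t * math.pi / 2.0),
    }
```

Because all three .wav layers are always triggered in sync, crossfading between them never causes a hiccup in the dialogue itself.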
Can you talk about your approach to the atmospheres/ambiences in the story campaign?
TP: The first Titanfall was multiplayer-only, and both the PVP and co-op modes were so chaotic all the time that ambience always took a backseat to everything else. For this reason, we kept the designs fairly minimalistic, as they spent about ninety percent of the game just getting ducked out of the way for the crazy combat to blast through. With Titanfall 2, we have a single-player campaign featuring a lot of exploration, and quite a few multiplayer game modes that aren’t populated with lots of AI soldiers, so the ambience has more of an opportunity to color the player’s perception of the Frontier, from exotic alien wilderness to massive automated factories to the fuselage of giant dropships in flight. Our goal, besides just matching the visuals, is to make the player feel adventurous, threatened, curious, unsettled, overwhelmed, or whatever serves each chapter of the story as the player journeys through it.
There are also timed platform ronks (metal groans) and shudders playing at strategic points along the scaffolding to help sell the feeling of instability and stress
Our technical implementation of ambience is a fairly common one in games: we create a quad or 5.1-channel bed for each environment which plays “on the player,” non-positionally, to provide a base atmosphere. We then place mono or stereo emitters throughout the world which play looping or semi-randomly timed one-shot sweeteners. The emitters are used for obvious visible things like fires, rivers, dripping water, etc., but also to spice up areas by suggesting activity where there isn’t anything visually going on. For example, in “The Beacon,” as you climb platforms to reach the top of the transmitter tower, the 5.1-channel bed surrounds the player with gusty wind and creaking, shaking metal, but there are also timed platform ronks (metal groans) and shudders playing at strategic points along the scaffolding to help sell the feeling of instability and stress.
In “Effect and Cause,” there are subtle glass and ice cracks playing from the cryo pods in the past timeline if the player gets close enough to hear them, and in the destroyed, present-day timeline, the same cracks play plus some interiorized dripping water, since the freezer system has partially failed. In the same mission, there are distinctive quad or 5.1-channel rainy backgrounds playing for each area, but the timed crossfades between these when moving from one area to the next are smoothed by small positional stereo point-source emitters which gradually give the player a taste of the next area’s environment before the player actually enters it and the crossfade occurs.
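A bare-bones version of the bed-plus-emitters pattern Parsons outlines might look like the sketch below. The engine hooks and timing ranges are stand-ins, not actual game code.

```python
# A bare-bones emitter from the bed-plus-emitters pattern. The callback and
# timing ranges are illustrative stand-ins, not actual game code.

import random

class OneShotEmitter:
    """Positional emitter that fires a random one-shot on a loose timer."""

    def __init__(self, position, sounds, min_gap=4.0, max_gap=12.0):
        self.position = position
        self.sounds = sounds  # e.g. platform "ronks" and metal shudders
        self.min_gap, self.max_gap = min_gap, max_gap
        self.next_fire = random.uniform(min_gap, max_gap)

    def update(self, dt, play_sound_at):
        self.next_fire -= dt
        if self.next_fire <= 0.0:
            play_sound_at(random.choice(self.sounds), self.position)
            self.next_fire = random.uniform(self.min_gap, self.max_gap)

# The non-positional 5.1 bed loops "on the player" separately, while a
# handful of these emitters along the scaffolding supply the timed groans.
```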
BS: I know my personal approach is to take the environment and bring it to life. I learned most of what I know from our Senior Sound Designer Tyler Parsons — he’s somewhat of a master at taking a static environment and making it feel dynamic and alive. Basically, I want to encapsulate the visual environment in the audio bed that I’m working on. Then, once I feel like I’ve sold the size and character of the space, I start to find interesting pieces to add on and layer into the ambience. The key is to keep it moving — have layers fade in, fade out, pan them around to different locations, and really just keep the ears excited and interested. The most challenging part is that while you want to make the environment feel alive and exciting, you need to remember it is just an ambience. Most players won’t be focusing on it, so you need to make sure nothing jumps out and actually grabs the player’s attention. You want to reserve that part of the ambience design for environmental emitters, such as machinery, steam pipes, waterfalls, or anything that you can see visually moving in the environment. And ambience needs to make way for any more important scripted events or moments that you want to really shine!