Interview by Jennifer Walden, photos courtesy of Ubisoft

Ubisoft’s Assassin’s Creed Shadows delivers a satisfying sonic experience that immerses the player in 16th-century feudal Japan during a time of intense civil war. The world around the player lives and breathes with activity and conflict. The weather and the seasons change, affecting the environment and enemy behaviour. As always, the parkour/navigation feels fun, and it sounds incredibly realistic, changing dynamically in real time.
Here, members of the sound team on Assassin’s Creed Shadows at Ubisoft Québec: Audio Director Greig Newby, Associate Audio Director Arnaud Libeyre, Audio Artist Frédéric Vekeman-Julien, Audio Designer Vili Viitaniemi, Audio Designer Alexandre Fortier, Audio Designer Steve Blezy, Audio Technical Director Daniel Sykora, and Music Supervisor Jerome Angelot talk about creating an immersive and varied (i.e., different every time) game experience. They share details on their approach to developing the weather system and the navigation/parkour system, collaborating with composers Alexis Smith and Joe Henson of The Flight, designing the illusion of huge NPC fights happening around the player, creating thousands of combat-related sound effects, crafting The Animus ‘virtual world,’ creating effective haptic feedback, and so much more!
Assassin’s Creed Shadows: Official World Premiere Trailer
Since there’s a civil war in Assassin’s Creed Shadows, a lot of fighting is happening around the player. Can you talk about creating the sound for these fights? What was your approach to mixing fights happening around the player (that the player isn’t engaged in)?
Steve Blezy (SB): Whenever you have a massive battle that comprises hundreds of combatants, you need to create a multi-layer illusion based on the distance between the player and the individual NPC. The challenge for the sound designer is the voice count. Starting from the farthest distance, we have a single “multi-point” audio track playing that represents the entire battlefield with a far perspective. It is a full mix of the weapons attacks, impacts, foot and body falls, and voice.
Once the player begins to get closer, we then calculate logical groups or pockets of fighters, and each of those groups has its own audio track representing the group size, weapons used, etc. At this point, we are still hearing the distant battlefield and individual localized groups/pockets.
If a group’s size changes, we recalculate the newly formed groups. Once the player begins to approach a group, its “single voice” SFX track begins to fade out and is replaced with frame-accurate SFX on the fighters’ individual animations.
By using this approach, we can reduce the number of global voices in use, provide the illusion of a huge fight in the distance, allow the player to localize small groups around them, and ultimately join a group battle and not notice anything being out of place or sync.
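To make the layering concrete, here is a minimal sketch of such a distance-based scheme. All names, distance bands, and crossfade curves below are hypothetical illustrations, not Ubisoft’s actual values:

```python
import math
from collections import defaultdict

# Hypothetical distance bands, in metres.
GROUP_RADIUS = 60.0   # beyond this, only the distant battlefield bed is heard
NPC_RADIUS = 15.0     # inside this, frame-accurate per-NPC SFX take over

def battle_mix(player, fighters):
    """fighters: list of (position, group_id). Returns {voice: gain} per frame."""
    gains = {"battlefield_bed": 1.0}      # one multi-point far-perspective track
    group_gain = defaultdict(float)
    for idx, (pos, gid) in enumerate(fighters):
        d = math.dist(player, pos)
        if d >= GROUP_RADIUS:
            continue                       # represented by the bed alone
        if d > NPC_RADIUS:
            # one 'single voice' track per pocket, louder as the player nears
            g = (GROUP_RADIUS - d) / (GROUP_RADIUS - NPC_RADIUS)
            group_gain[gid] = max(group_gain[gid], g)
        else:
            # crossfade: the group track fades out, individual SFX fade in
            t = d / NPC_RADIUS
            group_gain[gid] = max(group_gain[gid], t)
            gains[f"npc_{idx}"] = 1.0 - t
    gains.update({f"group_{gid}": min(g, 1.0) for gid, g in group_gain.items()})
    return gains

# One bed plus a handful of group tracks stands in for hundreds of NPC voices.
print(battle_mix((0.0, 0.0), [((5.0, 0.0), 1), ((40.0, 10.0), 1), ((300.0, 0.0), 2)]))
```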
Shadows features weapons like swords and bows, as well as long guns (like the teppo). Can you talk about your sound work for the weapons in the game? Any custom recording sessions? Any helpful indie libraries?
SB: I did not use any foley sessions on this project. It would have been fun, but instead I started by collecting a huge amount of source material and set up sessions in Steinberg Nuendo. This source material included your typical sword whooshes, bamboo swipes, whip cracks, metal “tap” ring-outs, and more. Many of the source sounds were, in fact, not even weapon-related.
Each weapon attack sound effect is probably a composite of 8-12 tracks bounced down to a single asset. When it came to creating variations, I was able to tweak the mix of those 8-12 tracks, so the new variation had something in common (a shared design language) but a slightly different sound texture. Every weapon type in Shadows had 96 “base” attack sound effects and probably 24 extra versions for special cases.
The same applied when creating the weapon-to-weapon impact sounds. I wanted variety to prevent listener fatigue, but I also wanted them to reflect the textures of the specific weapon types colliding, not just a generic “metal weapon hits metal weapon.” These assets were created for a slashing action and a stabbing action, and for each action there were four levels of intensity per weapon. By the time I was finished, there were just under 1,400 individual assets for the weapon-to-weapon impacts.
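To put that count in perspective, here is a hypothetical reconstruction of the arithmetic. The weapon list and pair structure below are illustrative assumptions; only the two actions and four intensity levels per action come from the interview:

```python
# Hypothetical reconstruction of the impact-asset matrix. The weapon names and
# the pairwise structure are illustrative guesses, not the shipped list.
weapon_types = ["katana", "wakizashi", "tanto", "odachi", "naginata", "yari",
                "kanabo", "kusarigama", "tonfa", "bo", "sickle", "teppo", "bow"]
actions = ["slash", "stab"]
intensities = range(1, 5)

assets = [f"wpn_vs_wpn_{a}_{b}_{act}_i{i}"
          for a in weapon_types
          for b in weapon_types
          for act in actions
          for i in intensities]

print(len(assets))   # 13 * 13 * 2 * 4 = 1,352 -- "just under 1,400"
```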
When it came to the weapon hitting an enemy, the same approach was used, but these weapon-to-body impacts were the “dry” component representing the weapon type’s characteristics, while a similar “wet” group of assets was created for the blood and gore component. Separating the “dry” and “wet” elements also helps with the age rating, since the player can use the game menu option to turn off “blood and gore” for children/younger players. The overall sound design “takes a hit” when blood is turned off, but it supports the age rating.
Source sounds for the “wet” component included vegetable, water, ketchup, and mayonnaise splats, all layered in a Nuendo session and exported to a final asset.
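As a minimal sketch of that runtime split (all names are hypothetical, not Ubisoft’s code), the gore layer can simply be gated by the menu setting:

```python
# Minimal sketch of the 'dry'/'wet' split at runtime; all names hypothetical.
def body_impact_layers(weapon_type, intensity, blood_enabled):
    layers = [f"hit_dry_{weapon_type}_i{intensity}"]   # weapon character, always plays
    if blood_enabled:                                  # menu option can disable gore
        layers.append(f"hit_wet_gore_i{intensity}")    # blood/gore layered on top
    return layers

print(body_impact_layers("katana", 3, blood_enabled=True))
print(body_impact_layers("katana", 3, blood_enabled=False))  # age-rating friendly
```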
The game score by composers Alexis Smith and Joe Henson of The Flight is incredible, and adds such an important flavor to the overall sound of the game. Can you talk about your collaboration with the composers? How did their score influence the game’s sound design?
Jerome Angelot (JA): When we started our collaboration with The Flight, we didn’t want a typical Hollywood score with traditional instruments sprinkled on top; it was clear that the traditional Japanese instruments needed to be the center of the score.
Alexis Smith and Joe Henson from The Flight did a great job with this approach. They have a lot of expertise when it comes to open-world games. They get the brand and the gameplay, and they are able to support our designs. They have a fantastic collaborative approach, and we were able to create combined moments with the modern songs in our game and the soundtrack they crafted. They also did a great job of finding traditional Japanese performers who helped the score be both grounded and innovative.
For the main theme, we had a different approach on Shadows. On AC Odyssey, the main theme of the game was also the main character theme, whereas Shadows has two protagonists, so we felt the need for a theme for each character, in addition to the main theme that represents the foundation of the game expressed through both the turmoil and the hope of the era.
The weather in Shadows sounds amazing! The sounds of the strong winds whistling past your ears, the thunder, and the sound of the drenching rain. Can you talk about creating and implementing weather sounds in the game? What were some of your challenges? What was unique or new in your approach to weather sounds?
Frédéric Vekeman-Julien (FVJ): We developed multi-level systems in which different elements combine dynamically, following the changing values provided by the game engine, to produce the sound rendition of the different weather conditions in our world. Emphasis was placed on the wide dynamic range of these elements.
In the case of wind, we have six main components that together form the sonic experience of various wind conditions:
• Global wind base layer
• Grass/fields vegetation
• Distant groups of trees (various types: deciduous, coniferous, bamboos)
• Closer individual trees (various types, as above)
• Local gusts
• Fallen leaves flying around us/snow squall (when conditions apply)
These different subsystems are generally made up of distinct layers of sound sources representing different intensities. These layers are in constant crossfade with each other and are modulated in volume and pitch according to the changing weather conditions in the game. The interaction of these sub-systems with each other, following changes in global and local wind intensity parameters and the nature of the surrounding environment, creates a constantly varying wind-related sound experience.
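As an illustration of how such intensity-driven crossfades might be wired up, each of the six components listed above can fade in over its own slice of the wind-intensity range. The thresholds and the linear curve below are assumptions for the sketch, not the team’s actual values:

```python
# Illustrative intensity-driven crossfades for the six wind components.
def layer_gain(intensity, fade_start, fade_end):
    """0 below fade_start, 1 above fade_end, linear in between."""
    return max(0.0, min(1.0, (intensity - fade_start) / (fade_end - fade_start)))

def wind_mix(global_wind, local_gust, leaves_available):
    return {
        "global_wind_base": layer_gain(global_wind, 0.0, 0.3),
        "grass_fields":     layer_gain(global_wind, 0.1, 0.5),
        "distant_trees":    layer_gain(global_wind, 0.2, 0.6),
        "close_trees":      layer_gain(global_wind, 0.3, 0.8),
        "local_gusts":      layer_gain(local_gust,  0.4, 1.0),
        "flying_leaves":    layer_gain(global_wind, 0.7, 1.0) if leaves_available else 0.0,
    }

# Pitch and volume would additionally be modulated by the same parameters.
print(wind_mix(global_wind=0.55, local_gust=0.7, leaves_available=True))
```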
The same is true of rain, which is created by the interaction of distinct systems consisting of the general perspective, the local texture of the ground, and the surrounding trees. Here again, the interaction of these sub-systems with each other, according to changes in rain intensity parameters and the nature of the immediate environment, creates a constant variation in the sounds associated with rain.
Developing these systems proved to be a complex and long-term endeavour. Some of the sound layers used date back to earlier productions, to which new elements were added over the years, leading up to the completion of this production. The “sound sculpture” of this environment comes from both carefully selected and isolated real-world sources and sound synthesis.
One of the challenges was to deliver a balanced mix, allowing each component to “shine,” creating an immersive and powerful experience without detracting from the requirements of the gameplay and narrative dimension of the game.
What went into your sound work for The Animus ‘virtual world’ in Shadows?
FVJ: Even before we had a completely clear idea of what our Animus would be like, I set about exploring sound textures produced by various generative synthesizers in NI Reaktor. The result was a generous collection of drones, ready to serve as the basic sound material for the world that was about to take shape.
The artistic direction of the Animus became clearer, and guidelines emerged: “High-Tech,” “Smooth,” “Light,” and “Ethereal.” On this basis, I built the general ambience of the Loading Room by applying granular time stretching and Tonal/Noise balancing in PaulXStretch to some files from my collection. The resulting textures, once layered together, created an evolving pad reminiscent of ethereal wind tones and bell resonances.
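PaulXStretch is built on the well-known Paulstretch algorithm. For readers curious about the core idea, here is a generic sketch of the technique (not the plugin’s actual code): each output grain keeps the magnitude spectrum of a windowed input slice while its phases are randomised, which smears a short source into an evolving pad:

```python
import numpy as np

def paulstretch_like(x, stretch=8.0, win=4096):
    """Extreme time-stretch in the spirit of Paulstretch: for each output
    grain, take a windowed slice of the input, keep the magnitude spectrum,
    randomise the phases, and overlap-add the result. x must be longer than win."""
    window = np.hanning(win)
    out_hop = win // 2
    in_hop = max(1, int(out_hop / stretch))   # read the input slowly
    n_frames = (len(x) - win) // in_hop
    out = np.zeros(n_frames * out_hop + win)
    for i in range(n_frames):
        grain = x[i * in_hop : i * in_hop + win] * window
        mag = np.abs(np.fft.rfft(grain))
        phase = np.random.uniform(0, 2 * np.pi, len(mag))
        smeared = np.fft.irfft(mag * np.exp(1j * phase)) * window
        out[i * out_hop : i * out_hop + win] += smeared
    return out / (np.max(np.abs(out)) + 1e-9)

# e.g. drone = paulstretch_like(short_metal_hit, stretch=20.0)  # hypothetical source
```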
To this, I added distant elements of chimes, many layers of light electronic murmurs and “particle waves,” created by processing certain sounds with tools such as GRM Tools Shuffling and multiple delays. In constant motion around us, these elements combine to create a meditative ambience that is both futuristic and timeless.
Other components, like Animus transition effects, dissolving Torii Gates, and many Animus interface elements use these wave-like particle textures, sometimes based on purely synthetic sounds and sometimes even on the very real sound of ocean waves breaking slowly on a shore… A timeless shoreline in a distant non-place.
What went into the UI sounds for the game?
Alexandre Fortier (AF): For the UI, I pulled a lot of inspiration from traditional Japanese instruments and mixed that with the slick, techy feel of the Animus. I played around with frequency shifting and layering to get that clean, shiny, almost anime-like ring that gives the sounds a bit of sparkle.
The idea was to merge those old-world textures with a futuristic edge in a way that felt smooth and natural, very much in line with the style of Assassin’s Creed Shadows.
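Frequency shifting, unlike pitch shifting, moves every partial by a fixed number of Hz, which turns harmonic sources inharmonic and bell-like, a plausible route to that “shiny,” almost anime-like ring. A minimal single-sideband shifter (a generic technique sketch, not the team’s actual processing chain):

```python
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, shift_hz, sr=48000):
    """Single-sideband frequency shifter: shifts every partial by a fixed
    number of Hz (unlike pitch-shifting, which scales them), detuning
    harmonic sounds into inharmonic, bell-like textures."""
    analytic = hilbert(x)                                 # complex analytic signal
    t = np.arange(len(x)) / sr
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

# e.g. ui_layer = frequency_shift(koto_pluck, shift_hz=150.0)  # hypothetical source
```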
The parkour in Assassin’s Creed games is always so fun. Can you talk about the foley work, and how you implemented that for the parkour for Shadows? I love the sound of jumping around on the clay roof tiles, and parkour on the paths in nature, like the long slides and the grapple/rope swings!
SB: The navigation/parkour system in Shadows is a massive system. As with most games, we have a very large list of “material” types assigned to the world and props. This way, when navigating on any surface, the material texture is present.
On top of that is the navigation action. Is the player walking or running? Is it a hand grab or a body fall? What type of shoe is the character wearing? What type of clothing does the character have on their upper and lower body? And so on.
On Shadows, from the early days of development, I decided to build a speed-dependent/physics-based approach instead of utilizing a “fixed” walk asset method. I wanted to avoid a common repetitive “clop, clop, clop.” This way, when walking slowly, the length of time for the foot contact is longer than at higher speeds, the heavy heel strike is removed, the surface material texture like the “grit” or gravel is emphasized, etc.
As the player speeds up, all these elements dynamically change in real-time: the volume, pitch, high-pass, and low-pass filtering. We even calculate the character weight so Yasuke sounds much heavier than Naoe. The same “dynamic” system was used for jump landing, hand grabs, and climbing up and down. This is the first AC that never plays a single “pre-canned” navigation asset for the player.
Everything is dynamic and constantly changing. Essentially, the navigation and parkour are “infinitely granular,” step by step. This dynamic system is also applied to the character’s upper and lower cloth types, so the cloth also tracks the player’s movements and speed. You can see at different speeds that Yasuke’s shoulder armor begins to move as he transitions from a fast jog to a slow run. If you listen closely, you will also hear the shoulder armor starting to be more present in the mix.
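A minimal sketch of what such a speed- and weight-driven parameter mapping could look like. Every constant below is an assumed, illustrative value, not the game’s tuning:

```python
# Hypothetical mapping from speed and body weight to playback parameters,
# in the spirit of the system described above.
def footstep_params(speed_ms, weight_kg, surface_grit):
    s = min(speed_ms / 7.0, 1.0)                  # normalise against a full sprint
    return {
        "contact_length_s": 0.35 - 0.25 * s,      # slow steps roll longer
        "heel_strike_gain": s,                    # heavy heel only at speed
        "grit_texture_gain": (1.0 - 0.6 * s) * surface_grit,
        "volume": 0.4 + 0.6 * s * (weight_kg / 90.0),
        "pitch_semitones": -2.0 * (weight_kg / 90.0 - 1.0),   # heavier = lower
        "lowpass_hz": 2000.0 + 14000.0 * s,
    }

print(footstep_params(speed_ms=1.4, weight_kg=110.0, surface_grit=0.8))  # heavy walk
print(footstep_params(speed_ms=6.5, weight_kg=60.0,  surface_grit=0.8))  # light sprint
```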
When walking in water, you can hear the difference in the splash/gurgles based on the water depth, character speed, and the character’s body mass. You can hear the gradual transition as the water depth increases from being ankle deep to lower calf to mid-calf, knee deep, mid-thigh, all the way up to your chest. Then, because we use body weight and size, if you compare Yasuke to Naoe, even at the same relative “body part” water depth, you can hear that Yasuke is physically “pushing” a greater volume of water than Naoe because of his sheer size and volume.
The same can be observed when comparing the two characters’ “absolute” water depth navigation, i.e., when they are both walking in four inches of water. Sadly, as most players simply run at full speed, they may never notice these systems in action, but they are always present for those who want to take the time to notice them.
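A small sketch of the idea, with depth zones crossfading into one another and body mass scaling the “pushed water” gain. Zone boundaries and constants are hypothetical:

```python
# Sketch of depth-zone crossfading for wading; values are illustrative.
DEPTH_ZONES = [(0.10, "ankle"), (0.25, "calf"), (0.40, "knee"),
               (0.70, "thigh"), (1.20, "chest")]

def wading_layers(depth_m, speed_ms, body_mass_kg):
    for i, (limit, zone) in enumerate(DEPTH_ZONES):
        if depth_m <= limit or i == len(DEPTH_ZONES) - 1:
            lo = DEPTH_ZONES[i - 1][0] if i else 0.0
            blend = max(0.0, min(1.0, (depth_m - lo) / (limit - lo)))
            push_gain = speed_ms * body_mass_kg / 90.0   # bigger body pushes more water
            return {"zone": zone, "blend": blend, "push_gain": push_gain}

print(wading_layers(0.30, 1.2, 110.0))   # knee-deep, a Yasuke-sized character
print(wading_layers(0.30, 1.2, 60.0))    # same depth, a Naoe-sized character
```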
Speed dependence also plays a role for the long slides, as does the surface material underneath the player. A variety of filters are applied the faster you slide, but at the same time, I also calculate the sideways drift “off of the centerline” to the left and right. As that drift angle increases, a variety of extra filters are gradually applied to enhance the loose debris being thrown off to the side.
The grappling hook also takes advantage of the above-mentioned concepts, but one of the key elements is the character’s body movements and “tagging” the cloth for the body movements to highlight the changes of inertia and direction.
This is just scratching the surface. Post-launch, I wrote an in-house document to share the changes that I made for Shadows, the breaking away from the traditional AC formula. The document is currently 57 pages long and is not 100% complete. I encourage both players and other sound designers to play Shadows, take time to slow down from just running at full tilt, and listen to the details. My personal goal was to make the navigation sound natural and “as expected”: what you see is what you hear. If the navigation sounds jump out and try to grab your attention, you missed the mark as a sound designer. Unnatural sounds will grab your attention, while natural-sounding elements will cement you in the experience.
The haptic feedback in Shadows is quite remarkable. Can you explain the approach taken?
Vili Viitaniemi (VV): Defining clear and meaningful rules for haptic feedback was important for us. One of those rules was to create and implement all content from the perspective of the Animus user, for moments that Naoe, Yasuke, or the person living their memories would feel. During gameplay, players experience sensations related to both the physical character (interacting with the world) and the Animus (elements of the simulation). In cinematics, haptic feedback is used only for actions involving these three actors.
A considerable amount of time was spent experimenting with various haptic generation methods to find the right recipes for different gameplay features. We referenced a dozen PS5 games we admired, noting what worked well and why. Sony’s Vibration Designer software was an essential tool throughout production and allowed us to leverage the team’s sound design work to convert and reshape sound effects into haptic waveforms. The Tone Generator in Wwise, which outputs sinewaves and noise, was equally useful for creating custom haptics: UI, grass rustling, raindrops, parrying, breaking an enemy’s guard, the shockwaves of explosions, the feeling of stretching in objects such as the grappling hook rope and the kusarigama chain, and the feeling of resistance when navigating in water or snow. Combining and layering content from both tools helped us achieve the most appropriate results.
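Converting a sound effect into a haptic waveform is commonly done by tracking the audio’s amplitude envelope and using it to drive a carrier in the actuator’s band. A generic sketch of that idea (not Sony’s or Ubisoft’s tooling; the follower and carrier frequencies are assumptions):

```python
import numpy as np
from scipy.signal import lfilter

def audio_to_haptic(x, sr=48000, carrier_hz=250.0, follower_hz=30.0):
    """Generic audio-to-haptic conversion: track the amplitude envelope of a
    sound effect, then use it to modulate a low-frequency carrier that a
    voice-coil actuator can reproduce."""
    a = np.exp(-2.0 * np.pi * follower_hz / sr)          # one-pole smoothing
    envelope = lfilter([1.0 - a], [1.0, -a], np.abs(x))  # rectify + low-pass
    t = np.arange(len(x)) / sr
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

# e.g. rumble = audio_to_haptic(sword_impact_samples)  # hypothetical source
```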
We also took advantage of the DualSense’s capability to provide a sense of directionality. Most of our haptic assets pan at least slightly between the left and right motors of the controller, depending on the positioning of the sound emitters in the game world. When handling a weapon, you can perceive the movement of the swings based on the emitter that follows the tip of the blade. This can feel especially satisfying and wide when performing 360-degree attacks with Yasuke’s naginata or Naoe’s kusarigama. We increased the amount of spatialization for events in the world that the player wants to pinpoint more accurately, such as bombs, hitting a temple bell with a kunai, and enemy heartbeats while using eagle vision. Other features benefited from predetermined panning, such as the meditation minigame, where the button press feedback would be felt on the left, right, or center depending on the icon placement.
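A minimal sketch of such left/right motor panning, using a constant-power law driven by the emitter’s azimuth (the mapping is an assumption for illustration):

```python
import math

def haptic_pan(azimuth_deg, gain=1.0):
    """Constant-power pan of a haptic asset between a controller's left and
    right motors, from a sound emitter's azimuth. Illustrative mapping."""
    p = min(max((azimuth_deg + 90.0) / 180.0, 0.0), 1.0)  # -90..+90 deg -> 0..1
    left = gain * math.cos(p * math.pi / 2)
    right = gain * math.sin(p * math.pi / 2)
    return left, right

# Blade tip sweeping across a 360-degree attack:
for az in (-90, -45, 0, 45, 90):
    print(az, haptic_pan(az))
```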
Unlike with sound effects, there is much less room to play simultaneous pieces of haptic feedback since overlapping frequencies can easily create an unpleasant response on the controller. We dedicated specific frequency ranges for actions like weapon swings versus impacts, set strict playback limits, and created auto-ducking behaviors based on feedback priority. When debugging haptic content, I often brought the controller close to my ear to listen to the vibrations and adaptive trigger motors, rather than relying solely on the feeling of holding the controller. This technique repeatedly proved surprisingly effective.
What was your approach to mixing Shadows (for exploration, combat, and transition in and out of cinematics)? What were some of your biggest challenges in terms of the mix? Or, what was unique in your approach to mixing this game?
Arnaud Libeyre (AL): The dynamic nature of our world was probably the biggest challenge in terms of mixing Shadows.
As Frédéric explained already, all our ecosystems are closely interconnected and drive each other dynamically: time of day, weather conditions, season transitions, wildlife, insects, trees, vegetation, ground wetness, etc. The first step of mixing was done at the system level, as we defined the ruleset on how those systems would influence each other.
This very realistic world simulation is great from a replayability standpoint, allowing our players to experience the same locations in the world in very different contexts, both visually and sonically. Every time they put their hands on the controller, they are guaranteed to have a different experience.
From a mixing perspective, the complexity comes from the fact that you can play the same quest, gameplay segment, or cutscene in a wide variety of possible contexts, ranging from very quiet and peaceful to absolute chaos (e.g., a thunderstorm) depending on where the simulation is at the time you are playing.
It took a lot of mixing effort to balance all our other audio systems (VOs, Music, Foley, Gameplay SFX, and scripted content) in relation to the world simulation. We used dynamic mixing RMS side-chaining recipes at various levels in the bus structure, as well as various context-specific mixing states.
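As a generic illustration of RMS side-chaining (the window, threshold, ratio, and gain law below are assumptions, not the team’s recipes), a key signal’s RMS can drive a duck gain on another bus:

```python
import numpy as np

def rms_duck_gain(key, sr=48000, window_s=0.05, threshold=0.1,
                  ratio=4.0, floor=0.25):
    """Per-sample duck gain for one bus, driven by the RMS of a 'key' signal
    (e.g. dialogue ducking the ambience bed). 1.0 means no ducking."""
    win = max(1, int(window_s * sr))
    padded = np.concatenate([np.zeros(win - 1), key])
    mean_sq = np.convolve(padded ** 2, np.ones(win) / win, mode="valid")
    rms = np.sqrt(mean_sq)
    over = np.maximum(rms / threshold, 1.0)        # how far above threshold
    return np.maximum(over ** (1.0 / ratio - 1.0), floor)

# e.g. ambience_ducked = ambience * rms_duck_gain(dialogue)  # same length/sr
```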
We managed to successfully implement those mixing systems in the game without sacrificing the wide dynamic range of our world simulation, which is one of the highlights of our game.
What were your biggest creative challenges in terms of sound on Shadows?
Greig Newby (GN): Creatively, we walked a delicate line on Assassin’s Creed Shadows. We were simultaneously students of history, aiming to represent a hyper-realistic experience of feudal Japan, while also searching for ways to make our game stand out, innovate within the brand, and defy expectations.
The careful attention that was poured into the sound of the world (the weather, the weapons, the flora, and the fauna) came from a place of wanting to transport the player back in time and completely immerse them in an iconic setting. This was met with a progressive and ambitious soundtrack, time-rifts, and over-the-top special combat abilities that are meant to impress the player, grab their attention, and pull them out of their comfort zone.
Ultimately, the art of balancing these two approaches was extremely gratifying when we were able to cleanly zoom in and out of the immersion. In the storytelling, we were able to create surreal moments which were more resonant because they were so sharply contrasted with such a believable world. Working with the sound, this equilibrium wasn’t always easy to establish, and we had many iterations before we felt we had it working just right. In the end, our hard work paid off, and we are extremely proud of what our game has to offer to players.
What were your biggest technical challenges in terms of sound on Shadows?
Daniel Sykora (DS): In a game the size of Shadows, and with the level of detail our Audio & Voice team managed to pack in, you can probably start to imagine some of the challenges we faced during development.
One recurring issue was hitting the maximum number of active sound events, whether it stemmed from detailed destruction, ambient layers, or another system we had crafted. Even something as simple as a brazier could become problematic, especially when all of the braziers around the player in a city turned on simultaneously. Other times, we discovered specific sounds that wouldn’t stop properly. Tracking these down was sometimes like finding needles in haystacks (thankfully, something Naoe and Yasuke don’t have to deal with).
Profiling sound memory became a regular occurrence. Wherever possible, we aimed to dynamically load SoundBanks — even if it only saved us 0.3 MB. This approach allowed us to include more sounds at higher audio quality settings, all without exceeding our sound memory budget. The alternative would’ve involved some very pleasant conversations with team members, along the lines of:
“Hi, could you please reduce the number of loaded sounds you’ve spent time and effort on…by about 10–25%?”
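A hedged sketch of the underlying pattern: on-demand SoundBank loading with reference counting against a fixed memory budget. This illustrates the general approach, not Ubisoft’s pipeline:

```python
# Illustrative on-demand bank loading; names and numbers are hypothetical.
class BankManager:
    def __init__(self, budget_mb):
        self.budget_mb, self.used_mb = budget_mb, 0.0
        self.refs, self.sizes = {}, {}

    def load(self, bank, size_mb):
        if self.refs.get(bank, 0) == 0:
            assert self.used_mb + size_mb <= self.budget_mb, "over sound budget"
            self.used_mb += size_mb
            self.sizes[bank] = size_mb
        self.refs[bank] = self.refs.get(bank, 0) + 1

    def unload(self, bank):
        self.refs[bank] -= 1
        if self.refs[bank] == 0:          # even 0.3 MB back matters at this scale
            self.used_mb -= self.sizes[bank]

mgr = BankManager(budget_mb=64)
mgr.load("castle_braziers", 0.3)          # loaded only while the player is nearby
mgr.unload("castle_braziers")
print(mgr.used_mb)
```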
We also prioritized using the Opus codec over Vorbis for its advantages in quality and space. However, we eventually ran into the PS5’s hardware decode limit of 80 concurrent voices when using Opus. To balance things out, we had to use Vorbis for certain sounds.
One of my favorite issues came up very late in development, when we discovered that our build machine’s generation of tree positions — which we use to trigger wind and rain sounds — wasn’t 100% reliable and had missing data in some areas. As a result, we had to regenerate the tree positions for the entire game world locally on our own machines. And to make things more exciting, we had to do it in time for the game to ship on disc, since the patch size for a fix would’ve been too large. In the end, we pulled it off with just days to spare. Shoutout to Adam Walsh for helping regenerate most of Japan at the last minute!
Finally, I want to highlight that we overcame all of these technical challenges as a team. We’re fortunate to have many talented individuals — not only on the Audio and Voice team, but also on the Audio Programming team — who played a key role in helping us achieve the final audible results.
What have you learned or gained from your experience of crafting the sound of Assassin’s Creed Shadows? What was your biggest personal achievement?
GN and AL: Coming off of Assassin’s Creed Odyssey, the Quebec studio was eager to take on this new setting. We are fortunate to have a very experienced team that understands the core systems of the brand and wants to push our features forward in exciting ways.
We invested heavily in the world and the craft of making great assets, and we also pushed a lot of our systems and technology forward.
It’s difficult to name one single achievement, but we are collectively very proud of the following areas, which underwent significant growth on this game:
• A music direction evolution within the brand – the addition of bands into our soundtrack and a unique character-driven musical signature
• The evolution of the ambiance system (dynamic range, realism, spatialization)
• The evolution of the destructible objects’ technology – physics-based objects that break, shred, tear, and shatter
• Blurring the lines between cutscenes and gameplay in terms of foley – our cutscenes use the core navigation physics to drive the systemic foley
• 3C and Fight Improvements – developing a character-based system allowing Naoe and Yasuke to share one foley structure with very different results based on their size/intensity of their motions
• A new track music system – improving the interactivity in the score and opening creative possibilities for music integration by letting the engine filter what is the most important element for the score to focus on.
All in all, we dreamed big on this game, and the contribution of every team member had a very real impact on what you will hear in your playthrough.
A big thanks to Greig Newby, Arnaud Libeyre, Frédéric Vekeman-Julien, Vili Viitaniemi, Alexandre Fortier, Steve Blezy, Daniel Sykora, and Jerome Angelot for giving us a behind-the-scenes look at the sound of Assassin’s Creed Shadows and to Jennifer Walden for the interview!