Interview by Jennifer Walden, photos courtesy of Square-Enix Japan

There’s so much pressure when remaking a classic that the end result can only be diamonds or dust. You either come out of the experience with a shiny new version that everyone wants or it’s landfill fodder.
Either way, fans of the original will no doubt have strong opinions. That’s just par for the course.
SQUARE ENIX CO., LTD recently released FINAL FANTASY 7 REMAKE for PS4 — an updated and re-imagined version of the beloved RPG classic that came out in 1997 — and fans are loving the stunning soundtrack and visuals.
The remake doesn’t strictly retell the original story; instead, it’s a more in-depth exploration to be told in parts. So, this first part focuses on Midgar. And Part 2 — now in full development — will continue the saga.
Here, we talk to members of the Square Enix sound team — Makoto Ise (Sound Director), Keiji Kawamori (Music Supervisor), and Atsushi Suganuma (Supervising Sound Editor/Audio Director) — about their approach to remaking a classic, finding inspiration from the original and forging new sounds for awesome new boss battles, developing systems to handle abundant dialogue, music, and effects assets, mixing to maximize player immersion, and much more!
FINAL FANTASY 7 REMAKE – Final Trailer | PS4
What was your approach to updating the sound of FINAL FANTASY 7 for this remake? (The sound assets from 1997 were probably only useful as a reference!) Any iconic sounds that you recreated?
Makoto Ise (MI): The overview for this project was that it’s “an exploration of a hybrid between the original and new sounds so that even non-hardcore players can enjoy it without dishonoring the original game.”
As for the music, from the inception of the project we tried to build a plan where players can hear as much music as possible throughout the game, like in the original. But at the same time, we tried not to overdo it and break the immersion by playing too much music in-game.
With the sound design, we tried not to hang too much on the original sound effects and decided to concentrate on how we could conceptualize the world of Midgar with a modern approach. We took our reference from the 2005 CG animated film FINAL FANTASY VII ADVENT CHILDREN, because I was involved with that film as a Foley artist and my experience from it influenced this project as well.
Of course, the updated visuals in the Remake opened the door for more sonic detail, both for the characters and for the environments. Can you talk about the depth of the character sounds? Did you cover those with Foley?
MI: To be able to handle real-time changes in any situation, our team did their best to optimize the proprietary sound driver we carried over from previous titles.
Our proprietary sound system (Motion Automated Sound Triggering System) enables each character to play most of its movement sounds automatically in real time. Integrating this system was a true win; it worked tremendously well at expressing very subtle movement sounds for each character in real time.
From a design aspect, we focused on designing character movement sounds to be as detailed as possible. First, every material for each character was broken down and recorded in Foley sessions.
After that, we started to implement every asset for each body part (neck, shoulders, arms, torso, hip, thigh, ankle, and feet). These body parts can have separate sound parameters, including variable envelope info.
Then, all of these assets for each character were assigned to the corresponding bone so that the specific asset plays when that bone moves. The triggering calculation is based on the acceleration of each bone’s movement. With this sound system, we were able to avoid the huge amount of manual labor that would otherwise be necessary to create each individual character sound. This system, in fact, manages most of the main characters’ sounds for exploration, battle, and cut-scenes in this game.
The advantage of this system is that it can effectively handle animations with many motion-blend transitions. Because the system does not depend on animation data, character sounds are unaffected by revisions and modifications to that data and still play in sync. Even in cut-scenes, once the character is set into the game, the sound plays in sync with any current build, and picture changes have no effect on the sound.
Also, unlike a traditional workflow, the sound team did not have to wait for picture-locked rendered cut-scenes to be delivered and then manually edit the whole sequence. Especially in the many lengthy conversational cut-scenes, we did not have to cover the whole scene manually, which eliminated a huge chunk of the workload. But I have to admit that this system has its limitations and cannot cover every situation, especially cut-scenes where Foley requires us to exaggerate the reaction in an aesthetic sense. In those scenarios, we recorded Foley for each cut-scene in the traditional way.
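To make the idea concrete, here is a minimal sketch of how per-bone, acceleration-driven triggering like this could work. It is purely an illustration with assumed names, structures, and thresholds, not the team’s actual driver code:

```cpp
// Minimal sketch (assumption, not Square Enix's actual driver): per-bone movement
// sounds triggered from bone acceleration, independent of animation data.
#include <algorithm>
#include <cmath>
#include <string>
#include <unordered_map>

struct Vec3 { float x, y, z; };

static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Hypothetical per-bone state: previous position/speed plus the asset assigned to this bone.
struct BoneSoundState {
    Vec3 prevPosition{};
    float prevSpeed = 0.0f;
    std::string assetId;          // e.g. "cloth_thigh_heavy" (illustrative name)
    float accelThreshold = 4.0f;  // tuned per material / body part
};

class MotionSoundTrigger {
public:
    void RegisterBone(const std::string& bone, const std::string& asset, float threshold) {
        bones_[bone] = BoneSoundState{{}, 0.0f, asset, threshold};
    }

    // Call once per frame with the current world-space bone positions.
    void Update(const std::unordered_map<std::string, Vec3>& bonePositions, float dt) {
        for (auto& [name, state] : bones_) {
            auto it = bonePositions.find(name);
            if (it == bonePositions.end() || dt <= 0.0f) continue;

            float speed = Length(it->second - state.prevPosition) / dt;
            float accel = (speed - state.prevSpeed) / dt;

            // Trigger when the bone accelerates past its threshold; scale gain with intensity.
            if (accel > state.accelThreshold) {
                float gain = std::min(1.0f, accel / (state.accelThreshold * 4.0f));
                PlaySound(state.assetId, it->second, gain);
            }
            state.prevPosition = it->second;
            state.prevSpeed = speed;
        }
    }

private:
    // Stand-in for whatever the audio engine exposes to spawn a 3D one-shot.
    void PlaySound(const std::string& asset, const Vec3& pos, float gain) {
        (void)asset; (void)pos; (void)gain;  // forward to the engine in a real project
    }
    std::unordered_map<std::string, BoneSoundState> bones_;
};
```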
Atsushi Suganuma (AS): Due to the huge amount of material that we needed to record for this game, we had to organize literally weeks of Foley sessions. We planned traditional sync-to-picture Foley sessions to cover cut-scenes aesthetically, as well as material-recording sessions for in-game use.
Because I had also been involved in developing this system on previous AAA titles, I knew exactly what kind of recordings should take place and what the final production sound would have to be. That made it much easier for me to organize and manage the workflow of all the Foley sessions, sound editing, in-game mixing, and the implementation process afterward.
Let’s look at the environmental sounds. What were your challenges in creating the sounds of Midgar? Did you record custom sounds to create this world? Any useful sound libraries?
MI: Constructing the spatial audio environment for this game was truly tough work for me because the whole process gets complicated when you have to implement each indoor room reverb, variable sound parameters, and distance attenuation filtering all manually.
As for custom field recording, because this game takes place entirely in Midgar, we felt no need to pursue new recordings. Instead, we used our custom field-recording library from past projects.
What were some challenges in building the dialogue systems, particularly during battle? You switch between characters when attacking and they talk to each other, taunt, comment on health, and so on….
MI: For character dialogue, it was tough purely due to the large volume of lines we had to manage for this game. We also had the challenge of managing a simultaneous worldwide release in Japanese, English, German, and French. This tough experience led us to build a specific system to handle multi-language asset management.
During a battle, character dialogue varies depending on which enemy group the player encounters. We accomplished this by building a specific dialogue system that enables us to create any ensemble voice combination with any of the main characters. Of course, ADR sessions were recorded especially to achieve this, so we were able to cover every combination.
Also, character dialogue can change depending on what kind of situation the players are in, what command was set for the battle, and which ability and magic they choose to use.
The only downside for our team was that we needed to build a special debugging tool to check every possible outcome for the dialogue.
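For illustration only — a rough sketch under assumed names and data structures, not the actual FINAL FANTASY 7 REMAKE dialogue system — selecting a localized battle line from the active party, the enemy group, and the situation might look something like this:

```cpp
// Minimal sketch (assumption): battle-banter selection keyed by party members,
// enemy group, and situation, with per-language VO asset variants.
#include <algorithm>
#include <cstdlib>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

enum class Situation { EnemySighted, LowHealth, AbilityUsed, Victory };
enum class Language { JA, EN, DE, FR };

struct BanterLine {
    std::string speaker;                                    // e.g. "Cloud" (illustrative)
    std::vector<std::string> requiredPartyMembers;          // ensemble constraint
    std::string enemyGroupTag;                              // "" means any enemy group
    Situation situation;
    std::unordered_map<Language, std::string> assetByLang;  // localized VO asset IDs
};

class BattleBanter {
public:
    void Add(BanterLine line) { lines_.push_back(std::move(line)); }

    // Returns a localized asset ID for a line matching the current battle context, if any.
    std::optional<std::string> Pick(const std::vector<std::string>& party,
                                    const std::string& enemyGroup,
                                    Situation situation,
                                    Language lang) const {
        std::vector<const BanterLine*> candidates;
        for (const auto& line : lines_) {
            if (line.situation != situation) continue;
            if (!line.enemyGroupTag.empty() && line.enemyGroupTag != enemyGroup) continue;
            bool partyOk = true;
            for (const auto& member : line.requiredPartyMembers) {
                if (std::find(party.begin(), party.end(), member) == party.end()) {
                    partyOk = false;
                    break;
                }
            }
            if (partyOk) candidates.push_back(&line);
        }
        if (candidates.empty()) return std::nullopt;
        const BanterLine* chosen = candidates[std::rand() % candidates.size()];
        auto it = chosen->assetByLang.find(lang);
        return it != chosen->assetByLang.end() ? std::optional<std::string>(it->second)
                                               : std::nullopt;
    }

private:
    std::vector<BanterLine> lines_;
};
```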
There are tons of ways to attack an opponent in this game, from spells and abilities to different weapons and moves for each character. How did you break down (and keep track of) what sounds you needed to create for each character’s attacks? What were your favorite attack sounds for each playable character? What went into the design?
MI: We had 4 to 5 sound designers working on the battle sounds. I cannot speak on their behalf because each member has their own setup, but in my personal setup, I usually design in a DAW with VST synths like Omnisphere by Spectrasonics and Komplete 12 by Native Instruments. Commercial libraries like UVI Meteor and Whoosh FX were used frequently. I also have a personal, custom library from projects going back decades, and I combine that with the latest libraries to edit and create something new.

My favorite attack sound is a limit break called “Cross Slash.” From the inception of the project, new and fresh attack sounds were required and I worked extra hard to create this sound.
I deliberately made it into three parts because it can pause in between. This attack sound is embedded into the VFX directly so it can be positioned in the exact 3D position in the game.
Let’s look at some opponents/enemies because there are some fun boss fights in this game! What went into the sound of: Scorpion Sentinel?
MI: This is the first boss you encounter in this game.
We decided to go for a bit of an old and rusty mech sound rather than the latest slick mech sound.
Each movement sound was divided into small parts and randomized to implement well into the game. Its footsteps use the same sound system as the main characters. Again, this helped to reduce our manual labor since this boss has six legs.
This boss’s missile attack sound uses a particular sound-limitation system to achieve a big, punchy sound while keeping a consistent loudness level.
Shiva:
We had a hard time creating sound design for Shiva.
All of Shiva’s abilities consist of ice elements, so it was tough to differentiate each ability. Also, Shiva’s movement contains ice sounds that are quite similar to the rest of the sound design, which was a headache for us to solve. In the end, we decided to characterize each ability with a unique impact sound to set them apart. We were happy with what we accomplished — a sound that represents both the iciness and the subtlety of Shiva.
Abzu:
Abzu is a boss from the original game who has a comical side but there’s also a berserk side of the character as well.
Because the encounter with Abzu happens twice in the game, we needed extra assets to differentiate between the two fights. Some special animations exist in two versions, and that required extra detailed work.
The chain on each arm is triggered by the same sound system that we used for the main characters; by calculating the acceleration of its movement, it can play different sounds for the chain’s hits and its movement.
Eligor:
Eligor is a new boss in this game and has several different attack abilities.
Eligor was hard to create one distinct sound for because it’s part horse, part wheel, part human, and it flies around as well. The terrain sound was created from both a wheel sound and a flying sound, crossfaded in each situation using collision detection.
We worked especially carefully on the ability where Eligor uses a rain of spears. We added an extra whoosh sound when the players are attacked to create more tension in the battle.
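As a rough illustration of the wheel/flight crossfade — an assumed structure driven by a ground-contact flag from collision detection, not the shipped implementation — the idea could be sketched like this:

```cpp
// Minimal sketch (assumption): crossfading a wheel loop and a flight loop
// based on whether collision detection reports ground contact.
#include <algorithm>

struct LoopVoice {
    float gain = 0.0f;
    void SetGain(float g) { gain = g; }  // stand-in for the audio engine's voice control
};

class EligorTerrainLayer {
public:
    // Call every frame; 'grounded' comes from collision detection.
    void Update(bool grounded, float dt) {
        const float fadePerSecond = 2.0f;                  // full crossfade in ~0.5 s
        float target = grounded ? 1.0f : 0.0f;
        blend_ += std::clamp(target - blend_, -fadePerSecond * dt, fadePerSecond * dt);
        wheelLoop_.SetGain(blend_);                        // wheel layer while on the ground
        flightLoop_.SetGain(1.0f - blend_);                // flight layer while airborne
    }

private:
    float blend_ = 1.0f;  // 1 = fully wheel, 0 = fully flight
    LoopVoice wheelLoop_;
    LoopVoice flightLoop_;
};
```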
Jenova Dreamweaver:
Jenova is a new boss in this game and I guess many players who played the original must have been surprised with this Jenova.
During the battle, Jenova fights with huge tentacles, but the idea of managing many tentacles in one battle space was technically a big challenge for us. We went through a bit of trial and error in controlling the audio range for these tentacles. Sound design-wise, we tried to give the tentacle sounds as much of a liquid, fleshy feel as possible. We actually gave up on the idea of having an extra sound for cutting off the tentacles due to the cost and the memory limitations.
On the music side, the score was beautifully structured to reflect how powerful this boss battle can be. The music starts in the cut-scene and plays seamlessly throughout the boss battle.
The boss battle has three phases. It begins with a broody-sounding arrangement to show how difficult this boss is to eliminate. Then, in the third phase of the boss battle, the music is carefully structured to emphasize that Jenova is at full strength, with Jenova’s theme playing throughout. In the cut-scene just after the boss battle, the music is scored to sync perfectly with the picture.
Bahamut:
You can fight Bahamut in the battle simulator with the characters who have the ability to summon Bahamut. We carefully adjusted the audio range, as Bahamut can fly around over such a large area. For Bahamut’s feather sounds, we attached them to the left and right bones individually. By doing that, we were able to create a large-scale sound with just the right amount of width. We wanted the sound of Bahamut to match precisely where it was emitted from in 3D rather than making it phantom stereo.
In my opinion, Bahamut might have ended up sounding larger than any other boss in the main story.
What was your favorite fight to design? What went into it?
MI: My favorite sound to design was “Tactical Mode” during battle, where every sound plays in slow motion. We achieved a good sound for “Tactical Mode” by using specific Reverb and Delay parameters that we sent via different bus routing. We also paused all the environmental sound and decreased the volume and pitch of the VO to make a great slo-mo effect, though it was hard to get right. You can experience the full effect of “Tactical Mode” by muting the music during battle.
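To picture what such a mode switch involves — a minimal sketch with assumed bus parameters and illustrative values, not the game’s actual mix snapshot — entering and exiting a slow-motion mix state could be organized roughly like this:

```cpp
// Minimal sketch (assumption): applying a "Tactical Mode" mix state by adjusting
// bus sends/levels and the VO pitch, then restoring them on exit.
struct BusParams {
    float reverbSend;
    float delaySend;
    float volume;
    float pitch;
};

class TacticalModeMix {
public:
    void Enter() {
        // Pause ambience, push VO down in level and pitch, open up reverb/delay sends.
        ambiencePaused_ = true;
        vo_  = {0.4f, 0.3f, 0.7f, 0.85f};  // illustrative values, tuned by ear in practice
        sfx_ = {0.5f, 0.4f, 0.8f, 1.0f};
        Apply();
    }
    void Exit() {
        ambiencePaused_ = false;
        vo_  = {0.1f, 0.0f, 1.0f, 1.0f};   // back to the normal battle mix
        sfx_ = {0.2f, 0.0f, 1.0f, 1.0f};
        Apply();
    }

private:
    void Apply() { /* push vo_/sfx_/ambiencePaused_ to the audio engine's buses */ }
    bool ambiencePaused_ = false;
    BusParams vo_{};
    BusParams sfx_{};
};
```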
Music is a big part of this game — not just for the fights but for the exploration and cut-scenes too. From a tech standpoint, how did you handle the transition between pieces of music to make the music feel reactive and reflective of the player experience?
Keiji Kawamori (KK): Because FINAL FANTASY 7 REMAKE lets players play seamlessly throughout the game, we were careful not to interfere with the flow of the music, so that it, too, could play seamlessly. We let the music lead the way. We wanted the music to fully reflect how the gameplay changes, and we paid extra attention to how the emotional cues change.
You can see this especially in “Sector 8,” where the gameplay takes the characters down through Sector 8. During this part of the game, you will hear smooth transitions between three different arrangements of “Those Who Fight” (combat, non-combat, and in-crisis) depending on the state of the combat. During the last combat, you can hear the entire theme of “Those Who Fight” for the first time since the beginning of the game in Mako Reactor 1, to maximize the player’s emotional experience.
The other example is the scene where you first visit Aerith’s family home. You can already hear the intro of the music in exploration mode before entering the home, and once you are inside the house, the music transitions into another arrangement to make the player experience richer. Then two more transitions happen when you exit the house and leave to resume exploration.
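As an illustration of the underlying idea — a minimal sketch with assumed states and asset names, not the team’s actual music system — switching between arrangements of one theme as the player moves between gameplay states could look like this:

```cpp
// Minimal sketch (assumption): switching between arrangements of one theme
// (exploration / interior / combat / crisis) with a short crossfade so the
// music never hard-cuts as the player moves between states.
#include <string>
#include <unordered_map>

enum class MusicState { Exploration, Interior, Combat, Crisis };

class ThemeArrangements {
public:
    void Register(MusicState state, std::string assetId) {
        arrangements_[state] = std::move(assetId);
    }

    void OnStateChanged(MusicState next) {
        if (next == current_) return;
        // Fade the current arrangement out and the next one in; in practice the
        // transition point would be chosen to respect bars/beats of the score.
        CrossfadeTo(arrangements_.at(next), /*seconds=*/2.0f);
        current_ = next;
    }

private:
    void CrossfadeTo(const std::string& assetId, float seconds) {
        (void)assetId; (void)seconds;  // stand-in for the engine-specific transition call
    }
    MusicState current_ = MusicState::Exploration;
    std::unordered_map<MusicState, std::string> arrangements_;
};
```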
We tried our best to craft as emotional a player experience as we could, not just through visual perception but through sound perception as well, so that players feel each character’s emotional changes throughout the entire game.
Looking at the mix, you have these battles that are extremely intense, with a ton of music and effects and dialogue happening all the time. What elements took priority in the mix?
MI: The basic priority for the mix is Dialogue (high) > Music (low) = Sound Effects (low).
The priority of music and SFX changes depending on the situation the players are in. For example, SFX has higher priority than music during battle scenes, but music has higher priority during a cut-scene. Generally speaking, throughout the entire FINAL FANTASY series, dialogue has been the highest priority in the mix hierarchy, and for this game in particular we tried our best to keep dialogue as the highest priority in the mix as well. Especially during battle scenes, we wanted players to hear important lines as clearly as possible. The reason we challenged ourselves to make the music slightly louder than normal was that we wanted all the devoted fans of the original game to be able to hear the music crystal clear in the mix.
AS: The other important thing I want to point out about the mixing is that, conceptually, the sound of this game is closer to the Japanese anime genre than to Hollywood-style action films. We spent many months of trial and error just to get the right feel for what this game should sound like, mixing whole chapters a number of times. We worked incredibly hard on our mix to make the transitions between pre-rendered cinematics, cut-scenes, exploration, and battle scenes as seamless as possible. Putting this particular mix priority into practice was very effective across all of the gameplay, giving the mix a smooth consistency.
Mix-wise, how did you keep the battles under control yet exciting? Anything unusual/innovative about the systemic ducking rules?
MI: Of course we had a sound limitation system in place, otherwise, the whole soundtrack would end up extremely saturated.
For instance, when a sound is triggered repeatedly, it is prioritized by how often it is played, when it is triggered, how far away it is triggered, how many instances of the same sound are already playing, and so on.
With this sound limitation hierarchy, the system enables us to control the total number of sounds that play as well as the total gain. The other technique worth mentioning is that, especially during a battle scene, we limit the number of sounds by applying certain audio ranges, attenuation, and category-based bus routing.
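To make the prioritization idea tangible — a minimal sketch with assumed scoring weights and category caps, not the game’s actual limiter — a per-category voice limit with repeat/distance/age-based stealing could be structured like this:

```cpp
// Minimal sketch (assumption): limiting simultaneous voices per category and scoring
// candidates by distance, age, and how many copies of the same sound already play,
// so the busiest battles stay under control.
#include <cstddef>
#include <string>
#include <vector>

struct ActiveVoice {
    std::string assetId;
    std::string category;  // e.g. "battle_sfx", "ambience"
    float distance;        // to the listener
    float age;             // seconds since triggered
};

struct VoiceLimiter {
    int maxPerCategory = 16;

    // Lower score = first to be stolen when the category is full.
    static float Score(const ActiveVoice& v, int sameAssetCount) {
        float score = 1.0f;
        score -= 0.05f * v.distance;      // distant sounds matter less
        score -= 0.10f * v.age;           // older one-shots matter less
        score -= 0.20f * sameAssetCount;  // many copies of the same sound matter less
        return score;
    }

    bool CanPlay(std::vector<ActiveVoice>& active, const ActiveVoice& incoming) {
        int sameAsset = 0;
        std::vector<std::size_t> inCategory;
        for (std::size_t i = 0; i < active.size(); ++i) {
            if (active[i].category == incoming.category) inCategory.push_back(i);
            if (active[i].assetId == incoming.assetId) ++sameAsset;
        }
        if (static_cast<int>(inCategory.size()) < maxPerCategory) return true;

        // Steal the weakest voice in the category if the newcomer outranks it.
        std::size_t weakest = inCategory.front();
        for (std::size_t idx : inCategory) {
            if (Score(active[idx], sameAsset) < Score(active[weakest], sameAsset)) weakest = idx;
        }
        if (Score(active[weakest], sameAsset) < Score(incoming, sameAsset)) {
            active.erase(active.begin() + static_cast<long>(weakest));
            return true;
        }
        return false;
    }
};
```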
In order for players to hear conversations clearly, ducking is applied in the mix whenever important subtitled dialogue plays. On top of ducking, we adjusted things through different bus routing to smooth everything out.
We actually applied ducking during “Tactical Mode” as I mentioned earlier.
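For readers who want a concrete picture of dialogue-driven ducking — a minimal sketch with assumed gain values and ramp rates, not the shipped mix logic — a smoothed duck on the music/SFX buses while a line is active could look like this:

```cpp
// Minimal sketch (assumption): duck the music/SFX buses while an important subtitled
// line is playing, with a gentle attack and release so the dip doesn't pump audibly.
#include <algorithm>

class DialogueDucker {
public:
    void SetDialogueActive(bool active) { dialogueActive_ = active; }

    // Call every frame; returns the gain to apply to the music/SFX buses.
    float Update(float dt) {
        const float duckedGain    = 0.5f;  // roughly -6 dB, illustrative
        const float attackPerSec  = 4.0f;  // fast-ish dip when the line starts
        const float releasePerSec = 1.0f;  // slower recovery when it ends
        float target = dialogueActive_ ? duckedGain : 1.0f;
        float rate = (target < gain_) ? attackPerSec : releasePerSec;
        gain_ += std::clamp(target - gain_, -rate * dt, rate * dt);
        return gain_;
    }

private:
    bool dialogueActive_ = false;
    float gain_ = 1.0f;
};
```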
Since there was a team of composers for this game, were there music mastering guidelines that helped to make sure all the different composers sounded consistent?
MI: For mixing the music, we outsourced the mixing work to a single mixing engineer to retain consistency. Before implementing any music, a mastering process was done to keep consistency even at the asset level. Every composer did their best to compose the music at a consistent level, and then we applied a mastering process to the music ourselves.
FINAL FANTASY 7 REMAKE used the UE4 engine. Was this a good fit for the sound team? Why?
MI: The UE4 engine was truly a good fit for this game.
We were able to share our know-how among our sound division from different game titles that had already been developed using UE4.
The other advantage was that our audio programmers were able to develop several extension functions for UE4 easily, and our proprietary sound driver worked well and smoothly with UE4.
Technically, what was your biggest challenge in creating the sound for FINAL FANTASY 7 REMAKE?
MI: Personally, my biggest challenge was how difficult it was to build and develop several sound systems to control each element of dialogue, music, and sound effects in real-time.
In my opinion, especially in recent game development, creating the best sound you can for the game is very important. But more than before, I find it important to build the right sound spec that matches perfectly what you have created in the game. As you know, for example, it is a nightmare to have so many sound assets controlled by physics in the game, as we did here. But as much as it was a real challenge for me, I really feel a sense of accomplishment at being able to build the right sound spec to control such a nightmare situation.
In terms of sound, what are you most proud of on FINAL FANTASY 7 REMAKE?
MI: What I am most proud of on FINAL FANTASY 7 REMAKE is that we were able to establish a great hybrid between the original game and this brand-new game, without dishonoring the original, and the game achieved a new evolution for the series. I had a sense of accomplishment from successfully managing technically challenging tasks. Through the development of this game, all of the technical achievements and know-how from our previous AAA titles over a few decades directly led to the technical advancement of our proprietary sound driver.
I feel incredibly honored that this game was created not by me as an individual, but by our entire sound division.
A big thanks to Makoto Ise, Atsushi Suganuma, and Keiji Kawamori for giving us a behind-the-scenes look at the sound of FINAL FANTASY 7 REMAKE, and to Jennifer Walden for the interview!