Interview by Jennifer Walden, photos courtesy of Microsoft Game Studios/Asobo Studios/Pole Position Productions/Dynamedion
If you’re learning to fly, but you ain’t got wings… then you’ve probably test piloted the aircraft in Microsoft Flight Simulator. This sim has been around since MS-DOS, and it’s the longest-running and most comprehensive amateur flight simulator created for home use. The most recent release (and eleventh in the series), Microsoft Flight Simulator 2020, is slated to be the first version available for a gaming console — specifically the newest Xbox sometime in 2021.
Microsoft Flight Simulator 2020 (developed by Asobo Studio, published by Xbox Game Studios) is a massive game, requiring 150 GB of free space, but it has amazing features: a fully mapped Earth with real-time weather and traffic, day/night cycles, over 20 different planes with realistic control surfaces (all those glorious switches and knobs), thousands of airports to land at, and so much more. You can fly (pretty much) anywhere, even Antarctica or Area 51. And you can experience your flights from inside the cockpit or outside the plane and, of course, the sound will match your POV.
Here, Microsoft Game Studios Audio Director Boyd Post, Asobo Studios Audio Director Aurélien Piters and Lead Audio Designer Emmanuel Doulut, Dynamedion Managing Director Pierre Langer, Lead Recordist (& Sourcing Manager) Patrice Börding and Recordist & Turbofan Engine Sound Designer Severin Schwarz, Pole Position Production Recordist Mats Lundgren, and BOOM Library COO Axel Rohrbach talk about capturing, creating, and implementing audio assets to design an immersive flight experience that’s incredibly realistic.
Microsoft Flight Simulator – Official Trailer
There are a lot of moving parts to the sound of Flight Simulator 2020 . Where did you start? How did you allocate responsibilities for collecting the sounds and creating the game audio systems to make Flight Simulator 2020 the most realistic, immersive experience possible?
Boyd Post (BP): From the very beginning of this project it was clear that we were being charged with the challenging task of creating a next-generation audio presentation for what was going to be the future of flight simulators.
So, as with all challenging projects, we had to start by putting together the best team possible for this task.
When I first joined the project I was very excited to learn that we were going to be working with Asobo Studio. Aurélien [Piters] and I have worked together in the past and I knew that they would be the perfect fit all around — not only for their overall engineering expertise but also for the fact that they have a talented in-house audio team with lots of experience using Wwise.
Once we had determined how ambitious the scope of our recording needs were going to be, we decided to add additional support to help with recordings and turbine simulations. Dynamedion and Pole Position were the obvious choices for us and we really didn’t look any further. Both Asobo and Microsoft have had lots of experience working with both companies and we know that their vast experience and commitment to quality would make them a great addition to our team.
Aurélien Piters (AP): When we started working on the new Flight Simulator, the first thing that came to our minds was the huge legacy of the product; the franchise is more than 30 years old (even older than Windows) and has millions of loyal users.
So before working on audio, we studied the legacy of Microsoft Flight Simulator; the planes and how they behave, the flight simulator community, and of course we played the previous version of Flight Simulator. We had a lot to learn, and expectations were really high!
The previous iteration of the product was Flight Simulator X, which came out in 2006. Our task was to take this legacy and evolve it into a next-gen simulation.
Fortunately, that ten-year gap was a huge one for interactive audio tools and the available middleware had gotten much more sophisticated. Wwise has been integrated into our in-house game engine for around ten years, so we naturally went for this solution. Lucky for us it’s also the perfect fit for the project.
As for the audio design, the most important thing to consider was that we were not developing a “game,” but rather a simulator. This totally changed our approach to audio and systems design. We needed to focus on “re-creating” rather than “creating,” to make an experience that sounds as authentic as possible and not just creating pleasing or imaginative sounds.
Our audio is driven by a lot of real-time parameters that come directly from the simulation. This was the key to creating an evolving and realistic soundscape. Any change in the pilot’s input, the plane’s flight characteristics, or the weather can be reflected in the audio which really helps to create an immersive experience.
BP: At some point early on, we realized that we needed a way to verify the results of all this work. We had spent a lot of time and effort finding and recording specific planes under very prescriptive conditions, and implementing the results with a very consistent method: how could we measure our success towards our goal of simulating the “real thing”?
The answer for us turned out to be A/B comparisons. We produced A/B videos for every plane that compared the real-world version side by side with the same plane in our sim. This gave us the confidence to know that we were hitting our goal, or sometimes that we needed to go back and make adjustments.
Mats Lundgren (ML): From Pole Position’s side, we had a professional pilot on the team, Claes Martinsson, who has very good connections within the pilot community, and he was able to source the right planes for each session.
He set it up so that each plane that was on the agenda would fly in from its respective home-field at very specific times and then land at very specific pre-chosen spots along the landing strip. This way we were able to get fly-bys, landings, and taxiing in one sequence. After landing, the plane then halted at a spot that was already set up for the on-ground recording.
We had a really tight schedule that allowed us approximately 2 ½ hours before the next plane would fly in for landing, and we managed to record three planes per day/session with this setup.
Looking at the recording side, how many different planes did you need to capture for Flight Simulator 2020? (Were you able to pull assets from past versions of Flight Simulator? Did you source sounds from libraries or past projects? Or did you have to start fresh for this game?)
Emmanuel Doulut (ED): We needed 30 original planes for the release version, including 5 turbofans.
Since we were only recording prop planes, this meant we would need to source and record at least 25 turbo-prop or piston aircraft.
We did look at legacy assets, but the most recent recordings we had from earlier versions of Flight Simulator dated back to 2006, and even though some of these recordings were really good, they didn’t meet our specific needs. We quickly realized we would have to start from scratch and record all original content if we were going to reach our goal of defining a next-gen audio standard.
We needed more perspectives to create a breathing 3D audio mix, multi-channel layers, more variations and longer loops (especially as you frequently maintain a steady RPM while flying in an aircraft simulation).
In practice many more aircraft than this were recorded, either for reference or research purposes. Some were experiments with microphone placement, and some were collected as reference for distance attenuation and flyby sound behavior analysis, or simply to expand our sound library for possible future needs.
We tried to divide the work according to the sourcing availability of the aircraft. For those included in the release version, most of them were easy to find in France, and 20 of them were recorded by the Asobo audio crew (Sacha Wittorski, Robin Cugniet, Aurélien Piters and myself).
For the remaining aircraft, we reached out to Pole Position and Dynamedion, who were a great help due to their vast experience in the field recording of planes and motorized vehicles, their flexibility, and their contacts in the aviation world.
All aircraft sound different depending on their engine, propeller specs, exhaust type, fuselage, cockpit, and materials. In order to faithfully recreate the audio experience, we captured everything we could get in our field recording sessions: engines, hydraulics, electrical and mechanical systems, Foley for any physical controls, cockpit impulse responses, and so on.
No existing sound library could provide what we needed, especially for all the specific planes featured in Microsoft Flight Simulator.
Beyond the recording process, we quickly came to understand that listening and “feeling” a plane in real life was the key for creating an authentic audio experience.
Speaking with pilots was also invaluable. They were able to provide us with specific information about the planes; how they sounded in flight and under certain conditions.
What did your recording setups look like — what mics did you choose and where did you place each mic to capture what specific sound?
Patrice Börding (PB): We captured the aircraft from all possible angles and distances. Most important was to get clean and isolated sounds from the engine, exhaust, and propeller. A whole bunch of different mics were used for this: a lot of Sennheiser mics from the MKH series, like 8040s in X/Y, an 8050 with an MKH 30 in M/S, and two 416 shotguns. Then we used several DPAs, from the 4007 for the props to the 4062 and 4061 for the exhaust, and some dynamic mics like the Audix D6, Sennheiser MD421, and Shure SM57, among others.
For the interior we used a Sennheiser Ambeo and DPA 4061 in quad.
And as always various handheld recorders were placed all around the plane on more distant and experimental positions.
ML: The people at Asobo and Microsoft had a very clear vision of what they needed from these recordings and were very keen on getting all details needed to be able to create the level of immersion that they were aiming for.
Before we even sourced and booked the planes and recording sites, we had long discussions with them about which mics to use and what maneuvers and takes we needed to capture.
We already had a whole number of plane recordings in our portfolio that they were able to study, so starting from those it was easy to tailor the new recordings to perfectly match Flight Simulator’s needs. For instance, we needed even smaller intervals between each steady RPM than we’d done before, and we also made some changes in our interior setup to better accommodate their needs.
After each session, they would come back with extremely detailed feedback, so we were able to raise the bar even further for the next session.
From a miking point of view, we’ve always found that using shotguns on each side of the plane, pointing from the wings forward towards the engine, gives a really good impression of the engine and exhaust, an essence of the plane as a whole.
So in the past, we’ve used different types for that, but for the Flight Simulator recordings we used Sennheiser MKH 8060s. A surprising finding was that the Shure VP88 was really good at capturing the propellers; at first it was mostly a hit-and-miss choice, but in the end we used it on all recordings.
The exhausts are usually the hardest thing to record on airplanes, since they sit directly in the wind-flow, and you have to find ways around that. So we attached mics to the fuselage, fitted with massive amounts of wind protection and hidden in every way possible, and we also set up a number of stands at different distances with mics pointing directly at the exhausts. This gave us as many backups as possible, as well as very different characteristics from the various positions. I would say that in almost all instances we were able to get clean material from every position.
Then, to really get the details right, we added a couple of bass microphones, the Moon Mic and AKG D12.
AP: Our recording process is very detailed and features lots of channels and perspectives. This was necessary to ensure that we were accurately capturing the sound of the plane from every perspective — both inside and out.
We experimented a lot and chose the microphone for each perspective based on how well it captured the specific qualities we were looking for.
Even so, every recording provided some surprises and we would find that some microphones had performed better on a specific plane. Our approach was similar to miking a modern drum-kit and then at mix choosing the best channels based on what the mix needs: more bass? More air? Each mic was able to provide different colors in the mix.
When it comes to cockpit perspectives, we created a small portable rig (pictured) that allows us to record in both quad and ambisonic simultaneously. It turned out to be really useful as cockpits tend to be very small and cramped spaces. It also allowed us to capture a consistent overall cockpit experience between planes.
What were some challenges in capturing these plane sounds? What was the trickiest plane to record?
PB: Although we recorded while the aircraft were on the ground, we had to deal with a lot of wind from the propellers; it’s even stronger than when recording the onboard sound of a supercar driving at 300 km/h. Airplanes produce a lot of wind exactly where you would ideally like to place your microphones, so you need to protect them properly from it.
Sometimes a single windfur (e.g., Rycote) will not be enough when the mic needs to be in the direct stream. It’s best to find spots where there is less wind and to point the mics away from it.
The trickiest plane for us was the Extra 330LT. It has so much power that when recording the highest RPM it will not stay still on the ground but will start to rumble and move a bit. Also, there are no good spots to place the microphones. The exhaust is directly behind the prop and in the windstream so that the propeller is audible and most dominant.
Axel Rohrbach (AR): To deal a bit better with the wind, we developed a special windscreen used with the DPA 4061 and 4062 lav microphones. We 3D printed the screen a bunch of times to make some adjustments to the air damping and manually added an inner layer of really thin windscreen and an outer layer of fur. It also had three nice little “feet” so it could be duct-taped easily onto the body of the planes. This helped a lot specifically for the exhaust microphones.
ML: One of the more difficult ones in my opinion was the twin-engine turbo-prop plane that we recorded. With the gas turbine blending in, there are so many different sounds on those that you have to distinguish. On top of it all, it was a really big piece of machinery so we needed to have really long lengths of cables to reach all the different spots that we wanted to capture.
ED: I totally agree with Patrice [Börding] and Mats [Lundgren] about propeller wind and turboprop aircraft. In addition, we encountered the following challenges:
• Cockpit recordings can be a challenge in small planes. We are trying to isolate the sound of the engine; however, the reality is that there are many other noises we don’t want: rattles and squeaks generated by the vibration of the engine, seat squeaks, instrument noises, etc. Sometimes we had to press our hands against the windows to stop them from rattling!
• We wanted to get recordings of the full RPM range of each aircraft, but reaching the highest RPMs was occasionally difficult to achieve as the engine temperature would rise too quickly on the ground. We often needed to wait for the temperature to drop between takes. After a while we discovered that the best workaround for this is to record the highest RPMs first, just after ignition.
• When trying to capture the reverse thrust position on some specific planes such as turboprops, you can be caught off guard by the propwash, which changes direction violently and can easily knock your mic stands over. The takeaway is, never forget to bring sandbags, tent pegs, and tape to secure your microphones.
• The electronic systems of modern aircraft can also create issues. For example, when we were recording the TBM-930 we encountered electromagnetic interference on a lav mic we had attached close to the PT6 turbine. Fortunately, we were able to swap it out for a dynamic mic that wasn’t as sensitive to the interference. When it comes to recording planes, the more mics you have in your kit, the more prepared you are!
We also found that the more cooperative the pilot, the aircraft mechanics, engineers and all the external contributors are the easier it was for us to deal with all the small issues we encountered during our recording sessions.
Did you have specific nomenclature for cataloging what you recorded, for ease of searchability and implementation later on?
ED: Absolutely. As we were usually recording over 16 tracks per take and frequently more than 2 takes per plane (sometimes over 4 takes for twin-engine aircraft) we definitely needed to set up a good naming convention.
For the raw files exported from the recording session and going into the audio editor, we used the following:
Plane name_Engine name_Interior or Exterior_Mic name_Mic positioning_Take number
For the edited files in Wwise, it was:
Plane name_Engine name_Interior or Exterior_Mic name_Mic positioning_Engine state (RPM, startup, starter, ignition, shutdown, etc.)
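As a rough illustration of how a convention like this keeps thousands of takes searchable, here is a hypothetical helper that assembles both name formats (the plane, engine, and mic values are made-up examples, not actual project assets):

```python
# Hypothetical helpers illustrating the naming convention described above.
# All field values below are invented examples, not real asset names.

def raw_take_name(plane, engine, location, mic, mic_position, take):
    """Name for a raw file exported from the recording session."""
    return f"{plane}_{engine}_{location}_{mic}_{mic_position}_Take{take:02d}"

def edited_asset_name(plane, engine, location, mic, mic_position, engine_state):
    """Name for an edited file going into Wwise (engine state replaces take number)."""
    return f"{plane}_{engine}_{location}_{mic}_{mic_position}_{engine_state}"

print(raw_take_name("CubX3", "O-360", "EXT", "MKH8040", "PropLeft", 2))
# -> CubX3_O-360_EXT_MKH8040_PropLeft_Take02
print(edited_asset_name("CubX3", "O-360", "INT", "Ambeo", "Cockpit", "RPM2400"))
# -> CubX3_O-360_INT_Ambeo_Cockpit_RPM2400
```

Consistent, machine-buildable names like these are what make batch renaming, searching, and Wwise import scripting practical at this scale.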
Did you capture impulse responses of cabin interiors?
PB: Yes, we did capture IRs of the cockpits. In the beginning, we tested different impulses, like balloon bursts and sine sweeps, to see which would sound better and which was most convenient to use. The first recordings we did were at an aerobatic flight training camp. After we popped two balloons inside a stunt plane, we had to find every little piece of the broken balloons, which took forever; otherwise, it would have been much too dangerous for the stunt pilot to take off and perform extreme maneuvers. Besides that, sweeps simply sounded better and gave a much more detailed reverberation.
AP: We recorded ambisonic IRs allowing us to adapt to any channel output in the sim. Ambisonic is just great for that purpose, as it allows you to really capture a 3D IR.
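For readers unfamiliar with how a captured IR is used afterwards: the dry signal is convolved with the recorded response to place it in that space. A minimal single-channel sketch (a real pipeline would use partitioned FFT convolution, and in this case four ambisonic IR channels rather than one):

```python
# Minimal sketch: applying a captured impulse response by direct convolution.
# The toy IR values are illustrative; real cockpit IRs are thousands of samples.

def convolve(dry, ir):
    """Direct convolution of a dry signal with an impulse response."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

# A unit impulse through any IR reproduces the IR itself -- a handy sanity check.
ir = [1.0, 0.5, 0.25]       # toy "cockpit" IR
print(convolve([1.0], ir))  # -> [1.0, 0.5, 0.25]
```

This is also why sweeps beat balloon pops in practice: a deconvolved sweep yields a cleaner, higher-resolution IR to feed into exactly this operation.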
You can land the plane anywhere in the world? Cool! And the soundscapes are specific to each location on Earth? For these environmental sounds — that also change from day to night sonically — did you source sounds from libraries? Did you hire sound recordists in specific locations to capture the environments nearest them? What was your approach here?
ED: The world is a really important aspect of MSFS and the soundscape definitely has its role to play in the simulation. This is especially true of VFR flights, where you can land in the Alaskan bush and enjoy the sunset!
The main challenge for us was the scope of the world. We couldn’t manually place audio emitters over the entire world, and we obviously couldn’t accurately cover all the ecosystems and regional cultures of the planet. It quickly became clear that we would have to design a dynamic and generative system based on biomes if we were to cover the whole world with any kind of accuracy.
We started with a lot of data. This included the legacy terrain data from Flight Simulator X, as well as additional layers from both Bing and BlackShark, and our live weather data. We then narrowed it down to the specific data that would be useful for our biome system.
For any given location we start with the live weather conditions: temperature, precipitation, wind direction and speed, time of day and the season. We also consider the urbanization rate, traffic location and density, the size of each airport and proximity to it, as well as ecological classification and regional culture data.
We then consider what biome we are in. Our world is divided into biomes such as alpine, cold and hot desert, tropical, tundra, taiga, tempered, savannah, Mediterranean, and so on. Each biome has its own unique set of layers for fauna, insects, bed layers, and material textures like grass or dust.
The simulation data informs the assets used in a given layer, achieving a completely modular, living sound environment. Of course, much of this data isn’t static, so the sonic elements within a given location evolve continuously as the environment changes.
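As a purely hypothetical sketch of how such a biome-driven system might assemble its layers (the biome names, layer names, and rules below are illustrative, not the shipped data set):

```python
# Hypothetical sketch of biome-driven ambience layer selection.
# Biomes, layer names, and rules are invented for illustration only.

BIOME_LAYERS = {
    "taiga":    {"fauna": "birds_boreal",   "insects": None,           "bed": "wind_forest"},
    "savannah": {"fauna": "birds_savannah", "insects": "crickets_dry", "bed": "wind_grass"},
}

def ambience_layers(biome, hour, precipitation):
    """Pick the active ambience layers for a location from simulation data."""
    layers = dict(BIOME_LAYERS[biome])
    if not (6 <= hour < 20):        # night: swap diurnal fauna for a nocturnal set
        layers["fauna"] = layers["fauna"] + "_night"
    if precipitation > 0.0:         # rain masks insects and adds its own bed
        layers["insects"] = None
        layers["rain"] = "rain_on_" + ("canopy" if biome == "taiga" else "grass")
    return {k: v for k, v in layers.items() if v}

print(ambience_layers("savannah", 23, 1.0))
```

The real system naturally weighs many more inputs (season, urbanization, airport proximity), but the shape is the same: simulation data in, a modular set of layers out.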
Of course, in addition to creating the system we also do a lot of work digging through our recordings and libraries to identify, edit, and classify all of the assets we need to bring this system to life. It’s an ongoing process.
What about the variable winds, turbulence, other air sounds you had to record for this game?
ED: Wind sounds provide important flight feedback and are an essential feature in a flight simulator, so we wanted a meaningful and living sound design for each wind source.
Beyond the simulation feedback, they also add drama to the soundscape which contributes greatly to the immersion.
In MSFS, we have two types of wind that are both physical elements and are therefore fully dynamic.
The first type is the weather wind, which is dynamically placed in 3D around the listener depending on the gusts’ origin and driven by the simulation’s wind velocity variable. We have many layers that gradually widen as you get closer to the ground to depict the wind’s impact on physical objects on the ground (e.g., trees, grass, buildings, ocean waves).
The second type is the airflow: kinematic wind feedback on the aircraft itself. These layers are driven by the aircraft velocity and acceleration variables to emphasize true airspeed, vertical speed, and pitch, roll, and yaw movements.
For both types of wind we wanted the sounds to be fully dynamic, and we needed full control of the sounds’ parameters. We therefore opted for a hybrid approach, using the Wwise SoundSeed Air plug-in, which allows us to hook all the variables we need into most synth parameters, combined with true wind recordings that provide a more natural texture.
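The layered, velocity-driven behavior described here can be pictured as wind speed driving per-layer gains, much like an RTPC curve in Wwise. A minimal sketch with invented layer names and thresholds:

```python
# Illustrative sketch (not the shipped Wwise setup): simulated wind speed
# drives the gain of stacked wind layers, each with its own crossfade window.

WIND_LAYERS = [
    # (layer name, fade-in start, full-gain speed), both in m/s
    ("gust_light",   0.0, 10.0),
    ("gust_medium",  8.0, 20.0),
    ("gust_strong", 18.0, 35.0),
]

def layer_gains(wind_speed_ms):
    """Linear fade per layer; returns a gain in [0, 1] for each wind layer."""
    gains = {}
    for name, lo, hi in WIND_LAYERS:
        g = (wind_speed_ms - lo) / (hi - lo)
        gains[name] = max(0.0, min(1.0, g))
    return gains

print(layer_gains(12.0))
```

Overlapping fade windows (8-10 m/s, 18-20 m/s here) are what make one layer hand off smoothly to the next instead of switching abruptly.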
In addition to wind sounds, and for both weather turbulence and kinematic airflow gameplay elements, we needed to depict the physical stress on the fuselage and the vibration of solid objects inside the cockpit. To do so, we added many rattle and squeak layers in several material types, driven by acceleration on the three axes.
A fun fact here is that some of these rattles and squeaks were recorded directly on light aircraft by the audio crew physically shaking the plane.
You can hear examples of our wind system in the Feature Discovery Series about soundscape:
Feature Discovery Series Episode 5: Soundscape
Any other sounds you recorded for Flight Simulator 2020?
ML: Thanks to "our" pilot Claes [Martinsson], we managed to get access to a commercial aircraft simulator with a fully authentic cockpit. That way we could record the Foley of a massive airliner, which in itself is a luxury, but we could also have the warning systems go off. Those are sounds you really don’t want to hear in a real situation, and they are really hard to come by. We were able to capture them completely realistically.
We also recorded extensive Foley for each plane. On one occasion the recording schedule got so tight that we had to make an extra trip to an airfield and do it separately for one of the aircraft.
ED: As Mats [Lundgren] says, we needed a lot of Foley to sonically represent every switch, lever, rudder pedal, fuel pump, flight control (e.g., flaps, ailerons, rudder, elevator), all electronic buzzes, and gyro. We took great care to make sure we had accurate recordings for all of these sounds.
While we were on location we also captured additional recordings that were useful, including rattles in the cockpit, rain from the interior of the cockpit, radio communication (for reference) as well as hangar and airfield ambiences from the cockpit using our multichannel and ambisonic microphone setup.
AP: As sound designers, we of course faked some Foley! For example, our concrete tire skids were recorded with a hot water bottle, as this was simply the best way to get the different intensities needed for our dynamic landing sounds.
What were the most challenging technical aspects of creating the sounds in-game for Flight Simulator?
AP: In FS, almost every sound is driven by simulation variables. If the variables change or are unstable it can make the results sound totally off. This was a real challenge for us, as the simulation itself was continually evolving during development, which meant we had to constantly revisit our implementation to correct for these changes in the data and make sure it was sounding good.
This is the price you pay when you create an interactive soundscape based on physics.
Beyond the creation of our own sounds for FS, we also needed to create an SDK that allows third parties to tweak or even add sounds for their own planes.
MSFS is more than a simulation, it’s a platform where content creators or even simmers can tweak almost anything, so not only did we create sounds for our simulation, but we had to make sure we also created an architecture around our sounds that was meant to be used by content creators.
How did you update the Turbine plug-in to work within Flight Simulator to generate the engine sounds?
AR: We did not really use the Turbine plug-in from BOOM Library itself, but rather the simulation technology behind it, to create completely new and specific engine sets for Microsoft Flight Simulator.
We have a highly advanced engine here, with well over 40,000 parameters to dial in. One of the biggest tasks is first finding out the physical details of a specific turbofan engine, to understand the sound the engine produces in reality: the RPM of every stage and their relation to each other (most jet engines have more than one fan per engine), how many fan blades there are, the proportions of the engine, and the resulting airflow sound characteristics. Only once that has been researched are we able to reproduce the correct harmonic structure, resonating RPMs, and potential howls due to flap positions.
Using this method, we were able to deliver a set of single sound elements that are impossible to get from real recordings, yet essential for a runtime interactive simulation of jet planes. On their own, these parts sound very strange, but after implementation it all comes together nicely.
The results were exported in loops, featuring similar RPMs as the prop planes plus very specific “RPMs of interest” for integration.
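For context on the harmonic structure described above: the dominant tonal content of a fan stage sits at its blade-pass frequency and integer multiples of it. A small illustrative calculation (the RPM and blade count are example values, not figures from any specific engine):

```python
# One physical detail behind a turbofan's harmonic structure: the blade-pass
# frequency (BPF) of a fan stage is (RPM / 60) * blade count, with audible
# harmonics at integer multiples. The numbers below are illustrative only.

def blade_pass_harmonics(rpm, num_blades, count=3):
    """First `count` harmonics of a fan stage's blade-pass frequency, in Hz."""
    bpf = rpm / 60.0 * num_blades
    return [bpf * n for n in range(1, count + 1)]

# e.g. a fan spinning at 3,000 RPM with 24 blades:
print(blade_pass_harmonics(3000, 24))  # -> [1200.0, 2400.0, 3600.0]
```

Each spool of a multi-stage engine contributes its own BPF series, which is why getting every stage's RPM and blade count right matters for the modeled sound.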
BP: When we started looking into how we would simulate the complex sounds of turbofans, we were faced with a number of difficult challenges — especially if we wanted to approach them in the same way we had been recording prop aircraft.
First, there was the logistical and legal challenge of getting access to these planes. After a lot of inquiry, it turned out that, unlike smaller plane manufacturers, neither Boeing nor Airbus were able to just let us roll onto the tarmac and record these planes in the very specific ways that we needed to. Combine this with the obvious difficulty of recording such large and loud sources, the potential danger to recordists and equipment, and the prohibitive cost of renting a 747 for several hours (I’m guessing at the cost, as this is not something anyone offers!), and we had a real problem to solve.
It was about this time that BOOM approached us with an early version of their turbine modeling tech. We started talking with Axel about the results we wanted to achieve and he assured us that he and his team could pull it off.
After checking out some examples of the results they were able to achieve, we were super impressed and determined that modeling was likely the best approach for us. We decided a good test of the technology would be the Boeing 747-8i which was at an early stage of development at this time.
ED: Once we had the modeled assets from the Dynamedion team, we had to bring them to life in our sim. We used a number of parameters to drive the implementation: fan RPM, torque, thrust, reverse thrust, bleed air, etc. We also used atmospheric conditions such as ground elevation, air pressure, and so on.
All these parameters were combined into a system that would transform our looped assets into a convincing sonic result. To add to the realism, we added a lot of dynamic post-processing in Wwise.
For example, the combustion and whine loops from BOOM were then distorted, EQed, and dynamically pitched based on the variables coming from the simulation in real time.
Wwise post-processing allowed us to easily manage the filtering and sound proofing that would be experienced when hearing the engines from the pilot’s perspective inside the cockpit.
Indeed this part was a bit tricky, because in the cockpit of an actual modern airliner the sound of the turbines is almost obscured by the sound of the fans, air conditioning, and all the other instruments in the cockpit.
This was one of those times when we needed to find the right balance between creating an accurate simulation of the real world result while also creating an experience that would satisfy the expectations of simmers who have come to expect that they will hear some feedback from the engines when they push the throttle.
Can you tell me more about the improved Doppler effect processing? How did you arrive at the best possible solution for each instance of Doppler effect?
AP: Our Doppler is based on the real-life Doppler formula for accurate flybys in-game; it was the only way to make sure our Doppler was accurate! We also did many A/B videos of real-life flybys vs. in-game flybys.
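The real-life formula referenced here is the classical Doppler shift for a moving source: the perceived frequency scales with the source's velocity component along the line toward the listener. The sketch below is an illustrative implementation of that textbook formula, not the sim's actual code; positions and velocities are in meters and meters per second.

```python
# Classical Doppler shift for a moving source and a static listener.
# Illustrative sketch only; coordinates/velocities are example inputs.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def doppler_ratio(source_pos, source_vel, listener_pos, c=SPEED_OF_SOUND):
    """Return the pitch ratio f_observed / f_emitted."""
    to_listener = [l - s for l, s in zip(listener_pos, source_pos)]
    dist = math.sqrt(sum(d * d for d in to_listener))
    if dist == 0.0:
        return 1.0
    # Radial speed of the source toward the listener (positive = approaching)
    v_radial = sum(v * d for v, d in zip(source_vel, to_listener)) / dist
    return c / (c - v_radial)
```

For a flyby, evaluating this each frame naturally produces the characteristic pitch sweep: the ratio is above 1.0 while the aircraft approaches, passes through 1.0 abeam the listener, and drops below 1.0 as it recedes.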
We, of course, worked a lot on distances and perspectives that we wanted to be as faithful as possible. The key was once again to find the sweet spot between reverberation and dry signal, and to use simulation parameters to drive reverberation parameters.
How did Wwise help to simplify or improve the sound integration process?
AP: Wwise has been integrated into Asobo’s in-house engine since 2010, so the Wwise pipeline and low-level technologies were already in place and we are very familiar with them. This allowed the audio implementation to be achieved entirely by audio designers. When we needed new features such as Doppler or 3D directional wind, we relied heavily on our talented audio programmer Thomas Rocher.
In terms of audio integration, what results are you most pleased with? What has given you the greatest sense of accomplishment in terms of sound on this game?
AP: We are obviously very proud of our aircraft audio integration. When we read comments from real pilots about how accurate they think their aircraft sounds in the sim, this really feels like the ultimate accomplishment.
It’s also really rewarding to see long-time simmers landing in the countryside, cutting their engine and just enjoying the beauty of the world. The ambient sound of the world is something we’re really proud of!
BP: The public’s response to FS has been really rewarding for all of us who have been working on this project for so long. It has confirmed for me that our commitment to accuracy and authenticity was the right approach and has been recognized and rewarded.
Furthermore, I personally love it when audio can fulfill its role as a vital part of a complex experience — enhancing the immersion, while not overtly drawing attention to itself. I think we’ve achieved this in what we have created so far and I’m excited to take it further in the future!
What were some of your challenges in mixing Flight Simulator 2020? What did your mix hierarchy look like? What elements took precedence in the mix?
ED: Our goal was to make our mix as close to a real-world flight experience as possible. Of course, being a software simulation, some adjustments were necessary.
The actual dynamic range of an aircraft engine is so large that it would be impossible to represent it at the same scale in software. We had to find the sweet spots and dynamic EQ curves that create the feeling of that full-throttle engine sound while retaining a limited dynamic range that would work for playback systems.
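One simple way to picture this remapping is a soft-knee curve that squeezes a real-world SPL range into the narrow window a playback system can handle. The numbers below (a roughly 60–140 dB SPL input range mapped onto 30 dB of output headroom) are assumptions for illustration, not the sim's actual mix values.

```python
# Illustrative sketch: compress a real-world SPL range into a limited
# playback range with a soft knee. All numeric ranges are assumptions.

def remap_level(spl_db, real_min=60.0, real_max=140.0,
                out_min=-30.0, out_max=0.0, knee=0.6):
    """Map a real-world SPL value (dB) to an output gain (dBFS).

    knee < 1 lifts quiet-to-mid levels so detail survives, while loud
    levels are squeezed toward the top of the limited output range.
    """
    # Normalize the input level into [0, 1], clamping out-of-range values
    t = min(max((spl_db - real_min) / (real_max - real_min), 0.0), 1.0)
    return out_min + (out_max - out_min) * (t ** knee)
```

The point of the exponent is exactly the "sweet spot" trade-off described above: a full-throttle engine still reads as dramatically louder than idle, but the quiet end of the range is not buried below the noise floor of a typical listening setup.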
Our mix hierarchy was natural and obvious: engines are the most emphasized element in the mix because they are simply louder than anything else.
The engine audio also provides the most important aircraft feedback: RPMs, injected fuel, ignition, startups and shutdowns, propeller positions, reverse thrust, or overstressed engines. These are the sounds a pilot needs to hear clearly to operate the plane.
Then, the second most important element is radio communications. They need to be audible even at full RPM and on all kinds of audio systems. To enable this, we implemented a virtual filter that emulates the noise reduction circuit in modern pilot headsets. This mode is the closest to a real-world flight experience.
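One common way to guarantee that comms stay audible over a full-RPM engine is sidechain ducking: attenuate the engine bus in proportion to the comms signal level. The sketch below shows that general technique; it is a hypothetical illustration, not the sim's actual headset-emulation filter, and all thresholds are made-up example values.

```python
# Hypothetical sidechain-ducking sketch: attenuate the engine bus based on
# the comms signal level so radio traffic stays intelligible at full RPM.
# Threshold, ratio, and ducking depth are illustrative assumptions.

def duck_engine_gain(comms_level_db, threshold_db=-40.0,
                     max_duck_db=12.0, ratio=0.5):
    """Return the attenuation (dB, >= 0) to apply to the engine bus."""
    over = comms_level_db - threshold_db
    if over <= 0.0:
        return 0.0  # no comms activity: engine bus plays at full level
    # Duck proportionally to how far comms exceed the threshold, capped
    return min(over * ratio, max_duck_db)
```

A real implementation would also smooth the gain changes over time (attack/release) so the engine bed doesn't pump audibly, but the core idea is the same.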
Users can also choose to separate out radio communications from other aircraft and environment sounds and route them to different audio outputs. This allows for the use of headphones for comms, while keeping other sounds “in the cockpit.”
Next comes physics audio, which gives feedback on the flight model’s behavior, user operations, landing quality, and so on. These sounds convey a sense of speed and acceleration, kinematic moments, smooth or hard landings, the severity of the weather, etc.
And last but not least, all of the other elements that play an important role in reinforcing immersion — for instance, the audio from other aircraft you encounter in the world that brings life to airports and multiplayer sessions.
Finally, while we did spend a great amount of time creating our own mix, we also gave the user lots of control. We added quite a few volume sliders in the menu that allow users to create or modify their own mix, something the simming community is known for wanting.
What has this game taught you about the potential for sound and the use of sound in simulations? How can sound further help enhance simulations and add to the educational experience of what a simulation can provide?
AP: Audio has a huge potential for simulation. The benefits of an audio experience that is connected to and driven by the simulation in real-time are a really powerful aspect of the user experience. Simulators used for training actual pilots are already using audio as part of the experience. We are just taking it to the next level!
BP: We have actually put a certain amount of effort into emphasizing that we are a “simulation” rather than a “game,” mostly because of the connotations that those words convey to the user. That said, I think that many experiences considered to be “games” could also benefit from a “simulation” approach, based on generative systems and informed by real-time variables.
Human beings have an intimate and innate understanding of the way sound works in the world. The closer we as software developers can come to replicating that relationship in our experiences, the better we will be at drawing them into our worlds and both entertaining and educating them.