Interview by Anne-Sophie Mongeau
Trailer for the new Pathfinder update for No Man’s Sky
The game No Man’s Sky was an ambitious project that presented considerable audio challenges, due both to its procedurally generated universe and to its style and art. How were those challenges reflected in the audio design and implementation?
Paul Weir (PW): From the beginning, I aimed to keep the ambiences as natural as possible, using lots of original recordings of weather effects and nature sounds. It was a sensible decision to use Wwise and drive the ambiences using the state and switch systems. The advantage of this approach is that you can relatively easily construct an expandable infrastructure into which you can add layers of sound design that respond to the game state.
With a game like No Man’s Sky you need to pass as much information as practical from the game to the audio systems in order to understand the environment and state of play. For example, what planet biome you’re on, what the weather is doing, where you are relative to trees, water or buildings, whether you’re close to a cave or in a cave, underwater, in a vehicle, engaged in combat and so on.
A simple example of how this information can be brought together without additional programmer support is the introduction of interior storm ambience. We have a control value (an RTPC in Wwise terminology) for ‘storminess’ and know whether the player is indoors or out. It was a simple job then to add different audio, such as shakes and creaks, when indoors and a storm is raging, without having to rely on a programmer to add this.
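The interior-storm example above can be sketched as a small piece of gating logic: a layer that only becomes audible when the player is indoors and the ‘storminess’ control value is high enough. This is a minimal illustrative sketch in Python, not actual Wwise API code; the names (`InteriorStormLayer`, `storminess`, `is_indoors`) and the threshold value are assumptions.

```python
# Hypothetical sketch of gating an interior-storm audio layer on game state.
# Not real Wwise code: in Wwise this would be an RTPC plus a state/switch,
# with no programmer support needed, as described in the interview.

class InteriorStormLayer:
    """Plays shakes and creaks only when indoors during a storm."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # minimum 'storminess' needed to trigger
        self.volume = 0.0

    def update(self, storminess, is_indoors):
        # Gate on both conditions; scale the layer with storm intensity.
        if is_indoors and storminess >= self.threshold:
            self.volume = storminess
        else:
            self.volume = 0.0
        return self.volume

layer = InteriorStormLayer()
layer.update(storminess=0.8, is_indoors=True)   # layer audible, scaled to 0.8
layer.update(storminess=0.8, is_indoors=False)  # silent outdoors
```

The point of the pattern is that once the game exposes the right values, new layered behaviours like this can be authored entirely on the audio side.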
It helps that nearly all of our audio is streamed, so I have few restrictions on the quantity of audio I can incorporate.
I wouldn’t usually use electronic sounds as much as recorded acoustic material, but given the sci-fi nature of the game, a lot of the obviously sci-fi features do use synth sounds, although often combined with real-world mechanical sounds. There’s a certain pride I take in recording unassuming everyday objects and using them for key sounds. For example, in the most recent update where we added vehicles, the buggy is my own unglamorous car, recorded using contact microphones; the hovercraft is a combination of a desktop fan and an air conditioning unit; and the large vehicle sounds come from programmer Dave’s Range Rover: I just dropped a microphone into the engine and we went for a spin around Guildford.
Apart from my usual rule of every sound being original, which I appreciate is in itself pretty dogmatic, I have no set approach as to where the sounds come from. It’s whatever works.
Can you define in a few words the difference between generative and procedural for the readers?
PW: There is no recognised definition for either term, so it’s not possible to definitively describe the difference. For me, generative means it is a randomised process with some rules of logic to control the range of values, it does not need to be interactive. Procedural is different in that it involves real-time synthesis that is live and interactive, controlled by data coming back from the game systems. This differentiation works reasonably well for audio but graphics programmers will no doubt have their own definitions.
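Weir’s working distinction can be made concrete with a toy sketch: a generative function is randomised within fixed rules and needs no game input, while a procedural one is computed live from data the game feeds back. Everything here is illustrative; the function names, ranges and formula are assumptions, not anything from VocAlien.

```python
import random

# Generative (per Weir's definition): randomised within rule-bound limits,
# with no need for interactivity or game input.
def generative_pitch(rng=random):
    # Rule: pitch offsets are confined to a fixed, musically safe range.
    return rng.uniform(-3.0, 3.0)  # semitones

# Procedural (per Weir's definition): computed live from game data.
def procedural_growl_rate(creature_size, excitement):
    # Hypothetical rule: bigger creatures growl slower; excitement speeds
    # things up. The game supplies both values in real time.
    base_hz = 4.0 / max(creature_size, 0.1)
    return base_hz * (1.0 + excitement)
```

The generative function could render its output offline; the procedural one only makes sense with a live data stream behind it, which matches the interactive-synthesis part of the definition.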
How much of the game’s audio is procedurally generated and how would you compare these new innovative techniques to the more common sound design approaches?
PW: Very little of the audio is procedurally created, only the creature vocals and background fauna. At the moment it’s too expensive and risky to widely use this approach, although there are several tools in development that may help with this. Procedural audio is just one more option amongst more traditional approaches and the best approach as always is to use whatever combination best works for a particular project.
Can you tell us about the generative music system (Pulse) – the goals, what it allows you to do, and its strengths compared to other implementation tools?
PW: Pulse, at its heart, is really just a glorified random file player with the ability to control sets of sounds based on gameplay mechanics. We have a concept of an instrument, which is an arbitrary collection of sounds, usually variations of a single type of sound. This is placed within a ‘canvas’ and given certain amounts of playback logic, such as how often the sound can play, its pitch, pan and volume information. When these instruments play depends on the logic for each soundscape type, of which there are four general variations: planet, space, wanted and map. So for example, when in space, instruments in the ‘higher interest’ area play as you face a planet in your ship or when you’re warping. In the map, the music changes depending on whether you’re moving and on your direction of travel.
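The instrument concept described above can be sketched as a rate-limited random picker over a set of sound variations, with per-instrument pitch and volume ranges. This is a hypothetical Python sketch of the idea only; the real Pulse tool authors Wwise data, and all names and values here are assumptions.

```python
import random

# Sketch of a Pulse-style 'instrument': a set of variations of one sound
# type, plus playback logic (trigger rate, pitch and volume ranges).

class Instrument:
    def __init__(self, sounds, min_interval, pitch_range, volume_range):
        self.sounds = sounds              # variations of a single sound type
        self.min_interval = min_interval  # minimum seconds between triggers
        self.pitch_range = pitch_range
        self.volume_range = volume_range
        self.last_played = -float("inf")

    def maybe_play(self, now, rng=random):
        # Respect the rate limit, then randomise which variation plays
        # and with what pitch and volume.
        if now - self.last_played < self.min_interval:
            return None
        self.last_played = now
        return {
            "sound": rng.choice(self.sounds),
            "pitch": rng.uniform(*self.pitch_range),
            "volume": rng.uniform(*self.volume_range),
        }

inst = Instrument(["pad_a.wav", "pad_b.wav"], min_interval=5.0,
                  pitch_range=(-2.0, 2.0), volume_range=(0.5, 1.0))
first = inst.maybe_play(now=0.0)   # triggers: returns a sound with settings
second = inst.maybe_play(now=2.0)  # too soon after the last trigger: None
```

A canvas would then hold many such instruments and decide, from the soundscape logic, which of them are eligible to trigger at any moment.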
We currently have 24 sets of soundscapes, so that’s 60 basic soundscapes, plus special cases like the map, space stations, photo mode, etc.
Pulse also makes the implementation of soundscapes relatively simple. Once you drag the wavs into the tool it creates all the Wwise XML data itself and injects it into the project, so you never manually touch anything to do with the soundscapes from Wwise.
In NMS, how are music and sound effects interacting together? What was your approach towards mixing the two, and do you have any recommendations on how to mix music and SFX dynamically?
PW: I always mix as I go, so the mix process wasn’t as difficult as you might expect, and as a PS4 title we’re mixed to the EBU R128 loudness standard.
Whilst there’s a lot of randomisation in the game, I always know the upper and lower limits of any sound and so over time you reach a reasonably satisfactory equilibrium in the mix. It helps a lot that we don’t have any dialogue. You also have to accept that you’re never going to have a perfect mix with this type of title, so just embrace the chaos.
I do have to be careful with the music though. 65daysofstatic like creating sounds with very resonant frequencies, so sometimes I use EQ to keep these from standing out too much. Similarly, I’ll take out sounds that are too noise-based, as they might be mistaken for a sound effect. On the whole though, 90% of what the 65’ers make goes straight into the game.
What’s your opinion on sourcing audio from libraries vs. creating original content?
PW: On larger projects I am most irritating in insisting that all of the audio is original and not a single sound is sourced from a library, if at all possible. It does depend largely on the game and practicalities but I’ve been able to do this on No Man’s Sky so far. On smaller projects or where time is of the essence, then obviously it makes sense to dip into libraries. Over the years I’ve amassed a large personal collection of sounds that I’m constantly adding to.
Can you tell us about the tools you used for NMS’s procedural/synthesised audio, what other software was involved in its creation?
PW: Early in development we used Flowstone to prototype the VocAlien synthesis component. Flowstone has the advantage of being able to export a VST so Sandy White, the programmer behind VocAlien, wrote a simple VST bridge to host plugins in Wwise. For release though it obviously needs to be C++ and cross-compile to PS4 and Windows. VocAlien is not just a synthesiser, it’s several components, including a MIDI control surface and MIDI read/write module.
From a more technical point of view, how was audio optimisation handled? Did using procedural audio improve CPU/memory usage?
PW: VocAlien is very efficient and on average our CPU usage is low. However due to the nature of the game, where we can’t predict the range of creatures or sound emitting objects on a planet, the voice allocation can jump around substantially. We have to use a lot of voice limiting based on distance to constantly prioritise the sounds closest to the player.
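The distance-based voice limiting Weir describes can be sketched simply: when active emitters exceed the voice budget, keep only those closest to the player. This is an illustrative Python sketch; the function name, data layout and budget value are assumptions, not the game’s actual implementation.

```python
# Sketch of distance-based voice limiting: rank sound emitters by distance
# to the player and keep only the closest ones within the voice budget.

def limit_voices(emitters, player_pos, max_voices):
    """emitters: list of (name, (x, y, z)); returns names allowed to play."""
    def dist_sq(pos):
        # Squared distance avoids a sqrt and preserves the ordering.
        return sum((a - b) ** 2 for a, b in zip(pos, player_pos))

    ranked = sorted(emitters, key=lambda e: dist_sq(e[1]))
    return [name for name, _ in ranked[:max_voices]]

emitters = [("creature_far", (100, 0, 0)),
            ("creature_near", (2, 0, 0)),
            ("waterfall", (10, 0, 0))]
limit_voices(emitters, player_pos=(0, 0, 0), max_voices=2)
# → ['creature_near', 'waterfall']
```

Re-running a prioritisation like this every update is what keeps the voice count bounded even when a planet spawns an unpredictable number of sound-emitting objects.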
What would you think is the best use of procedural audio? Would it be more adequate for some types of projects or sounds than others?
PW: Procedural audio, according to my suggested definition above, only makes sense if it solves a problem for you that would otherwise be difficult to resolve using conventional sound design.
It’s still a poor way to create realistic sounds. I’m not generally in favour of using it to create wind or rain effects for example. As a sound designer I find this a very functional approach to sound, ignoring the emotive qualities that natural sound can have. Wind can be cold, gentle, spooky, reassuring. There are complex qualities that we instantly react to with natural sounds, it’s a lot harder to do this with synthetic sound.
Finally, NMS’s audio is of such a varied nature and represents a massive achievement overall. Do you have a few favourite sounds in the game?
PW: Thank you, I’ll very gratefully take any compliments. Although it started off quite incidental, I like how we’ve managed to insert so many different flavours of rain into the game. I thanked Sean recently for letting me make SimRain; the game itself is incidental.
What gives me pleasure is knowing the everyday items that make it into the game, such as an electric water pump, vending machine, garage motor. I’ve included some examples of the raw sounds that were used as source material below.