Interview by Jennifer Walden, photos courtesy of Apple TV+. Note: Contains spoilers
https://www.youtube.com/watch?v=7Rg0y7NT1gU/
Imagine a future where everyone is born visually impaired. That’s the premise of the Apple TV+ series See. An outbreak of a deadly virus in the 21st century decimated the population, and those who survived lost their sense of sight, as did the generations that followed for centuries afterward. The idea of vision is considered a myth — one that’s even taboo to talk about. Those accused of having the sense of sight are hunted down and killed.
People have adapted to their visionless existence; their other senses have sharpened. They’re keenly aware of sounds, for instance, and use their hearing to help navigate the world and to hunt. They can even detect subtle emotional cues in others’ voices so well that they know when someone is lying. Sound tells them so much, and so it’s a vital aspect of the storytelling.
Here, supervising sound editor/sound designer/re-recording mixer Jeremy Peirson and MPSE award-winning re-recording mixer Michael Babcock at Warner Bros. Sound in Burbank, CA, talk about creating the hyper-realistic post-apocalyptic sonic world of See and how they used the Dolby Atmos surround field to immerse the audience in the characters’ unique experience of reality.
What were series creator Steven Knight’s goals for sound on See? Being that sound is such an integral part of this story, were you asked to come on to this show early?
Jeremy Peirson (JP): We didn’t deal directly with Steven Knight. I dealt more with director/producer Francis Lawrence, who I’ve had a long relationship with. He definitely wanted me to come in early. I spent about six weeks in the cutting room, probably close to a year before the first episode came out. I started on the show last November, doing pre-design/proof-of-concept work to sell the sound part of the early episodes to the studio. It all needed to land there, and that’s how we started.
Sound was definitely an aspect that needed to be worked out. There were a lot of logic issues we needed to figure out in the language of the storytelling and how we were going to go about laying out the blueprint for the show.
What were some of the initial sounds and scenes that you tackled?
JP: The big thing for me — and the first thing that they wanted — was the rock wall sequence in Ep. 1 “Godflame.” Unlike a conventional battle, you can’t have people running around screaming and yelling because that gives you away and it also limits your ability to hear where your next target is. So figuring out how to land sound cues for certain characters to know where to attack or when they should be listening for things, those issues came up early and we needed to figure them out based on how the picture was cut.
Baba Voss (Jason Momoa)’s tribe speaks their own language at times. Was that written out for loop group? What were some of your challenges in building the groups knowing you have this alternate language happening?
JP: The language was written out to a certain degree and there were certain things they discussed in pre-production, like how they were going to interact with each other and what words would trigger the big groups to halt, and that sort of thing.
In terms of group, we needed to figure out locational cues. Were they doing whistles or mouth clicking? That was tried and abandoned pretty quickly because it seemed a little too generic. We tried certain whistles for certain groups, like the Witch Finders would have a certain whistle when they wanted to grab somebody’s attention. It was a lot of trial and error, trying things in the group stage to see what stuck and felt real. Early on, we had people using bird whistles or bird calls but that became a little too forced. It didn’t seem real enough. It didn’t seem like that is what the people would be doing. So we abandoned that in favor of more fundamental whistles and the occasional mouth click. Really, it required a lot of trial and error.
https://www.youtube.com/watch?v=ROEajGEuev4/
SEE — Creating the World Featurette | Apple TV+
How much of that was guided by production? Were they whistling on-set? How about the hand claps and slaps, like when they strike their chests? Was that captured with the production sound on-set?
JP: The whistles weren’t, and some of the other sound cues weren’t, but a lot of the tapping and banging and snapping were all done on-set. That was part of their blocking. And that was captured pretty well in the production tracks. So we had that to use as a guide.
For instance, in the first episode there is a big overhead shot of Baba Voss’s tribe going through the woods on their way to do battle. There are a lot of people with whips who are searching for things; there are people tapping on rocks and trees, notating them to the people behind them. That was all in the production tracks and we did what we could to enhance it. Some of that was Foley and some were sounds that I recorded when there was a specific sound I was looking for, which included the whips. I went out to the woods near here and recorded whips — whips on trees, whips on logs, and sticks on those objects too. I went around being pretty noisy in the woods that day.
So much of what the characters experience is expressed through hyper-realistic sounds, such as lines through the settlement that rattle and clatter and the sound of Haniwa (Nesta Cooper)’s bow in the woods when she shoots the turkey. What was your approach to creating those hyper-realistic sounds?
JP: I was looking for props and objects that could sell the sounds of this world. One thing we discovered that wasn’t inherently clear right off the bat was that when we’re in the Alkenny village with Baba Voss and his community, their technology is wood- and bone-based. There are wooden windchimes and wooden rattles and bone rattles that respond when they shake the ropes.
When Jerlamarel (Joshua Henry) comes into the picture, and he’s leading the Alkenny tribe away, all of his items are metal-based. He has this new technology. He has access to this material. It’s a sonic contrast with all of the wood and bone the villagers are used to, and eventually it comes to guide them toward safety.
There’s a great scene in Ep. 3 “Fresh Blood” when Queen Kane (Sylvia Hoeks) allows Tamacti Jun (Christian Camargo) to take his own life. As he leaves the Queen’s room and ascends the stairs, you can hear the Queen’s music box playing, the roar of the waterfall outside, and Tamacti Jun’s footsteps up the metal stairs (which sometimes fall apart under his feet). Despite the sound of water rushing through the dam (which eats up a lot of sonic real estate), all the details important to the scene still come through in the mix…
JP: Thank you. Another thing to note is that the room was devised to be loud and have all of that water there, but there was a conscious choice to make it less about the water and more about what was going on inside. In particular, the floor in the Queen’s room was a very specific design. It squeaks. That tells her where people are and how many people are coming. It’s called a Nightingale floor, based on an old floor design from feudal Japan that was meant to be an anti-ninja defense system. It was like an early burglar alarm. The whole design is that it’s meant to squeak when someone walks across it. It’s a design they had been using for hundreds of years.
I like how they incorporated that into the script. That told us how to approach that aspect of the sound. I don’t think you can have a show like this, with this kind of concept, without the sound being grounded in the script and part of the production. It would be extremely difficult to make this entire scenario up in post if they didn’t give us a visual reference they wanted us to follow.
There was a great fight between Baba Voss and the slave traders in that episode. It was brutal, and rhythmic at times. Can you tell me about your design for that scene?
JP: We had to figure out how to have Baba Voss sneak up in there, and what sound cues he could use to know where to go and where to attack.
In terms of the rhythm of the fight, that was all designed in the picture cut. They had a good feel for that.
The big thing for me was that Baba Voss scatters a handful of gravel on the ground around him, and that helps tell him that people are coming up and where they are, and he can use that to attack. That was a fun scene to work on for a variety of reasons: there was a lot of violence and gore, but it was also about maximizing the amount of detail because it was effects-driven. We were there in full gory glory to have fun with it.
The other half of what makes the sound of this series so effective is the mix — how prominently the sound effects play and how detailed (or not) the environments are. The show is designed and mixed in native Atmos. How were you able to take advantage of all that spatial separation to help tell this story?
JP: Atmos gives you the ability to have such definition in a variety of locations. For me, one of the important things was knowing that much of the story was going to be set in nature. I spent five days in the woods recording cool new nature backgrounds in 7.0 so we have that fundamental starting point of this wide, detailed sound field. Once you put that in Atmos, and you start putting sounds into the ceiling speakers, it really puts you in that location.
In terms of the fights, Atmos allows for pinpoint resolution of where sounds should occur and where you want your character to be driven to attack his next target. Or is there something he needs to be aware of, like a metal chime from Jerlamarel? Or is there approaching armor movement from bad guys coming into frame?
Michael Babcock (MB): For me, the whole approach from the beginning was completely theatrical. My biggest challenge was that this was supposed to be a parallel futuristic world with no planes or modern-day transportation. It was a fairly large team effort between Tom Jones (who was responsible for much of the dialogue) and myself to deal with the location sound and lav mics — it’s the bane of my existence to make lav mics sound natural. To put them in immersive spaces, many of them exterior, Jeremy and I were using a bunch of reverb plug-ins like Stratus, Symphony, and R2 (from iZotope/Exponential Audio), and Cargo Cult’s Slapper. R2 has some of my favorite exterior slaps for forests. We were using the 3D versions of Stratus and Symphony, putting them into the upper speakers to give the sound a lot of depth. Even on the close scenes, they had a bit of extra special “immersive sauce” on them.
On the music side, composer Bear McCreary wrote a pretty electronic score, but in a way that didn’t come across as synth-heavy. It’s very electronic and moody. It does have themes that use acoustic elements: percussion, cellos, and singing. But that also had to be made immersive. Bear did a great job of serving the scenes, but there was definitely a need to spread out the tracks. When I first started the show, we had to decide where the music sat. That was one of the biggest challenges of the show. The music was scored like it would normally be for a battle scene, but you have to follow the action with sound effects, to reinforce what the characters are hearing. So we couldn’t be bombastic with the music. We had to work out the details and the dynamics of how that can work. That was a fun, interesting challenge.
What was really fun for me is that I got to do some scoring mixer duties. I had a bunch of stereo stems with different elements, and that allowed me to open the mix up into the Atmos space. There was one track that had a combination of low-end synth elements with ethereal, wide sounds. In order to get that to work, I had to feed it through a Waves UM226 (stereo-to-surround plug-in) so the low end would go to the front but not be collapsed in the center, and the immersive wide things would feed out into the room. There were other tracks, like string tracks, that I would turn into objects and widen out to bring them into the room a little bit. There were really reverberant things that were brought far into the room and into the upper speakers. There’s a theme that plays a lot for Jerlamarel and for when his name is said — it’s a solo female vocal theme. A lot of that plays in the upper channels.
In addition to all that, I found a really good setting that I tailored on Stratus 3D that made a space for all that stuff to live. It wasn’t too reverberant but it spread it out to all the speakers and into objects placed in the ceiling.
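As a side note on the stereo-to-surround move Babcock describes, the basic routing idea (keep the low end anchored to the front pair while the wide, decorrelated content feeds the room) can be sketched as a simple band-split/mid-side upmix. The Python sketch below is only an illustration under assumed crossover and level values; it is not how the Waves UM226 actually works.

```python
# Minimal sketch of a band-split / mid-side upmix of a stereo stem:
# the low end stays anchored up front, wide (side) content feeds the surrounds.
# Crossover frequency and gains are arbitrary illustration values.
import numpy as np
from scipy.signal import butter, sosfilt

def upmix_stereo_stem(left, right, sr, crossover_hz=120.0):
    """Return a dict of 5.1-style channel arrays: L, R, C, LFE, Ls, Rs."""
    sos_lp = butter(4, crossover_hz, btype="low", fs=sr, output="sos")
    sos_hp = butter(4, crossover_hz, btype="high", fs=sr, output="sos")

    mid = 0.5 * (left + right)            # centered content
    side = 0.5 * (left - right)           # wide / decorrelated content

    low = sosfilt(sos_lp, mid)            # the low end to keep up front
    side_hi = sosfilt(sos_hp, side)

    return {
        "L":   sosfilt(sos_hp, left) + low,   # fronts carry the body of the stem
        "R":   sosfilt(sos_hp, right) + low,
        "C":   np.zeros_like(mid),            # keep the low end out of the center
        "LFE": 0.5 * low,
        "Ls":  0.6 * side_hi,                 # wide content spreads into the room
        "Rs": -0.6 * side_hi,                 # phase-flipped copy for extra width
    }
```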
JP: Because we did the mix and edit in native Atmos, all the formats going down (be it 5.1 or stereo) benefitted from the width and depth we gave the Atmos mix. I listened to some of the initial temps I had done a year ago, and then to our Atmos version and the 5.1 mix we made from it, and the difference in what the 5.1 sounded like was dramatic. It had more width to it, and it made it a much richer experience to listen to. That being said, I’m a huge fan of Atmos. Anytime I can, I want to mix in native Atmos.
The re-rendering down is a pretty amazing tool we have now. Many of the streaming services have the Atmos mix as the primary deliverable. Theoretically, you could play everything through the Atmos format. There are some things to consider, but all in all it’s a pretty amazing format to work in.
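As a rough illustration of what folding a wider mix down to fewer channels involves, here is a conventional ITU-style 5.1-to-stereo downmix. The actual Atmos re-renders are produced by the Dolby renderer, so the fixed gains below are just the common textbook defaults, assumed for the sake of the example.

```python
# Illustrative ITU-style 5.1 -> stereo fold-down with conventional gains.
# The real Atmos re-render is handled by the Dolby renderer; this only
# shows the general "fold down with fixed coefficients" idea.
import numpy as np

def fold_down_51_to_stereo(ch, center_gain=0.707, surround_gain=0.707):
    """ch: dict of equal-length arrays keyed L, R, C, LFE, Ls, Rs."""
    lo = ch["L"] + center_gain * ch["C"] + surround_gain * ch["Ls"]
    ro = ch["R"] + center_gain * ch["C"] + surround_gain * ch["Rs"]
    # The LFE channel is commonly dropped (or attenuated) in a stereo fold-down.
    peak = max(np.max(np.abs(lo)), np.max(np.abs(ro)), 1e-9)
    if peak > 1.0:                        # crude protection against clipping
        lo, ro = lo / peak, ro / peak
    return lo, ro
```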
How did you divide up the Atmos object tracks? Did you designate a specific amount for effects, for music, and for dialogue?
MB: That’s pretty much what we did. We had to consider the workflow downstream from us because they were doing M&Es in Atmos. It was helpful to assign specific objects for each element for the entire show.
We did allocate a certain amount for dialogue, for music, and for effects. Sometimes there was a limitation as far as choosing what would be an Atmos object and what wouldn’t but it wasn’t necessarily limiting.
I wouldn’t mind having more object tracks not because I need more object tracks to swim things around the room, but because as a workflow solution it would be nice not to have to think which tracks are objects and which are not. I like to mix without thinking too hard about that. I think if you need more than 118 object tracks to actually play with sound-wise at the same time, then you are making something too sonically complicated.
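As a workflow aside, a fixed per-stem object assignment like the one Babcock describes can be captured in something as simple as a lookup table. The ranges in this Python sketch are hypothetical, not the show’s actual session layout; they only illustrate the bookkeeping (Atmos allows a bed of up to 10 channels plus 118 objects, 128 signals in total).

```python
# Hypothetical Atmos object allocation per stem. Signals 1-10 are reserved
# for the bed; the per-stem object ranges below are invented for the example.
ATMOS_OBJECT_PLAN = {
    "dialogue": range(11, 31),    # objects 11-30
    "music":    range(31, 71),    # objects 31-70
    "effects":  range(71, 129),   # objects 71-128
}

def stem_for_object(obj_id: int) -> str:
    """Look up which stem owns a given Atmos object number."""
    for stem, ids in ATMOS_OBJECT_PLAN.items():
        if obj_id in ids:
            return stem
    raise ValueError(f"object {obj_id} is not allocated to any stem")
```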
JP: For me, since I was also doing all of the sound design, I worked in Atmos in my cutting room so I was starting out in that world. I was pre-planning where effects were going to go and what elements I wanted to be on object tracks. Once we get to the stage, I’m not thinking about it too much aside from the newer considerations of what the music is doing in relation to the sound design, or if somebody comes up with the request to have a character’s voice coming from a back speaker or positioned in a different place.
What was the most challenging scene to mix in terms of effects?
JP: The last episode was probably the most challenging from a track count standpoint. There is a place called the silk farm, which we get to in Ep. 5 “Plastic.” That particular location — as simple as it seems — is probably the most complicated setup I had on the entire show, just because they’re in this abandoned barn with sheets that are hung up and flapping in this giant wind. The challenge was to sell what we saw on screen but to tone it back once they start talking intimately. You have to scale back the intensity and volume of the wind.
MB: There was some dicey dialogue because of that wind.
JP: Anything that was really loud was challenging strictly from a dynamics standpoint. Even in Atmos, you still have a hard limit that you can’t go above for QC. You want it to still sound cool without going over spec. The rock wall was a prime example of hitting the limit, and the dam break too. That to me was the most challenging thing.
What was the most challenging scene to mix in terms of the dialogue?
MB: It was probably the silk farm. The wind that you see blowing, that’s real. Jeremy used that to his advantage. He was able to design all of these great layers of wind gusts and sheets flapping, which also provided cover for exterior shots. But when you are inside and people are literally whispering some really important lines of dialogue, making that work with as much production sound as possible was challenging. There was some ADR done, but not a lot. In some ways, that was the biggest dialogue challenge because I had to be pretty aggressive with noise reduction. Tom [Jones] did too. But it couldn’t sound processed. That also made matching ADR more challenging because you had to make some nice-sounding ADR sound like what is going on in the production tracks.
There were many similar situations throughout the show where the characters are next to something loud but the silk farm is definitely one of the top two or three locations that were the most challenging.
I want to reiterate — lav mics to me never sound natural. Getting lav mics to sound natural without over-compressing or over-processing them is not easy. It’s great when it works and you can high-five the air. But scenes like that are really great for the mixing chops.
To clean up the tracks, Tom used iZotope RX, but sparingly, to help remove clicks and pops and do a little bit of noise reduction. I like to use Waves WNS. I find that I have to re-brighten after it goes through the WNS, but you can get pretty aggressive with it without it sounding processed. Those were the biggest tools. But I also used the McDSP SA-2, which was a great protective device to make really pokey things not so pokey. I like to keep dialogue fairly bright, especially since the series is intended to play in a near-field environment. I kept the dialogue on the bright side, and the McDSP SA-2 made it feel warm and bright without feeling harsh.
What was the most challenging scene to mix in terms of the music?
MB: It was any scene where it felt like the music needed to soar, which of course is when most of the sound effects were happening. The challenge was dealing with dynamics. For instance, the dam break in Ep. 5 “Plastic” and the first battle were probably the most challenging because of dynamics. You want to get all of the great details in the music through, but at the same time you don’t want it to feel pushed (and you don’t want to hit the limiters too hard). And we can’t go over the spec levels, otherwise we’d get in trouble with QC.
The score had a lot of low end in it, which I like a lot and at times enhanced with subharmonics. I used that to poke through, but when you have a huge, dense sound effects scene, like the dam breaking, and you need to hear certain dramatic things, I really had to go sound by sound and decide whether we needed to hear that particular thing or not. Those were probably the biggest challenges.
In terms of sound, how was See a unique experience for you?
JP: It was unique because we got to play and live in that world. All the backgrounds got to be loud and proud. You got to experience what it was like for the characters living there. We got to cover a wide range of approaches, from naturalistic environments to a pseudo high-tech albeit broken-down remnant of a more advanced time. We got to try a lot of things and we had the space to let it breathe. This was unique because sound was integral to the storytelling process. Typically the audience is supposed to be unaware of our (the sound team’s) presence but in this particular case our presence was part of the character of the show. In that respect, this was a lot of fun.
MB: I agree. This was a chance to build a world where sound is really the most important thing. There are beautiful visuals: it’s shot very well. It’s directed very well and it’s a very interesting story. The fact that we got a chance to build a world and come up with rules for what that world sounds like and how integral sound is for the storytelling was fun.
JP: It was a great canvas for dynamics and detail.