Written by Doug Siebum, photos courtesy of Matt Morris
Doug Siebum (DS): Hi Brandon and thanks for joining us. Can you tell us about your background and how you got started in film sound?
Brandon Proctor (BP): Absolutely, my background was going to college and trying to play music. I had one recording class at American River College in Sacramento. It took me a couple of years to get into it because it was always full. Once I got in, I immediately thought “this is what I want to do”. So then I wanted to take every recording class I could. I had to figure out what four year college I was going to move to. I was looking up mostly California schools such as Northridge, USC, and SF State. I decided to stay local and go to SF State, which was only a couple of hours from Sacramento, so that I could be near family and friends. I was excited to move to San Francisco.
I transferred in as a Music major. I wanted to take the recording classes that were in the Broadcasting department (BECA). Most of those classes I couldn’t add. I had to add them from within the class itself (get an add number from the teacher). Because I wasn’t in the major, I wasn’t supposed to be able to take them. I added all of them. Everyone seemed to think that I was a Broadcasting major, because I didn’t tell them otherwise. I kept taking all of the broadcasting classes that I could, to get that major, because they wouldn’t let me transfer into it. The program was full.
I became the Audio Engineering Society (AES) chairman and I started working on the education committee for NARAS. They put on the Grammys. That’s where I met Leslie Ann Jones. I started doing anything I could to meet as many people as possible in the industry. When I had enough credits for the Broadcasting department, I went to the dean and asked him if I could petition to graduate, and he said “oh sure Brandon, no problem”. Then I said “can I also petition to get into the department?” And he said “what do you mean? You’re not in the department?” because he knew me. I told him “no, I just added all the classes” and he said “you’re not supposed to be able to do that” with a smile.
I started working in music studios like The Plant in Sausalito and Coast Recorders in San Francisco. Then I got a job offer in LA for a new studio, I think it was going to be called Lawnmower Studios. I thought “Before moving to LA, I should apply at Skywalker, maybe I could get a job there”. This was in 1997. So I called Leslie Ann Jones at Skywalker and Steve Shurtz at Fantasy, and they both told me to talk to John Mardesich about potential positions. They happened to be hiring for one position in the central machine room and I got the job. A week later I was working at Skywalker. Within 6 months, I started filling in as a mix tech assisting re-recording mixers on The Faculty and Fight Club. I would basically do any job that was available. I worked a little all over the building. I was a recordist, mix tech, digital archivist, I filled in on the Foley stage, did some ADR, and one time I even replaced the dirt in the Foley pits. I basically said yes to everything.
My film sound education was on the job at Skywalker working with mixers like Lora Hirschberg, Randy Thom, all the Gary’s (Rizzo, Rydstrom and Summers), Semanick, Parker, Myers, Boyes, there’s just too many to mention, but I would learn how to work with clients and how to service the story with sound. I would tech during the day and teach myself how to use Pro Tools at night and mix any indie I could find. One of my breakthroughs was when Lora Hirschberg asked me to mix Finding Neverland with her. It was such an incredible experience for me. I owe a lot to everyone at Skywalker for giving me opportunities along the way.
I remember in 1999 helping Randy Thom with mixing some sound design that he and David Hughes cut. It was amazing because I would mix a reel, then he would come back for playbacks like he was the client. But then, he would sit with me and give me tips on how to make the updates he wanted. I remember thinking how crazy it was that I was getting paid for Randy to teach me something on the console. I was incredibly fortunate for these opportunities at Skywalker.
DS: The current topic is “technique”. You recently worked on a couple of films that were very successful in this past year, Black Panther and A Quiet Place. Can you talk a little about those films and some of the techniques that you used?
BP: Technique-wise, the two are pretty different from each other in the way that I went about them. The one thing that is consistent on both of them is that I exclusively mixed them both in Pro Tools, without a traditional console. I used to use the Neve DFC for most mixes, at least for the final mix.
Black Panther I mixed natively in Atmos. I shared mixing duties on that with Steve Boeddeker. We both would take turns mixing music and we sometimes would mix it simultaneously, which is unusual. We had the same objective of what we wanted to get out of it, so it worked out. It was also fun to do. It connected us very tightly with the mix. I also mixed the dialogue on the film. The final mix was done at Disney in LA, and it was done on an S5, but we used the EUCON protocol instead of mixing with the processing power of the console. That was the first full native mix for a Marvel film. Aspects have been mixed in the box for Marvel films, but that was the first one where we mixed dialogue, music, and effects top to bottom inside the box.
On A Quiet Place, I final mixed everything, but Brandon Jones did amazing premixing on the effects, and Michael Barry mixed a couple of days of dialogue. It was all in the box, but we started in 7.1 and mixed the Atmos after. We started premixing / temping on an Icon at Warner Bros. in LA. Then I final mixed on an S6 in New York. The Atmos mix was on an S5 at Technicolor, with the home theater mix back at Skywalker on an S3. When I did the Atmos mix at Technicolor, it was also native, since I went back to the original units. I was able to place specific sounds in the ceiling speakers and spread things in a different fashion than with the 7.1.
I use a Wacom tablet, which is basically a big iPad. In a sense, it’s like a 22 inch iPad with a pen. I do so much with that pen because it kind of brings the physicality back to mixing, as opposed to a mouse or trackball. I constantly clip gain regions that are further back in the timeline as the transport is playing forward. So when I go back and play something over, you’ll hear the difference. On Black Panther, I also used a Manley tube compressor on the dialogue.
Curious to know more about the sound for Black Panther? Check out the A Sound Effect interview with sound designer and re-recording mixer Steve Boeddeker here
DS: I remember that I talked to you when you were working on A Quiet Place and you were trying to pull the stereo mix out of it. Can you talk about techniques of going from a larger format down to stereo?
BP: Yeah, A Quiet Place was tricky because there was very little dialogue. At the time we were talking, I think I was doing the home theater two track version. Normally you would take your stems, play it down, listen to the dialogue, and control the loud moments and also the quiet moments. But during a home theater mix, it’s all judged against the dialogue. Since I didn’t have much dialogue, I had to figure out how I was going to level out the two track mix that’s going to be listened to on headphones, computers, and TVs. I wanted the listener to get the same desired effect as the theatrical Atmos or 7.1 mix. I really had to mess with the dynamic range a lot on the specific formats to try and capture the essence of the theatrical mix. I spent a lot of time physically turning up specific sounds and ambiences and music in the low moments, but I did it all manually. I did a little limiting to control the really loud moments, because it gets way too loud for a home theater environment, and I’d turn down certain things, but for the most part it was about what I could turn up so it wouldn’t just be a completely silent movie, which was a little tricky.
It took me a little work to find the right balance where I could play it down and say “oh yeah, that feels derived from the theatrical mix even though I’ve changed the levels of ambience or music”. It was probably the trickiest home theater mix that I’ve ever done.
At the end I feel like I accomplished the task where someone could listen to the two track version of it and actually still gather the same feeling that they would on an Atmos or 7.1 theatrical. It’s not going to have the same impact. Of course, you’re not going to have all of your subwoofers, especially in Atmos where you have multiple subwoofers, and I was using all of them. But you still want to be able to watch the film and enjoy it and get the same intent. That’s what I was doing for that, but it was a lot of trial and error.
My first thought was that I didn’t want to change the levels at all. That’s why the Atmos version of the home theater mix is the closest to the theatrical. I specifically did not turn up anything there. I tried to keep it as true as possible to the theatrical. So if you were to put that up in a home theater that’s calibrated properly, you should be able to get that same experience as you had in the theater.
The two track is a completely different beast. You’re squishing all of this stuff into two channels, but also, what is your listening environment? I even checked some stuff on headphones. I was very curious to see what it would sound like in the real world, since there is no dialogue. I didn’t want people to feel like they had to change the volume too much; that’s always a trick. People judge home theater mixes very differently from each other. Some people think “I had to turn the volume on my TV up and down too much, so it’s not a very good home mix”, while other people listening to the same mix in their home theater are upset because the music is too flat or the sound effects are too flat; they want to feel the dynamic range.
It’s tough because with streaming services, DVD, and Blu-ray, we don’t make three or four different 7.1s or three or four different Atmos mixes. We only get to make one mix each. Then the streaming services and whoever’s making the Blu-rays or DVDs decide what they do with those mixes after they’ve left our hands. That’s the most difficult part. You don’t want to strip away whatever made that Atmos or 7.1 mix really strong theatrically, only for it to go on a Blu-ray, played at full quality in a home theater that actually has Atmos, and not feel like it did in the theater. I try to think of those as different mixes, but at the same time, try to make it so people don’t have to constantly adjust their TV volume up and down.
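The loud-moment control Proctor describes is limiting plus manual rides. As an illustration only, not his actual chain (he says he did most of this by hand on faders), a toy peak limiter with instant ducking and a slow gain recovery might look like this:

```python
def simple_limiter(samples, ceiling=0.7, recovery=1.0005):
    """Toy peak limiter: clamps peaks to `ceiling`, then slowly
    restores gain toward unity. Illustrative only; real limiters
    use lookahead and smoother attack/release curves."""
    gain = 1.0
    out = []
    for s in samples:
        if abs(s) * gain > ceiling:
            gain = ceiling / abs(s)           # duck instantly on overshoot
        else:
            gain = min(1.0, gain * recovery)  # slow release back to unity
        out.append(s * gain)
    return out
```

The slow `recovery` factor is what keeps quiet passages from pumping back up audibly right after a loud hit; the manual work Proctor describes (riding up ambiences and music in the low moments) is the part no processor does for you.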
Behind the sound on A Quiet Place, on which Brandon Proctor was re-recording mixer. Learn more about the sound for A Quiet Place here
DS: You mentioned that you don’t really use traditional consoles anymore. Do you prefer the digital controllers such as the S6 or do you still like the old analog boards? What are the pros and cons of each?
BP: I prefer to use an S6 now. It’s the most flexible. Neve DFCs sound amazing and their processing is really good, but plugins have come a long way. I also like that there are two different spill zones on the S6 that can spill four different ways. Because of that, I spill my VCAs in different directions all the time. I also structure my Pro Tools templates to look like a console: VCAs for predub masters, with aux tracks for each food group. You can then final mix on the aux outs and EQ and compress each predub unit together. It’s like having your premixing and final automation in one desk / session.
Another reason that I like the S6 is that everything is live. Previously, on traditional consoles where you printed premixes, you would have to go back, put the console in fix mode to update the premix, then switch back to final mode. I like the idea that I can just mute any sound at any point. Directors have really caught onto that whole thing. They want as much flexibility as possible at all times. It’s also really easy to fix automation in Pro Tools compared to a console.
DS: What are some of your favorite plugins?
BP: I really love the FabFilter stuff, so I use the FabFilter EQ. I’m using their Pro-Q 3 now. That’s huge. It’s great because it’s now multi-format, so you can do a 7.1 EQ without doing multi-mono. I have Spanner on all of my predub masters. If I have A FX or C FX, I can grab the surrounds and pull them out with Spanner, but I can also take that 7.1 grouping and pan it any other way. So if I want to mono up anything on a predub that I made 7.1, or if I want to move everything to the right or left or spin it around, I can. Spanner’s great for that. Pro Subharmonic from Avid is my favorite subharmonic generator now. I use Phoenix and Altiverb for reverbs; I use a lot of the 480 settings in Altiverb. Also iZotope, Keyboard Maestro, Speakerphone, Slapper Delay and Soundtoys.
DS: Do you use your own EQ or plugins or do you use whatever EQ and plugins the editors have been using, if they have them in their edit sessions?
BP: Usually I’ll use my own. A lot of times I’ll work on sessions where EQs are already in use, and I’ll keep those as well, layering my EQs on top of theirs. A lot of times I’ll have editors edit in my template, although there have been a lot of projects in the last year and a half where I didn’t get that luxury. I’m just finishing a project where I was sent someone else’s template. They have all kinds of stuff that I don’t normally use, so it’s a little harder to get around. There are little things, like the main output: I don’t like to use it. I like to use the first aux track as my main output and then put all the reverbs after that, so I can do blends of pre-fader or post-fader reverbs.
DS: Can you tell us about your template and layout?
BP: Yeah, it depends on what I’m doing. Each dialogue track has two different Pro-Q 3s. I have one set up as filters and dips and the other set up as a bell curve. In between those two, I have a dialogue voice denoiser from iZotope. I like to be able to EQ after that and EQ before it. I think of the EQ before as clean-up, and the EQ after is about making it sound better, or maybe pushing a little bit of a frequency that I took too much out of with the dialogue denoiser. I like having the denoiser on each channel, so that I can denoise each channel differently. My 7.1 reverbs are actually a combination of a 5.1 with a stereo back, so that I can control those with different faders. I also do a stereo side and back, so a lot of times a 7.1 will be three sets of stereos. I have faders for fronts, sides, and backs. I can control those a lot faster that way, and I can get a much wider sound using three different stereo reverbs than I can with a 7.1 reverb. Then there are the dialogue food groups: dialogue A, dialogue B, and dialogue C. Dialogue C is usually futzes and special things. Then loop group and PFX and X tracks. All of those go to a 7.1 aux track. Each one of those aux tracks has EQ, compressor, and Spanner, so I can control it like I would on a DFC.
I do the exact same thing on my effects. On the effects I use FabFilter Pro-Q 3 and Pro-C 2. I’ll go through that and all my pre dubs are the same, so there’s 8 monos and 8 stereos.
Every single predub has the same number of tracks. If I need more tracks and I’m up to G FX, I have to create a whole new predub. I don’t just add a couple of stereo tracks; I create a whole new predub with 8 monos and 8 stereos, just so it’s all the same when it’s spilled out on the console. It’s all consistent. I know where a track is always going to be. It gives me the most flexibility and the quickest way around things. Each one of those food groups goes to a 7.1 master: a dialogue 7.1, an effects 7.1, and a music 7.1, and those go out to the recorder from there.
DS: Do you prefer any specific color coding for tracks?
BP: A little bit. All of the buses are in red, all the aux tracks are green and all VCAs are orange. I don’t get too crazy with it.
DS: Can you talk about using reverb creatively, almost like a sound design effect, versus just using it to create an ambient space?
BP: I do both. I’ll use it to match ADR and production. I use it for spaces. I use it for sound design. I use it to create the tails on music to make them longer.
DS: Ever since Pro Tools 11 came out, we’ve had a faster than real time bounce. Can you talk about printing a mix versus the faster than real time bounce and any pros and cons to each?
BP: I do both. If I know I’m completely done with something, I might do an offline bounce. But a lot of times if I think that I’ll have to go back and punch in on something, I won’t offline bounce, because I want to have the flexibility of punching on the recorder.
Offline bouncing helps with doing versions. The nice thing about offline bouncing is that you can do it really quickly and then do a playback of it and QC it. Otherwise, you have to record it and play it back to QC it, and that takes more time.
I’ve also sometimes tried punching in on stuff that I offline bounce. Sometimes it works, and sometimes it doesn’t. To do that you have to do a pop test to find out how many samples everything is off, then move all of the media to be sample accurate. That’s all kind of a pain. I prefer to have a recorder, but sometimes an offline bounce is just what you need.
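The pop test he mentions can be sketched in code: both the recorded pass and the offline bounce carry a known pop (a single loud transient, like a 2-pop), and the difference in where it lands is the number of samples to slip the media by. A minimal sketch, assuming the audio is already loaded as lists of floats (the function name and threshold are illustrative, not from any real tool):

```python
def pop_offset(reference, bounced, threshold=0.5):
    """Return how many samples `bounced` lags `reference`, judged by
    where each pass's pop (first sample above `threshold`) lands.
    Positive result: the bounce is late; slip it earlier by that amount."""
    def first_pop(samples):
        for i, s in enumerate(samples):
            if abs(s) >= threshold:
                return i
        raise ValueError("no pop found above threshold")
    return first_pop(bounced) - first_pop(reference)
```

Real sessions would read the offset from aligned pops in the DAW timeline; the arithmetic is the same.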
DS: Can you talk about the different mixes that you do for different formats?
BP: Atmos, Auro, 7.1, 5.1, the two track, and then all the home theater versions of the same. If I could do anything, I would prefer to mix natively in Atmos and then, from there, make a 7.1 or 5.1 and so forth. I think it’s the most flexible way to work, especially if you’re cutting and mixing. I’d prefer to be able to cut in advance with Atmos, so that I know what I’m doing with the Atmos instead of getting on the stage and then deciding. I think it’s great to think about it in advance and make predubs specific to the Atmos mix, so you could have different zones created. Then you could drop the sounds into certain predubs that you have set up already for certain zones. I prefer 7.1 beds with objects over 9.1.
DS: Have you done any films that wanted both an Atmos mix and an Auro mix?
BP: Yeah, it’s usually one or the other. I have done both. I think it’s DreamWorks that does all Auro mixes. I don’t know if they’re still doing that though. The last time I did an Auro mix was for How to Train your Dragon 2.
DS: Do you remix the film for stereo or use the fold down from a surround format like 5.1?
BP: I use a fold down. I use SoundCode. I downmix and then make adjustments to the mix.
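SoundCode handles the fold-down internally, but the arithmetic of a standard LoRo downmix is simple to sketch. The coefficients below are the common -3 dB defaults in the ITU style; they are an assumption for illustration, since actual tools and delivery specs let you choose different gains (and the LFE is often dropped or attenuated separately):

```python
import math

def loro_downmix(L, R, C, Ls, Rs, center_gain=None, surround_gain=None):
    """Fold a 5.1 bed (LFE omitted here) down to stereo LoRo.
    Default gains are -3 dB (1/sqrt(2)), a common starting point."""
    cg = center_gain if center_gain is not None else 1 / math.sqrt(2)
    sg = surround_gain if surround_gain is not None else 1 / math.sqrt(2)
    Lo = [l + cg * c + sg * ls for l, c, ls in zip(L, C, Ls)]
    Ro = [r + cg * c + sg * rs for r, c, rs in zip(R, C, Rs)]
    return Lo, Ro
```

The "make adjustments to the mix" step Proctor describes happens after this: the mechanical fold-down gets you a starting point, and the manual rides restore the theatrical intent.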
DS: Are there any improvements that you’d like to see in any of the mixing formats that are out there?
BP: In the Atmos downmixes, actually. They’ve been making some improvements, which is good. I just want to have full control over how everything downmixes. They’ve been giving more and more control, and the downmixing from Atmos has been getting better. I would also like the ability to record everything at once, the Atmos, the 7.1, the 5.1, and to be able to go back and listen to each of those and make adjustments to them.
DS: How do you feel about the LTRT (Left Total Right Total) and is it still being used that much?
BP: We don’t do as many of those. They’re mostly LOROs (Left Only Right Only), which is more of a straight stereo format. So they don’t do the LTRT format as much, but it’s still downmixed kind of the same way. I think LOROs sound better. If we had to do the optical soundtrack, which we don’t do anymore, then we would need to do the LTRT. Most people are decoding differently at home with their amplifiers, or maybe they’re not decoding at all, so the LORO sounds a little nicer when they don’t decode it. It will still flip the dialogue and such to the center if you have an amp decoding it. That’s normally what we do now, unless we’re specifically asked to do an LTRT.
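The difference between the two formats is the surround handling: in an LtRt the surround is matrixed into the two channels out of phase so a downstream decoder can steer it back out, while in a LoRo it is simply mixed in. A simplified sketch of the encode side follows; note that real Dolby-style encoders also apply a 90-degree phase shift to the surround feed, which is omitted here for clarity:

```python
import math

def ltrt_encode(L, R, C, S):
    """Matrix-encode L/R/C plus a mono surround S into Lt/Rt.
    C goes to both channels at -3 dB; S goes in anti-phase at -3 dB,
    which is what lets a Pro Logic-style decoder recover it from the
    Lt minus Rt difference. Simplified: no 90-degree surround phase shift."""
    g = 1 / math.sqrt(2)
    Lt = [l + g * c - g * s for l, c, s in zip(L, C, S)]
    Rt = [r + g * c + g * s for r, c, s in zip(R, C, S)]
    return Lt, Rt
```

With only surround content, Lt + Rt cancels to zero, which is the cue a decoder uses to steer that energy to the surrounds. Played undecoded, that same content comes out as wide, out-of-phase stereo, which is part of why an unencoded LoRo can sound nicer on a plain stereo system, as Proctor notes.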
DS: Have you heard any surround upmixing software that you thought sounded good?
BP: I use them for music a lot. I use Waves 226 and Halo. I like Halo a lot. I think it sounds great. I use Waves 226 for more of a design tool. So if there’s something in a room like a boombox or something else, and I want it to sound a certain way, I use that. For transparency, I like to use Halo.
DS: Any advice to new people starting out?
BP: Work with as many filmmakers as you can. If you’re just starting out, edit or mix on a bunch of small films, student films and independent films. Even if you’re not in LA or New York, you can work from anywhere. Contact all the people you want to work with and tell them you’re available. I wouldn’t expect much pay from it though. It’s more about learning the craft.