Find out how they made sounds for the dark matter, Red Angel, spore drive, energy-based weapons, alien planets, unique vocal processes for different sentient species, a surprising use of the Wilhelm Scream, and much more — all while working on a time-restricted TV schedule.
Interview by Jennifer Walden, images courtesy of CBS All Access. May contain spoilers.
The official trailer for Star Trek: Discovery season 2
Star Trek: Discovery — a CBS All Access Original Series — has just wrapped up Season 2. If you haven’t checked out the series yet, now is the perfect time. With two entire seasons available for streaming, it’s a great way to spend a rainy spring weekend.
The series, created by Bryan Fuller and Alex Kurtzman, is set a decade before the original 1960s Star Trek TV series and follows a timeline unrelated to the recently released feature films. It’s also set aboard the USS Discovery (not the Enterprise), which, in Season 2, travels around the galaxy chasing after the Red Angel.
Even for non-‘Trekkies’ — such as myself — the series has a lot to offer, like striking VFX, a solid story and cast, and of course top-notch sound from the Warner Bros. Sound team, led by supervising sound editor Matthew Taylor and lead sound designer Tim Farrell.
Here, Taylor and Farrell talk about the sound tools and plug-ins they used to make the awesome and unabashedly sci-fi sounds that characterize the Star Trek franchise. Find out how they made sounds for the dark matter, Red Angel, spore drive, energy-based weapons, alien planets, unique vocal processes for different sentient species, and much more — all while working on a time-restricted TV schedule.
Tell me about your schedule on this show. How long did you have per episode for sound editorial?
Matthew Taylor (MT): Regarding the timeline, we had about two weeks to work on an episode. That is what’s in the budget. In the beginning, they were turning over an episode a month in advance, and S2, Ep. 1 was turned over two months in advance so we could get the ball rolling on ideas. But typically, we are budgeted for a two-week turnaround. Close to the end of the season, it was more like one-and-a-half weeks.
Tim Farrell (TF): The schedule changed when they split the finale into two parts. Some of the issues were the number of VFX in the show, which take a large amount of time to do, update, and change. Our producers are so creative; they are coming up with ideas all the time that often change the VFX, which can lead to us then chasing this new vision and reworking material we thought we’d put to bed. That can make us feel like we’re really behind when another mix is looming close on the horizon.
By the end of the season, we were mixing once a week. So, there would be two or three days between each mix. It became a big challenge for the second half of the season, juggling multiple shows that are mixing, preparing for mixing, and handling fixes for VFX changes — all that goes into prepping for a mix.
MT: Near the end, Tim and I brought on effects editors to help us out. Tim oversaw their work. We’d add the effects tracks to what we’ve been working on and then send that off to the stage. We had a team working behind us that we oversaw, and that’s how we got through the end of the season.
TF: The finale is probably one of the largest things I’ve ever seen on television. Even for the two episodes leading up to it I needed to get a lot of help. I had some amazing editors. Mike Schapiro has been working with me all season. He’s been my #2 all year. Then, towards the end, we brought in a wonderful effects editor, Clay Weber, and Dan Kenyon also came and helped with the finale. They were instrumental in helping us to get a number of things done in time for our deadline.
MT: There was a lot of subdividing of the food groups.
TF: At the end of the day, I would make sure that the overarching vision of the soundtrack came through.
Since it was near the end of the season, was there an established library of Discovery effects that your editors could use?
TF: Yes and no. We have all the sounds, but they aren’t super organized into a library, per se. I work mostly in a super-session, so I’m able to go back to past episodes, find the sound I need, and copy it over to where we need it. For example, the sound of the spore drive isn’t just one sound; it’s more like 30 sounds. There are a lot of different pieces that change with the animation. So I’m able to go back to my source, grab the pieces, and rearrange them for what I need in the current episode, so that it matches that particular shot or movement.
What went into the spore drive sound?
TF: The spore drive is made from a number of things. TONSTURM put out a library called Whoosh a while ago. Then they made all of these weird processed versions of those sounds for The Whoosh Processed. They used Kyma to make these weird, growly whooshes from a bullroarer. That became one of the main elements.
The spore engine does this wub-wub-wub-wub kind of thing. Back in Season 1, I was sitting there with another effects editor and I was saying the spore drive needed to do this Three Stooges-esque wub-wub-wub-wub thing. We thought it was funny, so I actually recorded my voice doing that and we sped it up and pitched it and put it in. So in the spore drive, the high-pitched spinning element is me doing my best Curly impression.
Then, there are some massive booms and other weird sounds.
There’s a great library from SoundMorph called TimeFlux. It has some cool, granular sounds.
There’s a duck sound that I slowed down in there. And seaweed against a submarine hull.
There’s this really cool recording I have of ice breaking on a frozen lake. It makes these great ‘power-down’ sounds.
We don’t have as much time to record as we would like. We tend to seek out good libraries and find cool material. It’s about finding sounds that have character and won’t turn into white noise in the mix.
Are you cutting these in Pro Tools? Or, do you use a different software for layering and triggering sounds?
TF: I use Native Instruments Kontakt quite a bit, but ultimately Pro Tools is our final environment. That is what we deliver to the stage.
I create instruments in Kontakt all the time to help speed me along. I have all of the phasers mapped to Kontakt so I can basically play anything like it’s a video game. All the ship engines and phasers are made via Kontakt, and I trigger those using a Midi Fighter Pro controller from DJ TechTools. It has all of these arcade-style buttons that trigger MIDI notes. It’s a fun little toy that helps me trigger all of my sounds.
For all of our ships’ beeps and boops, for each console on the ship, we made a custom Kontakt instrument. This helps to keep the sound consistent and also create variety. This way we can perform the sounds for the Comms console or for the Navigation console. We can just perform them. And I use a MIDI keyboard for that.
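The Kontakt instruments themselves aren’t shown in the interview, but the idea behind a per-console instrument (each MIDI note mapped to a family of beep variations, cycled so repeated presses don’t sound identical) can be sketched in a few lines. This is a toy illustration only; the class and sample file names are hypothetical:

```python
# Toy sketch of a "console instrument": one MIDI note per console function,
# round-robin through recorded variations for consistency plus variety.
import itertools

class ConsoleInstrument:
    def __init__(self, note_map):
        # note_map: MIDI note number -> list of sample file names (variations)
        self._cycles = {note: itertools.cycle(samples)
                        for note, samples in note_map.items()}

    def trigger(self, note):
        """Return the next sample variation for a MIDI note, or None."""
        cycle = self._cycles.get(note)
        return next(cycle) if cycle is not None else None

# Hypothetical Comms-console mapping
comms = ConsoleInstrument({
    60: ["comms_beep_a.wav", "comms_beep_b.wav", "comms_beep_c.wav"],
    62: ["comms_ack.wav", "comms_ack_alt.wav"],
})

print(comms.trigger(60))  # comms_beep_a.wav
print(comms.trigger(60))  # comms_beep_b.wav
```

A sampler like Kontakt adds velocity layers, envelopes, and real-time playback on top of this, but the note-to-variation mapping is the core of why a performed console never repeats itself exactly.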
What about the larger energy-based shields and weapons? What are some of your go-to creation tools for those?
TF: For the shields, I pulled a lot of spark sounds. There was a lot of material that I created for the spores that didn’t make it into the final; it sounded sparky and crackly, the result of a lot of plug-in chain processing. So I ended up using those sounds and put a comb filter on them. MeldaProduction has an amazing comb filter. Just with the default setting, the sparks become these great shields. It gives them a nice buzz on top and you get the sharp transients from the sparks. That ended up becoming a lot of the shields.
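Melda’s plug-in is of course far more sophisticated, but the core of what a comb filter does to broadband crackle can be sketched with a basic feedback comb. This is a generic illustration, not Melda’s algorithm:

```python
import numpy as np

def comb_filter(x, delay_samples, feedback=0.7):
    """Feedback comb: y[n] = x[n] + feedback * y[n - delay].
    It resonates at multiples of sample_rate / delay_samples, which is
    what turns broadband spark transients into a pitched, buzzy tone."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        fb = feedback * y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + fb
    return y

# Sparse synthetic "crackle" standing in for recorded sparks
rng = np.random.default_rng(0)
sparks = rng.standard_normal(48000) * (rng.random(48000) > 0.99)
shield = comb_filter(sparks, delay_samples=218, feedback=0.85)  # ~220 Hz buzz at 48 kHz
```

Each spark retains its sharp attack, but the feedback path rings it out at the comb’s resonant pitch, which matches the buzz-plus-transient character described above.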
The energy-based weapons were mostly created last season. One thing to note is that when the phasers shoot, there’s this screaming sound. That was a Wilhelm scream that I processed the heck out of.
MT: So every time someone shoots it’s basically shooting a Wilhelm scream.
I really liked the dark matter energy discharge that sends Tilly (Mary Wiseman) flying across the room in S2, Ep. 2. It’s not an explosion, but more like a massive pulse of energy. How did you create that?
TF: Mike [Schapiro] made the first pass on that. He did an awesome job of creating that low, growly, tonally deep sound.
He’s a big fan of processing. He’s this mad-scientist genius, coming up with these crazy processing chains. So, he came up with a lot of the cool low-end growly elements and I put in some high-end accents.
The laser part was a challenge because I found this really cool sound but it was too short. So I used the Arturia Synclavier V’s resynthesis module to take this tiny little sound and make it into a big laser.
We also used some cinematic horns and I added a few zaps and chimes to that. I added in some pitched down metal. There are some sword shings that I pitched down. I love taking bells or a tuning fork or any ringing metal sound and pitching it way down. There are a lot of Tibetan bowls pitched down.
The trick is to make as much source as you can while you’re making it because you’ll never go back to that exact sound.
Behind-the-scenes featurette on Star Trek: Discovery Season 2
How about the sound of the time crystals? How did you make those?
TF: I found these obsidian chimes that make a cool tinkling sound. I pitched those way down and it made a cool, elongated metallic sound.
I also had some fun with the Paulstretch software from Paul Nasca. That’s great for when you want something to be really long and smeary and dreamy. We got some cool moments out of that for this season.
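Paulstretch’s actual implementation is more refined, but the central trick — stepping through the input very slowly, randomizing each chunk’s spectral phases to smear the transients, and overlap-adding at a normal rate — can be sketched roughly like this (an assumption-laden toy, not Paul Nasca’s code):

```python
import numpy as np

def paulstretch_like(x, stretch=8.0, window_size=1024):
    """Rough sketch of the Paulstretch idea: the input read position
    advances at 1/stretch of the output rate, and each chunk's phases
    are randomized so transients dissolve into a smeary, dreamy wash."""
    hop = window_size // 2
    window = np.hanning(window_size)
    n_frames = int(len(x) * stretch / hop)
    out = np.zeros(n_frames * hop + window_size)
    rng = np.random.default_rng(0)
    for i in range(n_frames):
        start = int(i * hop / stretch)          # crawl through the input
        chunk = x[start:start + window_size]
        if len(chunk) < window_size:
            chunk = np.pad(chunk, (0, window_size - len(chunk)))
        spectrum = np.fft.rfft(chunk * window)
        phases = rng.uniform(0, 2 * np.pi, len(spectrum))
        smeared = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases))
        out[i * hop:i * hop + window_size] += smeared * window
    return out
```

The phase randomization is why the result sounds "long and smeary" rather than like a slowed-down tape: magnitudes survive, but all timing detail inside each window is destroyed.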
The Red Angel is a main player this season. What went into her sounds?
TF: She evolved over the season. We don’t get scripts ahead of time, so we find out about things as they happen. We get to enjoy the mystery too. We didn’t find out who she was until Ep. 10. When it came to the sound design, I wanted to know who she was but then I didn’t want to know.
MT: Also, they were working on the VFX for the Red Angel for a long time, so we didn’t see it. They teased it in Ep. 6, but we didn’t know who it was.
The direction was otherworldly and angelic, but not too angelic.
TF: Because of the image, we tried to create an ethereal vibe. There was such chaos before she showed up, and so we wanted to carve out this moment of quiet. There’s a heartbeat sound that I created from slowed-down Godzilla footsteps. There’s a choir and chimes slowed down. There’s a magic bubble pop sound that I slowed way down.
I love slowing down sounds until they almost break. It creates these weird, tonal textures that are really fun.
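The classic way to get that almost-breaking quality is tape-style varispeed, where slowing a sound down also drops its pitch. A minimal sketch of the general technique (not any specific plug-in; the "pop" signal is synthetic):

```python
import numpy as np

def varispeed(x, speed):
    """Tape-style varispeed via linear-interpolation resampling.
    speed=0.25 plays four times slower AND two octaves lower, the
    classic way of turning a small sound into a huge one."""
    positions = np.arange(0, len(x) - 1, speed)
    i = positions.astype(int)
    frac = positions - i
    return x[i] * (1 - frac) + x[i + 1] * frac

sr = 48000
t = np.arange(sr) / sr
pop = np.sin(2 * np.pi * 880 * t) * np.exp(-t * 30)  # short 880 Hz "bubble pop"
huge = varispeed(pop, 0.25)                          # ~220 Hz, four times longer
```

Pushed far enough, the interpolation and the source’s own noise floor start to dominate, which is exactly the "weird, tonal texture" territory described above.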
What do you like to use for pitch shifting?
TF: I have many pitch shifters that I keep under my shortcut keys all the time. I love the classic Pitch Shift Legacy plug-in for when I just want to pitch shift a sound but not lengthen it. Serato’s Pitch ‘n Time allows me to do both.
MT: To make things perform, you really need Pitch ‘n Time.
TF: I really like Avid’s X-Form for when I want to make something weird because X-Form allows you to pitch the transients separate from the tone. So you can make a sound really long, pitch it way up, and then bring the transients way down. You can get some really weird results with that. I learned about that plug-in from Skywalker Sound designer Christopher Scarabosio, who helped us out on Season 1. Some of his files said “X-Form” and I asked him what that was. So he turned me onto that plug-in. He was really instrumental in helping us figure out the show. He’s had a lot of experience with spaceships and sci-fi. Getting to learn from him about his approach was such a master class in sound design for Star Trek. I owe him a lot of credit for that.
One of my favorite pitch tools is Soundminer. Their pitch shifter is so clean. It’s varispeed, but that’s okay. When I want to perform something, I’ll put it into record mode and play with the pitch to get these clean pitch ramps that sound really great. That’s a really useful tool to have at my disposal.
There’s GRM Warp that I used on the beeps to make them sound really scratchy and weird. It makes the pitch go crazy.
If you find yourself really pitching down sounds to the point where they start to break up, there’s this plug-in from Waves called Vitamin that can help recover some of the quality. That was really helpful on this show. Because of all the pitching down, the sound gets mangled and destroyed. Vitamin helps to bring it back to life.
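Vitamin itself is a multiband harmonic enhancer; the broader exciter idea it belongs to — isolate the highs, saturate them to generate new harmonics, blend a touch back in — can be sketched like this. This is a generic illustration of the technique, not Waves’ algorithm:

```python
import numpy as np

def exciter(x, sr, cutoff=4000.0, drive=4.0, mix=0.2):
    """Toy harmonic exciter: high-pass the signal with a one-pole filter,
    saturate the highs with tanh (creating new upper harmonics), and mix
    a little of that back in to restore 'air' lost to heavy down-pitching."""
    alpha = 1.0 / (1.0 + 2 * np.pi * cutoff / sr)  # one-pole high-pass coefficient
    hp = np.zeros(len(x))
    for n in range(1, len(x)):
        hp[n] = alpha * (hp[n - 1] + x[n] - x[n - 1])
    harmonics = np.tanh(drive * hp)  # nonlinearity generates new partials
    return x + mix * harmonics
```

Because the new harmonics are synthesized from whatever survives the pitch-down, the effect reads as recovered brightness rather than added hiss.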
As the Discovery crew follows the Red Angel, they end up visiting many new places this season. What are some things to consider when making the sound of a new planet/location?
TF: You try to figure out what the emotion is in the story, and let that dictate the feeling of the scene. Then you go to your toolkit and try to come up with sounds that would sell that emotion. Is this world dangerous? Is this world peaceful? Is this world idyllic? You want to bring out all of these various things.
For example, I worked a lot on the planet Kaminar for the Star Trek: Short Treks episode, “The Brightest Star.” I knew we’d be going back to Kaminar later in the season so I could spend some time on that short.
I wanted that place to have an otherworldly quality to it but also feel familiar. You have to come up with alien crickets and alien birds. These were a seafaring people, and so I wanted to have seagulls, but not use seagulls. I tried to use a lot of sea-related sources and modulate those until they felt alien and sounded like weird cicadas and bird calls. I created bird calls from reeds that I made from blades of grass. I spent a lot of time making these amazing cricket sounds from dolphin clicks that I sped up. It was so cool because they’re crickets but they aren’t. Then you put them in the track and bring them down and add the music and they sound like crickets. I probably could’ve just used crickets!
My favorite sound from that episode came from an old recording I had from the original series. They gave us all of this wonderful original material and some of Uhura’s beeps have these really great bird-like qualities. Those are all throughout Kaminar as birds, but they are actually old computer noises from the ‘60s.
You try to find things that sound like elements in nature, but aren’t, and that’s how you make that otherworldly quality.
How does the Discovery sound different from the Enterprise?
TF: For the Enterprise, the showrunners were all about retro. We spent a lot of time going back through the original Star Trek material searching for the beeps and boops on the Enterprise. Mike did a fantastic job of reorganizing every single beep, tick, and clack from the original Enterprise, and used those to re-create the sound of the Enterprise’s bridge. Then, we put in a few more modern beeps and boops as well. Our goal was to pay as much homage to that as we could in this series.
The show originally started off on the Shenzhou, and then we went to Discovery. The Shenzhou is an older ship and so we had to create an older sound for that. When we go to Discovery, everything sounds more sleek and modern.
So it was fun to work on the Enterprise because we had all of these old sounds that everyone knows, loves, and appreciates. We wanted to honor and respect that and feature it as much as possible.
MT: Now that we are operating in canon territory, there isn’t much license to not honor that.
TF: When the Enterprise warps, we put in the classic sound but it’s like this white-noisy, hissy sweep. We couldn’t just play that because it didn’t work. It didn’t fit in with the other cool sounds in the series.
MT: It didn’t fit the aesthetic that Alex Kurtzman wants us to achieve. It’s a great iconic sound and it’s in there, but we’ve updated it.
TF: We have it whine up, and warp away. As it’s warping away, you get that hissy sweep as it disappears. So that’s how we incorporated it into our current palette of sounds.
How does the sound in Discovery relate to other Star Trek shows and films?
TF: Funny, there were some comments about the sound of Discovery on Reddit. Someone complained because we used some of the ship beeps from The Next Generation. I actually did that on purpose because Discovery is a new ship, so I put a few in there just for fun, thinking that these beeps were being tested out on this new ship. We were told that this ship is the future of Starfleet. It was like a test ship. So we put a few of The Next Generation beeps in just for fun and fans definitely noticed.
I was thinking about who came up with the sounds of everything on the Starfleet ships. Was it people like me and Matt, sitting there designing all of the beeps for the ships? I bet they were recycling sounds. I will say, though, that every sound I pulled I processed in some way, usually a bit of pitch shifting, so that it was close to the original sound but not quite it.
MT: What’s interesting is there is a sequence in S2, Ep. 5, where they cross into the mycelial plane. Section 31 reveals that their black badges are actually chest communicators, like those worn in The Next Generation. Those black badges are the precursors to the chest communicators, and so the sound very closely mimics the sound from The Next Generation. It’s still super experimental technology that isn’t supposed to come out for the next 30 to 40 years.
TF: There’s a bit of stretch to get to these other Star Trek universes. We wanted to tip our hats at it…
MT: … but not necessarily copy it directly.
Can you talk about some of your different vocal processes on the show? What were some sound tools that you used to create those?
MT: There is a lot of vocal processing on this show! The hardest one to do was the Ba’ul. That was a challenging process for me because we went through many iterations of it. At the end of the day, we had a pretty well known director do the voice, but it’s highly processed.
For that, I was using zPlane’s Elastique, some MeldaProduction tools, some GRM Warp and Shift, and I created multiple layers on top of the original track and gave that to our dialogue re-recording mixer Aleksandr Gruzdev.
Our direction on that voice was to sound like Mothman from The Mothman Prophecies, which has a very specific cadence and a very specific design. So we recorded ADR with that in mind, but we paid more attention to the cadence and speed rather than voice match. When we first presented the voice on stage, there were elements of it that Alex liked and elements that he did not like. It fell onto the shoulders of the re-recording mixer to find the right balance. Then, Tim provided a modulated white noise to add this weird, high-pitched frequency. That was a long and rewarding process.
I really liked the Control Computer voice. You first hear it talk in S2, Ep. 10 when Leland (Alan Van Sprang) gets stabbed in the eye. It samples his voice and morphs it, and we hear Leland’s voice talking back to him through the Control interface. Later on, Control gives a voice to people that it’s inhabited.
I tend to go off the deep end sometimes figuring out these vocal processes, but we actually did something pretty subtle for Control. We’re really only using three elements, and they are all inserts that are used with ADR. We have to ADR all of these lines because if we were to use recordings from set, the miking changes, and that affects the timbre and tone, and consequently the processing. For processing, I’m using zplane Elastique Pro, and I perform the pitch on that in real time and then print it out. Then that will feed GRM Warp or Shift, and it will also feed an Infected Mushroom Manipulator plug-in. Then I print those out and phase them, and send those off to the dub stage. I give them a template on how it should sound, but then Aleksandr [Gruzdev] will find the right balance.
TF: Re-recording mixer Brad Sherman does our effects. They have a lot to work on and they do an amazing job.
MT: The key to that dialogue processing was the idea that Control injects people with nanobots that then control their body. The showrunners wanted it to sound like the person’s vocal cords, but vocal cords that are robotic in a way; that sound can get unclear pretty quickly, though. So we will often cut quickly back and forth between the unaffected ADR and the processed ADR. The rapid-fire cutting back and forth helps a lot with the clarity.
TF: Matt had the challenge of making these voices sound processed and different while also still sounding intelligible. It’s really easy to make something sound super cool and crazy, but then you realize that you can’t understand a word of what the character is saying. That’s the biggest challenge when it comes to creating vocal processing, making it sound different and unique while maintaining clarity.
I did the voice of the Bounty Hunter in the fourth Star Trek: Short Treks. I made up a space language and ran it through Infected Mushroom’s Manipulator. Since it was an alien language, I didn’t have to worry about clarity. Whereas Matt, he has the horrible challenge of making sure that despite the processing, every word from every character is understood clearly.
MT: And the more intelligible you make it, the less unique it gets. That has been my finding, at least. You want it to sound new and different, yet you have to understand what they’re saying. You have to find ways to make it new and different but also completely comprehensible.
Another voice I liked was for the Tellarites. There was a Short Treks episode that featured this character and we had to ADR all of his lines; the mask he was wearing looked fantastic but unfortunately restricted his mouth movement, so there were issues with diction. It made his performance sound lispy, and that is not the vocal quality they wanted for this character. So we had to go back and loop him. Then, I pitched him down and combined that with some trombone parts I played. I have a series of trombone growls that I mapped against his performance. It is very subtle. Then, I ran that through the Infected Mushroom Manipulator to come up with a subtle output. Then I combined those three layers.
I used a lot of zynaptiq’s Morph and Melda Morph too on the vocal processing.
TF: The last processing we should talk about is for the Klingons. For that, I used a recording of my dog growling and put that into Krotos’s Reformer. That allowed us to subtly add dog growls underneath the actors’ performance for the Klingons.
We also did some subtle pitch shifting down.
MT: I believe the re-recording mixer also pitches them down a bit too. I remember in Season 1, Alex (Kurtzman) wanted to have varying degrees of voice manipulation on the Klingons and Alex [Gruzdev, re-recording mixer] had to adjust it on the fly on the stage.
I always find that less is more with dialogue processing. Yet, for me, I always want to go further with it to see where we can go. But then the dialogue becomes a sound rather than English.
In S2, Ep. 7 “Light and Shadows,” Tyler (Shazad Latif) and Pike (Anson Mount) get caught inside a time rift. Pike sees glitches of a future version of himself struggling with Tyler. Can you tell me about your work on that scene?
MT: As Pike and Tyler experienced temporal distortions, we were tasked with creating a feeling that recent lines of dialogue (heard a minute prior in the episode) were warping back to us through a temporal distortion.
First off, we had to ADR all the lines we were intending to process so they were clean.
After much time staring at my screen and a lot of failed attempts, I stumbled upon a method that seemed to work. I looked at these particular moments as having two flavors — one was a bed/pad of vocal material and the second was the story-related dialogue. To create the bed/pad dialogue, I basically time-stretched, reversed, and Doppler’d the ADR source into oblivion to create various pad-like layers and tones. I would use those tones/layers to create what in my mind was the “wash bed” (or vocal pad of words) that the temporal distortion created, like a bunch of dimensions mashed up. I would then take less extreme processed lines, using similar processes as above, and run them through Valhalla Space Modulator or zynaptiq Wormhole, which imbued a sense of movement and travel. Using this process I would create a few layers of the same phrase that I would hand off to Alex G, who would work his magic with them. He certainly played a large part in putting them in the space and helping the “travel” of the dialogue.
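The modulation plug-ins mentioned are proprietary, but the underlying "travel" effect of a moving source is essentially a time-varying playback rate. A crude sketch of that one idea, purely illustrative and not the processing actually used on the show:

```python
import numpy as np

def pitch_ramp(x, start_speed, end_speed):
    """Variable-rate playback: ramp the read speed across the sound,
    so the pitch glides as if the source were moving past the listener
    (a crude Doppler-like treatment)."""
    out = []
    pos = 0.0
    n = 0
    # estimate total output frames from the average speed
    total = int(len(x) / ((start_speed + end_speed) / 2))
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        out.append(x[i] * (1 - frac) + x[i + 1] * frac)  # linear interpolation
        speed = start_speed + (end_speed - start_speed) * min(n / total, 1.0)
        pos += speed
        n += 1
    return np.array(out)
```

Ramping from below 1.0 up through it and beyond gives the characteristic approach-then-recede pitch bend; combining several such layers at different rates is one way to suggest "a bunch of dimensions mashed up."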
Any fun field recordings for Star Trek: Discovery?
MT: There was one episode with a lot of phaser fire, and we had to get one of them to cut through. So I recorded a railroad tie impact. I put that through Melda’s comb filter and it’s this really quick, 3-frame transient on two shots. Those shots had to cut through the music and the spore drive. There was so much happening at the same time.
TF: Recently, I went to Iceland and I brought my recorder. A ton of those recordings have ended up in the show. The sound of the nanobots is from thousands of seabirds nesting in this rock wall, making these horrible screeching noises. That sound, pitched up, ended up becoming the sound of the nanobots. That’s also coupled with giant sea otter recordings that a friend of mine captured. It’s a library called Why I Otter! from Sonic Shepherd, sound designer Bret Johns.
I recorded a bunch of café walla in Iceland so whenever we are in Discovery’s commissary, I play the Icelandic café walla, pitched down, and it sounds kind of alien.
I recorded a lot of creaks and groans. I recorded the sound of a crow, making these strange noises outside of our hotel room window in Iceland. And so those recordings were helpful for the creature growls.
MT: For the Short Treks episode about Saru (Doug Jones), I went out to Warner Bros. backlot where there’s this water pit and a bunch of reeds. I recorded reed movements for things in the water (because the short opens up on them in the water). Some of the reeds are dried out and hollow, and they have a melodic sound, almost like wooden chimes.
I recorded the trombone parts to add to the Tellarite vocal processing.
I recorded some sheet-metal bows a long time ago and I used those to map Airiam’s (Hannah Cheesman) yells when she gets sucked out into space. I morphed that into her voice.
I recorded some chains and shackles for the torture scene in the fourth Short Treks episode, “The Escape Artist,” directed by Rainn Wilson.
TF: There’s a recording of volcanic steam that I captured in Iceland that we used for when the ship is warping. It’s this whoosh sound that was created from a recording of volcanic steam coming out of the ground.
MT: I also want to give a shout-out to the loop groupers. I had them record a bunch of groans and efforts that I used for the vocal work on the show.
They also did a lot of improv radio chatter, because Alex Kurtzman loves radio on the exterior ship scenes. So I have them do a bunch of improv radio recordings where they are talking two or three at a time and it sounds like FAA radio chatter. So we threw those in sometimes as well.
TF: The communicator beep was a doorbell that I recorded in India.
MT: For shows like this, we use whatever recordings we already had. There isn’t time to go out and record new sounds all the time.
TF: I record when I possibly can but there isn’t much time, with a mix happening every week. You have an entire show to do. It’s hard to get out and record as much as we would like.
MT: I do a lot of recording outside of work.
TF: In the off-season we are looking for more sounds, constantly. When I was on The Walking Dead, I traveled to India and so anything that creaked or squeaked or groaned I recorded for use later. It was really helpful because of all of the old and decaying elements in that show. Honestly, a lot of those sounds were useful as ship creaks and groans for Star Trek: Discovery. I just pitched them down. Pitch shift is your friend.
Any other plug-ins or processing you found helpful on this show?
TF: We basically have to use one of everything on this show.
We used Soundtoys Tremolator a lot. I used TONSTURM’s Traveler a lot on the finale, creating all of the ship-bys.
Any final thoughts you’d like to share on the sound of Star Trek: Discovery?
TF: It’s been a hell of a season and we hope everyone has enjoyed watching it. I’m super proud of the work I’ve done on the show and I know that Matt is too.
We want to honor and respect the Star Trek fan-base — those who love and respect this canon and this property. We know the writers were hard at work trying to make sure the story will make the fans happy, and we’re trying to do as much as we can sonically to support that. That was one of our biggest challenges on the show, to make it sound like Star Trek but also put our own spin on it. We have this opportunity to make such incredible sounds for such cool sequences.
MT: Each episode has something new. We’re not always on the bridge or in the sick bay. Every episode offers the opportunity to do something that we haven’t seen or made before. There wasn’t any time to sit back and rest. We are always making new stuff, and that is great.
TF: We aren’t recording a lot of new material per se but we were certainly creating tons of new material. Going to different sound libraries for source material is like going to a LEGO drawer. As a kid, I had a big drawer full of LEGOs that I could open up and dig around in to find the right piece. This show is the same way — it’s all about finding those magic pieces to work with, twisting and tweaking them, and putting them together in a way that creates something interesting and new.
When I started on the show, I would process everything. But then I realized the less you process something the more you can keep its character. It’s all about finding the right character to cut through the music, or cut through the track. The right character can help you tell your story better. It’s better than having an over-modulated sound that just comes across as noise.
MT: Another thing to point out is that all the loop group lines were scripted by the writers. So besides the sound effects I asked them to do, everything else is written out beforehand because the showrunners want story-relevant background voices. If a character isn’t working with a phase discriminator, but that word comes through in the backgrounds, then that ruins the illusion of this universe existing. So there is great importance placed on having the loop groupers say scene-relevant information. It goes to show how much the showrunners and the writers care about this series and try to honor the Star Trek franchise.
Of course, none of this is possible without the help of the entire crew. If I may, I’d love to list them below:
Dialogue Editors: Sean Heissinger, Bob Jackson
Sound Effects Editors: Michael Schapiro, Clay Weber, Dan Kenyon
Foley Crew: Alyson Moore, Chris Moriana, John Sanacore, Travis Crotts
Re-Recording Mixers: Brad Sherman, Alex Gruzdev
Mix Tech: Brad Bell
Sound Assistants: Deron Street and Damon Cohoon
And a huge thank you to everyone at Secret Hideout and WB PPS.