Written by Adriane Kuzminski, image courtesy of Ocelot Society
Hi Matthieu and Swann, please introduce yourselves:
Hello, we are Event’s two-person sound design team, Matthieu Bonneau and Swann Menage. We both graduated from ENJMIN, the French video games school, about two years ago. The cool thing about ENJMIN is that it has a sound for video games program, which means you can study sound design while working in teams that represent the full spectrum of video game-related trades (artists, coders, etc.).
Matt Bonneau (MB): At 35 years old, I’m the team’s resident old man. I went back to school in 2012 after a few years of working both as a sound technician and programmer. I find working on sound for video games to be the best possible mix of both :). I’m currently working full-time at Ubisoft Paris and moonlighting on Event.
Swann Menage (SM): I’m 25, and I’ve only worked as a sound engineer. I did some Foley recording for cartoons, French cinema dubbing, TV announcements, podcasts, sound design for games, etc. I’m a freelancer working mainly for a French company called Game Audio Factory, now known as G4F. They’re a sound outsourcing company.
Could you tell us a little about Event? What was your role in the production and how did you get involved?
Event started as a student project created in six months with a team of ten: programmers, 3D artists, game designers, one UX designer, and two sound designers.
The original pitch was closely inspired by the first Alien movie. It was about being in a Nostromo-like spaceship and having to speak to Mother (the computer in the movie) as your only means of interaction with your surroundings. For a while it was also supposed to feature a monster in the ship, as well as resources management (oxygen, temperature, etc.), but after a couple of iterations, the game designers decided to strip the experience to its core: natural text interaction with an AI.
We’re both big fans of the Alien franchise — the mood and the atmosphere, its credibility and grittiness — so we were very excited about the project and really wanted to be part of it. It’s funny because people often say there is an “Alien: Isolation mood” to our game, but we started this project before Alien: Isolation was even announced. Of course, we’ve played it since then and loved it. Sound-wise, the people at Creative Assembly really nailed the whole analog technology/hard sci-fi mood!
After completing the first demo of Event, we all got internships and eventually got jobs at various studios, but around October 2015 a core team of three people (Léonard Carpentier [our producer], and Emmanuel Corno and Sergey Mohov [our game designers]) left their day jobs and founded Ocelot Society to go on and make Event a full-featured, retail game. Since the game was our baby as well, we were very eager to join the new team, and we have been freelancing on it for a year.
With the game’s focus primarily on exploring and speaking to Kaizen, Event has a lot of interactive fiction elements. Was this your first time working on a game like this? What other titles have you worked on?
SM: I’m currently working on the Call of Cthulhu game developed by Cyanide. I have previously worked on a very diverse range of projects: Tour de France 2016 (a cycling simulation), Hiroba (an Android puzzle game), and even a futuristic naval interface in augmented reality. It’s an eclectic mix and I have really enjoyed the experience! I haven’t really worked on an interactive fiction game of this scale before Event. As a student, I was part of a project building a gamified audio guide for a museum, which was designed as interactive fiction. Those games are all about empathy and are kind of minimalistic in general. Sound can have a lot of impact in those games, and it’s pretty creative; there are no basic templates.
I love working on this type of game because I feel there is still a lot of room to experiment with ways of telling stories
MB: I’ve been working on the Watch Dogs franchise at Ubisoft for a bit more than two years now. Before that, I mostly worked on student projects and game jams, and I guess a couple of them could be considered interactive fiction, like Spotlight, for instance. I love working on this type of game because I feel there is still a lot of room to experiment with ways of telling stories.
Since there are very few interactive fiction and exploration games with such detailed audio like yours, how did you approach the sound concept for this game, and what were your influences and inspirations?
There are a number of interactive fiction and exploration games we admire for their sound design: Dear Esther (or pretty much anything by The Chinese Room, actually), Gone Home, Journey… However, these shine especially through the way they handle music and/or voice, and we knew we wouldn’t have much of either. Our sound design concept stemmed from two main axes: the narrative context and the gameplay ingredients.
The narrative context of the game came through the hard sci-fi creative direction and the ship’s history and purpose (a “space yacht” built in the ’80s). This translated into a few loose rules for sound design:
• Realistic (or at least believable) sound effects — and above all no sound in space ;)
• An analog feeling about all the ship’s technology (tapes, clunky switches, buzzy neons and CRT screens)
• Very little non-diegetic music (this is also simply because we’re not professional musicians or composers, and we had access to a limited budget for outsourcing)
The terminals are where the players are going to spend most of their time, so we tried to make them as interesting as we could audio-wise
As far as the gameplay ingredients go, the game features only two characters: the player and the AI. All interactions take place within the ship’s terminals. Those two facts made it easy to figure out what to focus on: the terminals and the environment. The terminals are where the players are going to spend most of their time, so we tried to make them as interesting as we could audio-wise: the AI’s crude voice synthesis, the buzz of the screen, the keyboard typing sounds, and the audio glitches all had to be given special care. The environment represents the AI’s body in a way, so we tried to see it a bit like the Belly of the Whale trope. The ambient sound had to reflect the AI’s current mood and give clear feedback on any action the player initiated through the terminals.
Of course, these were just directions. In many cases, we had to bend the rules or simply put them aside because of time constraints, but we tried to keep them in mind every time we had to make a choice or assess priorities. Additionally, sometimes choosing to break a rule brought an interesting contrast. For instance, we felt that introducing non-diegetic music for the first time after two hours of gameplay gave it that much more impact.
The game’s sound and music have a remarkably cohesive retro-futuristic style. Who composed the music, and how were the other musical tracks (such as the turntable songs) chosen?
Even in the sound design we were inspired by music. In pre-production we built an ambient/atmospheric playlist, a sort of audio mood board with specific sounds that we thought could fit the project. It’s not really something you notice in the final game, but it really inspired us.
“Hey Judy”, the main song, took on more and more importance as the development went on
“Hey Judy”, the main song, took on more and more importance as the development went on. Originally, we were looking for a piece that would illustrate floating in space as well as fill the silence that’s associated with it, a bit like how the “Blue Danube” is used in 2001: A Space Odyssey. But we were looking for something less grandiose, something conveying a sense of fragility and loneliness, and at first one of our references was Portishead’s “Deep Water” from their album Third. So Swann contacted his friend Julie Robert (aka Yllogique) who he knew could deliver something in that style, and she sent us a first draft of that song, which seemed to fit perfectly. Then they reworked it with Camille Giraudeau and it took a bit more of a jazzy feeling. It became more than just an illustrative piece when we decided to tie it closer to the story and gameplay: it would be something like Kaizen’s favorite song in that it would broadcast over the radio on the ship, perhaps to lure humans aboard because it feels lonely or to help it complete its mission.
Later on, when it became clear we’d need more music for key moments of the story, it seemed to make sense to build around the song, so we used the stems of the original version and adapted/remixed it ourselves. Swann made the creepy descent-into-madness version that backs the progression towards the bridge of the ship, and Matt made the cheesy epic ’80s edit that you can hear in one of the game’s endings. And of course, Kaizen has its own interpretation of the song :).
For the lobby’s turntable songs we looked for public domain music that matched Kaizen’s moods. At first it’s proud to show off the lobby and sets the mood with some classical music, then it’s just happy to have the player agree to its mission, and that jazzy track matched pretty well!
Kaizen definitely “wears its heart on its sleeve” throughout the game, which is a huge draw of the experience. Event’s load screen states that it is a game about empathy. How were you able to support this idea and Kaizen’s mood swings in your audio design?
Unfortunately, we weren’t able to get as far as we wanted in that direction, but giving Kaizen a voice was a first step. In a way, maybe the technical limitations we hit helped give the AI a vulnerable aspect that makes it likable.
On top of that, we slightly tweaked the voice depending on Kaizen’s mood. It used to be more noticeable, but the game designers felt it would reveal too much about the system if it was too obvious, so we had to make it subtle. There are other elements as well that are supposed to give clues about how Kaizen is feeling: the buzz of the terminals gets louder and more aggressive when it’s angry, for instance.
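As a rough illustration of the kind of mood-driven tweaking described here (all names and values below are hypothetical, not taken from the game’s actual Wwise project), a mood can drive a small set of audio parameters that are then blended toward neutral so the effect stays subtle:

```python
# Hypothetical sketch: mapping an AI mood to audio parameters, in the
# spirit of Wwise RTPCs. The parameter names and numbers are illustrative.

MOOD_PRESETS = {
    "neutral": {"voice_pitch": 0.0,  "terminal_buzz_db": -24.0},
    "happy":   {"voice_pitch": 0.5,  "terminal_buzz_db": -24.0},
    "angry":   {"voice_pitch": -1.0, "terminal_buzz_db": -12.0},  # louder, harsher buzz
}

def audio_params_for_mood(mood: str, subtlety: float = 0.3) -> dict:
    """Blend a mood preset toward neutral; lower subtlety = more subtle."""
    neutral = MOOD_PRESETS["neutral"]
    target = MOOD_PRESETS.get(mood, neutral)
    return {k: neutral[k] + subtlety * (target[k] - neutral[k]) for k in neutral}
```

In an engine, the resulting values would simply be forwarded to the audio middleware each time the AI’s mood changes.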
You both sound very resilient when it comes to limitations. What were some of the other challenges you faced, and what was the hardest part to design in terms of interactivity and timing?
Event is not an action game, so tight timing was not much of an issue, but we did run into timing problems because of general resource constraints. Some early technical decisions prevented us from integrating sound directly into animation timelines, and as a result, sometimes a door animation would lag behind and the sound wouldn’t be properly synced.
Luckily enough, the programmers provided us with clean code and plenty of tools, so plugging in audio wasn’t the hellish ordeal it could have been. Our main challenge was mostly related to deadlines, since we were both only working part-time on it. Unfortunately, there’s a lot of placeholder audio content that had to make it into the final build because we didn’t have time to rework it.
On top of audio tools, did you use any audio middleware? What engine was used to develop the game?
Event was built in Unity 3D and we used Wwise as an audio engine. That helped a lot, but it still left us a lot of programming to do, including Kaizen’s voice synthesis system and a basic audio occlusion/propagation system. The ship is a small but dense environment, so sound going through walls would have been a real problem without it.
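The interview doesn’t detail how their occlusion/propagation system works, but a minimal sketch of the general idea (attenuating an emitter based on how much geometry sits between it and the listener, e.g. counted via a raycast in the engine) could look like this; the function name, per-wall attenuation, and floor value are illustrative assumptions:

```python
# Hypothetical sketch of a basic audio occlusion model: each wall between
# the listener and the emitter attenuates the sound by a fixed amount,
# clamped to a floor so fully occluded sounds stay faintly audible.

def occlusion_db(walls_between: int, db_per_wall: float = -9.0,
                 floor_db: float = -36.0) -> float:
    """Return the attenuation in dB to apply to an occluded emitter."""
    return max(walls_between * db_per_wall, floor_db)
```

The engine side would count intersections between the listener and each emitter (for instance with a physics raycast) and feed the result to the middleware as a game parameter every frame.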
Non-diegetic music is always tricky to design, too, because sometimes the player could rush through a sequence or loiter around, which means portions of the music might be skipped or loop for too long. Fortunately, Wwise’s interactive music system is advanced enough to deal with those kinds of issues by adding randomness and fine-tuning transitions. Hopefully we’ve been able to cover most scenarios, but players are full of surprises!
This game must have been quite the experience for both of you, both technically and creatively. What was your favorite part of the game to design?
It was a clear case of throwing all the rules we had previously set for ourselves out the window, and it felt good!
MB: The voice synthesis system was fun to design. We experimented with a lot of different solutions before coming up with this one and it kind of works. Other than that, I loved working on the music and effects of the game’s most mysterious ending (can’t say too much!). It was a clear case of throwing all the rules we had previously set for ourselves out the window, and it felt good! The main menu’s soundscape was a pleasure to work on as well. I loved plugging into abstract visual glitches to drive ambient music, and I feel I could fall asleep listening to the result :).
SM: The Nautilus is a small ship, but every room has a really specific mood! It was such a blast to create all those different atmospheres. For instance, Kaizen is really proud to show the player some rooms, and it’s ashamed of the mess in others. We needed the player to feel welcome in the lobby and the living room — Kaizen tries its best to make these rooms as comfy as possible! I also really enjoyed working on the intro scene. We didn’t have a large enough budget to do a full cinematic sequence with animations and stuff to cover the entire backstory, so we made this little interactive fiction. It’s basically only text — with the player having to make choices about their backstory — and sound: some is realistic, some is more musical, some really cinematic… It’s pretty cool to make ten minutes of text feel alive!
Thank you for giving us a look at this game’s audio development! If people would like to hear more of your work, where can they follow you?
Thanks for having us! We had lots of fun working on the project, so it’s a pleasure sharing it.