Hi Jamin, please introduce yourself and Three Monkeys to the A Sound Effect readers:
Hi there, I’m Jamin Smith – director and writer on Three Monkeys, an audio game that looks to push the boundaries of what can be achieved with audio in games. In the game, you play Tobar, a blind hero whose lack of vision is his strength. In a world that has fallen victim to a terrible curse that has ripped the sun out of the sky and plunged everything into darkness, you’ll journey into the Abyss to break the curse and save Byzantia.
Why did you decide to go for an audio-only game?
With composers and sound engineers at the core of the team, we always wanted to explore the audio game genre and see what boundaries we could push. While the concept started out as a simple call-and-response ninja game, the scope quickly grew to a fully-fledged audio RPG with an emphasis on characters, narrative and world.
How’s Three Monkeys different from other audio-only games out there?
A lot of audio games – knowing that the player is deprived of their primary sense – look to use scare tactics to create a horror experience.
They do this very well, we should add; there are some great games out there. However, with Three Monkeys we’re looking to empower the player; allow them to make the first move, to attack. Most audio games are also very linear and corridor-based.
We’re looking at creating a semi-open world game, with large zonal hubs that you can really take the time to explore. It’s fair to say it’s a very ambitious game, and we’re hoping it’s going to redefine what audio games can be.
What’s your approach to creating an immersive audio-only world? And what are some of the sonic components and layers in it?
The audio has to take on many roles that would usually be filled by visuals, which presents a risk: every element has to be subtle and carefully mixed.
As you build up the sonic layers, you really have to refine what’s in each layer to make sure you’re not blasting the player with information. When designing exclusively in audio, everything can be classed as information, but it generally falls into one of two categories: ‘What do I need to know?’ and ‘What do I need to feel?’
It gets more complicated when something moves between the two. For example, our navigational tools are what you might consider immersive background audio, but that audio still provides the player with information as they become more attuned to it. It’s pretty much always there and sinks into the background, but when you decide you need to know where to go, it provides the reference you need.
Throughout everything we’re doing, information and narrative (by that I mean the sequence of play rather than the narration) are at the forefront of our minds. If we put something in and the purpose it serves the player isn’t clear, it probably shouldn’t be there.
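The idea of a navigational cue that sinks into the background yet still carries directional information can be sketched in code. This is purely an illustrative toy, not the game’s actual system: the function name, the attenuation curve and the default values are all assumptions. It keeps a beacon audible but quiet at a distance, raises its level as the player closes in, and pans it relative to the direction the player is facing.

```python
import math

def beacon_cue(player_pos, facing_deg, beacon_pos,
               base_gain=0.2, ref_dist=5.0):
    """Return (gain, pan) for a background navigation beacon (toy sketch).

    gain : quiet by default (base_gain) so the cue sinks into the mix,
           rising toward 1.0 as the player approaches the beacon.
    pan  : -1 (hard left) .. +1 (hard right), relative to facing_deg.
    All names and curves here are illustrative, not from the game.
    """
    dx = beacon_pos[0] - player_pos[0]
    dy = beacon_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy)
    # Inverse-distance rolloff, clamped so the cue never fully vanishes.
    gain = min(1.0, base_gain + (1.0 - base_gain) * ref_dist / max(dist, ref_dist))
    # Bearing of the beacon relative to where the player is facing.
    bearing = math.degrees(math.atan2(dx, dy)) - facing_deg
    bearing = (bearing + 180) % 360 - 180   # wrap to [-180, 180)
    pan = math.sin(math.radians(bearing))   # sources behind map to the sides
    return gain, pan
```

Because the cue never drops to silence, the player can ignore it while exploring and then “tune in” to it the moment they want a reference, which matches the behaviour described above.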
Are you using any particular recording techniques?
We found that portraying really close narration and getting the drama out of it was a bit of a challenge. In the game, Yoska is a Sprite that sits on your shoulder, which obviously posed an interesting problem. We found that by getting the voice actor to act out the scenes in front of a binaural head, we could convey both close movement and proximity much more successfully. The bedding layer of all environments will also be recorded in suitable spaces – most likely using binaural microphones – although more recently we’re investigating whether a Soundfield microphone setup could work well for this.
In most cases, however, sounds are recorded as individual effects and then placed in a binaural space, which allows for more interaction with the player.
From a technical perspective, how are you implementing the sounds?
The game is being built in Unity, and for the prototype we literally just used the built-in binaural processing! A lot of our initial ideas came from how sound can form a clear narrative, and to test those ideas we didn’t need anything too advanced. As we move forward, though, we’ll need something more substantial to keep both information and immersion clear. We’ll most likely be using Wwise to implement the sounds and take advantage of better binaural processing; we’re currently A/B testing a few plugins, such as the Astoundify plugins, for the general binaural landscape.
In the game, we do have to account for different objects, acoustics, proximity and texture, all of which require different techniques to ensure they move smoothly as the player does. Quite often we’ve found creative audio options that solve a problem we initially thought we needed to fix within the code or gameplay design. This is where the input of Kevin Satisabal (a team member who is visually impaired) has been invaluable. As an expert in sonic navigation in everyday life, he’s shown us how often there’s a much simpler way to use audio if we truly treat audio as our information source, rather than using what we expect to see as the basis for what sounds should be there.
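To make the binaural processing mentioned above a little more concrete, here is a deliberately crude sketch of the two simplest cues such processing relies on. Real engines (Unity’s spatializer, Wwise binaural plugins) convolve audio with measured HRTFs; this toy version, with made-up constants, only models interaural time and level differences:

```python
import math

def toy_binaural(mono, azimuth_deg, sr=44100):
    """Crudely place a mono sample list at an azimuth (toy, not a real HRTF).

    Models only the two simplest binaural cues:
      ITD - the far ear hears the sound up to ~0.7 ms later,
      ILD - the far ear hears it quieter.
    Constants are illustrative assumptions, not production values.
    """
    s = math.sin(math.radians(azimuth_deg))  # 0 = front, +90 = hard right
    itd = int(abs(s) * 0.0007 * sr)          # interaural time delay in samples
    far_gain = 1.0 - 0.6 * abs(s)            # interaural level difference
    near = [x for x in mono] + [0.0] * itd              # near ear: undelayed
    far = [0.0] * itd + [far_gain * x for x in mono]    # far ear: later, quieter
    return (far, near) if s >= 0 else (near, far)       # (left, right)
```

Even this stripped-down model shows why the team can record plain sound effects and position them afterwards: the placement is computed per listener at runtime, rather than baked into the recording.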
When it comes to the sound design, what’s been the biggest challenge so far?
Purely from a sound design perspective it’s most likely been the careful spacing of frequencies/timbres. As we’ve mentioned, information is crucial to the player and we’ve found that careful selection of sound types when there are no visuals can make all the difference.
When putting audio/music to visuals, we immediately make connections between the two whether they were intended or not.
With audio-only, your brain still makes those connections but with ideas we already have in our head. This is both our most powerful and dangerous tool. If we do it well, we can help the player create a whole world unique to them but still retain the sense of progression.
If you allow the player to get too lost in their own ideas without enough guidance, there’s a risk that they’ll get lost within the progression too, which doesn’t make for a great game experience.
How far along in the development process are you?
We’ve built a tech demo, which we’ve shared at numerous games shows, but at the moment we’re seeking funding to develop the full game. The whole game is mapped out and scripted; we just need some cash in the bank to pay voice actors and the rest of our development team.
How can people help support development of the game?
Making noise about the game is a fantastic way to help us out. You can find us on Twitter @EnterByzantia and Facebook. If you were feeling really generous, you could also donate to the game’s Kickstarter campaign here.
And to everybody that’s backed, supported or even taken an interest so far (including yourself) – huge thanks!