GDC 2015 has ended, and those who weren’t there have gotten their information solely in 140-character fragments. I wanted to write up a quick post of my experiences at the conference, key takeaways, etc. to give those who weren’t able to be there a (slightly) more comprehensive idea of what transpired.
To be clear, there was no drinking, having fun, or gallivanting, because we’re all professionals and don’t have time for such shenanigans.
The big takeaway from the week as a whole is that everyone is interested in VR and 3d audio, but we’re still figuring out what to do with it.
I arrived Monday night, not to hit Audio Bootcamp on Tuesday, but because I’m lucky enough to work for Sony and have the opportunity to be a part of their Game Technology Conference before GDC, in which I sit in a room with some of the most talented game audio developers in the world and talk about game audio. We heard talks from Evolution Studios, some of the Morpheus team, and others from SCEE and SCEA. Talented folks. Here I am, sitting in a room with guys from Naughty Dog, Sony Santa Monica, Bend Studio, Sony Cambridge, London, Evolution, Sony Japan, and Insomniac. It’s a very humbling experience being surrounded by such incredible, inspiring talent, all the while having great discussions to further inspire and innovate.
I actually cut out of the conference for an hour to catch the beginning of the Audio Bootcamp and Jay Weinland’s talk on weapon design in Destiny. I always enjoy talks where people share some screenshots of their Wwise projects. I find it fascinating how we all use the tool in different ways, sometimes to do similar things and other times to create totally innovative concepts. There were two big “that was cool” moments from Jay’s talk, both things that have been done elsewhere but were elegantly implemented here. First, the notion of silence duckers: a Wwise silence plugin with a 0.1 second length is played alongside an explosion to duck most other sounds by 12dB for that 0.1 seconds, with a 0.2 second recovery time, carving out some space for the explosion without being detectable. Second, their pass-by solution for rockets: they created several sounds of the same length, with the pass by the listener at the same spot in each file. Based on the velocity of the object, they trigger the sound so the midpoint goes by the listener at the right moment, and if it’s too late to start the sample from the beginning, they seek into it (see the sketch below).
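To make that pass-by alignment a little more concrete, here’s a minimal sketch of how the math could work. This is my own illustration under assumptions, not Bungie’s actual code; the struct names, function names, and numbers are all made up for the example.

```cpp
// Hypothetical sketch: align a pre-rendered rocket pass-by with the moment
// the projectile actually passes the listener. Names and numbers are
// illustrative only, not Bungie's implementation.
#include <cstdio>

struct PassbyAsset {
    float lengthSec;   // total length of the pre-rendered file
    float passbySec;   // where the "whoosh past the listener" sits in the file
};

struct PassbyPlayback {
    float startDelaySec; // wait this long before triggering the sound
    float seekSec;       // then start playback this far into the file
};

PassbyPlayback SchedulePassby(const PassbyAsset& asset,
                              float distanceToListener,   // metres
                              float speedTowardListener)  // metres per second
{
    const float timeToPassby = distanceToListener / speedTowardListener;
    const float delay = timeToPassby - asset.passbySec;
    if (delay >= 0.0f) {
        // Plenty of time: wait, then play from the top so the in-file
        // pass-by lands exactly when the rocket passes the listener.
        return {delay, 0.0f};
    }
    // Too late to play from the beginning: trigger now and seek into the
    // file so the pass-by moment still lines up.
    return {0.0f, -delay};
}

int main() {
    PassbyAsset rocket{2.0f, 1.0f}; // 2 s asset, pass-by at the 1 s mark
    PassbyPlayback p = SchedulePassby(rocket, 10.0f, 20.0f); // passes listener in 0.5 s
    std::printf("delay %.2fs, seek %.2fs\n", p.startDelaySec, p.seekSec); // delay 0.00s, seek 0.50s
}
```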
One final comment about Audio Bootcamp: since the beginning it’s been more of an “introduction to game audio” day. This year it seemed far more like an extra day of the audio track. So many interesting speakers and talks on music, technical sound design, VO, etc. Pretty cool that audio has so many compelling topics that it takes more than the three-day conference to cover all the pertinent info.
Wednesday started with a talk from Jim Fowler of Sony about using orchestral colors in interactive music. While it was a bit esoteric for non-music people, Jim did a fantastic job of presenting a great concept for working with music stems: rather than arranging music by instrument or section, arrange it by function within the score. He then showed how he marks up charts for an orchestra so the players can tell what they need to play and when. Really neat concept, and some lovely dry British humor to boot.
I then headed over to my one non-audio talk, by Alistair Hope from Creative Assembly, on building fear in Alien: Isolation. Unfortunately it was only a half-hour talk, but somehow he managed to get through all of his content. The key takeaways here were how they used prototyping to figure out their concept and then stayed true to that concept through further testing. These guys really get the meaning of the term “grounded” in regard to design: something is grounded when it makes sense in the world you are building, rather than the real world. Interestingly, they toyed with making it a 3rd person game at one point, since most other survival horror games have been 3rd person and there was also the conflict at the time with the FPS Aliens: Colonial Marines. In the end they found that 3rd person felt like an Alien game, while 1st person felt like Alien, so they stuck to their guns. The last and most important thing, which should apply to all projects, was their Key Universal Learnings. They seem so self-explanatory, but they are definitely worth reminding ourselves (and our teams) of when working on a project:
- Have a Strong Vision
- Everything should work together to support that vision
- Deliver strongly on the vision
- Believe in what you’re doing
Next up for me was a talk from Harmonix on creating the interactive musical experience of Fantasia. My one wish for this talk was that they had brought a Kinect along: it was cool to see some movies of their prototypes in Max/MSP and Live, but watching the movies of gameplay made me want to see how the 3d motion of the user caused various changes in the music. Still, it seems like everyone who works at Harmonix is a musical tech wizard, and they definitely have a lot of fun developing their gameplay.
Wednesday concluded with a really great talk from Monolith about Shadow of Mordor. Brian Pamintuan, the audio director, along with his audio programmer and staff composer, did a really good job of showing how they set out to maximize emotional resonance in the open world environment of the game. One of the interesting things they did was moving the listener back toward the player to make things more intimate and tie things closer to the player (see the sketch below). Similar to Condemned, they added music stingers to impacts on Uruk Captains. A really nice, subtle touch of integrating music into sound design and increasing intensity. I also really dug the way they took a few music cues for the Nemesis Orcs and made each one unique, reinforcing each Orc’s character by chanting the Orc Chieftain’s name over the music cue. Really slick. Also of note, though they barely touched on it, is how great the mix of this game is. So much going on, and just a fantastic job of keeping everything balanced and sounding good.
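For the curious, here’s a rough idea of what pulling the listener back toward the player can look like. This is my own sketch of the general technique, not Monolith’s code; the blend factor and names are invented for illustration.

```cpp
// Hypothetical sketch: bias the audio listener position toward the player
// character instead of leaving it on the camera, so panning and attenuation
// read from a point closer to the hero.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// t = 0 keeps the listener on the camera; t = 1 snaps it to the player.
Vec3 ComputeListenerPosition(const Vec3& cameraPos, const Vec3& playerPos, float t) {
    return Lerp(cameraPos, playerPos, t);
}

int main() {
    Vec3 camera{0.0f, 2.0f, -6.0f};
    Vec3 player{0.0f, 1.0f,  0.0f};
    Vec3 listener = ComputeListenerPosition(camera, player, 0.75f); // bias toward the player
    std::printf("listener at (%.2f, %.2f, %.2f)\n", listener.x, listener.y, listener.z);
}
```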
Thursday was the (almost painfully) long day. The morning began with Oculus’ Brian Hook and Tom Smurdon talking about their experiences thus far with audio and virtual reality. They had some interesting perspectives on how we need to handle audio for VR, including keeping sounds mono and making very judicious use of music. Gone are the days of simply tagging anim roots with sounds; that gets replaced with a joint-based animation tagging system, since the immediacy of virtual reality means we need greater spatialization of near-field sounds (a rough sketch of the difference follows below). They provided great, early insight into playing with audio in VR games. It also made me very excited and encouraged about the work Sony is doing on the same front. Brian, the Oculus programmer, made a VST plugin of their 3d audio SDK implementation which allows Tom, the audio lead, to easily audition 3d audio sounds before getting them into the game. A nice touch, and one we should (hopefully) expect to see for other 3d audio solutions soon.
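Here’s my own toy illustration of what joint-based tagging means in practice, as opposed to root tagging. None of this is Oculus’ system; the skeleton layout and names are assumptions purely for the example.

```cpp
// Hypothetical sketch: a sound emitter tracks a named joint's world position
// instead of the animation root, so near-field sounds spatialize from the
// hand, foot, or head that actually produced them.
#include <cstdio>
#include <map>
#include <string>

struct Vec3 { float x, y, z; };

struct Skeleton {
    Vec3 rootWorldPos;                          // animation root
    std::map<std::string, Vec3> jointWorldPos;  // world position per joint
};

// Root tagging: every footstep, cloth rustle, and hand foley plays from here.
Vec3 EmitterPosRootTagged(const Skeleton& skel) {
    return skel.rootWorldPos;
}

// Joint tagging: the tag names a joint, and the emitter follows it.
Vec3 EmitterPosJointTagged(const Skeleton& skel, const std::string& joint) {
    auto it = skel.jointWorldPos.find(joint);
    return it != skel.jointWorldPos.end() ? it->second : skel.rootWorldPos;
}

int main() {
    Skeleton skel{{0.0f, 0.0f, 0.0f},
                  {{"hand_r", {0.4f, 1.3f, 0.3f}}, {"foot_l", {-0.2f, 0.0f, 0.1f}}}};
    Vec3 p = EmitterPosJointTagged(skel, "hand_r");
    std::printf("reload foley at (%.1f, %.1f, %.1f)\n", p.x, p.y, p.z);
}
```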
I had plans to troll the expo floor for a bit after the Oculus talk. I tried to see Nuendo’s integration with Wwise 2015.1, but the line was too long, so I started to wander and ran into Mike Niederquell of Sony Santa Monica and Rob Krekel from Naughty Dog. We spent the next hour chatting about a whole gamut of topics, including best-practice uses for the PS4 controller speaker (perhaps a future blog post). Before I knew it, it was time for the next talk: Joanna Orland of SCEE on how to get a team on board with and understanding your audio vision. Using the Book of Spells project, she introduced the concept of creating a common language with the rest of the team so they could provide feedback to audio without being obtuse. In the Book of Spells example, each spell type was given an elemental name derived from natural sounds. If the rest of the team wanted changes to a specific sound, they would use these elemental descriptions to help describe to Joanna the exact aesthetic they were looking for.
Rob Bridgett gave a very compelling talk on adaptive loudness and dynamics in mobile games next. His talk was arguably about much more than mobile games; it easily spills into handhelds and also has implications for consoles. Rob is doing some super cool stuff out at Clockwork Fox. Not only does he apply different mixes and loudness settings via compression based on whether the user has headphones connected, he also uses the device microphone to measure the noise floor of the room to help determine the optimal loudness for the game mix (a rough sketch of the idea follows below). Brilliant adaptive techniques which, given the availability of a microphone, should be used on consoles as well.
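To show the shape of the idea, here’s a minimal sketch of picking a mix profile from headphone detection plus a measured noise floor. The thresholds, profile names, and values are my own guesses for illustration, not anything Rob presented.

```cpp
// Hypothetical sketch: choose a mix profile from playback context.
// Thresholds and values are illustrative assumptions only.
#include <iostream>
#include <string>

struct MixProfile {
    std::string name;
    float outputGainDb;    // overall trim applied to the final mix
    float compressorRatio; // bus compression to keep quiet details audible
};

MixProfile ChooseMixProfile(bool headphonesConnected, float roomNoiseFloorDbSPL) {
    if (headphonesConnected) {
        // Headphones isolate the listener: widest dynamic range, least compression.
        return {"headphones", 0.0f, 1.5f};
    }
    if (roomNoiseFloorDbSPL > 65.0f) {
        // Loud environment (bus, cafe): squash dynamics so dialogue survives.
        return {"speaker-noisy", 3.0f, 4.0f};
    }
    // Quiet room on the device speaker: moderate compression.
    return {"speaker-quiet", 0.0f, 2.0f};
}

int main() {
    MixProfile p = ChooseMixProfile(false, 72.0f); // no headphones, noisy room
    std::cout << "using profile: " << p.name << "\n"; // "speaker-noisy"
}
```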
Next up, Martin Foxton presented a talk on modular sound design using the Frostbite engine. The concept is essentially building sound events, or in-game sound effects, from smaller building blocks of sound that can be reused as necessary, and creating templates for these sounds (a la prefabs in Unity) that carry over various settings from one sound to another. If you’re not already using modular sound design, it’s a great way to achieve variety while still maintaining sane bank sizes. It’s the reason that every time you fire an R2 smoke bolt in inFAMOUS Second Son there are 1,024 possible derivations of the sound that can play (see the sketch below)!
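As a quick illustration of where a number like 1,024 can come from, here’s a tiny sketch of the layered, combinatorial approach. The pool sizes and layer names below are made up (4 × 4 × 8 × 8 = 1,024); they are not the actual inFAMOUS Second Son asset counts.

```cpp
// Hypothetical sketch: build an effect from reusable layer pools; each
// trigger picks one asset per pool, so a handful of small files yields a
// large combinatorial space. Pool contents are invented for illustration.
#include <cstdio>
#include <random>
#include <string>
#include <vector>

int main() {
    std::vector<std::vector<std::string>> layers = {
        {"attack_01", "attack_02", "attack_03", "attack_04"},          // transient
        {"body_01", "body_02", "body_03", "body_04"},                  // mid layer
        {"tail_01", "tail_02", "tail_03", "tail_04",
         "tail_05", "tail_06", "tail_07", "tail_08"},                  // decay
        {"sub_01", "sub_02", "sub_03", "sub_04",
         "sub_05", "sub_06", "sub_07", "sub_08"},                      // low end
    };

    size_t combinations = 1;
    for (const auto& pool : layers) combinations *= pool.size();
    std::printf("possible derivations: %zu\n", combinations);          // 1024

    // On each trigger, play one randomly chosen asset from each pool together.
    std::mt19937 rng(std::random_device{}());
    for (const auto& pool : layers) {
        std::uniform_int_distribution<size_t> pick(0, pool.size() - 1);
        std::printf("play layer: %s\n", pool[pick(rng)].c_str());
    }
}
```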
The final talk of Thursday was a mind-blowing presentation by Zak Belica of Epic and Seth Horowitz of Neuropop, a neuroscience research company. Seth was pretty damn hilarious, and I only wish they had another hour or two to discuss their concepts. The takeaway here was that hearing is one of the fastest of your six senses (yes, there are six; seeing dead people is not one of them, but balance is). Because audio is such a fast sense, especially compared to vision, there are fewer possible illusions we can play on the auditory system. However, there are some neat tricks. For example, the sound of bacon frying makes most non-vegetarians salivate, especially when you show an image of bacon with the sound. Speed the sound up and show a picture of bees, and people think they’re hearing bees and feel a little more uncomfortable. They showed us a few other really neat tricks, including modulating a sound at 18.1 – 22kHz to make the eyeball vibrate and create a discomforting feeling, and using infrasonic distortion panned alternately left and right to create unease. They even explained why fingernails scratching on a blackboard used to give everyone such shivers back when blackboards existed (the envelope of the sound is nearly identical to that of a child screaming in pain). Seriously, we all need to do more research into neuroscience and how we can use it to affect or manipulate audio perception. There’s a lot we can play with there.
By Friday, brains and livers were full, but there were still a couple more good talks to attend. Before these final sessions, I walked the expo floor and was finally able to check out Nuendo’s integration with Wwise. It’s not fully realized yet (you can only import audio files, not folders which you can templatize into containers), but it’s a great start, and I hope other DAWs follow suit. Needless to say, I’m starting to evaluate Nuendo now and hope they come to their senses, realize the opportunity they’re creating for themselves, and offer competitive crossgrades. There are some great forthcoming features in Wwise 2015.1 besides the Nuendo integration as well: calling events from other events, a batch rename tool, profiler enhancements, optimizations, incremental bank building, advanced cache streaming and more. Can’t wait to start playing with it!
The afternoon started with David Collins and Mike Niederquell having an informal discussion about the sound design of Hohokum. It was super awesome that they did a live demo during their talk. Not only are there not enough live demos at GDC, but watching Mike play through some of the levels made me really want to play the game again. It was cool to watch such a fun, light, informal talk and also bask in the joy that is Hohokum. Seriously, if you haven’t experienced it (I would say “play,” but it’s less of a game and more of an audiovisual experience) you should definitely seek it out and give it a go!
A perfect cap on GDC was Dwight Okahara and Herschell Bailey from Insomniac giving a glimpse into the open world sound design of Sunset Overdrive. The key takeaway here was that the audio team helped drive and sculpt the irreverent style of the game by implementing offbeat audio into early “gritty” concepts, which brought the rest of the team around to the more fun style we now know and love as Sunset City. They showed off some of their fun tech, like contextual storefront dialogue and the horde crowd/walla system, and it was fun and refreshing to see such a talented team facing the same frustrations my own team does with streaming, a lack of programming resources, and other annoyances that plague our daily audiocentric lives.
So those were the talks I made it to. Granted, for every audio talk I went to, there was another at the same time. I missed tons of great talks, from Matthew Marteinsson of Klei talking about Early Access, to the PopCap team blowing minds with their work using Wwise and 5 MB of memory to create Peggle Blast on iOS, to Jon Moldover, Brian Schmidt and others talking about turning music games into instruments and more of an interactive experience.
One final note, which I’ve said so many times this past week and hope to never stop repeating: the game audio community is something truly special and wonderful. Hanging out with and meeting so many inspiring men and women and being able to openly share our passion is such a fortunate thing. Of all the people I met, hung out with, joked with, talked shop with, etc., there wasn’t an ounce of ego anywhere. Everyone in the community seems dedicated to each other and hellbent on pushing our entire industry forward together, and I can’t express how lucky I feel to be just a small part of that experience.