Month: March 2015

  • The Sound Design of inFamous Second Son: Video Powers

    Of all the powers in inFamous Second Son, Video powers may have been the most esoteric. I mean smoke at least has an analog in fire (and we used some fire elements in both the visual and sound design), but video? You think video, you may think laser, but we already had a neon power (which was even sometimes referred to as laser). So how the hell did we get something sounding as unique as our video powers without treading on the other power sets?

    Part of the answer, interestingly, lies in how the power set itself was initially conveyed to the team. Video power was actually called “TV power” internally for most of production. Heaven’s Hellfire, the video game that Eugene, the video power conduit, is obsessed with, was initially a TV show. We realized after many months that it made more sense to make it a video game instead, which would open up more avenues for us to play around with in the gameplay (such as the mildly retro boss battle).

    But we still had “TV powers” stuck in our brains, and when Andy and I began brainstorming about how to make sounds that were powerful, unique, and “TV like,” we started thinking about televisions. We stalked thrift stores around town hoping we’d come across some old 1970s vacuum tube or cathode ray tube televisions to take apart and record. We failed there, but Andy eventually came across a couple of old CRT TV/VCR combos. Double obsolete points! We brought these into the studio and proceeded to record all kinds of sounds with an array of microphones, from shotguns to contact mics to crappy telephone microphones, which did an amazing job of capturing bizarre electromagnetic interference around the power supply and other surfaces. We recorded all possible permutations of power-on and power-off sounds and even got the VCR mechanisms to give us some very bizarre whines and hums. We also recorded the Sucker Punch MAME arcade cabinet, which has a very old CRT monitor in it with tons of wires exposed, as well as a shortwave radio I’ve had for years but never really needed for a video game sound before.

    We recorded all of these sounds at 192kHz, and the frequency content of the CRT monitor recordings at the higher frequencies was pretty astounding. While we had to remove the >20kHz content from some of them to save our ears and speakers, Andy also did some pitch shifting to play around with these normally inaudible sounds, and they became part of the video power palette.
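    If you’ve never played with ultrasonic content from high sample rate recordings, here is a minimal sketch of one way to drop it into the audible range (not our actual workflow, and the filename is hypothetical; it just re-tags the sample rate so playback runs slower and lower):

```python
# A minimal sketch of dropping ultrasonic content from a 192 kHz recording
# into the audible range. Assumes a hypothetical mono file "crt_whine_192k.wav";
# requires the soundfile package (pip install soundfile).
import soundfile as sf

data, sr = sf.read("crt_whine_192k.wav")  # e.g. sr == 192000
assert sr == 192000

# Writing the same samples with a 48 kHz header plays everything back four
# times slower, shifting the pitch down two octaves, so content that lived
# at 20-80 kHz now sits at 5-20 kHz where we can actually hear it.
sf.write("crt_whine_pitched_down.wav", data, 48000)
```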

    A few words on the telephone microphones we used: they are cheap and really neat for recording electromagnetic interference. Although Radio Shack may be dead and gone now, you can still get them online. It’s pretty neat the wide array of sounds you can get from one of them by waving it near essentially any power source, from a monitor to a computer to plugs. Basically any electronic device will give you some interesting content. For a lot of the TV powers, Andy took various EMF sounds and morphed them together using Zynaptiq’s Morph plugin.
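    For the curious, here is a crude toy version of the morphing idea. To be clear, this is NOT how Zynaptiq’s Morph works internally; it’s just a simple STFT magnitude crossfade under assumed mono input files with hypothetical names, to show what “blending two EMF recordings” can mean in the most basic sense:

```python
# Toy spectral "morph": interpolate magnitudes of two recordings' STFTs.
# Assumes two mono files of any length; requires numpy, scipy, soundfile.
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

a, sr = sf.read("emf_monitor.wav")        # hypothetical EMF recordings
b, _ = sf.read("emf_power_supply.wav")
if a.ndim > 1: a = a.mean(axis=1)         # force mono
if b.ndim > 1: b = b.mean(axis=1)

n = min(len(a), len(b))                   # trim to the shorter file
a, b = a[:n], b[:n]

_, _, A = stft(a, fs=sr, nperseg=2048)
_, _, B = stft(b, fs=sr, nperseg=2048)

mix = 0.5                                 # 0 = all A, 1 = all B
# Interpolate magnitudes; keep the phase of recording A for simplicity
mag = (1 - mix) * np.abs(A) + mix * np.abs(B)
_, out = istft(mag * np.exp(1j * np.angle(A)), fs=sr, nperseg=2048)

sf.write("emf_morphed.wav", out, sr)
```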

    So, similar to our other power sets, below is a video showing some of our field recording as well as the final in-game sounds. What’s different here is that the video powers were finalized later in the project, and we were so focused on finishing the game that we did not make a fancy, fun video for the team. So it may not be as fun as the previous videos, but it still shows what we recorded and how it ended up sounding.

  • The Sound Design of inFamous Second Son: Concrete Powers

    It’s hard to believe that inFamous Second Son is a year old already! I’ve been completely lagging on finishing up these posts about the powers design for the game, so let me use this opportunity to make good and present the first of the final two parts of this series. I will hopefully get around to posting my presentation on the systems design for the game soon as well, so those who haven’t heard/seen it can have the information available to them. Anyway, on to the magic and mystery of concrete!

    For those who haven’t played or seen inFamous Second Son: you play a guy who gets superpowers while battling an authoritarian government agency called the DUP, whose soldiers are all imbued with concrete superpowers by their leader Brooke Augustine (as normally happens with government agencies).

    The biggest challenge for us with concrete was how to make it sound unique. It’s just rocks and stone, right? We’ve all heard countless variations on rock sounds in everything from impacts to destruction and rubble/debris sounds. We needed to figure out ways to make our sounds stand out as unique, while also conveying the power of the enemies in the game who used concrete.

    The powers ran the gamut from concrete grenades to spawning concrete shields to launching off spires of concrete and forming concrete balconies on walls. In short, there were tons of concrete objects being created and broken in the world. Not only did we need these to sound unique and “powered,” but they also had to sound completely distinct from all the “normal” concrete in the world you could destroy or collide other objects with. It was a huge challenge, but one that Andy Martin was definitely up for.

    The place to start, naturally, was by buying a bunch of concrete. I looked into how concrete is made, and it’s usually just a mixture of water, an aggregate like sand or gravel, and Portland cement (named after a type of stone used in the UK, not the sleepy hamlet of the Pacific Northwest of the US). While the thought of mixing up my own concrete appealed to my construction-worker-wannabe side, we weren’t at a point in the project where we had limitless time to experiment. So we did the next best thing: went to Home Depot. Andy and I both made trips to the hardware store and bought all kinds of concrete and stone, from paver stones (which were often too resonant) to clay bricks, cinder blocks, and more. A building across the street from my house was being demolished, and I noticed some particularly large chunks of both asphalt and concrete sitting on the other side of the fence. I waited until nightfall, donned my ninja costume (really just a bathrobe with a scarf tied around my head), and absconded with the almost-final resources we would need to make our concrete powers come to life.

    From here, Andy began to run wild and experiment with all kinds of torture he could enact on our various pieces of concrete. From scraping everything from metal disks to binder clips against the slabs, to resonating a jew’s harp against them, to, yes, crushing, beating, and destroying them, he created an elaborate and unique palette of concrete sounds. As a few of the characters in the game developed, their powers also evolved. Some characters now had “beams” of concrete they would shoot out to shield allies, another burrowed underground like Bugs Bunny on his way to Albuquerque, and another sat atop a giant swirling tornado of concrete chunks. We needed something unique here, so I devised a way to record a constantly moving collection of some of the concrete chunks we had broken (and wrote up a blog post about it here).

    Andy’s wizardry, both in recording these sounds and in shaping them in Pro Tools and Wwise into the layers of the concrete powers, was top notch as always, and now it was time to show the team what we’d been doing (and that our jobs are more fun than theirs). Below is another Sonic Equation of sorts, which we showed at a company meeting, demonstrating some of the recording techniques used to make the concrete powers of Second Son:

    Thanks again for reading. I hope to get a write-up of the video powers (which naturally entailed a lot of fun creative recording and manipulation) done next week in time for the proper anniversary of Second Son’s release. Stay tuned!

  • GDC 2015 recap

    GDC 2015 has ended, and those who weren’t there have gotten their information solely in 140-character fragments. I wanted to write up a quick post of my experiences at the conference, key takeaways, etc. to give those who weren’t able to be there a (slightly) more comprehensive idea of what transpired.

    To be clear, there was no drinking, having fun, or gallivanting, because we’re all professionals and don’t have time for such shenanigans.

    The big takeaway from the week as a whole is that everyone is interested in VR and 3D audio, but we’re still figuring out what to do with them.

    I arrived Monday night, not to hit the Audio Bootcamp on Tuesday, but because I’m lucky enough to work for Sony and have the opportunity to be a part of their Game Technology Conference before GDC, in which I sit in a room with some of the most talented game audio developers in the world and talk about game audio. We heard talks from Evolution Studios, some of the Morpheus team, and others from SCEE and SCEA. Talented folks. Here I am sitting in a room with guys from Naughty Dog, Sony Santa Monica, Bend Studio, Sony Cambridge, London, Evolution, Sony Japan, and Insomniac. It’s a very humbling experience being surrounded by such incredible, inspiring talent, all the while having great discussions to further inspire and innovate.

    I actually cut out of the conference for an hour to catch the beginning of the Audio Bootcamp and Jay Weinland’s talk on weapon design in Destiny. I always enjoy talks where people share screenshots of their Wwise projects. I find it fascinating how we all use the tool in different ways, sometimes to do similar things and other times to create totally innovative concepts. There were two big “that was cool” moments from Jay’s talk, which have been done elsewhere but were elegantly implemented. First, the notion of silence duckers: a Wwise silence plugin 0.1 seconds long, played with an explosion, ducks most other sounds by 12dB for that 0.1 seconds with a 0.2-second recovery time, carving out space for the explosion without being detectable. Second, their passby solution for rockets: they created several sounds of the same length, with the passby of the listener at the same spot in each file. Based on the velocity of the object, they trigger the sound so that its midpoint lines up with the moment the rocket passes the listener, and if it’s too late to start the sample from the beginning, they seek into it.
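    Here is my own rough reconstruction of that passby scheduling logic, just to make the idea concrete (this is not Bungie’s code, and the asset length and numbers are made up for illustration):

```python
# Sketch of the rocket-passby idea: every passby asset is the same length
# with the "whoosh" baked into its midpoint, so we line that midpoint up
# with the moment the rocket reaches the listener, seeking in if we're late.
SAMPLE_LENGTH = 2.0           # seconds; assumed asset length
MIDPOINT = SAMPLE_LENGTH / 2  # the passby moment baked into every asset

def schedule_passby(distance_to_listener, velocity, now):
    """Return (start_time, seek_offset_seconds) for the passby sample."""
    time_to_listener = distance_to_listener / velocity   # when the rocket passes
    ideal_start = now + time_to_listener - MIDPOINT      # start so midpoint lines up
    if ideal_start >= now:
        return ideal_start, 0.0   # enough time: play from the top
    # Too late to start from the beginning: start now and seek into the file
    return now, now - ideal_start

# Example: a rocket 30 m away moving 60 m/s passes in 0.5 s, so we start
# immediately and seek 0.5 s into the 2 s asset to land the midpoint on the passby.
print(schedule_passby(distance_to_listener=30.0, velocity=60.0, now=0.0))
```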

    One final comment about Audio Bootcamp: since the beginning it’s been more of an “introduction to game audio” day. This year it seemed far more like an extra day of the audio track. So many interesting speakers and talks on music, technical sound design, VO, etc. Pretty cool that audio has so many compelling topics that it takes more than the three-day conference to cover all the pertinent info.

    Wednesday started with a talk from Jim Fowler of Sony about using orchestral colors in interactive music. While it was a bit esoteric for non-music people, Jim did a fantastic job of presenting a great concept with regard to working with music stems: rather than arrange music by instrument or section, arrange it by function within the score. He then showed how he marks up charts for an orchestra so they can tell what they need to play when. Really neat concept, and some lovely dry, British humor to boot.

    I then headed over to my one non-audio talk, by Alistair Hope from Creative Assembly, on building fear in Alien Isolation. Unfortunately it was only a half-hour talk, but somehow he managed to get through all of his content. The key takeaways here were how they used prototyping to figure out their concept and then stayed true to that concept through further testing. These guys really get the meaning of the term “grounded” in regard to design: something is grounded when it makes sense in the world you are building, rather than the real world. Interestingly, they toyed with making it a third-person game at one point, since most other survival horror games have been third-person, and there was also the conflict at the time with the FPS Aliens: Colonial Marines. In the end they found that third person felt like an Alien game, while first person felt like Alien, so they stuck to their guns. The last, most important thing, which should apply to all projects, was their Key Universal Learnings. They seem so self-explanatory, but they’re definitely worth reminding ourselves (and our teams) of when working on a project:

    • Have a Strong Vision
    • Everything should work together to support that vision
    • Deliver strongly on the vision
    • Believe in what you’re doing

    Next up for me was a talk from Harmonix on creating the interactive musical experience of Fantasia. My one wish for this talk was that they had brought a Kinect along: it was cool to see videos of their prototypes in Max/MSP and Live, but watching the gameplay videos made me want to see how the 3D motion of the user caused various changes in the music. Still, it seems like everyone who works at Harmonix is a musical tech wizard, and they definitely have a lot of fun developing their gameplay.

    Wednesday concluded with a talk from Monolith about Shadow of Mordor, which was really great. Brian Pimantuan, the audio director, along with his programmer and staff composer, did a really good job of showing how they set out to maximize emotional resonance in the open world environment of the game. Some of the interesting things they did: they moved the listener back to the player to make things more intimate and tie things closer to the player, and, similar to Condemned, they added music stingers to impacts on Uruk Captains. Really nice, subtle touch of integrating music into sound design and increasing intensity. I also really dug the way they took a few cues for the Nemesis Orcs and made each one unique, reinforcing each Orc’s character by chanting the Orc Chieftain’s name over the music cue. Really slick. Also of note, though they barely touched on it, is how great the mix of this game is. So much going on, and just a fantastic job of keeping everything balanced and sounding good.

    Thursday was the (almost painfully) long day. The morning began with Oculus’ Brian Hook and Tom Smurdon talking about their experiences thus far with audio and virtual reality. They had some interesting perspectives on how we need to handle audio for VR, including using all mono sounds and a very judicious use of music. Gone are the days of simply tagging anim roots with sounds, to be replaced with a joint-based animation tagging system, since the immediacy of virtual reality means we need greater spatialization of near-field sounds. They provided a great early insight into playing with audio in VR games. It also made me very excited and encouraged about the work Sony is doing on the same front. Brian Hook, the Oculus programmer, made a VST plugin of their 3D audio SDK implementation, which allows Tom, the audio lead, to easily audition 3D audio sounds before getting them in the game. A nice touch, and one we should (hopefully) expect to see for other 3D audio solutions soon.
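    To illustrate what joint-based tagging buys you (this is my own toy interpretation, not Oculus’ actual system; the pose data and joint names are hypothetical), the difference is simply where the emitter’s world position comes from:

```python
# Root tagging vs. joint tagging: the emitter position is looked up from a
# named joint's world transform instead of the animation root, which matters
# far more in VR where near-field sounds sit centimeters from your ears.
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def emitter_position(joints: Dict[str, Vec3], tag_joint: str = "root") -> Vec3:
    """Return the world position to spatialize the tagged sound at."""
    return joints.get(tag_joint, joints["root"])

# A hypothetical pose: a reload sound tagged to "hand_r" rather than "root"
# will pan and attenuate correctly as the hand moves around your head.
pose = {"root": (0.0, 0.0, 0.0), "hand_r": (0.3, 1.4, 0.2), "head": (0.0, 1.7, 0.0)}
print(emitter_position(pose, "root"))     # old way: (0.0, 0.0, 0.0)
print(emitter_position(pose, "hand_r"))   # joint-tagged: (0.3, 1.4, 0.2)
```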

    I had plans to troll the expo floor for a bit after the Oculus talk. I tried to see Nuendo’s integration with Wwise 2015.1, but the line was too long, so I started to wander and ran into Mike Niederquell of Sony Santa Monica and Rob Krekel from Naughty Dog. We spent the next hour chatting about a gamut of topics, including best-practice uses for the PS4 controller speaker (perhaps a future blog post). Before I knew it, it was time for the next talk: Joanna Orland of SCEE on how to get a team on board with and understanding your audio vision. Using the Book of Spells project, she introduced the concept of creating a common language with the rest of the team so they could provide feedback to audio without being obtuse. In the Book of Spells example, each spell type was given an elemental name derived from natural sounds. If the rest of the team wanted changes to a specific sound, they would use these elemental descriptions to help describe to Joanna the exact aesthetic they were looking for.

    Rob Bridgett gave a very compelling talk on adaptive loudness and dynamics in mobile games next. His talk was arguably about much more than mobile games: it easily spills into handhelds and also has implications for consoles. Rob is doing some super cool stuff out at Clockwork Fox. Not only does he deliver different mixes and loudness settings via compression based on whether the user has headphones connected or not, he also uses the device microphone to measure the noise floor of the room to help determine the optimal loudness for the game mix. Brilliant adaptive techniques which, given the availability of a microphone, should be used on consoles as well.
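    Here is a hedged sketch of how the noise-floor half of that idea might look in practice. This is my reconstruction, not Rob’s implementation, and the dB thresholds and preset names are entirely made up for illustration:

```python
# Estimate the room's noise floor from a short mic capture and pick a mix
# preset accordingly. Requires numpy; the thresholds/presets are illustrative.
import numpy as np

def noise_floor_dbfs(mic_samples: np.ndarray) -> float:
    """RMS level of a mic capture, in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(mic_samples)))
    return 20.0 * np.log10(max(rms, 1e-10))

def choose_mix_preset(mic_samples: np.ndarray, headphones: bool) -> str:
    if headphones:
        return "headphone_mix"          # wide dynamics, lower overall loudness
    floor = noise_floor_dbfs(mic_samples)
    if floor > -30.0:                   # noisy room (bus, cafe)
        return "loud_compressed_mix"
    if floor > -50.0:
        return "default_speaker_mix"
    return "quiet_room_mix"             # quiet room: let the dynamics breathe

# Example with synthetic "room noise" at roughly -40 dBFS
noise = np.random.randn(48000).astype(np.float32) * 0.01
print(choose_mix_preset(noise, headphones=False))
```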

    Next up, Martin Foxton presented a talk on modular sound design using the Frostbite engine. His concept was essentially the notion of building sound events, or in-game sound effects, from smaller building blocks of sound which can be reused as necessary, and also creating templates for these sounds a la prefabs in Unity, where you can create a script to carry over various settings for a sound. If you’re not already using modular sound design, it’s a great way to achieve variety while still maintaining sane bank sizes. It’s the reason that every time you fire an R2 smoke bolt in inFAMOUS Second Son, there are 1024 possible derivations of the sound that can play!
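    To show where a number like 1024 comes from, here is a toy illustration of layered variations (the layer names and pool sizes are hypothetical, not our actual Second Son setup):

```python
# Modular layers multiply into variations: pick one asset per layer at fire
# time, and small pools combine into a large number of composite sounds.
import random

# e.g. 4 x 8 x 8 x 4 = 1024 possible composite smoke-bolt sounds
layers = {
    "transient": [f"bolt_transient_{i:02d}" for i in range(4)],
    "body":      [f"bolt_body_{i:02d}"      for i in range(8)],
    "tail":      [f"bolt_tail_{i:02d}"      for i in range(8)],
    "sweetener": [f"bolt_sweetener_{i:02d}" for i in range(4)],
}

def variation_count(layers):
    total = 1
    for pool in layers.values():
        total *= len(pool)
    return total

def build_event(layers):
    """Assemble one composite sound by picking one asset from each layer."""
    return {layer: random.choice(pool) for layer, pool in layers.items()}

print(variation_count(layers))   # 1024
print(build_event(layers))
```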

    The final talk of Thursday was a mind-blowing presentation by Zak Belica of Epic and Seth Horowitz of Neuropop, a neuroscience research company. Seth was pretty damn hilarious, and I only wish they had another hour or two to discuss their concepts. The takeaway here was that hearing is one of the fastest of your six senses (yes, there are six; seeing dead people is not one of them, but balance is). Because audio is such a fast sense, especially compared to vision, there are fewer possible illusions we can play on the auditory sense. However, there are some neat tricks: for example, the sound of bacon frying makes most non-vegetarians salivate, especially when you show an image of bacon with the sound. Speed the sound up and show a picture of bees, and people think they’re hearing bees and feel a little more uncomfortable. They showed us a few other really neat tricks, including modulating a sound at 18.1–22kHz to make the eyeball vibrate and create a discomforting feeling, using infrasonic distortion panned alternately left and right to create unease, and even why fingernails scratching on a blackboard used to give everyone such shivers back when blackboards existed (the envelope of the sound is identical to a child screaming in pain). Seriously, we all need to do more research into neuroscience and how it affects or manipulates audio perception. There’s a lot we can play with there.

    By Friday, brains and livers were full, but there were still a couple more good talks to attend. Before these final sessions, I walked the expo floor and was finally able to check out Nuendo’s integration with Wwise. It’s not fully realized yet (you can only import audio files, not folders which you can templatize into containers), but it’s a great start, and one I hope other DAWs will follow. Needless to say, I’m starting to evaluate Nuendo now and hope they come to their senses, realize the opportunity they’re creating for themselves, and offer competitive crossgrades. There are some great forthcoming features in Wwise 2015 besides the Nuendo integration as well: calling events from other events, a batch rename tool, profiler enhancements, optimizations, incremental bank building, advanced cache streaming, and more. Can’t wait to start playing with it!

    The afternoon started with David Collins and Mike Niederquell having an informal discussion about the sound design of Hohokum. Super awesome that they did a live demo during their talk. Not only are there not enough live demos at GDC, but watching Mike play through some of the levels made me really want to play the game again. It was cool to watch such a fun, light, informal talk and also bask in the joy that is Hohokum. Seriously, if you haven’t experienced it (I would say “play,” but it’s less of a game and more of an audiovisual experience), you should definitely seek it out and give it a go!

    A perfect cap on GDC was Dwight Okahara and Herschell Bailey from Insomniac giving a glimpse into the open world sound design of Sunset Overdrive. Key takeaways here were that the audio team helped drive and sculpt the irreverent style of the game by implementing offbeat audio into early “gritty” concepts which brought the rest of the team around to the more fun style we now know and love as Sunset City. They showed off some of their fun tech like contextual storefront dialogue and the horde crowd/walla system and it was fun and refreshing to see such a talented team facing the same frustrations my own team does with streaming, lack of programming resources and other annoyances that plague our daily audiocentric lives.

    So those were the talks I made it to. Granted, for every audio talk I went to, there was another at the same time. I missed tons of great talks, from Matthew Marteinsson of Klei talking about Early Access, to the PopCap team blowing minds with their work creating Peggle Blast on iOS with Wwise and 5MB of memory, to Jon Moldover, Brian Schmidt, and others talking about turning music games into instruments and more of an interactive experience.

    One final note, which I’ve said so many times this past week and hope to never stop repeating: the game audio community is something truly special and wonderful. Hanging out with and meeting so many inspiring men and women and being able to openly share our passion is such a fortunate thing. Of all the people I met, hung out with, joked with, talked shop with, etc., there wasn’t an ounce of ego anywhere. Everyone in the community seems dedicated to each other and hellbent on pushing our entire industry forward together, and I can’t express how lucky I feel to be just a small part of that experience.