Month: March 2013

  • Modulation in Wwise

    One of Wwise’s few shortcomings is its current lack of support for LFOs. Modulation can be a godsend for making static looping sounds feel far more dynamic and alive (an example using volume, pitch, and LPF is here). I want to outline two different ways you can “cheat” modulation in Wwise using some technical trickery.

    1) Modulation RTPC:
    This is the simpler method, although somewhat limited in its dynamism. Create an RTPC linked to a global timer that, once it reaches its max, resets itself to 0. I’ve called mine “Modulation,” with a min of 0 and a max of 100 (the units being seconds in the engine). I can then draw an RTPC curve for Modulation on any sound I want and affect its pitch, volume, LPF, etc. over time (friendly reminder: subtle pitch changes are WAY more appealing than extreme ones). The most important thing is to make your values at 0 and 100 identical, so there’s no pop when the loop wraps. The obvious drawback to this solution is that the modulation is uniform, with no possibility of change per cycle. However, with a 100-second loop you have a fair bit of time to build a dynamic modulation curve whose looping won’t be easily detected by a user. A minimal engine-side sketch follows.
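
    Here’s what the engine-side timer might look like, as a minimal sketch assuming the Wwise SDK and a per-frame update hook of your own (“Modulation” matches the RTPC name above):

        // Minimal sketch: drive the global "Modulation" RTPC from a wrapping timer.
        #include <AK/SoundEngine/Common/AkSoundEngine.h>

        static float g_modulationTime = 0.0f; // seconds, wraps at 100

        void UpdateModulationRTPC(float deltaSeconds)
        {
            g_modulationTime += deltaSeconds;
            if (g_modulationTime >= 100.0f)
                g_modulationTime -= 100.0f; // wrap rather than snap to 0 to avoid drift

            // Global scope (no game object), so every sound with a Modulation curve responds.
            AK::SoundEngine::SetRTPCValue("Modulation", g_modulationTime);
        }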

    2) Using a tremolo effect as an LFO:
    This solution comes from Steven Grimley-Taylor, who posted about it on the Wwise forums, and it’s nothing short of a brilliant use of the tools available in Wwise to make an LFO a reality. It also has some limitations, which we’ll discuss in a bit. The basic gist is to create a white or pink noise sound generator and sidechain it to a tremolo effect. As Steven explains it:

    “Create a Sound SFX object with a Tone Generator source set to White Noise. Then add a tremolo plugin, and then a metering plugin, which generates an RTPC.

    [Image: Wwise_Mod_LFO_fx_layout]

    The tremolo becomes your LFO control and you can map it anywhere you want. It becomes unstable at faster speeds, but then this is probably not the best solution for audio-rate FM. For normal ‘modulation’-speed LFOs it works a treat.

    [Images: Wwise_Mod_LFO_tremolo, Wwise_Mod_LFO_rtpc]

    You can also go into modular synth territory by creating another of these LFOs and then modulating the frequency of the first LFO with the amplitude of the second.

    Oh, and the LFO audio should be routed to a muted bus; you don’t actually want to hear them, just generate a control RTPC.”

    [Image: Wwise_Mod_LFO_bus]

    I’m currently using a couple of these in my project and they work great. The only drawback is that running the tone generator plus the tremolo per LFO isn’t super cheap (~2–3% of CPU), and the more modulators you add, the more expensive it gets. But you can drive the parameters of the LFO from other RTPCs, opening up enormous avenues for creativity and evolving sounds; a sketch of that is below. It’s a really nice way to spice up some bland looping sounds and give them a bit more life.
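
    For instance, you could steer the LFO from game code. A hedged sketch, assuming you’ve bound the tremolo’s frequency in the authoring tool to a game parameter I’m calling “LFO_Rate” (a hypothetical name):

        // Sketch: map a gameplay value (here a hypothetical 0-10 m/s player speed)
        // onto an LFO rate of 0.25-4 Hz, so the modulation speeds up with motion.
        #include <AK/SoundEngine/Common/AkSoundEngine.h>

        void UpdateLFORate(float playerSpeed)
        {
            float t = playerSpeed / 10.0f;
            if (t < 0.0f) t = 0.0f;
            if (t > 1.0f) t = 1.0f;
            const float rateHz = 0.25f + t * (4.0f - 0.25f);

            AK::SoundEngine::SetRTPCValue("LFO_Rate", rateHz);
        }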

  • The future of next-gen sound blah blah blah

    Sorry, I couldn’t resist being a little snarky as I typed that title out. Every time there’s a new generation of consoles on the horizon, words begin to flow about what “next-gen” means in relation to (insert your discipline here). For me, there are two interrelated aspects we can look at to push the envelope further: technological and structural. Technological advances are those made possible by the capabilities of the hardware and how it interacts with software. CD-ROMs meant we could start streaming Red Book audio, tons of voiceover, and video. The Xbox’s DSP chip gave us a low-pass filter and reverbs built into the system. The PS3’s SPU core architecture gave us an entire core (or more, I suppose, if you sufficiently bribed your programmers) to do with whatever we wanted: create custom DSP, or utilize FMOD or Wwise and go crazy with delays, reverbs, EQs, etc. The PS4’s 8 GB of memory means, given the time to load data into RAM, we have a near-limitless reservoir for game resources. OK, so “near limitless” is probably an exaggeration, but we’re talking a 16x increase over the last generation!

    By structural, I mean how the technology creates new ways for us to deliver a sonic experience. Mobile and web development have democratized game development significantly, and with them, the rise of Unity as a viable engine has brought audio middleware like FMOD and Wwise along for the ride. Even Wwise, which started out six years ago as a PC-, 360-, and PS3-only platform, now supports iOS, Android, and Windows Phone, with direct integration into Unity. With the democratization of tools comes the possibility of using them in novel ways. One such example is adaptive mixing. In console land we’ve been doing this for years (for a great example, see Rob Bridgett’s discussions of mixing Scarface: The World is Yours for PS2 back in 2006), but it’s only now becoming possible across all platforms. And with the potential for the Ouya, Green Throttle, Steam Box, Apple TV, and other small Android-, iOS-, and PC-based home consoles in the coming years, we should see “next-gen” come to mean pushing content to be more impactful no matter the platform.

    While I think the structural aspect has far more implications for sound design as a whole, much of what becomes possible comes through technology. I want to touch on one specific technology in this post: procedural synthesis and design. Procedural synthesis is nothing new. Guys like Nicolas Fournel and Dylan Menzies have been doing it for years. Audiokinetic has had wind and impact generation tools in Wwise for several years now. AudioGaming’s wind and rain tools are integrated into the new version of FMOD Studio and will be making their way into Wwise soon (not to mention their latest plug-ins for procedural generation of footsteps, vehicle engine models, and fire). And there have been countless papers and demonstration videos showing off better- and better-sounding procedural algorithms, from the aforementioned to elements like fabric and full physics simulations. To make the core idea concrete, there’s a toy sketch after this paragraph.
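
    For anyone who hasn’t poked at this stuff, here’s a toy illustration of the basic idea behind something like procedural wind (my own simplification, not any vendor’s algorithm): white noise run through a one-pole low-pass filter whose cutoff drifts slowly, so the “gusts” come from modulation rather than from recorded material.

        // Toy procedural wind: slowly modulated filtered noise.
        #include <cmath>
        #include <cstdlib>
        #include <vector>

        std::vector<float> GenerateWind(int numSamples, float sampleRate)
        {
            std::vector<float> out(numSamples);
            float lp = 0.0f;        // one-pole low-pass state
            float gustPhase = 0.0f; // slow LFO driving the cutoff
            for (int i = 0; i < numSamples; ++i)
            {
                float noise = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
                gustPhase += 0.2f / sampleRate; // ~0.2 Hz gust cycle
                float cutoff = 400.0f + 300.0f * std::sin(6.2831853f * gustPhase);
                float a = 1.0f - std::exp(-6.2831853f * cutoff / sampleRate);
                lp += a * (noise - lp); // one-pole low-pass step
                out[i] = lp;
            }
            return out;
        }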

    When developing a game, we often take a cross-platform approach because it’s the easiest way to maximize profit for minimal cost: put your product out on every possible platform and you get a multiplier effect on how many people may play/buy your game, ideally with minimal additional effort on your part. Hopefully in the next year, if all these new hardware devices do come out, we’ll be at a point where we have enough processing power to utilize procedural synthesis in games across all platforms, and not just minimally on 360, PS3, and PC. Having these techniques be not just available but practical across all platforms may be the shot in the arm procedural synthesis needs to finally bridge the gap from “talked about incessantly” to “the here and now.”

    These two elements, realtime dynamic mixing and procedural synthesis, while nothing new, may be the holy grail of audio development for games in the near future. I’m eagerly looking forward to seeing how things shape up over the next few years, to what others are doing, and to exploring these waters further myself.

  • …introducing what I’ve been doing for the past year!

    As most people know, Sony introduced the PlayStation 4 last week, which means the veils of silence and secrecy have been lifted and I can FINALLY divulge the project and platform I’m working on. I’m really excited about the project and quite happy with how it’s been shaping up. Below is our debut trailer, for which I worked with our team down in Foster City on the sound design and mix. I hope to share some interesting tidbits about the audio systems I’ve designed for inFamous: Second Son over the coming months, so check back here occasionally, follow me on Twitter, shoot me an email, give me a call, send me a pigeon, etc. But for now, here’s the world’s first peek at inFamous: Second Son, captured in-engine (and using lots of in-game sounds, although we did the sound pass in post simply because it was a lot easier to do so).

    [ylwm_vimeo]60270081[/ylwm_vimeo]