Protected: The Sound Design of Ghost of Tsushima: the Furin (windchime)


Wwise Tour 2020: Sonic Storytelling

I finally updated my website’s backend after being on PHP v5.6 (current version is something like 7.4) for far too long. The good news is that you can now access the site via HTTPS for better security. With that done, I hope to post some articles soon about some of the specific sound design and recording we did on Ghost of Tsushima.

For now, here’s a talk we did in December about Sonic Storytelling and exploring some of the ways we use audio to tell our stories. It was a lot of fun to share some of the world with the audio community:


Ghost of Tsushima Shipped!!!!

Apologies to the one and a half people who may actually check this blog for new content. Needless to say, I’ve been busy for the past few years working on Ghost of Tsushima. I’m SO excited that it has shipped, is in players’ hands, and that they seem to be loving it. It’s been especially humbling to see people calling out audio as something they love. We knew the music and voice acting were pretty top notch, but to see so many comments about the sound design and even the mix has been such a treat. I’ve got a handful of posts and videos I’ll be sharing over the next few months going into some of the details of the sound design for the game, some of our recording practices, and other fun stories from the field.

For now, here’s an interview I did with Jennifer Walden for A Sound Effect that talks a little bit about some of the fun we had creating sounds for this special project:


The PlayStation 1 as you never heard it

The PlayStation 1 turned 25 this month, and I was just going through some old recordings when I stumbled upon this gem (warning: it’s a slow build, but it gets loud!):

For reference, here’s what that SHOULD sound like:

I have no idea how our PlayStation made this sound, but here’s the story:

Travel back with me, if you will, to the year 1999. Y2K mania was sweeping the world as QA managers stockpiled canned goods and weapons (at least mine did). Cellphones were slowly creeping into people’s pockets but were mainly used for talking. And the hot tech device of the day, the Palm Pilot, was transforming the way we wrote notes into something stylus-friendly (R.I.P.).

I was a recent college graduate living in a house of fellow humans who were trying to find their way in post-graduate life. One of my roommates had bought a PlayStation back in 1995 or 1996, and it lived a rough life bouncing from house to house. Needless to say, by 1999 it was on its last legs. I remember popping in a disc sometimes, hearing the platter spin, and wondering if it was melting. Sometimes it would work, sometimes it wouldn’t.

Often, when it wasn’t going to work, it went through its boot process at a glacial pace. The startup sound (and visuals) would play at the crawling speed you hear above, turning a 15-second synth stinger into a two-and-a-half-minute opus that Vangelis himself might have considered for his latest album.

I thought it was the coolest thing and, as a newly minted sound designer, thought it would be a really fun thing to capture. I truthfully can’t remember how I recorded it, but with the amount of hiss on the original recording, I’m guessing I used a cassette recorder pointed directly at the TV speaker. Hi-tech solution, I know.

Obviously, I’ve never used the sound because it’s property of Sony (who is coincidentally now my boss), but I’ve kept the sound around because I love it as an example of what we do in our craft: take some sound and via trickery, design, tools, techniques, and happy accidents, turn it into something wholly new and unique.

Managing Quality of Life (as a Manager)

(this post originally appeared on designingsound.org)

© IWM (Art.IWM ART LD 2850)

We all work in creative fields doing creative work with creative people. Along with that creativity comes some semblance, great or small, of technology and the requirement to guesstimate how long each task will take. Making an entertainment product, no matter the medium, involves elements of reinventing the wheel and huge swaths of unknown creative territory. The opportunities for innovation are often nearly impossible to schedule accurately down to the day or hour. Couple that with the fact that most professional creative endeavors entail dozens or hundreds of other people doing work that will impact our own. For these reasons, we need to accept that some crunch may be inevitable during a project’s lifespan, BUT crunch does not have to be an unending slog of 18-hour days, stale pizza, and sleeping under your desk.

As a manager, it is my job to protect my employees as best I can from anything in the workplace that may negatively affect their quality of life. It is also my (unspoken) job to protect my own sanity. Below I will sketch out the means I have used to protect my employees and myself, and how I have learned to mitigate crunch to just a few days and weeks scattered throughout a project.

This is not just for managers; it’s for everyone. If you’re a one-person audio show, you are effectively your own manager. And even within an audio team, everyone needs to manage aspects of their own work. How and when we do our work can often save us some pain in the end.

Front load

One of the most important means to get a project into a good spot toward the end of its lifecycle is to front load asset creation wherever possible. This often means identifying elements of the project with a lower probability of changing even as the scope or design of the project itself evolves. Ambience, character foley, and physics impacts are often places we can do the most early work without losing time and assets to changing direction. If your project will have tons of breakable crates, you can probably schedule a wood breaking session before the crates even exist in your project and cover a comprehensive behavior set of breaking, sliding, and impacting wood. If your project takes place in a modern day city, you can start identifying the ambient sounds you’ll need and begin capturing and integrating them immediately. If you’re in a medieval fantasy world, you can do the same and use your imagination a bit more to help fill in the voids. Concept art is always helpful, both to provide inspiration and to maintain a direction that matches the visual aesthetic. Same goes for characters. If you have sketches of the characters and their clothing, ideally with material callouts, you can start amassing the materials you’ll need for clothing and footsteps. The odds of finishing these broad categories of sound design early are slim, but having the bulk done before production shifts into high gear will ideally give you time later for more critical sound design tasks.

As with any of these strategies, it is imperative to communicate with the rest of the team when making choices here, so we can make smart decisions about what to work on. If the design of a character is still in flux, we obviously would not want to invest our time in design work that may be thrown out in a month or a year. Working smartly, whether in pre-production or marching toward an end date, is critical if we want to prevent as much crunch as possible.

Set expectations

When it comes down to it, there is no way to relieve ourselves of most of our crunch without setting expectations with the rest of our production team. For most other disciplines, what audio does is a black box of unseen (but heard) magic. Therefore, we need to be exceptionally proactive in communicating with the greater team. Getting information from them about features and schedules as early and often as possible is essential. Check in frequently in whatever way(s) you can to ensure there are as few surprises as possible throughout production.

Inevitably, there will be hiccups: miscommunications, new features, forgotten meeting invitations, etc. It is critical that we are clear in communicating audio schedules and make it understood that when other people’s schedules change, we have no choice but to follow suit. Once a team understands the cascading effect of changes, it can lead to better planning and a more conservative approach to last-minute, risky additions. In a perfect world, it will lead to extended deadlines for audio and other tail-end team members, but this is unfortunately still the exception to the rule. Some studios provide a grace period for audio work compared to other disciplines. This should be the rule rather than the exception, but to make it standard we must earn the trust of others in order to gain this additional flexibility and time to keep working after others are locked out.

This trust and understanding is something that must be continually fostered. Part of our jobs beyond sound design, mixing, and integration is education. We need to be teaching the team not just about what we do, but how their decisions affect those of us downstream and the kind of lead-up time we need to effectively respond to changes.

It is also critical as managers and sound professionals that we stand our ground. It’s a hard pill to swallow that it can be okay, if not imperative, to say no to team member requests, and there is a fine line to tread. Obviously we want to make the best sounding product we can, and we don’t want to make a habit of saying no to external requests, as this can cause serious professional problems. Instead, we need to always keep the macro view of the project in mind and use it to ensure that others’ decisions affecting audio are thought out and that the ramifications to our team and the project are understood and considered.

Part of what we are doing throughout production is balancing quality against quality of life. We all want to do the best job possible and make something we are proud of at the end of the day. But killing yourself, having your health affected, or being broken by crunch does not make it worthwhile.

A successful strategy I have employed in my projects is to organize delivered sounds into a few categories: “temp/placeholder,” which is usually just a means to get something stubbed in but blatantly not final quality; “shippable,” which is good enough to ship but not good enough to make you happy when you hear it; and “final,” which is a polished, perfect sound. Get all your sounds to shippable quality first, then bring your most important sounds to final level. If you have additional time, bring a batch of the next-most-important sounds up to the final quality bar, and repeat until it’s pencils-down time. It can be exceptionally difficult to be comfortable signing off on sounds that are just “good enough,” but where health is concerned there is a balance that must be struck. “Work smarter, not harder” is a key mantra for quality of life in the workplace. And part of working smarter is identifying which sounds are not critical enough to take time out of your schedule for further iteration and polish, and knowing when something is good enough to sign off on and move along.
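To make that triage loop concrete, here’s a rough sketch of it in code. This is purely illustrative: the sound names, importance scores, and time budget are all made up, and real scheduling is obviously messier than this.

```python
# Hypothetical sketch of the temp -> shippable -> final triage pass.
# Sound names, importance scores, and the polish-time budget are invented.

def triage(sounds, hours_left, hours_per_polish=2):
    """Bring everything to 'shippable' first, then polish to 'final'
    in order of importance until the time budget runs out."""
    # Pass 1: nothing ships as temp/placeholder.
    for s in sounds:
        if s["quality"] == "temp":
            s["quality"] = "shippable"

    # Pass 2: polish the most important sounds first.
    for s in sorted(sounds, key=lambda s: s["importance"], reverse=True):
        if hours_left < hours_per_polish:
            break  # pencils down
        s["quality"] = "final"
        hours_left -= hours_per_polish

    return sounds

sounds = [
    {"name": "hero_sword",   "importance": 10, "quality": "temp"},
    {"name": "crate_break",  "importance": 6,  "quality": "shippable"},
    {"name": "distant_bird", "importance": 2,  "quality": "temp"},
]
triage(sounds, hours_left=4)
```

With only four hours left and two hours per polish pass, the two most important sounds make it to final and the distant bird ships as “good enough.”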

Protect your People

As a manager, it is critical not just to save your sanity, but to try and protect your employees as well. Crunch is never easy, and these considerations should really be made for the entire team, not just your direct reports. It’s likely that everyone is going through their own personal hell during crunch, but you’re all going through it together. Checking up on each other is a great way to bond, grow stronger and more efficient as a team, and even stumble upon happy accidents and features.

When going through a crunch, it is also important to recognize the stress long hours can put on people’s lives. We all have a life outside of work: family, friends, children, hobbies, sports, etc. We need to support and foster our employees’ ability to keep that life outside of work even during crunch. It helps to be more flexible with your employees during crunch when you can. If they work late, let them come in late. Ensure they don’t work seven days a week, cover for each other to give yourselves breaks during a tough schedule, take some time every day away from your monitor, and ensure your employees do the same. Giving some leeway will help their morale, their sanity, and their productivity.

Our external activities are what define us as people and provide us our solace and happiness. Work provides a salary, and most of us are also lucky enough that it provides a creative outlet and a way to be productive, make cool stuff, and have fun at the same time.

Time spent in crunch also needs to be considered once the long hours are done. Leniency here is also highly important to make up for the lost time in an employee’s life. Many organizations provide comp time after these periods, in effect giving employees free time off to recuperate, travel, be with their families, etc. If the place where I’m working does not provide comp time, or sufficient comp time, I will allow my employees to take a few extra days off when it makes sense anyway. It’s a small thing, but having this time to recover from long hours is critical to quality of life. When we ask more of our employees, we need to be prepared to give more as well.

photo by Evil Erin (http://flic.kr/p/6r2GgF). Used under CC BY-SA 2.0

Make it Fun

Crunch in some form may be inevitable, but that doesn’t mean it has to be a slog. If we have to go through a period of crunch, whether it’s a day or two months, try to keep the feeling light and fun among your team. It is on us to also help mitigate the pressures and pain of crunch by promoting more fun and stress relieving activities at work. I’ve worked at one studio that played BINGO during dinner on crunch nights. Initially I thought it was bizarre. Isn’t bingo something old people play out of boredom? And if I was going to be at work at night, I’d rather just get my work done and go home. But it was a super fun, silly diversion for us that, oddly enough, people looked forward to. If we were going to be spending long days and nights at the office, at least let’s take a half hour to goof off and laugh.

For my own part, I’ll invite the team over to my area after dinner for a drink. Anyone can bring a glass and we all enjoy a little sip of brown liquor before getting back to work. It takes twenty minutes out of our workday but provides a brief respite, a moment to relax inside the long hours and refocus for the remainder of the day. It bears noting that I didn’t come up with this idea myself; rather it developed organically at a previous studio where we were going through a particularly brutal crunch. We all took turns bringing in a bottle and amassed a frighteningly large collection of empties by the end of the project. It was a coping mechanism that developed into bonds of friendship that persist eight years later and counting.

photo by Leticia Bertin (http://flic.kr/p/99QbRd). Used under CC BY 2.0

Get Help

Obviously a great way to mitigate crunch is to get help. If you have the resources, hire some contractors to help your team out. Offloading work will allow you to focus on what’s important and get more of your shippable assets to a final level of polish. Of course, we often do not have the resources or even time to throw more bodies at the problem, but this is something that can be planned for earlier in production so the surprises that may creep up can be handled better towards the end.

As a manager it is part of your job to try to do right by your employees, but just as they are going through crunch and seeking solace and guidance from you, sometimes you need that same sounding board yourself. Your team is a great place for that, but if there are grievances or higher-level issues to discuss, seek out management or other leads. Often, seeking help from others will bring to light production issues they may not be aware of, or help you come up with new solutions to improve your process moving forward.

This article would probably be as appropriate if it were titled, “How to be a Decent Manager,” as most of these issues are pretty obvious at the surface but often get lost or forgotten in the mire of production. Crunch should never be a defining factor of creative development. We are lucky we get to spend our working lives in a creative endeavor we are passionate about. But we also need to maintain time for ourselves outside of work. We need to constantly be fighting against the issues which can lead to crunch via communication, education, and getting pertinent work done early. At the same time, we need to accept that as deadlines creep up, there will inevitably be at least a few late nights interspersed into the schedule. To help the team deal with these periods, we need to be available to support them, and sometimes we need to be creative in either seeking out solutions for lessening the severity of crunch or at least turning it into something a little more fun.

The Feels of Interactive Audio

(this post originally appeared on designingsound.org)

[Author’s note: much of what I describe below could be construed as “reactive audio,” not “interactive” because sound is most often reacting to other systems and world parameters rather than directly affecting them in a feedback loop. For the sake of brevity and sanity I will refer to both reactive and interactive sounds below using the widely accepted and used term of “interactive audio.”]  

The sculpting of the relationship between sound and an environment or world state is perhaps one of the greatest powers we hold as sound professionals. Obviously this is often conveyed in linear media via music. Think of Bernard Herrmann’s strings in the shower scene of Psycho, Wendy Carlos’ score in The Shining, or the “ch-ch-ch ha-ha-ha” foreshadowing a grisly Jason Voorhees splatterfest. Even the subtle, otherworldly sound design within a surreal tale like David Lynch’s Lost Highway grounds the inexplicability of the plot into the strange world in which it occurs. In each of these examples, the sound is at least as critical as the visuals to make the audience feel something. But the sound and visuals are the same every time, meaning we get the same experience with every replay.

With technology, we can extend the power of audio into interactive mediums, and we’ve been doing it for years. It is the direct relationship between user actions and sonic changes, providing feedback to a user that something is happening, that forms the kernel of effectiveness we call interactive audio. Let’s explore some examples of interactive audio across a few different mediums and look at how they affect the user and what responses they evoke.

Games

Video games have been kicking around the term “interactive audio” for years, most frequently in the realm of music. While this usually amounts to “the action changes, so the music responds,” there are several examples where audio takes a more tightly integrated, and thus more effective, approach to interacting with player control.

The quintessential example of interactive audio for most people is Guitar Hero and its spiritual successor, Rock Band. These are also prime examples of music being the driving force of interactivity because these games are, in fact, music simulators. You play along with a song, performing similar-ish actions to the musicians. Pressing buttons in rhythm with the music rewards you with points and a successful round. If you mess up, you hear your error through sound effects and music dropouts which simulate flubbing a song while playing it live. Even Harmonix’s earlier games, Amplitude and Frequency, used a similar gameplay mechanic with a similar reward/failure loop tied directly into music performance. Interestingly, while audio interactivity is ingrained into this style of gameplay, we see most of the unique bending of the sound to player actions when the player performs poorly. Only in failure does the song sound different than it would outside of the game space. From the standpoint of the game’s directive (make the user feel like they’re actually playing these songs), it makes sense. Play perfectly and you’ll be rewarded with the feeling of “I did it! I played Stairway to Heaven!” Fail, and you get the feeling that you need more practice.

Parappa the Rapper

Before Guitar Hero there was Parappa the Rapper, the PlayStation rhythm rapping game that was all about pressing a button to the rhythm. But even something as simple as Parappa introduced the ability to “freestyle” your rap by pressing buttons in rhythm beyond what the game instructed you to do. Doing so would give you bonus points and also transform the soundtrack into something with a new, remixed feel. This interactivity provides several layers to the game: it adds a new dynamic to the soundtrack, which is normally the same two-minute song played over and over until you move on to the next level; it enhances the difficulty and player satisfaction by challenging players to be creative in how they press buttons in a game whose main mechanic is to follow onscreen instructions; and it promotes replayability by giving users a chance to do something new and different in each playthrough. Not bad for a simple sample trigger!
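A minimal sketch of that kind of freestyle scoring might look like the snippet below. To be clear, every number here (the BPM, the timing window, the point values) is my own assumption for illustration, not Parappa’s actual logic: the idea is simply that charted on-beat presses score normally, while extra on-beat presses earn a freestyle bonus.

```python
# Toy Parappa-style rhythm check. BPM, window size, and point values
# are invented; real rhythm games tune these per song and difficulty.

BPM = 120
BEAT = 60.0 / BPM   # seconds per beat
WINDOW = 0.10       # presses within +/- 100 ms of a beat count as on-beat

def on_beat(press_time):
    """True if the press lands within WINDOW of the nearest beat."""
    nearest = round(press_time / BEAT) * BEAT
    return abs(press_time - nearest) <= WINDOW

def score(presses, charted):
    """Charted on-beat presses score 100; extra on-beat presses
    (freestyle) score a 50-point bonus; off-beat presses score 0."""
    total = 0
    for t in presses:
        if not on_beat(t):
            continue
        total += 100 if t in charted else 50
    return total
```

In a real game the on-beat freestyle presses would also retrigger vocal samples, which is where the “remixed feel” comes from.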

A more complete example may be the game Rez. Initially developed for Dreamcast and ported to PS2 and, more recently, PS4 and PlayStation VR, Rez has the look of a wireframe old-school arcade game like Tempest with mechanics similar to the arcade classic Space Harrier. In Rez your character pulses to the beat, a simple scaling trick which instantly roots the music into the action of the game. Rez was pretty revolutionary for its time because the music itself changed based on what you were shooting and what was spawning or being destroyed on screen. The music was all 120 BPM, 4/4 electronic music, and the way the player attacked the objects on screen gave the music the adaptive ability to retrigger loops, change instrumentation, or play sweeteners on top of the score. It’s fascinating to watch playthroughs of the game and hear how every session sounds different. The way the player chooses to attack will completely affect the music’s structure and the samples used. Similar to how the player “remixes” the Parappa vocals by pressing buttons, players in Rez are essentially remixing the soundtrack to the game (both sound effects and music) by playing it. It is the player’s input that affects the audio output we hear.
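The core trick that keeps this from sounding like chaos is quantization: player actions don’t trigger musical layers instantly, they’re queued and fired on the next musical boundary so everything stays locked to the grid. Here’s a toy illustration of that idea (Rez’s real system is far more sophisticated, and the layer names and the bar-level quantization here are my assumptions):

```python
# Toy Rez-style quantized music reaction: raw gameplay events are
# snapped to the next bar boundary of a 120 BPM, 4/4 grid so any
# loop retrigger or sweetener stays in time with the score.
import math

BPM, BEATS_PER_BAR = 120, 4
BAR_LEN = BEATS_PER_BAR * 60.0 / BPM   # 2.0 seconds per bar

def next_bar(t):
    """Time of the next bar boundary at or after t."""
    return math.ceil(t / BAR_LEN) * BAR_LEN

def schedule(events):
    """Map raw (time, layer) gameplay events to quantized trigger times."""
    return [(next_bar(t), layer) for t, layer in events]
```

So a shot fired at 0.3 seconds doesn’t make a sound layer start at 0.3 seconds; it starts at the 2.0-second bar line, which is why every playthrough sounds like a different but still musical remix.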

Rez

Thumper

Thumper is another rhythm action game where the music, sound, and visuals are all so cohesively tied together that it feels “right.” You get sound when you expect it, and it matches the flow of the visuals. Every time you turn a tight corner, the track itself flashes, and the flash matches a swell in the music. Each power-up or enemy destroyed provides a satisfying low-frequency hit or music sample that matches the percussive feel of the action onscreen and ties seamlessly into the game’s pulsing score. Pitch and beat-stutter effects are also present in the gameplay, all affecting the game’s score. Tying sound into the onscreen action not only sells the action better but also emphasizes the relationship between these aspects of the game and our core information senses of hearing, seeing, and (sometimes) touch. More on that in a minute.

The videos above do not really do justice to the interactivity of the experience because the relationship between game and player is so intimately bound. To someone watching a video, it just looks like a video game. But to the player experiencing the lockstep relationship between audio and gameplay, it becomes something new: a more complete experience.

The interaction between animation and sound also bears further discussion, because it is a highly effective and thoroughly underused way to enhance an intended mood. Whether stark or whimsical, audio tied to and interacting with the visuals can lock the action of a game deeper into the soundtrack while also pulling the user further into a sense of immersion. From the automatons marching to the beat in Inside, to creatures bobbing their heads to the score in Rayman, to Sly Cooper’s marimba footsteps when sneaking around in a barrel, to the music layers triggered when doing a super-powered jump in Saints Row IV, music or sound effects tied to animation enhance the play experience by tying the sonic palette to the wider scope of the world and its gameplay.

Linking sound to actions in the world and having sound react to or interact with game state helps provide focus to a game and creates a sense of tempo in the action. But why is this the case? Naturally, the answer lies in that bag of gray matter sitting in our skulls. Cognitive scientists have been studying the phenomenon of multi-modal and cross-modal processing for years. Multi-modal processing is the act of multiple senses all providing information to the brain about a stimulus, while cross-modal processing is the act of one sense affecting the perception of another. For example, there have been studies showing that playing audio cues during visual stimuli can make users think they see visual cues that are not there. Sound can therefore imply visual data in certain scenarios. That is power! Various studies have also shown a more complete understanding of a situation when given clues or information through more than one sense. While there haven’t been any studies (that I know of) specifically looking at cognitive brain function and the use of interactive audio, I hypothesize that audio interacting with visual and game state stimuli makes multi-modal integration tighter and therefore enhances the perception of these events.

photo by MuseScore (http://flic.kr/p/dvmqDQ). Used under CC BY-SA 2.0

Apps

Computer and smartphone applications are another place where we experience interactive audio. Audio enthusiasts above a certain age may remember the early web sensation that was Beatnik. Created by former pop star Thomas Dolby, Beatnik allowed you to remix music on the fly in a web browser. Pretty revolutionary for the late ’90s! Nowadays we’re seeing similar, more sophisticated applications on smartphones. From the DJ Spooky app, which allows you to remix your entire music library, to the audio app for the movie Inception, which produced a generative score based on user location and action, these tiny devices are creating compelling, iterative experiences for users through interactive audio.

DJ Spooky app

Turntablism and DJ mixing are excellent examples of (formerly) low-technology interactive audio. With two turntables and a mixer, a person can take two pieces of music and transform them on the fly into a singular new composition. The DJ interacts with these two pieces of vinyl (or CDs and MP3s nowadays) and creates a wholly new experience from this interaction. Using these skills as a jumping-off point, DJ Spooky (well-known DJ, musician, author, artist, and busy bee) helped create an app which allows a user to utilize these same tools to remix their entire music library. Using controls and gestures on a touchscreen, users can mix, trigger samples, play loops, and even scratch samples and music from their own library. It’s a fun, dangerously addictive toy/performance tool, and what keeps users coming back is the interactive nature of manipulating linear audio. The interaction between the user’s fingers and their music collection (slowing down a track, scratching it, or firing off a specific phrase at will to create something entirely new) used to be an art that took years of practice and lots of gear to master. Now it all lives in a few dozen megabytes on a tiny phone or tablet and gives users an instant ability to mix and remix any sound into something new.

https://youtu.be/_jZG-4Kv3BI?t=19
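At its core, the scratching described above is just a mapping from finger movement to playback position: drag forward and the record plays forward, drag backward and it plays in reverse. Here’s a toy sketch of that mapping (the sensitivity constant and tick rate are invented, and a real app would of course resample actual audio at these positions):

```python
# Toy scratch model: integrate finger velocity into a playhead
# position per tick. Negative velocities play the audio backwards;
# a still finger holds the "record" in place. Constants are invented.

def scratch_positions(start, finger_velocities, dt=0.01, sensitivity=2.0):
    """Return the playhead position (in seconds) after each tick."""
    pos, out = start, []
    for v in finger_velocities:
        pos += v * sensitivity * dt   # finger speed -> playback rate
        out.append(round(pos, 4))
    return out
```

Two forward flicks followed by a sharp backward drag walks the playhead forward and right back to where it started, which is exactly the back-and-forth motion of a scratch.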

Inception

At the time of its release, the Inception app, a free iOS movie tie-in, was dubbed an “augmented sound experience.” Created by the innovative team at RJDJ, it combined samples from Hans Zimmer’s score with smartphone sensors such as GPS, accelerometer, gyroscope, and microphone, plus DSP techniques such as delay, time-stretching, and reverb, to make a truly interactive application. The premise of the app is that you unlock “dreams,” which are sonic textures created from the score and heavily processed. As you do things in real life, the app plays these textures processed in a suitable way. For example, if you launch it after 11pm, it plays really spacey, dreamy textures with layers of post-processed delay. Other tracks and effects are only unlocked by special events, like being in Africa or being in sunlight, each with its own unique experience. It is similar to augmented reality games, but audio-centric: your experience of the app is what you hear, which in turn is affected by elements like location and time. Your very being, where you are or when you are, drives the changes you hear in the sonic textures. If you have an iOS device, download it and play with it yourself to get a glimpse into the ways we can affect a user’s experience with the various parameters and technologies around us.
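The unlock logic can be imagined as a simple context check that maps real-world state to a texture and a processing chain. This is a hypothetical sketch based only on the examples above; the real app’s rules, dream names, and effects are certainly different:

```python
# Hypothetical sketch of Inception-style context-driven dream selection.
# The rule set, texture names, and effect names are all invented,
# loosely based on the "after 11pm" and "in sunlight" examples.

def pick_dream(hour, in_sunlight=False, country=None):
    """Return (texture, effect) for the current real-world context."""
    if country == "Africa-region":      # special location unlock
        return ("africa_dream", "none")
    if hour >= 23 or hour < 5:          # late night: spacey, delayed textures
        return ("night_dream", "post_delay")
    if in_sunlight:
        return ("sun_dream", "shimmer")
    return ("base_dream", "reverb")     # default daytime texture
```

The interesting part is that the “input device” here isn’t a controller at all; it’s the clock, the GPS, and the light around you.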

H__R

RJDJ also has a newer app currently titled H__R (apparently there is some litigation regarding its name, so it’s looking for a new one) which gives the user control over these features a bit more explicitly. With H__R, the user puts headphones on and is given a series of presets such as “Relax,” “Happy,” or “Sleep.” Each has sliders the user can play with to affect the sound input from the microphone. Select the “Office” preset and you get sliders to control Space, Time Scramble, and Unhumanize. You can use these sliders to dampen and filter the sound around you, making it sound like you’re in a big space with lots going on, or you can zone out into your own quiet time. You are effectively tweaking your mix of the world around you. This app is especially interesting because of the way it alters your perception of everything you hear with some sliders on your phone and some clever under-the-hood DSP. It’s yet another example of how interactive audio can affect the way we hear and create a new experience in “the real world.”
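Conceptually, each preset is just a mapping from slider positions to DSP parameters applied to the live mic feed. The sketch below invents the parameter ranges and effect names purely for illustration; I have no knowledge of H__R’s actual internals:

```python
# Hypothetical H__R-style preset: named sliders in [0, 1] map to
# concrete effect settings for the live microphone input.
# Parameter names and ranges are invented.

def office_preset(space=0.0, time_scramble=0.0, unhumanize=0.0):
    """Map 0..1 slider positions to effect settings."""
    return {
        "reverb_wet":    space * 0.8,                 # bigger room as Space rises
        "grain_shuffle": time_scramble,               # 0 = untouched, 1 = fully scrambled
        "lowpass_hz":    20000 - unhumanize * 19000,  # darker as voices are filtered away
    }
```

Slide Unhumanize all the way up and the low-pass drops to 1 kHz, which is one plausible way to make the voices around you recede into a murmur.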

photo by Nick Gray (http://flic.kr/p/fDcHq). Used under CC BY-SA 2.0

Installations

One last area where we’ve seen a lot of interactivity is art installations. I find these the most interesting because, unlike games or other media apps, they involve more of the whole human: not just fingers or hand gestures, but people moving around to experience the way sound interacts with the environment. While art installations incorporating interactive sound may often be relegated to galleries and workspaces, they also challenge how we perceive and react to audio in more interesting ways.

An example we may see outside of galleries, in places like public parks, is a whisper wall or another means of concentrating and directing vocal sounds to a listener far away from the speaker (see the image above). While this is technically just a demonstration of physics and sound propagation, it is also a means of architecture driving interactive audio. At the “Murmur Dishes” installation in downtown San Francisco, I have seen people walking down the street stop what they’re doing to interact with the sculpture: one person going to one side, the other to the opposite side, talking quietly while staring back at each other in amazement with that look in their eyes of “Oh my God! I can totally hear you right now!” This is an interactive sculpture with audio as the feedback of the experience. Users rely on the audio to prove to themselves that this seemingly aural illusion is in fact real, and it is through audio (sending and perceiving speech) that they interact with the sculpture.

Let’s look at another example of an art installation incorporating interactive audio to get a better understanding of how audio can be used to affect user experience. Anne-Sophie Mongeau, a video game sound designer and sound artist, created an art installation which was a simulation of the experience of being on a large sailing ship. The exhibit featured a custom 11-channel speaker layout playing back sounds around the exhibit to simulate various ship sounds from the creaking of the deck to the wind in the sails above. Weather including wind and rainstorms would randomly occur periodically while visual projections supported the changing weather and the ebbs and flows of ocean movement. In the middle of the room was a ship’s wheel. Invariably people would gravitate to the wheel and see if it moved. Indeed it did and every turn elicited a heavy, creaky, wooden “click” from a speaker mounted at the wheel. The more someone would turn the wheel, the quicker the weather patterns would change. Anne-Sophie designed this system using Max/MSP, which is a fantastic tool to create complex dynamic audio systems using internal or external logic or controllers.
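Anne-Sophie’s Max/MSP patch isn’t public, so the numbers below are pure invention, but the core mapping she describes (more wheel activity shortens the wait until the next weather pattern) can be sketched in a few lines of Python:

```python
def weather_change_interval(turns_per_minute, base_interval_s=120.0, min_interval_s=15.0):
    """Toy mapping: the faster the wheel turns, the sooner the next weather pattern
    arrives. All constants are invented for illustration, not taken from the exhibit."""
    interval = base_interval_s / (1.0 + 0.25 * turns_per_minute)
    return max(min_interval_s, interval)  # clamp so weather never flickers absurdly fast

for rate in (0, 4, 20):  # idle wheel, casual turning, frantic spinning
    print(rate, weather_change_interval(rate))
```

In Max/MSP the same idea would be a counter on the wheel’s sensor feeding a scaled delay between weather-state triggers; the point is simply that one user input parameter drives the pacing of the whole soundscape.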

Adding the wheel and its interactive components transformed the piece from a passive art installation into an interactive experience. Users were no longer walking through an exhibit and hearing waves and sails. Instead, they were on board the ship, steering it into and out of squalls. The immersion of interactive audio can be exhilarating, especially when it is taken out of the living room or the screen and propagated to a larger venue.

https://vimeo.com/136594880

One may be tempted to infer that interactive audio is so effective because we are used to static (non-interactive) audio, but in looking at various forms of popular media, we are seeing more and more examples of interactive audio every day. Their effectiveness lies in how they tie into other systems and how they provide instant verification that player action is affecting the experience, whether that’s at home on a console, on our smartphones, or at a museum. This has long been the appeal of interactive audio: to craft an experience and have it adapt to user behavior, and I expect new applications and implementations of audio as a driving means of interactivity will continue to mushroom.

Dealing with external production issues

(this post originally appeared on designingsound.org)

Let’s consider (for the next 178 words) the production process as a river, and we are a drop of water flowing from the beginning of the project to the end. Like every other drop of water, we have a goal: to make it to the end of the river. And as a single drop of water, there are numerous challenges we face as we hurtle down the river, many of which are the result of our own shortsightedness, creative struggles, or simply the logistics of getting from point A to point B. There are also a great many issues we face and obstacles we must overcome originating from outside of our own control. Rocks, sticks, dams, and eddies may block or slow the flow of water, much the same way interdepartmental communication, decoding feedback, and managing milestone expectations may hamper audio development. To bolster this admittedly weak metaphor, I decided to ask a handful of successful, talented audio leads throughout the video game industry about some of the challenges they constantly face and ways they have gone about mitigating their harm to the audio of a project.

As we’ll see, many problems are unsurprisingly similar across studios and thus give us a space where we can collectively attempt to improve the industry and thus improve our own sanity and defuse some of the unnecessary challenges we face day to day. Before diving into these issues we all face, let’s first meet the people who were kind enough (and had time in their schedules) to succumb to an interview:

  • Jaclyn Shumate, Audio Director, PopCap
  • Jason Kanter, Audio Lead, Avalanche Studios
  • Kenny Young, Freelance Audio Director, Composer & Sound Designer at AudBod.com
  • Rob Bridgett, Audio Director, Eidos Montreal

Team members communicating

Interdepartmental communication

Communication, especially between departments, is often a source of breakdown in a project no matter the field. Successful projects are often ones where the goals are crisp and the lines of communication are known, understood and utilized frequently. This is not to say that a project cannot be successful without clear communication, but it becomes exponentially more difficult. Regardless of how successful communication may be, pitfalls are inevitable.

Communication “always fails in one way or another,” says Rob Bridgett. “Every team and situation responds differently and every situation is slightly different, and it has to fail before it can be fixed.” He finds that one-on-one communication (with other leads) is crucial and has both scheduled and unscheduled time allotted to catch up with other leads and ensure they are on the same page. A diverse range of communication tools from instant messaging to face-to-face communication (either in person or via Skype) is crucial because, “email communication always fails at some point because it is abused as a communication method. I’ve found it is only useful when used as a small, formal part of an overall communication strategy of other methods.”

For Jaclyn Shumate, and indeed for all of us, “cultivating communication…is one of the most important jobs of an audio person.” Similar to Rob Bridgett, she will “always check in with individuals on the team, and see what they are working on, instead of relying on production tools – or lockout deadlines.” It’s this face-to-face time and constant discussion that helps raise awareness of audio within other departments.

After being locked out from a build on the director’s whim while he was still completing his work, Kenny Young realized “that quality was my responsibility and wasn’t going to come from anyone else, and that communication isn’t a one way street – you need to second-guess and confirm crucial information, especially at crucial times on a project.” This second-guessing is the same reason Rob Bridgett and Jaclyn Shumate set up their one-on-one time to try and mitigate these issues or at least confirm suspicions about content changes and course direction.

In one instance, Jason Kanter had the misfortune of working on a project where the duties were explicitly divided into programming on one coast of the U.S. and creative content (art, design, audio, etc.) on the other. There was little to no face-to-face interaction between these disparate departments across multiple time zones. The setup proved so contentious that each studio blamed the other for all issues throughout the production, from frame rate drops to broken tools. This created a toxic environment that eventually led to the project’s cancellation. Kanter ruminates that, “if the entire team were in the same location, there’s a chance the game may have seen the light of day.” And that is another important aspect of communication: it’s not solely about information dissemination, but also about building rapport and friendship. On this specific project, “there was no collective camaraderie. No sense of family,” and when we think about our most successful teams or projects they often involve a level of intimacy and kinship that is an intangible element of great work.

Since communication is never perfect, Shumate also has other tools at her disposal to help catch non-communicated changes. RJ Mattingly, technical sound designer at PopCap, “made a sweet script that sends us an email every time someone makes a change that affects audio. This means we will get an email if someone deletes an animation timing event, or an audio script attached to a prefab.” In a perfect world we don’t need tools like this, but we also need to be proactive. While we are fostering better communication, we must also try to prevent information from slipping through the cracks through whatever means we have available.
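The actual script isn’t public, but the core idea, filtering incoming changes down to the ones that affect audio before notifying the team, might be sketched like this (the file extensions and keywords here are my assumptions, not PopCap’s actual rules):

```python
# Sketch of a change-watch script in the spirit of the one described above.
# The suffixes and keywords are invented examples of "audio-affecting" changes.
AUDIO_RELEVANT_SUFFIXES = (".bank", ".wav", ".prefab", ".anim")
AUDIO_KEYWORDS = ("audio", "sound", "music")

def audio_affecting_changes(changed_paths):
    """Return the subset of changed file paths an audio team would want to know about."""
    hits = []
    for path in changed_paths:
        lowered = path.lower()
        if lowered.endswith(AUDIO_RELEVANT_SUFFIXES) or any(k in lowered for k in AUDIO_KEYWORDS):
            hits.append(path)
    return hits

if __name__ == "__main__":
    changes = [
        "Art/Textures/rock_diffuse.png",
        "Design/Prefabs/Door.prefab",   # prefabs can carry attached audio scripts
        "Anim/hero_swing.anim",         # animation timing events drive sounds
        "Audio/Music/theme_loop.wav",
    ]
    for p in audio_affecting_changes(changes):
        print(p)  # in the real tool these would be rolled into a notification email
```

Hooked up to a version-control commit hook and an SMTP call, a filter like this catches the deleted timing event or retagged prefab that nobody thought to mention.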

Communication is a tough nut to crack. The lessons here are Always Be Communicating, but having contingencies for communication failure are still important. The more you can connect with individuals, the more successful your communication will be, but the responsibility to get the information we need to do our jobs and ensure content is not changing in the eleventh hour lies with us.

photo by Tom Stohlman (http://flic.kr/p/m72mo). Used under CC BY-SA 2.0

Decoding feedback

Another issue heavily related to communication is that of taking feedback and translating it into something your team can understand and iterate upon. Sound is an art form and, like any art, trying to quantify it into written or spoken language often proves elusive. Furthermore, most people outside of the audio community may not understand (or properly use) audio terms and technologies or be able to communicate with the same language we use daily. Therefore our challenge becomes multi-dimensional: we must act as interpreters and translators while ensuring we understand the desired result before we embark on translating feedback into action.

Kenny Young told me of a time when he was doing a review of a cinematic with his team and immediately heard exactly where and how the audio was not supporting the movie. The interesting thing to him was that, “each of them had picked an aspect of something they had heard in that moment, and because the moment wasn’t working, ascribed the problem to the thing that they had heard, not because it was actually the problem but simply because that is what they had heard.” While this is an issue we have all dealt with, Young went further and did some research as to why and how this happens. Through his research he found that, “people retrospectively ascribe justification for their feelings all the time… (and)… if you just dismiss people as ignorant (because what they say is ignorant), you overlook the gift they are giving you. When someone has a bad experience, that isn’t questionable, ever – that was their reality. So when someone says something is specifically wrong with the audio, and you know what they are saying isn’t true, ask questions which get to the root of how the experience made them feel and try to reverse engineer the cause.”

Rob Bridgett has a very direct way of dealing with feedback which ties directly into the way he handles interdepartment communication. He will always prepare at least three variations of a sound for anyone he is collaborating with. “We listen through each one and discuss. I say up front I am going to play three versions so I can set expectations. This gives the other person some different elements to react to and points us in a direction.” For him decoding feedback is made easier by being a moderator to the feedback as the creative leads are delivering it.

And how about the other side of the equation? While so much of our job is spent having to translate feedback from leads, what about when we need to encode our own feedback into a language which our audio team can understand? Jason Kanter faced this issue and called it his, “most challenging feedback experience.” To provide his designers with the greatest amount of creative freedom he tried to couch his feedback in abstract terms; not as abstract as “more ‘purple’ in a gunshot but maybe asking for it to sound a little less ‘wooly.'” The results were less than stellar and led to frustration both on the sound designers part and for Jason whenever he needed to provide feedback. His solution was actually quite simple, and only works because he is dealing with people who speak a common language. He figured out to, “simply tell them exactly what was wrong with the sound design on a more technical level. I’d usually start with a slightly abstract description of the issue but then follow it with suggested technical solutions. ‘This impact is too biting. Try attenuating some 3-5Khz.’ or ‘That gunshot sounds a little too hollow. Maybe try reducing the mid-distance gunfire layer and using some more close perspective.’ Altering my approach in how I communicate made all the difference in the world. In the end it greatly improved their work as well as our relationship.”

Another interesting dilemma is when your vision does not mesh with a creative director’s idea of what something should sound like. For Jaclyn Shumate, the “biggest challenges have come when I have disagreed with the creative direction, but have had to follow it.” But she took a potentially stressful, creatively draining situation and turned it into a strong lesson in design. “It can be hard to ‘hear’ a sound before it is created when it is not what you would choose to make! So, I asked very specific questions to try to understand what the lead had in their head. I didn’t like the final result, but it wasn’t bad audio, it just wasn’t what I would’ve done. So, I tried to focus on the positive of it being a good exercise in improving my sound design skills. If you can make a sound that sounds like what is in someone else’s head, then you have some awesome sound design skills!”

There is no right way to decode (or encode) feedback, but make sure you understand the direction a creative lead may want your revisions to take. Communication, examples, and technical descriptions are just a few ways we can turn a muddy mess into something intelligible.

Contractor deadlines

Another area where external forces may affect our process is when dealing with – or being – a contractor. Often the pressures contractors face are compounded compared to a “normal” production team. They lack the communication of what’s going on in the team and often just work blindly, throwing content over the fence or being several days behind on builds. Conversely, relying on contractors brings with it the risk of an additional set of deadlines, review processes, and communication that wraps the macro challenges of production into a microcosm of those same potential issues.

The common thread in the previous issues returns as Kenny Young was quick to point out, “communication is clearly super important here.” He, Rob Bridgett and Jason Kanter all mentioned those rare times when a contractor didn’t work out, or bugs or content changes came in after the contract was up, and they were left having to re-do the work. This is a compounded issue involving scheduling, deadlines and production, and the more areas we bring into the picture the greater the chance of things not running smoothly. Bridgett tries to schedule for those potential pitfalls and external issues by setting, “deadlines with a safety net of extra time, so… if content arrives a little late on their end, or if the person is slammed and trying to get the content over to me, we always have some extra time to fall back on.”

People’s work styles differ as well and it can really help knowing what you’re getting into when hiring a contractor. As Jaclyn Shumate noted, “I think this is where having longer-term relationships with external partners is beneficial. Knowing someone’s work-style can solve problems that exist, because then you can plan for it!” If you know someone provides great content, but may take a little longer to produce it, or they’re really popular/busy so they often have multiple projects in the works, you can “[build] in more time in [their] schedules, so that we [can] have the assets we needed in time for our lock-dates.”

But we can’t always hire people we have worked with before or who are known entities. Jason Kanter brought up an anecdote where he was working on a project with a lot of dialogue and managing a team of dialogue editors. He was “trying out new editors in hopes of expanding the team which eventually meant running across someone who may have been a talented sound designer but simply didn’t connect with the specifics of the task at hand… Thankfully I was able to maintain a team of four or five reliable and talented editors but when working on a project for an extended period of time with a team of freelancers you’re going to run into issues like when another gig pops up or there’s a death in the family. In those cases when I couldn’t find anyone to fill in I had to simply pick up the slack and do the extra work myself.”

Contractors are often necessary for a small team to get content done in time, but managing external team members is at least as challenging as managing an in-house team and usually much more so due to the challenges of lacking constant face-to-face, one-on-one communication. Even when you may be in the same building or room, you’re dealing with a whole new set of quirks, skills, and idiosyncrasies and it can be an additional challenge to get these to mesh with your schedule or production style. Hiring people you know and trust is a great way to make this relationship gel, but finding new people who work well is also rewarding. When those fail, most of us need to be prepared to jump in and fix what’s broken along with every other fire we’re putting out at the time.

Does this look familiar?

Schedule/Milestones

Without fail, a group of sound designers at a bar can commiserate over the classic misrepresentations of schedules and milestones. An alpha, beta, or any other milestone means everyone is changing content at once, without ample time for those of us downstream of all that work to react and respond in any meaningful way.

Every interviewee acknowledged the challenges of needing to do critical work after all other disciplines. As Jaclyn Shumate noted, “every game I’ve been on has had some type of content dump or massive game re-structuring work right before ship. It’s the worst part of being dependent on other disciplines to complete your work.” Jason Kanter mirrored this sentiment in his own observation that, “there will always be 11th hour content dropped in our lap and we’ll be spending the last month of production working round the clock to wrap things up.”

And the (partial) solution is often to be proactive. Kenny Young suggests, “Get all your ducks in a row in pre-production so that you can execute like a ninja when you are shipping the game – this includes sucking up the knock-ons from new features.” Rob Bridgett will, “set expectations with the project planners, be clear about cutoff and delivery dates for other departments, agree on that plan and move forward. And when that plan falls apart, have a clear plan B ready with extra time specified for the audio side.” Similarly, Jaclyn Shumate will, “attack the issue from all sides. I prioritize, adjust, set, and communicate expectations, and then have conversations with the content creators and producers to see if there is anything we can do to lessen the load. After that it’s just digging in and doing the best we can, and hoping that it won’t happen as bad again the next time around. Sometimes if there is some extra budget around you can outsource some of the work, and that can be tremendously helpful. However, in my experience often by the end of the project the money well is dry…!”

As Shumate notes, sometimes we can reach out for help from contractors, when budgets allow, to try and fill in the gaps caused by the tidal wave of content changes at the end of a project. Jason Kanter has had success using other employees to help his teams out as well. While we often want to be touching the integration of our content to ensure it is exactly as intended, as critical deadlines loom, getting something close is better than having a bunch of missing sounds. A perfect example of this involves tagging animations. For a particular project for Kanter, after discussing the issues of late-breaking animations being added to the game, the “animators kindly agreed to take on the task of tagging animations for effects. This meant more work for the animators and less control by the FX creators but it’s the only way we could ensure that animations get tagged and that those tags get updated throughout production.”

Lastly, we must never forget that, while what we do is artistic, it is still commercial art. The schedule is the arbiter of how much time we should allot ourselves for various tasks. As Kenny Young reminds us, “don’t fall into an overly ambitious workload or create self-indulgent technology or content solutions that are labour intensive in ignorance of what it takes to actually ship them, and everything else, to an insanely high quality level.”

budget_01

Budget

Perhaps less of an issue than most of the others mentioned here, short of never having enough money to buy nice new equipment, budgetary issues are still often controlled outside of our department. We may request a budget and provide itemized breakdowns of spending and proposed person-hours of work, but there is almost always someone else telling us how much money they’re going to be giving us to get our job done. The challenge then is to make something great, even if the monetary figures don’t add up.

Obviously, the biggest potential issue with a budget is not having one. But it can be just as tricky to have a budget, and not know what it is! As Jason Kanter notes, “I can deal with budgetary restrictions but trying to determine what you can spend without being given a ballpark or even knowing the overall expected cost of the project is no fun. Eventually it just turns into overshooting in the dark based on what you want rather than what the project can handle.”

This is yet another place where communication can smooth a potentially sticky area. Jaclyn Shumate considers herself lucky because she’s never really faced any major hurdles involving budgets. For her, it’s been, “Not about getting big ones, but about getting clear ones and being able to stick to them.” By having her budgets communicated clearly to her, she is able to allot proper funds where they are needed without second-guessing or planning blindly. Rob Bridgett mirrors this sentiment when he says, “having [a] detailed audio budget broken down and agreed upon at the very beginning is absolutely the best approach, that way you can see exactly where you can move things from and to (foley, library sounds, mix etc), rather than just having a big pot of money assigned to ‘audio’ that looks big, but isn’t really!”

It helps to be flexible when possible with regards to budgets as well. Kenny Young has had, “zero budget on some projects, and I’ve been offered blank cheques by the publisher on others, but I’ve always worked as efficiently as possible.” Rob Bridgett finds additional success by dealing with budgets, “openly and with an eye and ear on opportunity rather than digging heels in, because the opposite can also be the case when extra budget is floating around.”

The problems we all face daily in our jobs are quite common. We all deal with the frustrations of external forces putting pressure on our work, from communication and feedback to contractors and budgets. Often the way the successful sound designers above have worked through these issues is by being proactive in preparation, reactive in dealing with issues as soon as they arise, and communicating, communicating, and communicating some more. Hopefully you’ve learned something from the talented leads who gave their time here. If not, at least you know we’re all in the same boat heading down that production river, trying to avoid the rocks.

Evoking Emotion in Pure Sound Design

(This post originally appeared on designingsound.org)

composite; photo by Peter Dutton (https://flic.kr/ps/BVeWw). Used under CC BY 2.0

When we discuss what makes the sound in any medium (film, music, video games, theater, etc.) effective, it usually boils down to one of four aspects, and often all four: the detail in the design, the emotion conveyed, the way the sound meshes with the visuals, and the mix of the sonic elements. When sound design exists on its own, with visuals and voiceover removed or non-existent, we lose some of these key elements (or should we call them crutches?) we can rely on to help express these concepts, especially emotion, since we process so much emotion via spoken word, visual imagery and music. So how do we bridge that gap and create effective pure sound design that can still evoke emotion? The answer may lie in the concepts behind the very elements we have removed.

emotions_HappySad
Emotion

Continue reading

Tutorial: Using Queries in Wwise

This post originally appeared on designingsound.org

 

Wwise_Queries_banner

The Query Editor is a powerful tool within Wwise, yet most people I talk to either aren’t aware of it or never use it. With the Query Editor, you can run searches in your project to easily make multi-edits, tag items, and even troubleshoot bugs. This post aims to lift the curtain on the Query Editor to give users a better sense of how to wrap their heads around this tool and become Wwise Power Users.

Project Explorer

Queries have their own tab in the Project Explorer and, like the other tabs, it represents a folder in your Wwise project. You can add work units as needed and create new queries, or duplicate and modify the existing ones, to drill into the system and find what you’re looking for. Think of a query as a kind of mini-report. If there’s something you’re trying to track down, or you want to see all of the objects in your project that are set to a specific value or using a specific RTPC, you can use a query to generate a list showing these objects.

Understanding the Query Editor

Query Editor

At the heart of the Query system is the Query Editor, which is shown whenever we open a Query in Wwise. We’ll break down its components, but it is worth noting that it’s a fairly complex tool with regard to the number (and nesting) of settings you have access to, so it is highly recommended to spend some time with the interface to become familiar with where some of the common settings live.

QueryEditor_top

The top of the Query Editor shows us the name of the current Query and from here the information begins to flow down. The first thing we need to do is decide what type of Object we want the system to find. This can be any structure within Wwise from a Sound to a Random Container to a Music Segment and so on. Click the drop down in the Object type and scroll through to see all the options you have. Whatever you select here will be the object type that Wwise returns when you run your Query. If you want an inclusive search of all structures in your project, select All Objects.

Next you can choose where you want the search to start from. Clicking the ellipsis next to the “Start From” box will open a window where you can select a place in the Actor Mixer Hierarchy, the Interactive Music Hierarchy, or the entire project. The Query will perform a recursive search from wherever you choose. So if you choose the Actor Mixer Hierarchy it will search every structure in every work unit, whereas if you select a single work unit, it will only search through the contents of that work unit.

You can also select the Platform if you’re working on a multi-platform project so you can search for All platforms or just a single one if you need.

The browser window

QueryEditor_browser

The browser section of the Query Editor is the meat of the system (or the protein-rich meat substitute if you’re vegetarian). The browser contains a series of collapsed categories, and it is here that you can begin to select the logic and parameters you wish to search for. While this is the most powerful part of the Query Editor, it is unfortunately the most obtuse as well. It is often not super intuitive to find the parameter you’re looking for, as it may be buried in a category you weren’t expecting, or given a generic name with a dropdown that will reveal the settings you’re after. Fortunately, if you’re on Wwise version 2015.1 or later you have the ability to search and filter within the browser by clicking on the tiny magnifying glass above the scroll bar. This will bring up a text field; as you type, the browser will filter its contents to match. My advice is to actually spend a little bit of time looking through all of the categories in the browser: open up each one, click each option and see what lives in there. Having a cursory knowledge of these contents will help you create more powerful custom Queries later.

Criteria

QueryEditor_criteria

When you select an item from the browser window, it displays in the Criteria window to the right of the browser. The Criteria window takes these generic categories and allows you to get a bit more specific with a variety of tools from simple drop downs and checkboxes to conditional statements. In the example above, I am looking for objects which use a specific RTPC. Also notice that the Criteria name shows you where in the browser that specific element is located (Game Parameter -> RTPC -> Game Parameter Usage). This is another area where the depth lends itself to a layer of complexity that is best understood by actually clicking through some of the options to see what is available to you.

Above the criteria window is an Operator dropdown. When you create multiple criteria for your Query, the operator allows you to say whether you want all of them to be required to return objects in your search (And) or any of them to be allowed but not required (Or).

Results

Results returned by a successful Query

Once you have all of these pieces laid out, you’re ready to let Wwise do the work. Click the Run Query button at the top of the editor, and if there are any objects that match the Object Type and Criteria in the scope of your search they will display in the Results window at the bottom of the Query Editor. From here you can select them and do a multi-edit or you can begin looking at individual objects to make adjustments or troubleshoot.

Essentially, what you’re doing with the combination of criteria and operator is creating a simple logic question for Wwise to answer. If you’re trying to find information about some sounds in your project, another way to approach it is to write it out as a question and then build your query based on the criteria you’ve written out, such as “What sequence containers in my weapons actor mixer are set to continuous and use a trigger rate of less than 0.25 seconds?” To create a Query to return these objects, set your Object type to Sequence Container, set Start From to your weapons actor mixer and select Property values for continuous playback and Trigger rate (which you’ll set to < .25). Make sure the Operator is set to And and you’ve just answered your question!
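To make that logic concrete, here is the same question expressed as a simple predicate in Python. This is purely illustrative: the dict fields are invented stand-ins, not Wwise’s actual data model or API:

```python
# Illustrative only: Wwise objects modeled as plain dicts with invented field names.
def matches_query(obj):
    """And-combined criteria: sequence container, continuous, trigger rate < 0.25s."""
    return (
        obj["type"] == "SequenceContainer"
        and obj["play_mode"] == "continuous"
        and obj["trigger_rate"] < 0.25
    )

objects = [
    {"name": "wpn_rifle_tail",  "type": "SequenceContainer", "play_mode": "continuous", "trigger_rate": 0.1},
    {"name": "wpn_pistol_tail", "type": "SequenceContainer", "play_mode": "step",       "trigger_rate": 0.1},
    {"name": "amb_wind",        "type": "RandomContainer",   "play_mode": "continuous", "trigger_rate": 0.5},
]
results = [o["name"] for o in objects if matches_query(o)]
print(results)  # only the object meeting every criterion survives the And
```

Swapping `and` for `any(...)` in the predicate is exactly what flipping the Operator dropdown from And to Or does: one matching criterion becomes enough.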

Some examples

Even with the breakdown above, the Query Editor is a bit daunting. Fortunately, Wwise ships with an entire Factory Presets work unit full of very useful queries. Not only can we use these to find elements in our project, but we can also use them as a springboard to start making our own queries. Let’s take a look at a couple of these to see how the Query Editor works:

A Factory Query in Wwise showing how to find unlinked objects in a project

For those of you who have ever worked on multiple platforms where you needed to make platform-specific tweaks or changes to various objects, you know all too well the process of unlinking parameters from the “global” state of Wwise and having platform-specific values. (For the uninitiated, you can break the connection of a slider and have Wwise apply different values for different platforms. So if you were working on a game where, for example, the PC version had a different mix or number of variations than the Android version, you do this via unlinking in Wwise.) In projects like this, it’s common that some objects are unlinked but the majority remain linked. Perhaps you want to see all the objects in your project that are unlinked to get a sense of how different the platforms are. The image above is Audiokinetic’s preset for the Objects with Unlinked Properties Query and it will do just that: show you all objects with unlinked properties. If we look at the Criteria in the Query Editor we can see that it’s basically looking for any objects (due to the OR operator) where the Volume, Pitch, Low Pass or High Pass filters are NOT linked. This Query will then return all objects in your project that have one of these properties unlinked. Drilling down through the Factory Queries is a great way to get comfortable with where various items live and what their tweakable parameters are. Let’s look at a couple more before making our own.

I’ve had bugs in my projects more often than I’d like to admit where my sounds are suddenly inaudible. Usually I’ll use the Profiler to track down what’s going on (which is fodder for a wholly separate article). The issue often lies with the additive nature of filters. I may have a low pass filter on a sound, and then a state or two may be active which also tweak that filter. The exponential nature of the filter means that while each of these modifications may be minor, they add up quickly and can muddle the sound to the point of inaudibility. When troubleshooting what may be causing the excessive values, I’ll often look at the Voices tab in the Profiler, and if I see a pattern in the LPF values, or abnormally high values, I’ll run a query looking for LPF values either at an explicit number or above a certain threshold. The image below shows a query for all objects with an LPF value greater than 45, as well as some of the objects it returned. Note that you can change the conditionals in the criteria, so you can look for values greater than, less than, equal to, etc., to pin down what you’re looking for.

A query for objects with an LPF value greater than 45, along with the objects it returned
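The way those small filter tweaks stack up, and the query that hunts them down, can be sketched like this. The voice names and LPF contributions are invented; Wwise computes the effective filter value internally:

```python
# Hypothetical voices: each list holds LPF contributions from the object
# itself plus any active states; the contributions stack.
voices = {
    "amb_wind": [10, 5],       # base LPF + one state: still audible
    "amb_rain": [30, 20, 10],  # base + two states: muddled toward silence
    "sfx_door": [0],
}

# The effective LPF per voice is the sum of its contributions.
effective = {name: sum(parts) for name, parts in voices.items()}

# The query equivalent: return everything whose LPF exceeds a threshold.
suspects = [name for name, lpf in effective.items() if lpf > 45]
print(suspects)  # ['amb_rain']
```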

Now let’s say you use the Notes field in your project to help communicate with your Future Self about the state of things in the project. For example, when you add a placeholder sound, you add a note in the Notes field saying “placeholder.” With Queries, you can easily run a report to find all those sounds with a Note that reads “placeholder.” In the General section of the Browser is the item for Note. Type “placeholder” into the Criteria, run the Query on your project, and voila! All your placeholder sounds appear in the Results window. From here you can Copy them to the Clipboard to create a list for your reference, multi-edit them to remove the note, or modify them as you wish.

A query showing a search for objects with a specific string in the Notes field
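Under the hood, the Notes criterion boils down to a string match. A hypothetical sketch, with made-up object names and notes:

```python
# Hypothetical objects with Notes text; real notes live in Wwise's Notes field.
objects = [
    {"name": "door_open",  "notes": "placeholder"},
    {"name": "door_close", "notes": ""},
    {"name": "fs_gravel",  "notes": "Placeholder - re-record on location"},
]

# Case-insensitive substring match, mirroring a Note criterion of "placeholder".
results = [o["name"] for o in objects
           if "placeholder" in o["notes"].lower()]
print(results)  # ['door_open', 'fs_gravel']
```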

Putting it all together

So hopefully now we understand the basics of the Query Editor. Let’s create one for ourselves! When adding music to the Interactive Music Hierarchy, the music tracks are not set to stream by default, and setting them manually requires drilling down through a few containers to reach each track and tag it as streaming. It’s kind of a pain when importing multiple files at once. So let’s create a query to find all music tracks not set to stream. Once we have these, we can easily multi-edit the list to make the ones we want stream.

First off, we need to create a new query. Like any structure in Wwise, this is as simple as creating a child Query within a work unit in the Queries tab like so:

CreateQuery

Next we’ll want to build our new query. I’m only concerned with Music Tracks in this instance, since those are the music objects which contain the Streaming flag, so I’ll set the Object Type to Music Track. I select my Start From point as the Default Work Unit in my Interactive Music Hierarchy. In the browser, I open up Property and double click Value to add that item to the Criteria. I select “Is Streaming Enabled” from the drop-down and keep the checkbox unchecked. This will find all Music Tracks in my Default Work Unit that are not set to stream. I click Run Query, and a list of non-streaming music tracks is displayed in the Results window at the bottom of the Query Editor. I can now select those I want to stream and press Ctrl+M to bring up the multi-edit window and check the box to Enable Streaming. All done!

New_MusicNotStreaming_query
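The two-step workflow — query for the non-streaming tracks, then multi-edit the results — can be sketched as follows. The track data is hypothetical, and in Wwise the flag flip happens via the Ctrl+M multi-edit window rather than code:

```python
# Hypothetical music tracks in the Default Work Unit.
tracks = [
    {"name": "combat_intro", "streaming": False},
    {"name": "combat_loop",  "streaming": True},
    {"name": "explore_a",    "streaming": False},
]

# Query step: "Is Streaming Enabled" unchecked -> return non-streaming tracks.
to_fix = [t for t in tracks if not t["streaming"]]
print([t["name"] for t in to_fix])  # ['combat_intro', 'explore_a']

# Multi-edit step: enable streaming on everything the query returned.
for t in to_fix:
    t["streaming"] = True
```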

And that’s a cursory view of the Query Editor and how it can help you troubleshoot bugs and speed up your workflow. It’s a very deep tool, and one that definitely takes some playing around with to really grasp the breadth of its capabilities. If you are looking for information about content in your project, you can find what you need using Queries. The Factory Queries provided are a great jumping-off point, followed by experimentation on your own and sharing and discussion with the community. What is shown here really only scratches the surface, and the level of complexity you can build with queries is really only limited by your imagination.

Posted in Tools/Middleware Tagged , ,

Quick tip: using templates in Wwise

(this originally appeared as a post for designingsound.org)

Like any tool in a game developer’s toolbox, Wwise is a deep, complex program with an owner’s manual longer than most novels. Who has time to read through an entire manual these days? I wanted to show off a simple, often overlooked feature in Wwise, one which may not be readily apparent to someone who hasn’t read the manual. The ability to import a folder structure and apply a Wwise structure as a template to it can save a ridiculous amount of time when setting up structures that share a layout with ones already in your project. With a little forethought and a few mouse clicks, the process of setting up complex structures in Wwise becomes an automated dream.

To start, let’s say we have a series of impact sounds that blend between soft and hard impacts based on the force of the impact, plus a layer of bounce sweeteners that only plays above a certain force. We also do some filtering and pitch randomization based on the force and hardness of the colliding objects (via an RTPC). This is organized in Wwise as a blend container with child random containers, each of which contains audio files:

A blend container layout in Wwise we'll use as a template for importing a new structure. Click for a larger, readable version.

Now let’s start thinking about each of these structures as a folder. If we want to use this structure layout elsewhere in Wwise, we can “re-build” or emulate this structure layout in Windows using folders. Where we have structures in Wwise (their nomenclature for containers, actor mixers, virtual folders, etc.) we create folders in Windows which will serve as a guide for Wwise when we import new sounds. A Windows folder-based layout mirroring the impact structure above would look something like this:

Windows folder layout which corresponds to the structure layout you wish to emulate in Wwise

Similar to the Wwise blend container example above, we have a “master folder,” in this case obj_sheet_metal_impact, which contains three folders: bounce, impact hard and impact soft, and within each of those folders are the corresponding wav files. With a folder structure in Windows that mirrors the structure we want in Wwise, we can import into Wwise and have all of our bussing, RTPCs, crossfades, etc. created for us! (As an aside, I always build my folder structures directly in the Originals folder so that the organization is already in place, without having to move wav files and folders around as an extra step in the File Manager.)
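If you build these mirror layouts often, even the folder creation can be scripted. A minimal sketch, using the sheet-metal example's names; in practice you'd point `root` at a path inside your Originals folder:

```python
import os

# The top-level folder maps to the blend container...
root = "obj_sheet_metal_impact"
# ...and each child folder maps to one of the random containers.
children = ["bounce", "impact hard", "impact soft"]

# Create the full layout; exist_ok lets the script re-run harmlessly.
for child in children:
    os.makedirs(os.path.join(root, child), exist_ok=True)

print(sorted(os.listdir(root)))  # ['bounce', 'impact hard', 'impact soft']
```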

Once your folders are laid out in a similar manner to your Wwise hierarchy, open the Audio File Importer in Wwise and click the “Add Folders” button. Navigate to the top level folder of your new structure, in my case “obj_sheet_metal_impact” and click “Select Folder.” This will open that folder in the Audio File Importer. You can now assign each folder level as a different structure in Wwise such as a random, sequence or blend container. The magic, however, happens when we click the arrow in the Template column and select “Browse” then navigate to an existing structure whose layout and parameters we want to emulate:

The Wwise template layout

As you can see, Wwise automatically fills in which structure each folder should represent, and it even handles having more (or fewer) audio assets in a folder. Shuffle things around as needed, then click Import, and you’ll have a new structure mirroring the template, complete with all RTPCs, crossfades, etc.

Our new structure with all template properties applied

Once we import the folder structure using an existing Wwise structure as a template, we’re free to tweak it to our hearts’ (or games’) content, but most of the grunt work has been taken care of through some simple folder organization. Happy templating!

Posted in Tools/Middleware Tagged , ,