{"id":544,"date":"2017-01-22T00:34:48","date_gmt":"2017-01-22T00:34:48","guid":{"rendered":"http:\/\/www.bradleymeyer.com\/wp-core\/?p=544"},"modified":"2017-01-15T00:35:16","modified_gmt":"2017-01-15T00:35:16","slug":"the-feels-of-interactive-audio","status":"publish","type":"post","link":"https:\/\/bradleymeyer.com\/wp-core\/2017\/01\/22\/the-feels-of-interactive-audio\/","title":{"rendered":"The Feels of Interactive Audio"},"content":{"rendered":"<h6 style=\"text-align: center;\"><em>(this post originally appeared on <a href=\"http:\/\/www.designingsound.org\">designingsound.org<\/a>)<\/em><\/h6>\n<p><a href=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/polar-bear-196318_960_720.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-36505\" src=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/polar-bear-196318_960_720.jpg\" alt=\"polar-bear-196318_960_720\" width=\"551\" height=\"367\" \/><\/a><\/p>\n<h6><em>[Author\u2019s note: much of what I describe below could be construed as \u201creactive audio,\u201d not \u201cinteractive,\u201d because sound is most often reacting to other systems and world parameters rather than directly affecting them in a feedback loop. For the sake of brevity and sanity I will refer to both reactive and interactive sounds below using the widely accepted term \u201cinteractive audio.\u201d] \u00a0<\/em><\/h6>\n<p>The sculpting of the relationship between sound and an environment or world state is perhaps one of the greatest powers we hold as sound professionals. Obviously this is often conveyed in linear media via music. Think of Bernard Herrmann\u2019s strings in the shower scene of <em>Psycho<\/em>, Wendy Carlos\u2019 score in <em>The Shining<\/em>, or the \u201cch-ch-ch ha-ha-ha\u201d foreshadowing a grisly Jason Voorhees splatterfest. 
Even the subtle, otherworldly sound design within a surreal tale like David Lynch\u2019s <em>Lost Highway<\/em> grounds the inexplicability of the plot into the strange world in which it occurs. In each of these examples, the sound is at least as critical as the visuals to make the audience feel something. But the sound and visuals are the same every time, meaning we get the same experience with every replay.<\/p>\n<p>With technology, we have the ability to extend the power of audio into interactive media, and we\u2019ve been doing it for years. This direct relationship between user actions and sonic changes provides feedback to the user that something is happening, and it is this kernel of effectiveness that we call interactive audio. Let&#8217;s explore some examples of interactive audio across a few different media and look at how they affect the user and what responses they evoke.<\/p>\n<h2>\u00a0<a href=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/ColecoVision-wController-L.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-36509\" src=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/ColecoVision-wController-L-1024x467.jpg\" alt=\"colecovision-wcontroller-l\" width=\"550\" height=\"251\" \/><\/a><\/h2>\n<h2>Games<\/h2>\n<p>Video games have been kicking around the term \u201cinteractive audio\u201d for years, most frequently in the realm of music. While this usually means \u201cthe action changes, so the music responds to those changes,\u201d there are several examples where audio takes a more tightly integrated, and thus more effective, approach to interacting with player control.<\/p>\n<p>The quintessential example of interactive audio for most people is <em>Guitar Hero<\/em> and its spiritual successor, <em>Rock Band<\/em>. These are also prime examples of music being the driving force of interactivity because these games are, in fact, music simulators. 
You play along with a song, performing actions roughly similar to a musician\u2019s. Pressing buttons in rhythm with the music rewards you with points and a successful round. If you mess up, you hear your error through sound effects and music dropouts that simulate flubbing a song while playing it live. Even Harmonix\u2019s earlier games, <em>Amplitude<\/em> and <em>Frequency<\/em>, used a similar gameplay mechanic with a similar reward\/failure loop tied directly into music performance. Interestingly, while audio interactivity is ingrained into this style of gameplay, we see most of the unique bending of the sound to player actions when the player performs poorly. Only in failure does the song sound different than if it were playing outside of the game space. From the standpoint of the game&#8217;s directive (make the user feel like they&#8217;re actually playing these songs), it makes sense. Play perfectly and you&#8217;ll be rewarded with the feeling of &#8220;I did it! I played <em>Stairway to Heaven<\/em>!&#8221; Fail, and you get the feeling that you need more practice.<\/p>\n<h3>Parappa the Rapper<\/h3>\n<p>Before <em>Guitar Hero<\/em> there was <em>Parappa the Rapper,\u00a0<\/em>the PlayStation rhythm game that was all about pressing buttons in time with a rap. But even something as simple as <em>Parappa <\/em>introduced the ability to \u201cfreestyle\u201d your rap by pressing buttons in rhythm beyond what the game instructed you to do. Doing so would give you bonus points and also transform the soundtrack into something with a new, remixed feel. This interactivity adds several layers to the game: it brings a new dynamic to the soundtrack, which is normally the same two-minute song played over and over until you move on to the next level. It enhances the difficulty and player satisfaction by challenging players to be creative in how they press buttons in a game whose main mechanic is to follow onscreen instructions. 
And it promotes replayability by giving users a chance to do something new and different in each playthrough. Not bad for a simple sample trigger!<\/p>\n<p><iframe loading=\"lazy\" title=\"PaRappa The Rapper - Stage 1 (Cool Mode)\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/wl2vhMK9Q7c?start=14&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>A more complete example may be the game <em>Rez<\/em>. Initially developed for the Dreamcast and ported to the PS2 and, more recently, the PS4 and PlayStation VR, <em>Rez<\/em> has the look of a wireframe old-school arcade game like <em>Tempest<\/em> with mechanics similar to the 16-bit arcade classic <em>Space Harrier<\/em>. In <em>Rez<\/em> your character pulses to the beat, a simple scaling trick that instantly roots the music in the action of the game. <em>Rez<\/em> was pretty revolutionary for its time because the music itself changed based on what you were shooting and what was spawning or being destroyed on screen. The music was all 120 bpm, 4\/4 electronic music, and the way the player attacked the objects on screen gave the score the adaptive ability to retrigger loops, change instrumentation, or play sweeteners on top of itself. It\u2019s pretty fascinating to watch playthroughs of the game and hear how every session sounds different. The way the player chooses to attack will completely affect the music\u2019s structure and the samples used. Similar to how the player &#8220;remixes&#8221; the <em>Parappa<\/em> vocals by pressing buttons, players in <em>Rez<\/em> are essentially remixing the game\u2019s soundtrack (both sound effects and music) by playing the game. 
It is the player&#8217;s input that affects the audio output we hear.<\/p>\n<h3>Rez<\/h3>\n<p><iframe loading=\"lazy\" title=\"Rez Infinite: FIRST GAMEPLAY of Area X!\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/E1vpCg0m5bI?start=13&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<h3>Thumper<\/h3>\n<p><em>Thumper<\/em> is another rhythm action game where the music, sound and visuals are all so cohesively tied together that it feels \u201cright.\u201d You get sound when you expect it, and it matches the flow of the visuals. Every time you turn a tight corner, the track itself flashes, and the flash matches a swell in the music. Each power-up or enemy destroyed provides a satisfying low-frequency hit or music sample that matches the percussive feel of the action onscreen and ties seamlessly into the game\u2019s pulsing score. Pitch and beat-stutter effects are also woven into the gameplay, all of which feed into the game&#8217;s score. Tying sound into the onscreen action not only sells the action better but also emphasizes the relationship between these aspects of the game and our core information senses of hearing, seeing, and (sometimes) touch. More on that in a minute.<\/p>\n<p><iframe loading=\"lazy\" title=\"THUMPER - Level 9 Boss Final &amp; Infinity level\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/3njfX2opKsM?start=164&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>The videos above do not really do justice to the interactivity of the experience because the relationship between game and player is so intimate. 
To someone watching a video, it just looks like a video game. But to the player experiencing the lockstep relationship between audio and gameplay, it becomes something new: a more complete experience.<\/p>\n<p>The interaction between animation and sound bears further discussion because it is a highly effective, yet thoroughly underused, way to enhance an intended mood. Whether stark or whimsical, audio tied to and interacting with the visuals can lock the action of a game deeper into the soundtrack while also pulling the user further into a sense of immersion. From the automatons marching to the beat in <em>Inside<\/em> to creatures bobbing their heads to the score in <em>Rayman<\/em> to Sly Cooper&#8217;s marimba footsteps when sneaking around in a barrel to the music layers triggered when doing a super-powered jump in <em>Saints Row IV<\/em>, music or sound effects tied to animation enhance the play experience by tying the sonic palette to the wider scope of the world and its gameplay.<\/p>\n<p>Linking sound to actions in the world and having sound react to or interact with game state helps provide focus to a game and creates a sense of tempo in the action. But why is this the case? Naturally, the answer lies in that bag of gray matter sitting in our skulls.\u00a0Cognitive scientists have been studying the phenomena of multi-modal and cross-modal processing for years. Multi-modal processing occurs when multiple senses all provide information to the brain about a stimulus, while cross-modal processing occurs when one sense affects the perception of another. For example, there have been studies showing that <a href=\"http:\/\/nobaproject.com\/modules\/multi-modal-perception#reference-23\">playing audio cues during visual stimuli can make users think they see visual cues that are not there<\/a>. Sound, in other words, can imply visual data in certain scenarios. That is power! 
Various studies have also shown a <a href=\"http:\/\/cercor.oxfordjournals.org\/content\/11\/12\/1110.full#sec-1\">more complete understanding of a situation when given clues or information through more than one sense<\/a>. While there haven&#8217;t been any studies (that I know of) specifically looking at cognitive brain function and the use of interactive audio, I hypothesize that audio interacting with visual and game-state stimuli makes multi-modal integration tighter and therefore enhances the perception of these events.<\/p>\n<figure id=\"attachment_36507\" aria-describedby=\"caption-attachment-36507\" style=\"width: 578px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/music_app.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-36507\" src=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/music_app-1024x586.jpg\" alt=\"photo by MuseScore (http:\/\/flic.kr\/p\/dvmqDQ). Used under CC BY-SA 2.0\" width=\"578\" height=\"331\" \/><\/a><figcaption id=\"caption-attachment-36507\" class=\"wp-caption-text\">photo by MuseScore (http:\/\/flic.kr\/p\/dvmqDQ). Used under CC BY-SA 2.0<\/figcaption><\/figure>\n<h2>Apps<\/h2>\n<p>Computer and smartphone applications have been another place where we experience interactive audio. Audio enthusiasts above a certain age may remember the early web sensation that was Beatnik. Created by former pop star Thomas Dolby, Beatnik allowed you to remix music on the fly in a web browser. Pretty revolutionary for the late \u201990s! Nowadays we\u2019re seeing similar, more sophisticated applications on smartphones. 
From the DJ Spooky app, which allows you to remix your entire music library, to the audio app for the movie <em>Inception<\/em>, which produced a generative score based on the user\u2019s location and actions, we are seeing these tiny devices create compelling, iterative experiences for users through interactive audio.<\/p>\n<h3>DJ Spooky app<\/h3>\n<p>Turntablism and DJ mixing are excellent examples of (formerly) low-technology interactive audio. With two turntables and a mixer, a person can take two pieces of music and transform them on the fly into a singular new composition. The DJ interacts with these two pieces of vinyl (or CDs and mp3s nowadays) and creates a wholly new experience from this interaction. Using these skills as a jumping-off point, DJ Spooky (a well-known DJ, musician, author, artist, and busy bee) helped create\u00a0an app that allows users to apply these same tools to remix their entire music library. Using controls and gestures on a touchscreen, users can mix, trigger samples, play loops and even scratch samples and music from their own library. It\u2019s a fun, dangerously addictive toy\/performance tool, and what keeps users coming back is the interactive nature of manipulating linear audio. This interaction between the user&#8217;s fingers and their music collection, slowing down a track, scratching it, or firing off a specific phrase at will to create something entirely new, used to be an art that took years of practice and lots of gear to master. 
Now it all lives in a few dozen megabytes on a tiny phone or tablet and gives users an instant ability to mix and remix any sound into something new.<\/p>\n<p><a href=\"https:\/\/youtu.be\/_jZG-4Kv3BI?t=19\">https:\/\/youtu.be\/_jZG-4Kv3BI?t=19<\/a><\/p>\n<h3>Inception<\/h3>\n<p>At the time of its release, the Inception app, a free iOS movie tie-in download, was dubbed an &#8220;augmented sound experience.&#8221; Created by the innovative team at RJDJ, it combined samples from Hans Zimmer\u2019s score with smartphone technologies such as GPS, accelerometer, gyroscope and microphone, plus DSP techniques such as delay, time stretching and reverb, to make a truly interactive application. The premise of the app is that you unlock \u201cdreams,\u201d which are sonic textures created from the score and heavily processed. As you do things in real life, the app begins playing these textures processed in a suitable way. For example, if you launch it\u00a0after 11pm, it begins playing really spacey, dreamy textures with layers of post-processed delay. Other tracks and effects are only unlocked by other special events, like being in Africa or being in sunlight, each with its own unique experience. It is similar to augmented reality games, but audio-centric: your experience of the app is what you hear, which in turn is affected by elements like location and time. Your very being, where you are or when you are, is the driver of the changes you hear in the sonic\u00a0textures. If you have an iOS device, you should download and play with it yourself to get a glimpse into the ways we can affect a user&#8217;s experience with the various parameters and technologies around us.<\/p>\n<h3>H__R<\/h3>\n<p>RJDJ also has a newer app currently titled H__R (apparently there is some litigation regarding its name, so it&#8217;s looking for a new one), which gives the user more explicit control over these features. 
With H__R the user puts headphones on and is given a series of presets such as \u201cRelax,\u201d \u201cHappy,\u201d or \u201cSleep.\u201d Each has sliders the user can play with to affect the sound coming in through the microphone. Select the \u201cOffice\u201d preset and you have sliders to control Space, Time Scramble, and Unhumanize. You can use these sliders to dampen and filter the sound around you, making it sound like you&#8217;re in a big space with lots going on, or you can zone out into your own quiet time. You are effectively tweaking your mix of the world around you. This app is especially interesting because of the way it tweaks your perception of everything you hear with some sliders on your phone and some clever under-the-hood DSP. It\u2019s yet another example of how interactive audio can affect the way we hear and create a new experience in \u201cthe real world.\u201d<\/p>\n<figure id=\"attachment_36506\" aria-describedby=\"caption-attachment-36506\" style=\"width: 390px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/WhisperWall_NickGray.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-36506\" src=\"http:\/\/designingsound.org\/wp-content\/uploads\/2016\/11\/WhisperWall_NickGray-768x1024.jpg\" alt=\"photo by Nick Gray (http:\/\/flic.kr\/p\/fDcHq). Used under CC BY-SA 2.0\" width=\"390\" height=\"520\" \/><\/a><figcaption id=\"caption-attachment-36506\" class=\"wp-caption-text\">photo by Nick Gray (http:\/\/flic.kr\/p\/fDcHq). Used under CC BY-SA 2.0<\/figcaption><\/figure>\n<h2>Installations<\/h2>\n<p>One last area where we\u2019ve seen a lot of interactivity is in art installations. I find these the most interesting because, unlike games or other media apps, they involve a bit more human interaction: not just fingers or hand gestures, but humans moving around to experience the way sound interacts with the environment. 
While art installations incorporating interactive sound may often be relegated to galleries and workspaces, they challenge the ways we perceive and react to audio in more interesting ways.<\/p>\n<p>One example we may see outside of galleries, in places like public parks, is a whisper wall or another structure that concentrates and directs vocal sounds to a listener far away from the speaker (see the image above). While this is technically just a demonstration of physics and sound propagation, it is also a means of architecture driving interactive audio. At the \u201cMurmur Dishes\u201d installation in downtown San Francisco, I have seen people walking down the street stop what they\u2019re doing to begin interacting with the sculpture. One person goes to one side, the other to the opposite side, and they talk quietly while staring back at each other in amazement with that look in their eyes of \u201cOh my God! I can totally hear you right now!\u201d This is an interactive sculpture with audio as the feedback of the experience. Users rely on the audio to prove to themselves that this seemingly aural illusion is in fact real, and it is through audio (sending and perceiving speech) that they interact with the sculpture.<\/p>\n<p>Let\u2019s look at another example of an art installation incorporating interactive audio to better understand how audio can be used to affect user experience. <a href=\"https:\/\/annesoaudio.com\/\">Anne-Sophie Mongeau<\/a>, a video game sound designer and sound artist, created an art installation that simulated the experience of being on a large sailing ship. The exhibit featured a custom 11-channel speaker layout playing back sounds around the space to simulate the ship\u2019s various noises, from the creaking of the deck to the wind in the sails above. Wind and rainstorms would occur at random intervals while visual projections supported the changing weather and the ebb and flow of the ocean. 
In the middle of the room was a ship\u2019s wheel. Invariably, people would gravitate to the wheel and see if it moved. Indeed it did, and every turn elicited a heavy, creaky, wooden \u201cclick\u201d from a speaker mounted at the wheel. The more someone turned the wheel, the quicker the weather patterns would change. Anne-Sophie designed this system using Max\/MSP, a fantastic tool for creating complex dynamic audio systems driven by internal or external logic or controllers.<\/p>\n<p>Adding the wheel and its interactive components transformed the piece from a passive art installation into an interactive experience. Users were no longer walking through an exhibit and hearing waves and sails. Instead, they were on board the ship, steering it into and out of squalls. The immersion of interactive audio can be exhilarating, especially when it is taken out of the living room or off the screen and propagated to a larger venue.<\/p>\n<p><a href=\"https:\/\/vimeo.com\/136594880\">https:\/\/vimeo.com\/136594880<\/a><\/p>\n<p>One may be tempted to infer that interactive audio is so effective because we are used to static (non-interactive) audio, but looking at various forms of popular media, we are seeing more and more examples of interactive audio every day. Their effectiveness lies in how they tie into other systems and how they provide instant verification that user action is affecting the experience, whether that\u2019s at home on a console, on our smartphones or at a museum. 
This has long been the appeal of interactive audio: crafting an experience and having it adapt to user behavior. I expect new applications and implementations of audio as a driving means of interactivity will continue to mushroom.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>(this post originally appeared on designingsound.org) [Author\u2019s note: much of what I describe below could be construed as \u201creactive audio,\u201d not \u201cinteractive\u201d because sound is most often reacting to other systems and world parameters rather than directly affecting them in a feedback loop. For the sake of brevity and sanity I will refer to both [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[],"tags":[],"class_list":["post-544","post","type-post","status-publish","format-standard","hentry"],"jetpack_featured_media_url":"","jetpack-related-posts":[],"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/posts\/544","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/comments?post=544"}],"version-history":[{"count":2,"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/posts\/544\/revisions"}],"predecessor-version":[{"id":547,"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/posts\/544\/revisions\/547"}],"wp:attachment":[{"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/media?parent=544"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/categories?post=544"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bradleymeyer.com\/wp-core\/wp-json\/wp\/v2\/tags?post=544"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}