Accepted in Partial Fulfillment of the Requirements For the Degree of Master of Fine Arts
at The Savannah College of Art and Design
__________________________________________ __/__/__
Stephen LeGrand, Committee Chair          Date

__________________________________________ __/__/__
David Stone, Committee Member 1           Date

__________________________________________ __/__/__
David Malouf, Committee Member 2          Date
Cutting the Cables: Developing DAWs Beyond Analog Methods
A Thesis Submitted to the Faculty of the Sound Design Department in Partial Fulfillment of the Requirements for the
Degree of Master of Fine Arts Savannah College of Art and Design
By
Alan Hugh Koda
Savannah, Georgia May 2011
Koda i
Acknowledgements
I would like to thank my thesis committee for their guidance, direction, and assistance in exploring this thesis. My gratitude goes out to Stephen LeGrand, whose words of enthusiastic encouragement pushed me forward; to David Stone, who provided a wealth of wisdom and insight on postproduction sound; and to David Malouf, who patiently and ardently guided me through the frontiers of interaction design. I also thank Charles Dye, Alexander Brandon and Greg Herman for their advice and counsel on media software and workflow processes, and their wisdom on ‘The Big Picture’. I thank the sound department’s faculty for the ideas, lessons, discussions, and advice that have been so generously given in my three years’ tenure at SCAD. Brandon Brown, Clayton de Wet, and Stephen Fortunato deserve special thanks for their continued moral support and friendship; their encouragement has kept me working, researching and writing through the good and the bad. I especially wish to thank my parents and siblings for all of their love and support, without which this thesis would not be possible.
Table of Contents
ABSTRACT ........................................................................ 1
INTRODUCTION .................................................................... 3
CHAPTER 1: BACKGROUND ........................................................... 8
A Brief History of DAWs and Postproduction Audio ................................ 8
Audio Attitudes Towards Technology ............................................. 10
CHAPTER 2: DESIGN FLAWS ........................................................ 13
The Signal Flow Model .......................................................... 15
Busses and Tracks .............................................................. 21
Parameters ..................................................................... 24
The Digital Environment ........................................................ 26
Workflow ....................................................................... 28
CHAPTER 3: NEW TECHNOLOGIES .................................................... 33
CHAPTER 4: THE NEW GUYS ........................................................ 37
New Benchmarks ................................................................. 39
CHAPTER 5: RESPONSE BY DESIGN .................................................. 42
The X-Y Mixer .................................................................. 42
The Touch Screen DAW ........................................................... 46
Shortcomings ................................................................... 56
CONCLUSION ..................................................................... 58
WORKS CITED .................................................................... 60
BIBLIOGRAPHY ................................................................... 62
Cutting the Cables: Developing DAWs Beyond Analog Methods
Alan Hugh Koda May 2011
This thesis explores the development and design of the modern Digital Audio Workstation in the context of film postproduction environments. Through analysis of the various processes that DAWs address, of the interface through which those processes are utilized, and of DAW development compared with that of other media software, the thesis argues that DAW design relies too heavily on modeling analog audio technology, and would benefit from design explicitly focused on taking advantage of current digital technologies and methods. The final portion of the thesis posits several such designs and examines how they address issues with current DAW design.
“…at every screen are two powerful information-processing capabilities, human and computer. Yet all communication between the two must pass through the low-resolution, narrow-band video display terminal, which chokes off fast, precise, and complex communication.”
- Edward Tufte
Introduction
In The Inmates Are Running the Asylum, Alan Cooper analyzes the many ways that modern-day software products fail users. He posits a key issue underlying these problems – namely, that software design most often comes from software programmers. Because of their occupational focus on issues with code and functionality, Cooper argues that programmers ultimately design software from their own point of view – what he calls an implementation design. Software is presented in much the same way it is constructed, and while this allows for easier building and debugging, it ignores the needs, desires, and mindsets of the end users and their workflow.
Digital Audio Workstations are in much the same way designed from an implementation point of view. Created to fit within workflows steeped in analog processes, DAWs logically emulate the look, feel, and functionality of the hardware they are designed to replace. Postproduction environments in particular support this emulation, as it creates a fairly minimal learning curve – something of great importance to large teams with complex multistage tasks to accomplish. However, in the rush to fill analog gaps with digital solutions, designers lose sight of the greater possibilities that digital technology affords. Systemic design flaws could be corrected with solutions disassociated from hardware needs, and new media production paradigms could be explored. Analog-based DAW design satisfies the short-term goals of successfully introducing new software solutions to previously hardware-based processes, but fails at the more general aim of making the trade's work as efficient as possible.
The current ‘standard’ DAW design was originally a great breakthrough – much like the Graphical User Interface provided a metaphor for the average consumer to easily use computers, DAWs provided a visual metaphor akin to the traditional methods and hardware with which producers and sound designers worked. As software technology has replaced older hardware-based methods, however, the disconnection between how audio software is currently designed and its potential becomes more obvious. The active design choice to mirror hardware-based methodologies limits the actualization of DAW software's full potential, and the advancement of new methodologies and workflows for audio, in many ways:
1) Emulating hardware layouts also emulates their flaws. While direct metaphors
such as virtual consoles and editing bays are easy for professionals to grasp, they
also inadvertently carry many of the analog world’s shortfalls into the digital realm,
such as obtuse signal flow paths, a lack of robust relational models, and a general
organizational inefficiency. Work models that arose out of technical necessity are
now canonized as standard processes and functions of the professional audio world,
instead of being reconstructed as something more direct, communicable, and
effective.
These structures lack organizational focus, and ignore the particular needs of
a computer environment, instead importing wholesale mix and exchange idioms
from consoles and other analog hardware. The work model requires an excess of
managerial effort from mixers and editors, and the effects reach farther than
complicated individual tasks. The reduction and concatenation of once widely
separated tasks into a single software package blurs the overall process of sound
design. Tasks possess ambiguous boundaries, and require the user to tailor each
program use to the task at hand. What were once aesthetically distinct processes
lose their form, creating more mental work for the user to properly delineate the
needs for each task. Integration of multiple individuals’ work is cumbersome, and
bears many of the same difficulties that the analog world presents.
The digital realm is meant to circumvent practical barriers to efficiency and
quality, and many of these barriers have in fact been surmounted. However, a need
for improvement is still evident, and better functionality lies in restructuring the
presentation of audio workflow for the digital world.
2) Emulating hardware technologies ignores possibilities with digital
technologies. One great advantage that analog hardware has always had over
digital audio software is the performative aspect of effects creation and mixing that
it enforced. The difficulties inherent with cutting, splicing, and syncing reels of
magnetic tape encouraged the usage of Foley effects, as well as the creation of
special effects live to picture. It especially encouraged the use of the mixing console
like a musical instrument, providing speed and expressivity to mixers. The
exactitude that digital technology can achieve, on the other hand, means that
regions of sound can be nudged by hundredths of frames, effects can be stretched at
just the right points to exactly match an action, and breakpoints can be drawn in to
crudely mimic volume curves, rather than resorting to any faders or potentiometers.
While having such a high degree of precision is advantageous in many ways, it often leads to hours of clicking, cutting, and dragging to achieve results, and in the process implicitly encourages detail-oriented tweaking over more holistic approaches. With current advancements in touch-screen technology, as well as efforts to create more tactile electronic instruments, digital post-production technologies could achieve the same level of expressivity and spontaneity that their analog cousins enjoy. Sound design is most successful when approached as performance rather than construction, and DAWs in postproduction would benefit greatly from designs with more grace, more sensitivity to context, and more opportunity for humanized input.
3) Emulating obsolete hardware layouts creates a barrier to new entrants into the field. While the software-designed-as-hardware metaphor may have provided an excellent entry point for seasoned audio professionals into the world of computer-based audio, it is inadequate for communicating functionality to neophytes with little to no hardware experience. With the usage of analog audio hardware slowly eroding, newcomers have much greater difficulty acquiring analog hardware experience as a learning aid. Meanwhile, starting the road to postproduction audio fresh on a DAW means learning both obsolete analog practices and current digital versions of those practices. The functionality of the hardware-as-software metaphor is waning; instead, it perpetuates a system of exclusion and redundancy.
Some DAWs and technologies have attempted to address a few of these issues. Most have not, and continue to build new features and tweaks into the same problematic structure, creating feature bloat and making it ever more difficult to improve the structure and usage of the software or to successfully invite new users. This thesis argues that ultimately the biggest problems with current DAW utilization are systemic – they are built deep into the design of the software. Effective change requires a thorough remodeling of computer audio, and, implicitly, an even greater focus on aesthetic and artistic possibility over metrics like audio fidelity and functionality.
Chapter 1: Background
A Brief History of DAWs and Postproduction Audio
The slow but steady introduction of digital editing and mixing systems into
postproduction audio provided huge improvements in editing efficiency and complexity
over counterpart analog systems. Analog sound editing for film is particularly time and
labor intensive. Editing steps included procuring magnetic tape transfers of sound cues
(taking about a day to process), physically splicing those cues into reels one at a time, and
syncing dozens of reels for all effects playback (Stone). Archiving rooms stored sound
effects by the hundreds (Yewdall 219). A small staff of technicians was dedicated to the
task of duplicating archived effects cues for the editorial department’s use (Stone). A
digital editing medium eliminated the need for archiving rooms, for sound cue transfers,
and for physical splicing. Digital decisions were also infinitely more malleable – an editor
could tweak a cut for hours without having to worry about affecting the quality of the
original cue.
Regardless, audio professionals were reticent to make the transition; DAWs crept
into audio workflows slowly. Practices of implementing (or not implementing) digital
technologies varied widely through the 1990s. Postproduction houses invested in a myriad
of emerging DAWs – WaveFrame, Sonic Solutions, NED PostPro, and, of course, DigiDesign’s
Pro Tools – which were then adopted in different ways into film sound editorial
departments. Postproduction workflow developed into a hybrid of digital and analog
technologies. A dozen different work methods attempted to accommodate the frequent
media exchanges that arose between multiple DAWs and analog equipment. Work
frequently shuffled from tape to digital editing sessions and back to tape. Foley was often
recorded digitally (usually on a WaveFrame) and printed out on 24-track magnetic stock –
meaning that the 24 tracks had to be transferred back, one by one, into another editor’s
DAW for editing (very often an entirely different DAW). Hybridization carried onto the mix
stage as well, though not directly to the console. Digital audio provided a way to quickly
edit and reprint material, and so stages began to replace their offline Moviola editing rooms
with small DAW edit bays right near the mix stage (Stone).
Businesses built on racks of expensive hardware, however, had little desire to
plunge headlong into a radically different, relatively new technology. Many professionals
simply didn’t trust the sound quality of digital audio. Digital music mixing was “something
to be scoffed at” even through the late 1990s (Charles Dye). Production audio for film was
likewise apprehensive about making the switch from Nagras to DAT recorders (and later
from DATs to file-based systems), despite the need for reliably and efficiently capturing hours of on-set dialogue. Production mixer Jeff Wexler, one of the first to use hard-disk recorders on-set, began his transition by first recording with a Deva alongside a DAT recorder – and later surreptitiously leaving the DAT machine off (Wexler). Producers and
audio teams alike simply did not trust new technology to perform the job as reliably as old
hardware could.
Audio Attitudes Towards Technology
These resistive attitudes are indicative of the deeper technological culture
surrounding audio in general. Audio work consists of converting acoustic energy into
electricity, and so is inherently based on the technologies that enable that conversion. This
technological base ironically makes for a somewhat technophobic culture – while the
original analog technologies make up audio’s heritage, newer technologies become a threat
to that heritage. Audio culture has been shaped by the production practices of the analog
world: the usage of large mixing consoles, 2-inch 24-track tape, complex routing between various sources, and outboard effects. Digital advancements were tellingly first implemented in analog fashion, with technologies such as digital outboard effects, synthesizers with built-in editing stations, and fader automation packaged in modern consoles. All of those technologies assume that audio manipulation is physically modular – that each piece of gear functions separately and must be routed to others. Pre-DAW technologies continued to support the legacy audio model in which signal flow is broken into somewhat amorphous stages, each collectively made up of disparate pieces of hardware routed together.
Even more importantly, no developments up to the 1990s drastically affected the audio recording medium itself.1 All sounds were recorded to and played back from magnetic tape. Wrapping audio as a file fundamentally altered the ways that audio could be approached, and combined with hard-disk-based nonlinear editing and mixing systems, constituted a complete paradigm shift in terms of the audio production chain.

1 This is true for editing and mixing work; playback systems, however, began their march towards digital technology in the late 1970s with Dolby Labs' theater sound revolution.
DAWs were bred from a strain of audio culture much different than analog production. The antecedents to the DAW grew from the world of sound synthesis, music notation, and coded instructions for computer music performances. Collegiate computer music centers stood at the center of digital audio development in the 1970s and 1980s, creating large, complex audio machines like the Sal-Mar Construction and the Samson Box (Leider 61-74). Developments stemmed largely from universities' efforts to explore computer-based synthesis and composition tools, and from hardware companies facilitating those explorations. The commercial markets also tended heavily towards holistic systems for musical synthesis and composition. Fairlight's Computer Music Instrument [CMI] and New England Digital's Synclavier, perhaps the most recognizable DAW-like instruments, were designed as all-in-one synthesizer workstations; New England Digital later made its own foray into DAWs with PostPro (Leider 65). Pro Tools itself began as Sound Designer, a sample editor for the Emulator sampling keyboard, rather than a full-fledged audio workstation (Cook 8).
While university and musician interests focus on the individual composer or musician, audio postproduction maintains a much starker focus on large-scale team-based efforts and divided work. The all-digital, all-in-one, computer-based postproduction package was a novel idea, but almost in direct opposition to the working methodology of audio in general. Rather than a physically disparate set of hardware and processes, nearly the entirety of the pipeline could be concatenated onto one platform. The potential workflow proffered by the likes of Pro Tools challenged the traditional divisions inherent in audio work, and even posited the threat that a single user would be able to accomplish what was once the province of an entire audio team.
The threat, though, was illusory. The audio workflow that DAWs were invading was
still entrenched within the confines of analog audio equipment. The inherent restrictions
of such a technology-‐based industry, working with long-‐format productions, enforce a
modular, task-‐based approach spread among a team to maximize efficiency. DAWs found
use, then, the same way that digital reverb and wavetable synthesis found use – as one
module of a larger workflow. In music, Pro Tools in particular grew as a tool for
compositing vocal tracks from multiple takes, and as an easy way to recall rough mixes for
tracking purposes (Dye). In postproduction, editors switched from literally cutting
magnetic tape to metaphorically cutting audio files in a computer. The DAW’s primary use
was in an area where it had the greatest comparative capability – editing. The Edit Window
has since become an archetypal piece of any DAW software.
DAW use has since seen piecemeal growth, gradually extending into all aspects of
professional audio; its greatest triumph is perhaps its now universal use for film mixing. Its
greatest shortcomings, however, largely remain unchanged since its inception. The
problem is not one of small omissions, or a lack of polish; it is a systemic rendering of audio
processes from a decidedly analog point of view. DAWs still attempt to mimic the look, feel,
and functionality of legacy audio hardware, and in doing so carry a number of issues and
concerns into the digital domain.
Chapter 2: Design Flaws
Emulating hardware layouts also emulates their flaws.
A recent AES publication focuses on a hallmark of graphical DAWs since their
inception and their failure to develop as a robust visual model of audio – the waveform.
Waveforms display a single parameter: an audio file’s amplitude along the timeline. This
allows one to recognize general changes in level, attacks and releases, and clipping and
distortion issues, but is otherwise a limited visual tool for analyzing audio content. A key
issue is the waveform's genesis from time-domain-based data, an approach based on the physical qualities of sound and acoustics rather than the human perception of sound. As Giannakis and Smith note in a separate study on visualizations of audio characteristics, “[v]isual representations of sound such as time-domain and frequency-domain representations are based on physical approaches to sound understanding and cannot be
used as intuitive conceptual metaphors for sound design” (Giannakis, Auditory 4).
Consequently, prototype models developed in the study strive to create more ‘natural’
visualizations of audio and audio properties, by “[removing] most of the clutter and
[providing] the user with a mental model of the acoustic content that is easy to grasp”
(Gohlke 9).
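The single parameter a waveform encodes can be made concrete: a typical overview display reduces each span of samples to its minimum and maximum amplitude, one pair per horizontal pixel. The following is a rough sketch of that reduction, not the algorithm of any particular DAW; the bin count and the synthetic decaying tone are hypothetical stand-ins for real audio.

```python
# Sketch: how a DAW-style waveform overview reduces raw samples to
# one visual parameter -- amplitude over time. Each display bin keeps
# only the min/max of the samples it covers; frequency content,
# masking, and loudness contribution are all discarded.
import math

def waveform_overview(samples, bins):
    """Reduce a sample list to (min, max) pairs, one per display bin."""
    size = max(1, len(samples) // bins)
    overview = []
    for start in range(0, len(samples), size):
        chunk = samples[start:start + size]
        overview.append((min(chunk), max(chunk)))
    return overview

# One second of a decaying 440 Hz tone at 44.1 kHz (hypothetical cue).
rate = 44100
tone = [math.exp(-3 * n / rate) * math.sin(2 * math.pi * 440 * n / rate)
        for n in range(rate)]

peaks = waveform_overview(tone, bins=100)
# The envelope decays, so later bins span a narrower amplitude range.
print(peaks[0], peaks[-1])
```

Everything beyond this amplitude envelope must be inferred by the user, which is precisely the poverty of information the prototypes above try to remedy.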
The study clearly touches on much more than waveform design. The multiple notes
and cues from visual design and modeling principles implicitly impugn DAW software’s
unwillingness to develop, in Gohlke’s words, “a consistent, easily learnable… and effective
mapping between sound and graphics” (Gohlke 4). The study stresses that the waveform’s
biggest shortcoming is its lack of dimensionality – its insistence on the measurement of
only volume over time – and various suggested alternative models attempt to increase the
information density within the waveform display. These prototypes illustrate how
waveforms could provide masking information, how lenses on scrubbing tools could lessen
the need for constant shifts in viewing scale, how individual tracks’ contribution to overall
loudness could be mapped, and how the effect of a plugin could be visually superimposed
on a waveform to provide feedback. Readability and ‘natural mappings’ are stressed, in
contrast to more typical models based on graphs of raw amplitude and frequency data
(Gohlke 5). Their findings echo information design guru Edward Tufte’s argument for what
he calls ‘micro/macro designs’ – those that incorporate dense, extensive data sets to elicit
broad and complex patterns – and his assertion that “often the less complex and less subtle
the line, the more ambiguous and less interesting is the reading” (Tufte 51).
Figure 1 - Several of Gohlke et al.'s waveform prototypes (Gohlke 8-9).
The study is a microcosm of the issues that plague DAWs today. As Gohlke and her
partners find for waveform views, digital audio currently employs long outdated
visualizations, and is sorely in need of better methods to visually relate DAW data to the
user. What the study implicitly advocates is higher information density in the visualization
of audio data. The example prototypes affix new dimensions to the otherwise one-dimensional waveform, illustrating how many opportunities for data visualization are
missed by current DAW systems at even the most basic level. The extrapolation of these
findings, then, is to apply these tactics to the whole of DAW design, increasing
dimensionality and information density along all possible parameters.
Because of the current design paradigms of DAWs, however, those metrics are curtailed by the limitations manifest in imitating the analog domain. The most widely employed application designs prioritize the creation of a linear, top-down, inline signal flow model, with all other subsequent data object and process models based on this layout. The inline model creates two master axes – time and tracks – that are more often than not disproportionately weighted, against the favor of other data such as project structure, frequency content, processing stages, and plugin parameters. Much like the waveform itself, this model is incommunicative and data-starved, and lays the brunt of organizational and management work on the shoulders of the user.
The Signal Flow Model
For the importance that nearly all audio professionals place upon signal flow, it is
perhaps the most poorly implemented concept in DAW designs. The biggest issue one finds
in beginner audio classrooms is almost always a failure to grasp signal flow. Student
projects are rife with excessive plugin instantiations, overused volume automation,
gratuitous mixing decisions on nearly all tracks, and redundant copies of audio tracks to
achieve effects variations – problems which all can be rectified by summing tracks together
and then applying plugins, parameter mixing, and parallel processing. Students and entry-level professionals consistently have difficulty in learning how to route signals, how to
locate issues and their points of origin, and how to effectively utilize routing and summing
points to facilitate their workflow.
At first glance, it is easy to see that signal flow has not adopted an appropriate visual
model to present its fundamental concepts, functions, and uses. Though various software
models are used in DAWs to provide signal flow functionality, nearly all of them hearken
back directly to the analog origins of signal routing. Interfaces involve sequences of inputs,
outputs, sends, and intermediary tokens that route between them; mixing and processing
decisions can occur along any level of the signal chain, with no clear representation of how
either subordinate or superior tracks are affected; and no system clearly visualizes the
relationships between tracks. The digital version of routing is thus very much like its
physical counterpart: the rows of patch bay inputs and outputs, full of dangling patch cords
and handwritten labels, have become rows of obtuse digital busses.
Figure 2 - The Mix view in Pro Tools, which replicates a console. Note how all tracks appear as peers.
But this problem runs deeper than a lack of visual representation. A common theme
with misuse of signal flow is that offenders often err on the side of track overuse. For
example, beginner mixers often instantiate multiple inline reverbs on each track, rather
than sending parallel signals from those tracks into a separate track with one reverb unit
(Charles Dye identifies this as the most common beginner problem) (Dye). Because of the
linear method that mixing consoles utilize to process signals, the track becomes the key
component in audio processing. The proper solution to the reverb example – routing
through a summing point for additional processing – emphasizes that all processing
functions occur at a track level, and that tracks and busses are the units that organize the
structure of any mix, no matter how large or small. The problem therein is the decidedly implementation-oriented viewpoint that this design assumes. It is true that tracks and
busses create the functionality to independently mix simultaneous audio sources, but that
only addresses the ‘nuts and bolts’ functionality of the routing system, without
consideration for the end user. Track- and buss-based routing models the process of routing signals, but turns a blind eye to the user's perception of the process. One's mental
model of the process more likely than not is much more concerned with the structuring of
audio into practical hierarchies than with which busses should be connected into which
series of inputs and outputs to achieve that hierarchy.
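The reverb lesson can be sketched in miniature: in the inline approach every track carries its own reverb instance, while in the send/buss approach each track contributes a parallel copy to one summing point with a single shared reverb. The track names, send level, and the trivial "reverb" function below are hypothetical stand-ins for illustration, not any real plugin or session.

```python
# Sketch of the routing lesson above: N inline reverbs versus one
# shared reverb buss fed by parallel sends. The "reverb" here is a
# stand-in function; real plugins are far more complex, but the
# routing topology is the point.

def reverb(signal):
    """Hypothetical effect stand-in: counts how many units are run."""
    reverb.instances_run += 1
    return [s * 0.5 for s in signal]

reverb.instances_run = 0

tracks = {"dialog": [0.2, 0.4], "foley": [0.6, 0.1], "ambience": [0.3, 0.3]}

# Inline approach: one reverb instantiated per track (3 units of work).
inline = {name: reverb(sig) for name, sig in tracks.items()}

# Send/buss approach: sum the parallel sends first, then process once.
send_level = 0.5  # hypothetical send gain, identical for every track
buss = [send_level * sum(col) for col in zip(*tracks.values())]
shared = reverb(buss)  # a single reverb unit serves the whole mix

print(reverb.instances_run)  # 3 inline instances + 1 shared = 4 calls
```

The summing point does not merely save processing; it expresses a structural fact about the mix, which is exactly the kind of relationship the track-and-buss display leaves for the user to reconstruct mentally.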
From an audio professional’s standpoint, however, it is difficult to separate the
presentation from the task of routing. Signal flow defines analog audio processes, right
down to the very creation of audio. The process of changing sound into an electrical signal
and back into sound involves multiple stages of signal manipulation, from gaining the
initially weak microphone signal to combining multiple signals to gaining the composite
signal to a strength that will drive speaker cones. All of this is achieved through signal
routing and track-based processing, whether connecting sound sources to different
channels or summing those channels into an unused track. The origins of signal flow are
not simply a way to composite a few sounds together, but describe an entire process of
altering signals, both in series and in parallel to other input signals. In short, signal flow is
not part of the process of electrifying sound – it is the process. It defines how source
signals are made into usable audio.
The objectification of audio as physical recordings has much more formally
segmented the process, transforming audio from a live, continuous procedure of shaping
dynamic inputs into a series of stages of collecting, processing, and mixing sounds. DAWs
sustain this objectification by moving sound from a limited physical container
(phonographs or magnetic tape) into a highly malleable digital form (audio files). The
traditional implementation of track- and buss-based signal flow, however, does not fully
accommodate this level of segmentation. The system is too simple to provide the sort of
data stratification necessary for organization and comprehension of complex mixes.
Compare it to Adobe products’ methods for dividing and organizing content. Most
programs in Adobe’s Creative Suite utilize ‘layers’ as the most basic organizer of visual
content. Individual elements within layers can also be wrapped together as ‘groups’, which
are then manipulated as a single element. Layers can contain other layers, and groups can
contain other groups. There are more advanced structures as well, at more application-specific levels. Photoshop can create ‘Smart Objects’, which are essentially complex combinations of layers, groups, and elements. Illustrator uses ‘Symbols’ to fill the same function. Smart Objects and Symbols can be instantiated multiple times, and easily resized and oriented, but it is more difficult to edit their constituent layers than with general Layers and Groups. Both applications require the user to double-click the Symbol/Smart
Object to enter a special editing mode, and Photoshop even prompts the user to commit
their edits when finished. This distinction between basic Layers/Groups and more
complex Smart Objects/Symbols both assumes and enforces a workflow division – namely
that the user will have a greater need to make complex object iterations, with very simple
variations, than to keep full control over every individual element. The distinction
packages the basic organizers (Groups and Layers) into complex ones, reducing the time
and effort it would otherwise take to duplicate similar structures throughout the
document.
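The relational structure described above can be made concrete with a small data model. The following is a hypothetical Python sketch, not Adobe's actual implementation; all names are invented for illustration:

```python
# Hypothetical sketch of the Layers/Groups vs. Symbols/Smart Objects
# model: groups nest freely, while a Symbol is a shared definition
# that many Instances reference at once.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str

@dataclass
class Group:
    name: str
    children: list = field(default_factory=list)  # Elements or nested Groups

@dataclass
class Symbol:
    """A reusable definition; every instance shares one editable source."""
    definition: Group

@dataclass
class Instance:
    symbol: Symbol
    scale: float = 1.0  # simple per-instance variation

# Editing the Symbol's definition updates every Instance at once.
drum_kit = Symbol(Group("kit", [Element("kick"), Element("snare")]))
verse = Instance(drum_kit)
chorus = Instance(drum_kit, scale=1.2)
drum_kit.definition.children.append(Element("hats"))
assert len(verse.symbol.definition.children) == 3
```

The design point is the asymmetry: duplicating a structure is cheap (new Instance), while editing its internals is a deliberate act on the shared definition – exactly the workflow division the Adobe applications assume and enforce.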
It is this kind of relational structure that DAWs are lacking. Current grouping
systems hew close to their analog antecedents. Tracks function as the most basic
repository of mix content. Their contents are combined via the buss system: inputs and
outputs are assigned to different busses, interconnecting the tracks in either descending
parent-child or parallel peer relationships. This user-created system of interconnectivity,
however, provides limited functionality. A small set of content is easy enough to route and
comprehend, but large mixes quickly become a confusing maze of interweaving busses,
long hierarchy chains, and numerous parallel sends. A single task, such as applying simple
reverb to an entire mix, maintains a simple relationship between the project’s recorded
tracks and one summing point for processing. When the same task is managed within a
much larger mix context, however, multiple iterations of similar processes necessitate
many stages of summing and processing – and analog-style signal flow becomes onerous.
Here, the mixer must manage multiple groups of tracks and their relation to multiple
processing points. Some groups may simultaneously be sent to several processors, while a
few of the processing tracks may be structured to flow from one to the next. The
relationships between tracks, groups of tracks, and processes quickly grow complex and
interdependent.
The visual feedback that DAWs provide for these relationships replicates structures
from the original analog-‐world metaphor. I/O modules, groups, and parallel sends are
borrowed directly from their implementation in live and studio consoles for managing
large sets of input and output needs. Unfortunately, the analog interface through which
most of this structuring takes place is the mixing console, where most tracks are laid in one
homogenous horizontal file. The console’s layout creates an excellent tool for viewing the
levels of many tracks simultaneously, but a terrible model of the relationships between
those tracks. DAWs duplicate the analog console’s model of signal flow rather faithfully,
rendering separate sections for I/O and parallel sends that simply list, on a per-track basis, the buss assignments entering and leaving the track.

Figure 3 - Pro Tools' I/O strip in Mix view, showing complex and unclear track relationships.
the entire mix structure; too often mixers must chase after a problem sound’s tail, following
the meters’ trail through inputs and outputs of tracks until they arrive at the desired sound
source. This chase destroys the boundaries inherent between various mixing tasks, and
ignores possible structures that might otherwise define separate tasks by scale. Unlike the
Layers/Symbols model, iterations of structures involve tedious repetition, enforcing
otherwise unnecessary cognitive work, and blur differentiations between small-scale edits
(such as the right combination of effects to produce an explosion) and more holistic mixing
decisions (such as finding the proper level in the mix for that explosion). The track/buss
system is overly simplistic, and increases the work required of the user to organize and differentiate both components and tasks within the mix.
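The bookkeeping this chase imposes can be sketched as a graph search. The routing below is hypothetical, modeling tracks and busses as a directed graph and tracing every source that feeds a given summing point:

```python
# Hypothetical sketch of a track/buss routing graph, illustrating why
# tracing a problem sound back to its source is a graph traversal
# rather than something a mixer should have to do by eye.
routing = {                       # track -> list of tracks feeding it
    "Master":    ["MusicSub", "FXSub", "ReverbRet"],
    "MusicSub":  ["Piano", "Strings"],
    "FXSub":     ["Explosion"],
    "ReverbRet": ["Explosion", "Strings"],   # parallel sends
}

def sources_of(track, graph):
    """Collect every leaf-level source that reaches `track`."""
    feeds = graph.get(track, [])
    if not feeds:
        return {track}            # a leaf is its own source
    found = set()
    for t in feeds:
        found |= sources_of(t, graph)
    return found

print(sorted(sources_of("Master", routing)))
# ['Explosion', 'Piano', 'Strings']
```

Even this toy graph shows the problem: "Strings" reaches the Master through two distinct paths, and nothing in a per-track I/O strip reveals that without following each hop manually.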
Busses and Tracks
In the analog hardware beginnings of audio, busses were the physical paths along
which electrical signals could be sent. On a console with 52 inputs and 16 outputs, for
instance, one needs the flexibility to take any input and route it to any output; therefore,
inputs and outputs are mapped not to one another, but to an intermediary – the buss. The
buss becomes a hub for collecting and distributing signals in a flexible and reconfigurable
manner. Sends provide even greater flexibility and more possible routing schemes by
acting as a secondary intermediary – one sends signal through one buss to a send, which
itself is sent through another buss (or busses) to an output. This system effectively creates
a hierarchy of multiple signal flow stages, so that signals can be independently processed,
grouped into submixes, processed further, and routed to whatever output is necessary.
The downside to this approach is the complexity it induces – most of the data
organization and management tasks are forced upon the mixer to sort through. There are a
plethora of configurations for routing tasks, and little visual feedback for the resulting web
of inputs and outputs. Buss structure is rarely presented in a unified, easy-to-view manner, making track labels and colors far more useful in communicating logical grouping
and global structure than the actual routing schemes themselves. The large and intricate bussing configurations that emerge are often rigidly orchestrated totems, growing more inflexible to changes and new ideas as the mix progresses.

Figure 4 - Buss setup in Pro Tools.
The actual steps it takes to route a group of tracks to another track for something as
routine as reverb processing seem superfluous in software form. Fresh developments in
DAWs already acknowledge this issue. Reaper and Live allow tracks to freely route directly
to one another, without needing to manage busses. The latest version of Pro Tools (version
9) even builds this feature into its existing buss-heavy routing scheme. Software bussing
consistently demonstrates the same fundamental problem – the need to always route
through a middleman, regardless of the task at hand.
Like busses, tracks were created as vessels for signals. Tracks originally functioned
as receptacles for either individual channels or composites of multiple other tracks
(referred to variously as ‘bounces’, ‘dubs’, ‘prints’, or ‘comps’). Individual tracks allow
simultaneous playback of multiple sound sources while still maintaining independent parameter changes for each of those sounds.

Figure 5 - A Pro Tools mix session for a short film, utilizing over 50 difficult-to-see tracks.
The price of tracks, however, is the real estate involved. If a mixer wants 24
independently controllable sound sources, then she needs a console spanning 24 channel
strips. The independence that tracks provide comes at the cost of necessitating huge
multitrack consoles. This unwieldiness is even more pronounced in DAWs, where size is
constrained by the area of one's monitor. DAWs allow scalability to over 200 tracks, but the user will be able to effectively view no more than 15-20 of those at any time – and far fewer if complex editing work needs to be done. Even more worrisome, this misuse of space
creates undue cognitive noise for the editor to sift through. Tracks not only take up space –
they all take up the same space, regardless of their relationships to one another. The more
tracks that are present, the more unclear each individual track’s relationship to the full mix
becomes.
Parameters
Organizational structures provide a broad overview of the various parts of a mix.
The fine details, however, are stashed in dozens of track-level parameters – volume,
panning, send information, and plugin settings. Much like the organization of the mix, the
presentation of these parameters is often lacking. Encapsulated within individual objects,
insert effects and sends emulate their linear and modular implementation in mixing
consoles. This method provides the real estate for a great number of plugins and sends to
be instantiated. The cost, however, is a loss of control: unlike the consoles this approach
emulates, controls for level and other parameters are encapsulated within the insert and
send objects. Controls often appear in separate plugin windows once the object is selected,
and there is no visual feedback at the instantiation level as to how the insert/send affects
the signal other than the resulting meter reading.
Visual feedback of these changes, then, is relegated to automation breakpoints,
displayed laterally along the edit window. The red lines formed by these points have been
the universal answer to representation of all sorts of parameters, from simple volume and
panning changes to complex plugin automation. As dynamic manipulation of plugin
parameters has become more widespread, Pro Tools’ (and many other DAWs’) answer has
been the Automation Lane, allowing users to view multiple red lines simultaneously. This
is an approach from a hardware design point of view – it provides more information, but
not better information – and as such, falls short of the potential of software-based data
modeling.
Breakpoint automation, which has quickly become the de facto standard for
parameter representation, puts undue focus on each parameter individually, rather than on the relationships between parameters. Automation lanes, while allowing the user to view
multiple parameters simultaneously, still frame those parameters as independent
variables. Effective automated processing often relies on creating and manipulating
relationships between variables so that, for example, as a delayed signal repeats it becomes
increasingly filtered. The interfaces of the plugins themselves currently provide better
environments for parameter manipulation. Each plugin is modeled as an object, and so all
of its constituent controls are collected together. Ultimately, however, plugins are still
flawed for various reasons: they break continuity with the DAW's interface and controls; they open as self-contained window modules that must be individually managed; they lack contextual responsiveness; and automating a plugin's parameters within the plugin window still provides only the simplest visual feedback (knobs turning, numbers incrementing/decrementing, etc.), instead of a holistic representation of the changes incurred.
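The kind of relationship described above – a delayed signal growing more filtered with each repeat – could be expressed as one parameter derived from another, rather than as two independent breakpoint lines. A hypothetical Python sketch, not any existing plugin's API:

```python
# Hypothetical sketch: expressing one automation parameter as a
# function of another, so the relationship itself becomes the data an
# interface could display, instead of two unrelated red lines.
def delay_taps(num_repeats, feedback=0.6, base_cutoff_hz=8000.0):
    """Each delay repeat is quieter and more heavily low-pass filtered."""
    taps = []
    for n in range(num_repeats):
        gain = feedback ** n                  # amplitude decays per repeat
        cutoff = base_cutoff_hz * (0.5 ** n)  # filter cutoff tracks the decay
        taps.append((round(gain, 3), round(cutoff, 1)))
    return taps

for gain, cutoff in delay_taps(4):
    print(f"gain={gain}  cutoff={cutoff} Hz")
```

Here a single pair of controls (feedback and base cutoff) defines the behavior of every repeat; the dependency between level and timbre is explicit, where breakpoint automation would bury it in two parallel curves the user must keep synchronized by hand.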
The Digital Environment
The DAW’s analog origins are made plain in its lack of appropriate accommodation
for a digital (versus physical) editing and mixing environment. DAWs still vastly
underperform in general presentation of data, and do not utilize the power of the computer
to alleviate many organizational shortcomings that become much more evident in a digital
context. This extends into several areas of organization: displaying pertinent and
contextual information, logically grouping content and display items together, and
organizing and rapidly employing source sounds, plugins, and routing schemes.
The DAW interface of information, alerts, and options is a prime example of
inadequate organization. DAW displays still function like console displays – the
notification tools available to the user don’t extend much farther than channel meters and a
few light-‐up buttons. Issues like the aforementioned routing structure are left in the same
model established by physical hardware, with little obvious effort given to extend
functionality via advanced notifications or alerts. For example, digital audio meters have
the capacity to keep the topmost red bar lit after the track overmodulates, a function that
many consoles also share. Unlike consoles, DAWs also work with the dimension of time – yet no system has been developed to indicate when a track clips, only that it clips.
Mixers must still regularly play through an entire mix to determine at what point that red
bar was lit by an overly hot source.
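Recording when clipping occurs is trivial once audio is digital data. A hypothetical sketch, not a feature of any shipping DAW:

```python
# Hypothetical sketch: a meter that records *when* a track clips, not
# merely that it clipped, so the mixer can jump straight to the
# offending moment instead of replaying the entire mix.
def find_clips(samples, sample_rate=48000, threshold=1.0):
    """Return the timestamps (in seconds) at which a track clips."""
    return [i / sample_rate
            for i, s in enumerate(samples)
            if abs(s) >= threshold]

track = [0.2, 0.5, 1.0, 0.4, -1.2, 0.1]
print(find_clips(track, sample_rate=1))  # 1 Hz for readability
# [2.0, 4.0]
```

The point is not the ten lines of code but that the data already exists in every DAW; only the interface declines to surface it.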
The missing clip timestamp is a simple example of the sort of omissions DAWs present regularly. They are
designed from a hardware veteran’s point of view – where dozens of meters must be
watched in a row, where distortion is heard but not seen, where fader strips are all the
same shade of gray, and where solo and mute buttons often don’t even light to indicate
their usage. In general, the type of data that DAWs present in the interface is not as
relevant or immediately useful as it should be. Information is often presented on only a
track level, with the wrong scope of detail. Fader and pan information are regularly
displayed as numbers, rather than easy-to-read slider positions. Meter readings can refer to pre- or post-fader levels, with little to no indication of which is currently being shown.
Most data pertaining to these types of track parameters are located entirely within the
input module at the head of each track, meaning that it is incredibly difficult to track changes in any of those parameters when attention is focused on the edit window.
Unlike the physical world, where extra space on a product for another meter or
potentiometer is a luxury, the digital world has no problem with providing as much
feedback and flexibility as possible. That freedom is paradoxically the biggest design
problem with digital products – filtering out data so that the user sees only the most
relevant information at any given time. This is difficult in many digital environments, but
especially so for audio. Audio work in general (and mixing in particular) requires one to
monitor a large amount of information and parameters at once. The tendency for digital
audio software, then, is to use screen space to provide as many tools, values, parameters,
and controls as possible simultaneously. The user is forced to sift through large amounts of
visual noise to find the objects or parameters that need to be adjusted. The brunt of data
management is placed upon the user, dampening the efficiency, fluidity, and aesthetic
creativity of DAW use with intensive structuring tasks.
Workflow
A key historical difficulty in DAW integration is that they were designed to replicate
all audio functions necessary for editing and mixing, but not to replicate the team of
individuals who perform these tasks. Beginning as musical audio creation tools, DAWs
were built with the individual performer or composer in mind, providing them with the functionality to supposedly obviate the need for a studio, musicians, producers, or
engineers. DAW precedents like the Fairlight CMI and Synclavier clearly support this one-man studio approach. However, functions and tasks are much more formally segmented
within a postproduction environment. One person cleans and edits the dialogue track;
another records and integrates ADR loops; another cuts a set of sounds (for example, all
engine noises) into the soundtrack, and yet another is responsible for a different set of
sounds (maybe tire squeals this time). A small group is in charge of performing, recording,
and editing all Foley for the film. All of these people create work to be integrated into a
master project for the soundtrack – which is mixed by yet another individual.
DAWs attempt to address such a wide variety of tasks with a plethora of options and
flexibility. Nearly all tools and parameters are usually provided through the basic main
view, so that the composer, the dialogue editor, and the post supervisor are each given
access to the controls they need. The problem with this is that it amounts to a ‘shotgun
approach’ to information layout – the resulting effect is that the composer is also shown
extra playlists and spotting tools, the dialogue editor can see click tracks and silence
strippers, and the post supervisor can view a multitude of complex editing tools he will
never use to preview his team’s work. To best use the software to their specific ends, these
individuals must learn to first tune out information and tools that are irrelevant to their
workflow. Additionally, with so much presented at once, tools and commands are
invariably placed in less than optimal positions – some parameters are prominent, while
other controls get squished into corners, written in tiny type, or included in a submenu of a
submenu of a set of tools.
Currently, saving task-specific session templates serves as an intermediary solution, but it again exemplifies the problematic strategy of asking for too much effort from the user. Pro Tools' 'Window Configurations' are another example of this. Rather than forcing
the user to create his own custom views, DAWs could easily provide a few basic
views/interfaces that one could switch between based on task. Another possibility could
be to simply reduce the reliance on tools in general, scaling back the overall complexity of
the DAW’s presentation. While many specific tools and features add powerful
functionality, they also invariably increase the cognitive load of every user who has little
need for them.
It is important to remember, though, that audio postproduction is not a single-individual process. The process is made of many tasks spread among a team of sound
professionals. Projects regularly change hands between recordists, editors, and
supervisors, and so need robust processes for combining or rearranging pertinent data
within an existing project. DAWs’ issue with the exchange and concatenation of projects is
currently addressed only to a limited extent2. Interchange is by and large still print-based:
a completed section of audio work is printed in real time as a single file to one track (this
process is variously called bouncing, dubbing, predubbing, or layback). Prints absolutely
ensure that everything the editor had in his project will be present in the mixer’s full
session. They also ensure that the mixer has no ability to make changes to the editor’s
work without opening her project, altering the content, and then reprinting that stem, in
real time. They are reliable, trustworthy, widely used, and utterly inefficient at their job.
As an alternative, one can import parts of one project into another – tracks, groups,
I/O structure, and other information. Because no overarching organizational tools are
provided within DAWs, however, these imports frequently need extensive reorganizing to
fit within the current project. The information on a track level comes in, but not at a project
level. Individuals’ unique working styles often come with the import, as well as a host of
I/O structures that often clash with the mixing session’s I/O. Often, pieces of one project
function properly only within their original context. Time markers from a dialogue edit, for
instance, most likely note issues for the dialogue editor, and have little function within a
larger mix session. Playlists and groups are likewise context-‐sensitive. Of course, in a
modern DAW such as Pro Tools, one can choose whether or not to include these options
during import. As emphasized earlier, though, giving the user more boxes to check and
uncheck is not the answer. In Pro Tools’ import system, for instance, a much more elegant
solution would be to simply keep imports stratified. Rather than merging an import with
the current open project, one could group each import as a unique set of data within the project. The mixer could then see exactly where each track, marker, group, and fader nudge came from, in organized groups. Instead of a fresh session without origin, the project is now clearly the sum of many attributable parts.

Figure 6 - Pro Tools' 'Import Session Data' screen, replete with dozens of options.

2 OMFs/AAFs are not addressed here because they mainly facilitate exchange between picture editor and sound. This section is concerned with exchange between audio team members only.
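Stratified importing amounts to namespacing: each imported session's contributions stay grouped under their origin. A hypothetical sketch of the idea (all names invented):

```python
# Hypothetical sketch: imported session data kept stratified by its
# origin rather than merged anonymously into the open project.
class Project:
    def __init__(self, name):
        self.name = name
        self.imports = {}   # origin session name -> its contributed data

    def import_session(self, origin, tracks, markers=()):
        # Everything arrives grouped under the session it came from.
        self.imports[origin] = {"tracks": list(tracks),
                                "markers": list(markers)}

    def provenance(self, track):
        """Report which imported session a track came from."""
        for origin, data in self.imports.items():
            if track in data["tracks"]:
                return origin
        return None

mix = Project("final_mix")
mix.import_session("dialogue_edit", ["DX 1", "DX 2"],
                   markers=["fix hum @ 1:32"])
mix.import_session("foley_edit", ["Footsteps", "Cloth"])
print(mix.provenance("Footsteps"))   # foley_edit
```

With provenance preserved, context-sensitive artifacts such as a dialogue editor's markers remain attached to their origin instead of polluting the master session's namespace.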
The most glaring error committed by all import methods, however, is how static
they all are. In the days of highly dynamic, database-‐fueled content for web and business
solutions, the importing and re-importing of change after change of audio work is archaic
and entirely unnecessary. The ability to link projects together, as opposed to importing
data from one to the other, is a sore omission from the DAW feature set. It is not technology that holds this possibility back – the past few years especially have seen a flurry
of web-based document editing applications, with features such as histories of document
changes credited to whichever editor made them. More likely, the barrier is the ingrained workflow of postproduction audio, where frequent exchanges, changes, and reprints have been the norm for decades.
Tracking changes could be a boon for team projects, but it could equally benefit the
single user. Even a simple listed history of processes applied to individual audio regions
would be immensely useful in documenting the concoction of processes that might create
similar audio effects. The process, however, is still approached linearly, and inflexible
audio prints remain a fundamental building block of postproduction audio.
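Such a per-region history could be as simple as an append-only log attributing each process to the editor who applied it. A hypothetical sketch, not a feature of any current DAW:

```python
# Hypothetical sketch: an append-only history of processes applied to
# an audio region, attributing each change to the editor who made it.
import datetime

class Region:
    def __init__(self, name):
        self.name = name
        self.history = []

    def apply(self, process, editor, **settings):
        # Record what was done, by whom, and with which settings.
        self.history.append({
            "process": process,
            "editor": editor,
            "settings": settings,
            "time": datetime.datetime.now().isoformat(timespec="seconds"),
        })

door = Region("door_slam_03")
door.apply("eq", "dialogue editor", low_cut_hz=80)
door.apply("reverb", "mixer", decay_s=1.8)

for entry in door.history:
    print(entry["editor"], "->", entry["process"], entry["settings"])
```

A log like this would both document the concoction of processes behind a successful effect and let a team trace who changed what, without reprinting anything.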
Chapter 3: New Technologies
Emulating hardware technologies ignores the possibilities of digital technologies.
The audio mix has always functioned best as a tactile medium. Faders and rotary
potentiometers provide instant humanized control over mixes, something that the
introduction of digital audio could not change. Mixers work best with a tactile surface, and
the ambiguous beginnings of DAWs prove that audio professionals were not even willing to
switch from analog to fully digital consoles years after digital editing was standardized.
Recent advances in tactile user interfaces, however, have been significant. The
smartphone revolution is proof enough of even the general public’s acceptance of touch
interfaces, and new touch-based tablet computers are quickly becoming the next high-tech
consumer battleground. It seems somewhat strange, then, that there have not been greater
strides in implementing touch technologies as interfaces for existing postproduction DAWs.
Music- and instrument-focused audio has certainly explored touch-based terrain
since at least the early millennium. Even more so than audio professionals, musicians and
composers crave tactile feedback – whether through keyboards, MIDI-capable guitars, breath controllers, turntables, or X-Y pads and capacitive ribbons. The ability to react
immediately – to emphasize nuances in tempo, pitch or timbre – to play one’s chosen tool
like an instrument – directly influences the aesthetic mindset producing the resultant
music. Indeed, the same mentality explains a mixer’s preference for a physical console – it
is his instrument! In the past decade, however, efforts have gone towards moving beyond
emulating traditional musical instrument controls (like valves, frets, and keys) and towards
new horizons made possible by touch technology. Tangible user interfaces (TUIs) such as
the AudioPad and Reactable use computer vision to identify physical tokens placed and
manipulated on their surface, creating a tangible but quickly modifiable music interface.
Other devices like the Continuum Fingerboard or Misa’s Kitara utilize surfaces that, while
designed to emulate real-world instruments, also incorporate touch control on two or three
axes in distinctly electronic ways. Application stores for smart phones and touch tablets
have become fertile ground for experiments with touch controls and interfaces, ranging
from basic keyboard emulators and MIDI controllers to complex beat mashers. These
applications are largely focused on music creation for the individual or small studio. A
number of applications exist that enable basic DAW-‐like features such as recording,
sequencing, and mixing, but few of them include complex functionality. Touch screens are
finding their greatest professional audio use as remote control devices: products such as
Neyrinck's V-Control Pro and PreSonus' StudioLive Remote provide connectivity to Pro
Tools software and PreSonus’ digital audio consoles, respectively.
There is a distinct lack of audio editing and mixing software specifically designed for
a touch screen interface. The controllers that do exist provide new touch environment
paradigms and controls for the professional audio world – but those controls still must
translate onto software designed for keyboard-and-mouse interactions. The analog-to-digital-interface issues that plague personal computers are intensified on the touch screen.
Space is even more restricted with touch screens than with typical computer monitors, not
only by size but also by necessity. In addition to their typically smaller dimensions, touch
screens must accommodate human hands as direct input sources, ensuring that touch
points are large enough to easily hit and that significant portions of the screen are not
obscured by the user's hand during interaction (Saffer 40-44). Input sources are also
reduced from multiple mouse and keyboard buttons to a small set of hand gestures.
Dozens of modifier-fueled key combinations disappear, and right-click menus are nowhere
to be found. As a result, guiding the user’s focus becomes a top priority. Rows of menu
buttons and hierarchical lists need to be eliminated, reduced or replaced with modal
windows and popovers that narrow the interface towards specific tasks in the proper
context.
These difficulties are not to imply that touch screens are an unforgiving
environment that audio software should avoid. Rather, they only illustrate that touch
screens create a new context for interaction – much in the same way that analog audio
interaction does not fully translate to a digital computer environment, traditional computer
interaction does not fully translate to touch screen environments. And regardless of the
difficulties, the potential benefits of pursuing touch paradigms are great. Even as touch
abandons the mouse and keyboard’s sheer number of unique buttons and commands, it
returns real-time interaction to the user. Volume can once again be ridden by a fader,
panning can be swiped left and right, and audio files can be placed, moved, sliced, and
trimmed in natural ways that simply cannot be replicated by a mouse and keyboard.
Though limited in scope, gestures serve as intuitive and direct ways to manipulate data and
objects. Touch returns humanized control to audio without fetishizing dozens of hardware
controllers, fader packs, or expensive mixing boards. It allows customization of the user’s
interface, so that the interface may adapt to the user’s needs and not vice versa. The touch
screen’s limitations can even be considered an asset – because of their limited space and
focus, applications have a much greater need to streamline their interface and create a
natural flow between tasks. Applications with cleaner interfaces and smarter data
stratification succeed in touch environments, while cluttered products quickly fail.
In order to fully utilize touch environments and their benefits, then, application
interfaces need to be redesigned with their workspace in mind, and not crammed into a
space that is not meant to contain them. A brief YouTube search reveals several videos that
feature live mixers implementing the Software Audio Console application on a 3M touch
screen. The videos highlight the dilemma this approach creates: information is too dense,
touch points are much too small, and navigation is difficult and time-consuming, precisely
because the software was not designed for a touch environment. Faders in particular are
significantly smaller than a human hand can easily control, and the multitudinous channel
strips encourage endless scrolling through cramped controls and tiny text. Concentrating
audio software design towards a touch end-‐goal will streamline interfaces and provide a
new range of human input and interaction with audio.
Chapter 4: The New Guys
Emulating obsolete hardware layouts creates a barrier to new entrants into the field.
The consequence of building on the same obtuse model year after year has been the
development of a sort of ‘sound cabal’ – a view of audio professionals as a secretive group
who alone possesses the ability to shape and process sound for films, television, and video
games. New members must first endure the ‘apprenticeship’, in which editing is slow,
routing is clumsy, mixes clip often, and issues are difficult to track down. Only when the
difficulties of the system have been mastered through years of service is admission
possible.
The exclusion of inexperienced enthusiasts comes as the flipside to adhering to
models of outdated equipment. Not only do neophytes have to learn the software, but they
must also learn the inner workings of the hardware from which it is designed – hardware
whose usage is slowly eroding. Those already familiar with applying analog principles to
the digital realm have a disproportionate advantage over newcomers. Digital audio has
kept technical and creative ability bundled as one product – if one wants a properly
designed soundtrack, one also needs a fairly thorough grasp of complex signal routing via
patch bays or aux sends/returns. Compare this with the evolution of the print industry.
Printing presses were once the sole repositories and users of fonts, each with a signature
set of typefaces. When Adobe and Microsoft digitized scores of fonts from the type
foundries (Pfiffner 89-97), however, they opened the doors for those without any prior
knowledge of typeface design or creation to choose and employ their own fonts. In effect,
they separated the technical craft of creating and maintaining fonts from the creative
choice of which font to use. The digital world provides the ability to cleave ‘under the hood’
functionality completely away from the interface, and to replace the implementation
approach with user-‐focused tools that require far less knowledge of their inner
complexities for proper use.
Audio applications function and feel different from other media applications
because they adhere so rigidly to their analog metaphor. An ever-‐growing number of
media applications ditch the bulk of metaphoric real-world pretenses and instead strive to
emulate users’ mental models of how certain software tools should react. This means that
the software is designed to frame and present data, states, and actions in a way that closely
approximates the way users subconsciously perceive them. Instead of a direct real-‐world
analogy, the program provides the user with an interface that is simultaneously more
abstract and more intuitive. As this approach proliferates, more users are exposed to
designs that follow mental models and software-specific abstractions of data and actions –
meaning that there is a growing base of users not only comfortable with, but accustomed to
these design approaches. Applications modeled directly on the real world are, by
comparison, anomalous – they provide industry- and task-specific designs, and therefore
do not translate into a world populated with more flexible and transparent interfaces. In
short, the insistence on an analog metaphor makes audio applications aesthetically and
functionally incongruous with current media software, and renders them unfamiliar to an
increasingly computer-‐savvy audience.
New Benchmarks
It is likely that remedies for the shortcomings enumerated in previous chapters will not come from the current industry-leading audio applications, but rather from smaller
programs that experiment with previously listed design concerns and concepts. It is not
simply a matter of execution or ideas, but of providing services to new audiences as they
emerge. Clayton Christensen’s The Innovator’s Dilemma points this out as a large,
generalized phenomenon affecting all types and sizes of industries: after fledgling
businesses make the transition to power players, they more often than not retain their
dominance only until confronted by the next drastic shift in technology and technological
attitudes.
Christensen demonstrates that as a product grows better at what it does, it also narrows its focus towards consistent and specific technological goals – and can then become obsolete when those goals need no further advancing, and an entirely disparate set of goals defines the new criteria for success. Audio software may soon be headed for the same fate. Basic early goals such as high audio quality and track count (metrics notably carried over from analog equipment) are by now moot. Digital audio quality is consistent and familiar, and track count and processing power are limited mostly by the machine running the software, rather than the software itself. Consequently, other metrics considered important by sound editors and designers have been largely marginalized – things like intuitive manipulation, transparency, ease and enjoyment of use, and scalability of tasks. The ossification of both software design and general user mentality magnifies these issues, creating a divide between highly experienced professionals and alternative audiences such as new professionals, amateurs, and light users. For the former,
older metrics remain a priority – work with analog and early digital technologies was indeed dominated by concerns over audio quality and track count, and years of experience are likely to ingrain those concerns as priorities over any others. The latter, however, are much less accustomed to grappling with these fundamental metrics, in part because the technology available throughout their experience has already rendered such concerns trivial. These newer audiences are more familiar and comfortable with digital audio environments. Furthermore, greater experience with digital software in general shapes these users’ expectations as to how applications should function, and what paradigms will be present. Attempting to replicate an environment alien to the user is a poor substitute for following user expectations.
Newer products such as Ableton Live and Apple’s Garageband address these concerns to an extent (in music, at least) with simple, sleek, and intuitive interfaces, but there remains room for better methods of confronting these problems, and even for solving ones that sound professionals did not realize existed. Certainly these developments will not result in the swift downfall of Pro Tools, or its dissolution as an industry-standard tool. The top software applications will continue serving the professionals whose endorsement built their reputation, and those professionals will likely stand by the tools they have mastered over years of use. In the long term, however, companies like Avid will lose the spotlight as their clientele retires and new professionals, accustomed to digital work, emerge to take their place. Track counts, sampling rates, active voices, plugin instantiations, latency – all these metrics will lose their potency as technology advances to acceptable levels, and the DAW’s viability as an aesthetic instrument will become the most important dimension of comparison.
As Christensen argues, disruptive new technologies also tend to find and create new
markets that older technologies do not satisfy. Reviewing the inception of the DAW itself,
one can see the slow transition the software made from primarily a musician’s composition
and synthesis tool to a mixer or sound supervisor’s editing and mixing suite. The
additional functionality provided by music and composition manipulation software proved
to have excellent crossover value as an editing tool. Note that film sound’s most dramatic
growth period occurred from the very late 1970s through the 1990s (Sergi 3), dovetailing
with the growth of DAW technology. Developments in the art of film sound necessitated
technological growth to facilitate the new approaches of sound effects editing and mixing,
and the nonlinear editor provided a new tool paradigm to effectively accomplish them.
Further developments, however, have lagged behind. As history illustrates, the edit
window was an isolated breakthrough. Even as postproduction audio became digitized,
sound was still ported from tape to digital editor and back. Studios still held onto their
expensive consoles and racks, and so analog methods incorporated the digital editor
uncomfortably into their workflow. The digital audio environment continues to replicate
analog hardware and methodologies in both look and functionality. As other industries have evolved to incorporate advances in interconnectivity, data exchange, and dynamic data restructuring, audio has resolutely stood its ground, crafting software around a well-established but tedious workflow. The analog metaphor is losing its efficacy in the digital world, and turns a blind eye to the remarkable ways in which new technology can reshape the audio process.
Chapter 5: Response by Design
In order to explore solutions to the issues faced by Pro Tools and other current DAWs, I designed several rough prototypes for new DAW interfaces. Each design attempts to remodel audio manipulation within a post-production environment with a focus on ease of use, transparency of interaction with audio, and utilization of touch-screen technologies that will likely become widespread in media production. The first was created prior to the writing of this paper, as an exploration of touch interfaces’ possibilities; the others explicitly apply the paper’s findings.
The X-Y Mixer
Figure 7 - Assigning an input in the X-Y mixer
My first prototype design was explicitly focused on mixing; no editing capabilities
were designed. The purpose of the mixer was to provide a fluid and intuitive touch screen
environment for mixing, reducing the clutter and complexity common to most mixing
environments while heightening user input and feedback without the use of a dedicated
hardware controller. To achieve these ends, the design followed these four guidelines:
1. Reduce the interface for optimal implementation within a touch environment.
The mixing interface presents an environment radically different from the standard
console metaphor, designed specifically to provide a clear and simple interface for
touch interaction. All controls respond to simple tap, hold, and drag commands,
while extra functions, options, and tools are confined to a single toolbar and kept to
a minimum. Settings and file-browsing capabilities are tucked into expandable windows in the top corners of the interface, to provide the greatest amount of
surface area dedicated to track interaction. The interface’s design is focused on
direct and simple feedback at the track level.
2. Utilize the two-dimensional space as an intuitive map for volume and panning information.
One of the key concepts of the mixer was to implement the touch screen’s plane as a large X-Y controller, with the vertical axis controlling volume and the horizontal axis determining spatial position, or panning. This creates a much more intuitive visual interface for panning than traditional DAWs provide by defining a clear relationship between the visual representation of a track and its aural position within the stereo field.
Figure 8 - Lateral position determines a sound's placement in the stereo field.
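The mapping described above can be sketched in a few lines. The canvas dimensions and decibel range below are assumptions for illustration, not values from the prototype:

```python
# Sketch: mapping a track token's (x, y) position on a touch canvas to
# level and pan, as in the X-Y mixer concept. Canvas size and fader
# range are illustrative assumptions.

CANVAS_W, CANVAS_H = 1024.0, 768.0   # assumed screen dimensions
MIN_DB, MAX_DB = -60.0, 6.0          # assumed fader range

def token_to_level_pan(x, y):
    """Vertical position -> level in dB; horizontal -> pan in [-1, 1]."""
    # y = 0 is the top of the screen, so invert for "higher = louder".
    level = MIN_DB + (1.0 - y / CANVAS_H) * (MAX_DB - MIN_DB)
    pan = (x / CANVAS_W) * 2.0 - 1.0  # -1 = hard left, +1 = hard right
    return level, pan

# A token at dead center sits mid-range in level, panned center.
level, pan = token_to_level_pan(CANVAS_W / 2, CANVAS_H / 2)
```

Because both parameters derive from one touch point, a single drag adjusts level and pan simultaneously, which is the performative quality the design is after.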
3. Reduce the complexity of the buss system.
Throughout the design process, I researched and explored the functionality of busses and signal mixing to determine if there was a simpler way to achieve complex mixing. I found that busses are by and large extraneous in a digital environment, and usually result in extra steps to complete tasks as simple as routing one track into another. To circumvent this, I eliminated explicit buss creation and control, and instead implemented a simple group-and-drag mechanism for routing multiple tracks together.
Figure 9 - Creation of a submix track from multiple tracks
I also designed the buss system to provide greater visual feedback to the user. All inputs and outputs are located respectively at the top and bottom of each track; multiple inputs are represented as graphically distinct entities, with each input color-coded to correspond to its parent track. Outputs appear as either direct
outputs to the speakers (represented by a downward arrow) or as parallel
outputs/sends (represented by sideways arrows indicating the location of the
send’s destination track).
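As a rough illustration of this routing model, the following sketch (with hypothetical class and field names) shows a submix created by grouping tracks, with each child's output rerouted to the new parent rather than to an explicitly created buss:

```python
# Sketch of the group-and-drag routing model: dragging several tracks
# together creates a submix track whose inputs are the grouped tracks.
# Class and field names are illustrative assumptions, not the design's.

class Track:
    def __init__(self, name):
        self.name = name
        self.inputs = []        # tracks feeding this one
        self.output = "master"  # default: straight to the mains

def make_submix(name, tracks):
    """Group-and-drag: route every track in `tracks` into a new parent."""
    submix = Track(name)
    for t in tracks:
        t.output = submix       # reroute the child's output to the submix
        submix.inputs.append(t) # shown as a color-coded input in the UI
    return submix

drums = [Track("kick"), Track("snare"), Track("hat")]
drum_sub = make_submix("drums", drums)
```

The buss exists only implicitly, as the submix's input list, which is what lets the interface draw routing directly on the tracks themselves.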
4. Provide more effective visual feedback along each track.
The mixer retains the track-focused operation of traditional consoles, but compresses the channel strip for a more effective use of space. The fader and meter are joined as one object, so that the relationship between level adjustments and their resulting meter changes is visually reinforced by proximity.
Figure 10 – An early concept for tracks, showing joined meters/faders, I/O, and plugin icons
Effects are also redesigned to provide greater feedback. Instead of being
located in a strip above the track’s fader, effects are directly adjacent to the fader
and meter. This results in a more compact track that requires much less visual
scanning to comprehend. Mirroring ‘favicons’ in Internet browsers, effects have
also been rudimentarily iconized. Four basic effects categories are given icons with
unique designs and colors. An icon’s presence indicates its use on a track, and
tapping the icon opens the effect’s modification window for adjustment. This icon
system allows the user to quickly ascertain the number and type of effects present
on any given track; it could be expanded to include a greater number of categories
(including user-defined categories).
The X-Y mixer is not without its flaws, however. The physical space needed for the panning functionality inherently limits the practical number of tracks that can be present. The high probability that a track will be center-panned reduces the functionality of the layout, while increasing visual noise and track overlap along the center position. The system is also overly track-focused: large mixes of several dozen or more tracks (such as for film) would lack proper visualization of the mix structure as a whole. Robust touch implementation of audio editing, mixing, and processing tools requires a larger, more holistic approach.
The Touch Screen DAW
My second effort builds on the lessons of the first, but takes a much more comprehensive approach to design. Rather than focusing on a few mixing tasks within the DAW, this design attempts to provide an entirely new DAW experience, guided by the issues analyzed earlier as well as the more successful components of current DAWs. Unlike the X-Y mixer, this design followed the research and writing of this paper, and so is much more conscious in its attempts to eliminate issues present in current DAWs. The design aims to provide a novel touch-oriented editing and mixing environment that adheres to principles gleaned from the paper’s findings:
1. Keep information displays dense and multidimensional.
Current DAW interfaces are starved for data, stretching a sparse number of parameters across limited screen space or wrapping them in minuscule feedback modules. A new interface should instead provide a high quantity of information across many dimensions simultaneously, for the most effective use of visual feedback.
2. Ensure that data is scalable.
The standard track/buss structure loses focus and becomes exponentially more complex as mixes scale larger. This design should provide the ability to iterate configurations of audio and associated parameters at many scales, from a small set of tracks to a large and complex film mix, and retain a proper sense of proportions and relationships within the mix.
3. Provide a fluid and intuitive user experience.
Instead of creating an abundance of controls and windows that require
management, return a performative dimension to the act of editing and mixing
audio. Take advantage of touch technology to eliminate extraneous interface
components, remove unnecessary tools, and provide more humanized and nuanced
controls.
To meet these ends and address the issues enumerated in this paper, several interfaces were created throughout the design process. Their genesis began with initial sketches focused on scale and hierarchy. Beginning at the large mix-focused end of the postproduction process and working backwards, I hoped to move away from overly track-focused management and create an architecture that highlighted audio file, track, and group relationships. I sketched a model that conglomerates smaller pieces into larger building blocks. Groups of tracks are physically connected to parent groups, visually highlighting both the hierarchy of the system and the modularity of that hierarchy. With this model, an editor could conceivably deliver his entire editing session to the mixer, who could then simply attach it to the appropriate parent module. In this way, scale and relationships are maintained even between sessions, and each portion of the postproduction workflow is visualized as part of a greater whole.
Figure 11 - The initial workflow model sketch (the cursor indicates focus).
Figure 12 – Mix view, showing parent tracks stacked ‘behind’ child tracks.
The interface resulting from this exploration focuses largely on visualizing
relationships between parent and subordinate tracks (and their contingent parameters)
through a more standard track layout. In mix view, tracks are reconfigured into a hierarchy
tree flowing from top to bottom, with a center layer serving as focus. Mixers can ‘rotate’
different parts of the hierarchy through the focus layer (in much the same way the wheels
of a slot machine freely rotate). For example, in Figure 12 above, the mixer could choose to
rotate the Music Stem into focus, causing the Cue tracks to rotate underneath the center
layer. The space previously occupied by the Cue tracks would then shrink to accommodate
only the Music Stem. Solos and mutes also carry through the hierarchy, so that a solo
enabled on the Dialog Stem, for example, causes subordinate Dialog tracks to also be
soloed. To accommodate touch needs, buttons are few and sized large, and tools are
eliminated in favor of simple tap, drag, and swipe commands.
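The hierarchy's solo behavior can be sketched as a simple tree traversal; the node structure and names here are illustrative assumptions:

```python
# Sketch: solos carrying through the track hierarchy, so soloing a stem
# solos all of its subordinate tracks. A minimal tree with assumed names.

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
        self.solo = False

def set_solo(node, state):
    """Apply a solo (or un-solo) to a node and its entire subtree."""
    node.solo = state
    for child in node.children:
        set_solo(child, state)

dialog = Node("Dialog Stem", [Node("Dia 1"), Node("Dia 2"), Node("ADR")])
set_solo(dialog, True)   # the whole Dialog subtree is now soloed
```

Mutes would propagate identically, which is why the hierarchy view can show a single state toggle at any level of the tree.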
Figure 13 - Hierarchy in Edit view. Note the parent tracks on the left, and the multiple regions in one track.
The interface can be switched from a vertical mix view to a horizontal edit view; the
hierarchy system swivels sideways. Meters appear on both child and parent tracks, to
indicate the summing level at each stage. Unlike most DAWs, tracks here can contain
multiple simultaneous audio regions – and those regions themselves can consist of multiple
audio files grouped together. This creates greater differentiation between small and large
scale, adds functionality to the track, and allows the editor to more quickly view and
manipulate sounds as events, rather than tediously selecting and editing audio files
individually. Sound events comprised of multiple sound components can be grouped
together to facilitate organization, while still allowing the track to function as a receptacle
and organizational tool on a broader level.
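A minimal data model for such multi-region tracks might look like the following sketch; the names and the overlap query are illustrative assumptions rather than part of the design:

```python
# Sketch: a track holding multiple simultaneous regions, where a region
# (a "sound event") can itself group several audio files.

class Region:
    def __init__(self, files, start, length):
        self.files = list(files)   # component audio files of the event
        self.start, self.length = start, length

    @property
    def end(self):
        return self.start + self.length

class Track:
    def __init__(self, name):
        self.name, self.regions = name, []

    def add(self, region):
        # Unlike a single-lane track, overlapping regions are allowed.
        self.regions.append(region)

    def overlapping(self, t):
        """All sound events sounding at time t (possibly more than one)."""
        return [r for r in self.regions if r.start <= t < r.end]

fx = Track("door")
fx.add(Region(["thud.wav", "creak.wav"], start=1.0, length=2.0))
fx.add(Region(["slam.wav"], start=2.0, length=0.5))
```

An editor manipulating `fx` moves whole events, while the component files remain individually reachable inside each region.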
Figure 14 - Track edit view, with multiple simultaneous audio regions
The direction of this interface, however, is still ultimately too steeped in traditional DAW
design. Little space is given for touch points, no unique spaces exist for effects or plugins,
and the overall feel is still visually noisy. This strictly hierarchical model also does not accommodate parallel sends, whose relationships within a mix are more ambiguous and peer-like.
Figure 15 - Various designs for a 'token-based' approach
To distance myself further from these issues, I decided to start fresh, this time focusing on creating a denser and more multidimensional sound container designed for touch interaction. Building upon previous ideas in the X-Y mixer, I created a token apparatus to function as a track, and packed as many parameters in as possible, in as small an area as practical. The token takes the form of a semicircle; like the X-Y mixer, its height relative to the canvas determines audio level. Meters appear behind the token to indicate level. Panning, however, is governed now by orientation – the entire token can be twisted like a knob using four fingers, and the simple semicircle shape provides instant visual feedback as to where the track is in the stereo field. Effects and sends are instantiated with taps on the underside of the token; the elements shrink as more are added, to accommodate all instantiations within one non-scrolling view.
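The token's twist-to-pan behavior can be sketched as an angle-to-position mapping; the rotation range is an assumed value for illustration, not one specified by the design:

```python
# Sketch: the token's orientation acting as a pan knob. A four-finger
# twist sets the token's angle; the angle maps to a stereo position.

MAX_TWIST = 90.0  # assumed degrees of twist from center to hard L/R

def angle_to_pan(angle_deg):
    """Clockwise twist pans right, counterclockwise pans left."""
    clamped = max(-MAX_TWIST, min(MAX_TWIST, angle_deg))
    return clamped / MAX_TWIST  # -1.0 (hard left) .. +1.0 (hard right)
```

Because the semicircle's flat edge rotates with the token, the pan value is legible at a glance without any numeric readout.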
Figure 16 – An early animation sketch of a token unfolding from Mix to Edit view
The token design also attempts to blur the line between mixing and editing, visually relating the two processes more effectively while facilitating any edit tweaks that need to be made during the mixing phase. When double-tapped, tokens swivel open and reveal a vertical timeline. Changing the orientation of the edit/waveform view diminishes the often severe line between mixing and editing, allowing certain tracks to be edited while others keep a simple profile for easier reading of level and panning information. One could argue that film’s workflow has long since kept these two processes separate, and that nothing is to be gained from blending the two. Better transitions, however, not only generate a greater sense of intuitive interaction with the DAW, but would greatly benefit small teams, amateurs, and new users in their understanding and implementation of digital audio applications.
Figure 17 – A more integrated environment, drawing from the ‘token’ explorations.
A third interface iteration takes the outer trappings of the token-based work and reduces the token itself to nearly nothing. Combining the integrated approach of the tokens with the effects implementation in the X-Y mixer resulted in a minimal design that utilizes the geography of the screen space to place and identify effects and sends. Moving away from track-by-track information, the goal was to allow the resulting overall shape and form of the entire interface to quickly communicate effects, send, and basic routing information at a glance. Faders are reduced to simple bars to indicate level, with meters again behind them, while effects and sends are disaggregated from the fader/token to small but easily spotted areas above and below the fader. Shape indicates an object as either an effect (square) or send (oblong). Position above or below identifies a send as either pre- or postfader, far easier to read than a small ‘pre’ button hidden on the send itself. Like the token design, faders can be double-tapped to open an edit view of the track’s constituent audio files/waveforms.
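The pre-/postfader distinction that the layout makes visible can be sketched as a signal-flow calculation; the gains are linear and the numbers are purely illustrative:

```python
# Sketch: a prefader send taps the signal before the track fader, a
# postfader send taps it after. Linear amplitude gains for simplicity.

def send_level(source_amp, fader_gain, send_gain, prefader):
    """Amplitude arriving at the send's destination track."""
    tapped = source_amp if prefader else source_amp * fader_gain
    return tapped * send_gain

# With the fader pulled down to 0.5, only the postfader send is affected.
pre = send_level(1.0, 0.5, 0.8, prefader=True)
post = send_level(1.0, 0.5, 0.8, prefader=False)
```

Encoding this difference spatially (above vs. below the fader) lets a mixer read the signal flow directly from the screen geography.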
Figure 18 - The I/O manager, displayed over the mix interface.
Input sections above and output sections below are subtly ‘landscaped’ to provide simple routing feedback. Tracks with another track as input have slightly descended input labels; likewise, tracks with another track as the assigned output have slightly ascended output labels. The mixer can therefore easily note ‘initial’ audio-file-only tracks, as well as final destination tracks, by their flatter appearance. A four-finger pull down on the input section, however, provides more I/O detail. The input and output ‘steps’ expand into tiers to illustrate the hierarchical flow from one set of tracks to the next. Unlike the first interface, this allows for a more accurate representation of sends – parallel tracks are simply displayed in parallel with their peers, and do not necessarily need to be included in a specific hierarchy.
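The tiered expansion can be sketched as computing each track's depth in the routing graph; the routing table below is a hypothetical example, not one from the design:

```python
# Sketch: each track's tier in the expanded I/O view is its depth in the
# routing graph, following output assignments until a track feeds the
# speakers directly. Track names are illustrative.

def tier(track, outputs):
    """Depth of `track`: 0 if it outputs directly to the speakers."""
    dest = outputs.get(track, "speakers")
    return 0 if dest == "speakers" else 1 + tier(dest, outputs)

# outputs maps each track to its assigned output (default: speakers).
outputs = {"dia1": "dialog_stem", "dia2": "dialog_stem",
           "dialog_stem": "mix", "reverb_send": "mix"}
depths = {t: tier(t, outputs) for t in ["dia1", "mix", "reverb_send"]}
```

A parallel send like `reverb_send` simply lands on the same tier as its peers, which is the peer-like display the paragraph describes.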
Again, the interface is designed with touch in mind. Menus and hierarchical lists are eliminated, as are most tools. Only a few simple buttons exist to add tracks, effects, and sends; most of the user interaction involves dragging objects into the right space, or tapping them to edit their parameters.
Shortcomings
These explorations are by and large only sketches and experiments with altering the standard DAW design; they lack full functionality, and still grapple with several problems. First, it is difficult to bridge the interface gap between mixing and editing. The development of mixing paradigms in the analog world and editing paradigms in the digital world has erected a stiff barrier of separate interfaces, controls, and displays between the two. Creating smooth transitions between the two effectively requires a fresh start that treats both as facets of the same process. Because mixing is the process most heavily biased towards analog methods, it received the greater share of effort and redesign; however, incorporating edit displays directly within the flow of the mix seems an effective way to merge the two within one display. New work practices and methods could certainly arise from such a layout. Nevertheless, the standard implementation of two separate mix and edit views is difficult to overcome, and it is likely equally if not more difficult to persuade audio professionals to abandon them.
Second, parallel sends were never satisfactorily incorporated within any design. The last few provided effective track-based displays, but no design created a successful system to model their relationship to the entire mix. Because of their ambiguous non-hierarchical nature, sends do not comfortably fit into nesting schemata; they function much more like separate concurrent projects. Perhaps future models might better incorporate them as such, and isolate them from the rest of the project in an intuitive manner.
Third, signal processing effects were only barely addressed. The reason is simple: effects are most often adjusted from within their own proprietary interfaces, and those interfaces cannot be controlled by either the user or the DAW employing them. At present, DAWs can at best provide the means for easy instantiation, sequencing, and removal of effects. Plugins may eventually develop standardized semantics that allow DAWs to graft their own interfaces onto effects’ functionality, but it is unlikely that plugin software companies would be willing to give up their visual brand identity. Effects redesign will then most likely stem only from a general change in audio attitudes towards design.
Regardless of these issues, the illustrated explorations in DAW redesign show that more communicative and intuitive models for postproduction audio can be created. Only time will tell if mainstream and professional markets will venture further towards a rejection of implementation-based design.
Conclusion
As digital technology and technological integration progresses, the DAW’s decidedly analog approach to postproduction audio becomes more obvious. The organizational architecture of digital mixes still follows the model set by analog hardware decades ago, and is unable to separate audio processes from the inner workings of the machines that enable them. Tracks and busses create one-dimensional, data-deficient models of editing and mixing projects, and do not provide enough functional sophistication for rich, legible, multidimensional project visualizations. Scale and relationships are difficult to follow, and parameters lack a comprehensive and efficient visual approach. Neither individual nor team workflow is fully addressed – individuals waste time and effort sifting through unwanted tools and features, while team exchanges are inefficient and inflexible to changes and corrections. Performativity in professional audio work is greatly diminished, and the console-with-a-mouse structure of DAWs prevents new opportunities to inject touch-performativity into the workflow. The focus on outdated models and methodologies creates a steep barrier to fresh entrants in the field, as well as other neophytes interested in postproduction audio.
My efforts to design a new architecture for DAWs thus focused on visualizing
relational structures at different levels of scale, increasing the density and dimensionality
of information displayed along each interface element, and simplifying and adapting
controls for a touch environment while compromising as little functionality as possible.
While some issues with parallel sends and the editing/mixing distinction remain, all
designs illustrate possibilities for DAW reconceptualization that break away from analog
approaches and allow for greater and more effective visualization of audio information.
The advance of greater media technology capabilities will only continue to highlight
the deficiencies that result from modeling interaction with aging metaphors. Redesigning
DAWs for the digital age will not only increase postproduction audio’s productivity – it
holds the possibility to add new dimensions of aesthetic expression to the medium, and the
potential to stoke the creative fires of the next generation of audio and media professionals.
Works Cited
Christensen, Clayton M. The Innovator's Dilemma: the Revolutionary Book That Will Change
the Way You Do Business. New York: HarperCollins, 2003. Print.
Cook, Frank D. Pro Tools 101: Official Courseware Version 8.0. Australia: Course Technology,
2009. Print.
Cooper, Alan. The Inmates Are Running the Asylum. Indianapolis, IN: Sams, 2004. Print.
Dye, Charles. Personal interview. 15 Oct. 2010.
Dye, Charles. Personal interview. 19 Nov. 2010.
Giannakis, Kostas, and Matt Smith. "Auditory-visual Associations for Music Compositional
Processes: A Survey." CiteSeerX, 2000. Web. 29 Apr. 2011.
Gohlke, Kristian, Michael Hlatky, Sebastian Heise, David Black, and Jörn Loviscach. "Track
Displays in DAW Software: Beyond Waveform Views." AES 128th Convention (2010).
AES Publications. Audio Engineering Society. Web. 26 Apr. 2011.
Leider, Colby. Digital Audio Workstation. New York: McGraw-Hill, 2004. Print.
Pfiffner, Pamela, and Serena Herr. Inside the Publishing Revolution: the Adobe Story.
Berkeley: Peachpit, 2003. Print.
Saffer, Dan. Designing Gestural Interfaces. Sebastopol, CA: O'Reilly Media, 2008. Print.
Sergi, Gianluca. The Dolby Era: Film Sound in Contemporary Hollywood. Manchester:
Manchester UP, 2004. Print.
Stone, Dave. Personal interview. 2 Feb. 2011.
Stone, Dave. Personal interview. 9 Feb. 2011.
Tufte, Edward R. Envisioning Information. Cheshire, CT: Graphics, 2008. Print.
Wexler, Jeff. "Film Production Audio." Professional Audio Student Organization Guest
Lecture Series. Hamilton Hall, Savannah, GA. 19 Feb. 2010. Lecture.
Yewdall, David Lewis. Practical Art of Motion Picture Sound. Burlington, MA: Focal, 2007.
Print.
Bibliography
Adobe Illustrator CS5. San Jose: Adobe Systems, 2010. Computer software.
Adobe Photoshop CS5. San Jose: Adobe Systems, 2010. Computer software.
Brandon, Alexander. Personal interview. 10 Feb. 2011.
Cavanaugh, William. "Hardware and Software Interface Designs for Digital Audio
Workstations (DAWs)." AES 7th International Conference (1989). AES Publications.
Audio Engineering Society. Web. 26 Feb. 2011.
Christensen, Clayton M. The Innovator's Dilemma: the Revolutionary Book That Will Change
the Way You Do Business. New York: HarperCollins, 2003. Print.
Collins, Mike. Choosing and Using Audio and Music Software: a Guide to the Major Software
Applications for Mac and PC. Amsterdam: Focal, 2004. Print.
"The Continuum Fingerboard." Cerlsoundgroup.org. Haken Audio. Web. 26 Apr. 2011.
<http://www.cerlsoundgroup.org/Continuum/>.
Cook, Frank D. Pro Tools 101: Official Courseware Version 8.0. Australia: Course Technology,
2009. Print.
Cooper, Alan. The Inmates Are Running the Asylum. Indianapolis, IN: Sams, 2004. Print.
Dye, Charles. Personal interview. 15 Oct. 2010.
Dye, Charles. Personal interview. 19 Nov. 2010.
Garageband. Vers. 6. Cupertino: Apple, 2011. Computer software.
Giannakis, Kostas, and Matt Smith. "Auditory-visual Associations for Music Compositional
Processes: A Survey." CiteSeerX, 2000. Web. 29 Apr. 2011.
Giannakis, Kostas, and Matt Smith. "Imaging Soundscapes: Identifying Cognitive
Associations between Auditory and Visual Dimensions." Musical Imagery (2001):
161-79. CiteSeerX. Web. 26 Apr. 2011.
Gohlke, Kristian, Michael Hlatky, Sebastian Heise, David Black, and Jörn Loviscach. "Track
Displays in DAW Software: Beyond Waveform Views." AES 128th Convention (2010).
AES Publications. Audio Engineering Society. Web. 26 Apr. 2011.
Goodwin, Kim. Designing for the Digital Age: How to Create Human-centered Products and
Services. Indianapolis, IN: Wiley Pub., 2009. Print.
Herman, Greg. Personal interview. 11 Feb. 2011.
King, Andrew. "Analogue or Digital? A Case-study to Examine Pedagogical Approaches to
Recording Studio Practice." AES 128th Convention (2010). AES Publications. Audio
Engineering Society. Web. 26 Apr. 2011.
"Kitara." Misadigital.com. Misa Digital Instruments. Web. 26 Apr. 2011.
<http://www.misadigital.com/index.php?target=kitara>.
Leider, Colby. Digital Audio Workstation. New York: McGraw-Hill, 2004. Print.
Live. Vers. 8. Berlin: Ableton, 2010. Computer software.
Moore, Geoffrey A. Crossing the Chasm: Marketing and Selling Disruptive Products to
Mainstream Customers. New York: Collins Business Essentials, 2006. Print.
Patten, James. "Audiopad." Jamespatten.com. James Patten. Web. 26 Apr. 2011.
<http://www.jamespatten.com/audiopad/>.
Pfiffner, Pamela, and Serena Herr. Inside the Publishing Revolution: the Adobe Story.
Berkeley: Peachpit, 2003. Print.
Pro Tools. Vers. 9. Burbank: Avid, 2010. Computer software.
Quinn, Patrick, and John Lynn. "Interface Design as Part of an Audio Technology Degree."
AES 118th Convention (2005). AES Publications. Audio Engineering Society. Web. 26
Apr. 2011.
Reactable. Reactable Systems. Web. 26 Apr. 2011. <http://www.reactable.com/>.
Reaper. Computer software. Reaper.com. Vers. 3. Cockos. Web. 26 Apr. 2011.
RML Labs. "SAC And The 3M MultiTouch Technology." YouTube.com. YouTube. Web. 26
Apr. 2011. <http://www.youtube.com/watch?v=HiXVO2fiUuE>.
Saffer, Dan. Designing Gestural Interfaces. Sebastopol, CA: O'Reilly Media, 2008. Print.
Schultz, Christopher, Jörn Loviscach, Shailendra Mathur, and Jay LeBoeuf. "An Anatomy of
Graph-Based User Interfaces for Media Processing." AES 124th Convention (2008).
AES Publications. Audio Engineering Society. Web. 26 Apr. 2011.
Seago, Allan, Simon Holland, and Paul Mulholland. "A Novel User Interface for Musical
Timbre Design." AES 128th Convention (2010). AES Publications. Audio Engineering
Society. Web. 26 Apr. 2011.
Sergi, Gianluca. The Dolby Era: Film Sound in Contemporary Hollywood. Manchester:
Manchester UP, 2004. Print.
Stone, Dave. Personal interview. 2 Feb. 2011.
Stone, Dave. Personal interview. 9 Feb. 2011.
StudioLive Remote. Computer software. Presonus.com. PreSonus. Web. 26 Apr. 2011.
Tufte, Edward R. Envisioning Information. Cheshire, CT: Graphics, 2008. Print.
V-Control Pro. Computer software. Neyrinck.com. Neyrinck. Web. 26 Apr. 2011.
Wexler, Jeff. "Film Production Audio." Professional Audio Student Organization Guest
Lecture Series. Hamilton Hall, Savannah, GA. 19 Feb. 2010. Lecture.
Yewdall, David Lewis. Practical Art of Motion Picture Sound. Burlington, MA: Focal, 2007.
Print.