
DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2016

Investigating parameter mapping of the digital musical instrument Force Ghost

ANDREAS ALMQVIST

NICOLAS JONASON

KTH, SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION

Bachelor's degree project at CSC, KTH

Investigating parameter mapping of the digital musical instrument Force Ghost

Undersökning av parametermappning hos det digitala musikinstrumentet Force Ghost (Swedish title)

Jonason, Nicolas, [email protected]
Almqvist, Andreas, [email protected]

Degree project in: Media Technology

Level: Independent project, first cycle (Bachelor's degree)

Credits: 15

Degree programme: Master of Science in Engineering, Media Technology

Supervisor: Elblaus, Ludvig

Examiner: Ternström, Sten

Abstract

This paper investigates the viability of two different mappings of two parameters of the digital musical instrument Force Ghost. The instrument produces sound by filtering an ambience (a recording of rain or of waves on a beach) through bandpass filters whose center frequencies are controlled by a MIDI keyboard. Five bandpass filters are assigned to each note, corresponding to the fundamental frequency and its multiples (2f, 3f, 4f, 5f). The mapped parameters are the Q factor of the bandpass filters and the timbre, defined as the relative level between the even and odd harmonics. These two parameters are mapped to the modulation wheel and the pitch bend wheel. The mappings of the parameters are investigated with the help of musical tasks completed by musicians, followed by semi-structured interviews. The interviews revealed that the modulation wheel was preferred due to its lack of a spring mechanism; the pitch bend wheel has such a mechanism, which forces the wheel back to its default position (in the middle) when released. The results from the musical tasks gave no indication that either sensor has better controllability than the other, regardless of which parameter it controls. In conclusion, a sensor that gravitates towards a resting state does not seem to be suitable for controlling a parameter (scaling) that lacks a "resting value" (as perceived by musicians).

Sammanfattning

This paper investigates two mappings of two parameters of the digital musical instrument Force Ghost. The instrument produces sound by filtering a sound source through bandpass filters. In this case, the sound source is the sound of rainfall in a forest. Each key on a keyboard is associated with at least one bandpass filter, whose center frequency corresponds to the key's MIDI note. When a key is pressed, the bandpass filter is activated, creating a tone out of the sound source. The two parameters in focus are tied to the bandpass filters: the Q factor (related to the filters' bandwidth) and the timbre (related to the overtones; which they are and their amplitude relationship for each note). These two parameters are mapped to the modulation wheel and the pitch bend wheel. The mapping of these parameters was investigated through musical tasks and independent exploration carried out by musicians. These sessions were recorded, and a semi-structured interview was then conducted with each musician. The interviews showed that the modulation wheel was preferred because it lacks the spring mechanism found on the pitch bend wheel, which forces the pitch bend wheel back to its resting position (in the middle) when released. The results from the musical tasks indicate that neither sensor offers better controllability than the other, regardless of which parameter it controls. In conclusion, a sensor that gravitates towards a resting state does not seem suitable for controlling a parameter (scaling) that lacks a "resting value" (as perceived by musicians).


Table of contents

1. Introduction
1.1 Background
1.2 Problem formulation & purpose
1.3 Limitations
2. Theory & Materials
2.1 Central concepts
2.1.1 Digital musical instrument (DMI) and mapping
2.1.2 Force Ghost (bandwidth, timbre)
2.1.3 Controllability
2.1.4 Expressivity
2.1.5 MIDI
2.2 Background theory
2.3 Materials
2.3.1 Parameters
2.3.2 Mapping
2.3.3 Sensors
2.4 Method
2.4.1 Testing and evaluating controllability
2.4.1.1 Controllability tasks
2.4.1.2 Formatting and analysis of controllability data
2.4.2 Testing and evaluating expressivity
2.4.2.1 Expressivity task
2.4.2.2 Treating and evaluating expressivity task data
2.4.3 Semi-structured interview script for qualitative evaluation of controllability and expressivity
2.5 Participant selection
2.6 Test environment
3. Procedure
3.2 Pre-study
3.3 Testing
4. Method and procedure criticism
4.1 General validity concerns
4.2 General reliability concerns
4.3 Measuring controllability
4.4 Concerns of design decisions made during pre-study
4.5 Miscellaneous
5. Results
5.1 Semi-structured interviews
5.2 Controllability test
6. Discussion
7. Conclusions
8. Future research
9. References
Appendix A
Appendix B


1. Introduction

1.1 Background

A musical instrument that encourages and enables players to evolve their skill over time and become virtuosos is most desirable in instrument design (Jordá, 2004). As parameter mapping greatly influences an instrument's intrinsic possibilities, the mapping of specific parameters may be seen as one of the most essential aspects of digital musical instrument design (Hunt et al., 2003). Consider the mapping of the violin; the string is both the sound generator and the controller of various other parameters, such as the timbre (as it is conventionally used). This has a big impact on how a player musically and psychologically perceives the instrument, and thus how the instrument is played (Hunt et al., 2003). The same mapping effect applies to digital musical instruments, although there the mapping is separable from the sound source, which makes them in many other ways vastly different from acoustic instruments, both performance- and design-wise (Hunt et al., 2003; Marshall, 2009; Barbosa et al., 2011). This paper investigates different mappings of two sound synthesis parameters of the author-made DMI (digital musical instrument) Force Ghost. The investigation focuses on the attributes of controllability and expressivity; on how the parameters and sensors interact and are perceived in their context. This is done through the completion of reproduction tasks and unconstrained exploration (Hunt & Kirk, 2000; Wanderley & Orio, 2002; Stowell et al., 2009; Marshall, 2009). Data (audio, MIDI information and subjective experiences) is gathered by recording and interviewing human test subjects regarding their performances. The data is evaluated by comparing the recorded performances to the original sounds through MIDI trajectory comparison, using the Pearson correlation coefficient and the mean absolute error, complemented by a breakdown of the semi-structured interviews' data (Hunt & Kirk, 2000; Marshall, 2009). The target user group of Force Ghost and its interface is anyone who wants to create music or try a musical instrument with an uncommon character. Even though the requirements on the target user group are low, the survey has stricter requirements in the participant selection in order to gather as valuable and usable data as possible.

1.2 Problem formulation & purpose

This paper investigates the difference between two mappings of the DMI Force Ghost. The aspects in focus are the controllability and expressivity of the mappings. The parameters under investigation are bandwidth and timbre (as described more closely in chapter 2).


The mapping of a musical instrument's parameters has a huge effect on emotive reaction and the entire character of the instrument (Hunt et al., 2003). As DMIs in many cases lack the physical and other "non-auditory" feedback that is naturally present in acoustic instruments, this stresses even more the importance of the mapping of sound-manipulating parameters (Marshall, 2009). Sensors differ in features and characteristics, which may affect players' experiences differently depending on the parameter each sensor is mapped to. The purpose is thus to discover and explore the design space surrounding Force Ghost, consequently leading to more informed design decisions for a final design.

1.3 Limitations

The core idea of Force Ghost, which will be explained subsequently, naturally leads the design towards the use of conventional keyboards for note and velocity input. Thus, the investigation is to some extent limited and influenced by users' expectations of such keyboards. The investigation is strictly limited to the parameters of bandwidth and timbre in the context of Force Ghost. The keyboard that was used is the M-Audio Axiom 49 [1]. The potential parameter sensors were not limited to the keyboard's sensors during the pre-study. This investigation is limited to the perspective of the performer. There are many stakeholders in the development of musical instruments, and not everybody shares the same opinion of what is most important (O'Modhrain, 2011). However, in a live musical context, the performer has the privileged position: it is their vision and present experience that determines what is perceived through the actions expressed through the instrument (O'Modhrain, 2011).

2. Theory & Materials

In this chapter, central concepts and theoretical background of DMI development are presented.

2.1 Central concepts

2.1.1 Digital musical instrument (DMI) and mapping

A digital musical instrument contains a sound-producing element and a control interface (which are not necessarily separate) (Malloch et al., 2006). The control interface includes control input points (sensors) through which the sound and its parameters may be altered (Malloch et al., 2006). The relation between a parameter and a sensor is called mapping (Malloch et al., 2006).

[1] Axiom 49: http://www.m-audio.com/products/view/axiom-49#.Vv0A4DaLSqA


2.1.2 Force Ghost (bandwidth, timbre)

Force Ghost is a DMI developed in Pure Data [2] by the authors. The instrument typically takes as input a sound rich in energy in every audible octave and allows the user to play it as a regular synthesizer. As a key is struck on the keyboard, the MIDI note's corresponding frequency is used as the center frequency for a bandpass filter. Thus, a tone emerges (with the amplitude dependent on the velocity), as seen in figure 1 below. In the context of Force Ghost, the word "timbre" is used to denote the amplitude relationship between the partials (i.e. the spectral content), whereas "bandwidth" is used to denote the bandwidth of each bandpass filter. Depending on the bandpass filter's bandwidth, the tone will be more or less audible, and the sound source will be more or less apparent. The partials have a musical relationship, which gives the played notes a richer sound.

Figure 1. This is a simplified illustration of the workings of the DMI Force Ghost. The bandpass filters let a sound source sound through as tones, dependent on the bandwidth and timbre (and, of course, input velocity). As Force Ghost may take sound sources such as rain, a waterfall or wind as input, it may specifically be used as an instrument of ambience. The idea of the instrument is to, with the help of the bandwidth and timbre, move seamlessly between pure atmospheric, ambient sound and actual musical tones, whilst retaining a degree of the sound source's character [3].
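To make the signal path concrete, the following is a minimal Python sketch of the same idea: an ambient source is filtered through a small bank of bandpass filters centred on a note's fundamental and its partials. It is an illustration under stated assumptions only (SciPy's peaking filter as the bandpass, five partials, made-up gains and function names); the actual instrument is a Pure Data patch and may differ in filter design and gain handling.

    import numpy as np
    from scipy import signal

    def force_ghost_note(source, midi_note, q, gains, fs=44100):
        # Hypothetical sketch: bandpass-filter an ambient source at the note's
        # fundamental and its partials 2f..5f, then sum the filtered signals.
        f0 = 440.0 * 2 ** ((midi_note - 69) / 12)   # MIDI note number to frequency (Hz)
        out = np.zeros_like(source, dtype=float)
        for partial, gain in zip(range(1, 6), gains):
            fc = partial * f0
            if fc >= fs / 2:                         # skip partials above the Nyquist frequency
                continue
            b, a = signal.iirpeak(fc, q, fs=fs)      # narrow second-order bandpass (peaking) filter
            out += gain * signal.lfilter(b, a, source)
        return out

    # Example: white noise standing in for a rain recording, played as the note A3.
    rain = np.random.randn(44100)
    tone = force_ghost_note(rain, midi_note=57, q=200.0, gains=[1.0, 0.75, 0.56, 0.42, 0.32])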

2.1.3 Controllability

The "controllability" of a control input is the accuracy, resolution and range with which a controller may alter a corresponding parameter, as perceived by the performer (Orio, Schnell & Wanderley, 2001). This is measured through the completion of musical tasks and through semi-structured interviews, which will be explained in more detail subsequently (Hunt & Kirk, 2000; Orio, Schnell & Wanderley, 2001).

[2] Pure Data: https://puredata.info/
[3] Link to sound sample: https://drive.google.com/file/d/0BzJ3PI8GmZd1V1dRMnA2QnJhckE/view?usp=sharing


2.1.4 Expressivity

"Expressivity", as used in this paper, is the perceived ability and potential with which a sensor and its corresponding parameter may be used in an expressive manner. This data is collected through semi-structured interviews after the participants have carried out musical tasks and free exploration of the instrument (Stowell et al., 2009). This will be explained more thoroughly further on.

2.1.5 MIDI

MIDI (musical instrument digital interface) consists of detailed instructions for a communications protocol (Lehrman & Tully, 1993). They further convey that this communication protocol is mainly used for communication to and between electronic musical instruments. MIDI does not convey music per se, but rather information about how it is performed. For instance, it cannot convey the sound of a violin, but it may convey the way it is played: the amplitude, the pitch, and all changes over time, including tremolo and vibrato (Lehrman & Tully, 1993). In our investigation, MIDI is used as a means of communication to allow the musician to play the DMI and to track the performance for subsequent evaluation.

2.2 Background theory

There is no agreed standard procedure for evaluating digital musical instrument interfaces (Stowell et al., 2009; Barbosa et al., 2011). Evaluation techniques from HCI have been attempted, but they are not a perfect match for DMI interfaces, as evaluations of the latter have to consider divergent aspects such as rhythm, timing, long-term learning, creativity and affectivity (Stowell et al., 2009; Orio, Schnell & Wanderley, 2001). Naturally, various methods have been developed, suggested and tried in order to evaluate parameter mapping of DMIs and to find an appropriate standard procedure (Hunt & Kirk, 2000; Orio, Schnell & Wanderley, 2001; Hunt et al., 2003; Stowell et al., 2009; Barbosa et al., 2011). The difficulty of evaluating and creating instruments also extends into the non-digital domain. Consider the following question for a brief moment: what is the newest musical instrument that has had long-term attraction and popularity? Sound and music computing researcher Falkenberg Hansen (2016) suggests the turntable used by scratching disc jockeys, which emerged during the 1970s at the time of the hip hop genre's rise. This indicates that developing musical instruments for a wide audience and for long-term attraction is not elementary. Musical instruments have in the past often come about to overcome various problems, such as the difficulty of being heard outdoors (wind instruments) or of being heard by many people (the electric guitar) (Braun, 2000; Falkenberg Hansen, 2016). Nowadays, much focus lies on new ways of interacting with musical instruments in exciting and attractive ways, driven by organisations such as NIME [4]. Mapping is a core factor of interaction with these new digital musical instruments (Hunt et al., 2003; Marshall, 2009). There is a common agreement that parameter mapping evaluation of DMIs cannot be done without performing both quantitative and qualitative studies (Hunt & Kirk, 2000; Orio & Wanderley, 2001; Bornand et al., 2005). Music is a subjective experience; thus, evaluation of musical instrument controllers can scarcely be done without the subjective experiences of the musicians. By measuring the performance of test subjects on simple musical tasks, valuable data may be extracted and expressed in terms of controllability (Hunt & Kirk, 2000; Orio, Schnell & Wanderley, 2001; Marshall, 2009). There are obstacles that are hard to overcome regarding relevant data collection, due to the possibly large amount of tacit knowledge [5] embedded in musical instrument playing (Elblaus et al., 2012). It has been shown that developing and designing DMIs is interdisciplinary work, and that involving expertise from adjacent areas makes a design more likely to be successful (Elblaus et al., 2012). Including musicians to work intimately with the engineers during the design process is thus beneficial (Elblaus et al., 2012). Knowledge communication may be facilitated by having musicians use the instrument and communicate both by manifesting and verbalizing (Westerlund, 2009). Heron (1996) discusses approaches to research methods in the human sciences, which this paper touches upon. Heron gives an overview of research with people, on people, about people and for people. Heron suggests that research such as design for people should include the researcher participating as a subject in the research. The validity of the research's outcome is questionable if it is not grounded in the researcher's experience, he argues. Without personal and subjective embodiment and manifestation, the previously mentioned tacit knowledge may be hard to understand and relate to. Nelson and Stolterman (2003) discuss "practical wisdom" as a way to understand reality, which may only be acquired through practice. These notions support the designer's involvement and participation in using the evolving product during development. When evaluating parameter mapping, it is for obvious reasons important to shed light on the sensor to which the parameter is mapped and by which it is thus controlled. As Bongers (2000) vividly describes it: "Sensors are the sense organs of a machine." A sensor allows the transformation of physical energy, e.g. from a human, into electricity which may be perceived by the machine, he continues. A sensor of a DMI may be used to induce and/or alter sound. As previously noted, the fact that the sensor is not physically connected to the sound generator (as it is for acoustic instruments) makes the sensor type very important for getting an "intuitive feel" for the instrument (a "natural connection" between the sound generation and sound altering) (Hunt et al., 2003; Marshall, 2009; Barbosa et al., 2011). Sensors may be classified in order to obtain an overview that facilitates comparison within the field of DMIs.

[4] NIME is spelled out New Interfaces for Musical Expression.
[5] Tacit knowledge is information that is hard to communicate by verbal or written means.


As a first, broader classification, White (1987) suggests that sensors be classified by (in short):

- The domain of detection (i.e. electronic, mechanical, magnetic, chemical),
- The technological aspects of the sensor (i.e. speed of response, sensitivity, output format, cost, size),
- The material of the sensor,
- The field of application.

Vertegaal et al. (1996) narrows the categorization of sensors down to the field of DMI through the following questions:

- What movement type is sensed? (i.e. position, movement, force)
- What is the resolution of the sensing?
- What is the agent of control? (i.e. hand, fingers, lungs)
- What is the type of feedback provided? (i.e. tactile [6], kinesthetic [7], visual)

However, as Marshall (2009) partially points out, a categorization like this does not cover the full application perspective of the sensor for the end user. Depending on how the sensor is implemented in the software, properties such as the resolution, scaling and feedback may be altered. Decisions regarding this in our investigation were made in the pre-study. Furthermore, Vertegaal et al. (1996) continue with a suggestion that, for mapping a parameter to a sensor, the parameter's musical function should be observed. Depending on the sensor that controls a specific parameter, its musical function may change. As summarized by Marshall (2009), musical functions may be divided into whether the function is an

- absolute dynamical function (e.g. a function where an absolute value among those available is chosen and often changed, such as a note's pitch on a piano),

- relative dynamical function (e.g. a function of something that changes over time relative to a reference which is also often changed; for instance, the modulation of a pitch to produce vibrato),

- static function (e.g. rare changes, such as tuning selection).

This categorization has been used by Wanderley et al. (2000) and will also be applied in this investigation for a fitting comparison between the mappings of the parameters.

[6] Tactile feedback is sensed by the surface of the skin (Vertegaal et al., 1996).
[7] Kinesthetic feedback is sensed internally by muscles and other receptors (Vertegaal et al., 1996).


Previous research has suggested that design is a so-called wicked problem (Westerlund, 2009). In short, a wicked problem is defined by the following (Rittel & Webber, 1973):

- There is no definitive formulation of the problem.
- There is no stopping rule for the problem.
- Solutions to the problem are not true-or-false, but good-or-bad.
- There is no immediate and no ultimate test of a solution to a wicked problem.
- There is no finite number of possible solutions.

By investigating several solutions, a greater understanding of the design space may be attained (Westerlund, 2009). By making designs and discovering both good and bad aspects of them, the design space is better understood, which may lead to more sensible design decisions. What follows are the methods and materials used to explore a part of the Force Ghost design space.

2.3 Materials

2.3.1 Parameters

The two Force Ghost parameters that will be investigated and modulated are, as pointed out, the bandwidth and the timbre. The bandwidth parameter was chosen for analysis because of its relatively unusual scope of use and due to its central function in Force Ghost. The timbre parameter is also a core parameter of Force Ghost. It is seldom seen in a specific context like that of Force Ghost, which adds to its allure and makes it a very interesting parameter to investigate [8].

During the pre-study, the scaling of the sensors and the parameters was decided. A sample of this is illustrated in figure 2 and figure 3. The Q factor is calculated from the controller value (0-127) with the function

Q = 1 + 4^(0.05c)

where c ∈ [0, 127] is the MIDI controller value. The timbre parameter, as defined in the Force Ghost context, is a bit more complex: a high value increases the amplitude of the overtones of frequency 2f and 4f and decreases

[8] Link to example of timbre modulation: https://drive.google.com/file/d/0BzJ3PI8GmZd1RU13b2NvTGp4N2s/view?usp=sharing. Link to example of bandwidth modulation: https://drive.google.com/file/d/0BzJ3PI8GmZd1SE00elV3SEhRRlU/view?usp=sharing


the overtones of frequency 3f and 5f, while a low value does the opposite. The amplitude of each overtone is calculated from the controller value (0-127) with the function

G_i = 0.75^(i-1) · (i mod 2 - (c/127) · (-1)^(i-1))

where i ∈ {2, 3, 4, 5} is the overtone factor and c ∈ [0, 127] is the MIDI controller value. In chapter 2.1 Central concepts, a possible intended musical use of the bandwidth was explained. Applying the definition of musical function as stated by Vertegaal et al. (1996) (as presented in chapter 2.2 Background theory), the two parameters may represent a composite of a relative dynamical function and a static function. This subjective observation by the authors was made fully aware that musicians may perceive this differently, which is partly why this investigation was made. It was decided that the parameter scaling would be independent of the sensor; thus, the MIDI values 0-127 were to give the same result regardless of the mapping. The authors wanted each sensor to be compared neutrally and without bias. Keeping certain aspects locked (making the parameter independent of the sensor) was seen by the authors as making for a purer comparison.
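As a concrete illustration, the two scaling functions above can be written as a short Python sketch (illustrative only; the instrument's own scaling is implemented in the Pure Data patch shown in figure 2, and the function names here are made up):

    def q_from_controller(c):
        # Q factor from a MIDI controller value c in [0, 127]: Q = 1 + 4^(0.05c).
        return 1 + 4 ** (0.05 * c)

    def overtone_gains(c):
        # Gain G_i for each overtone i in {2, 3, 4, 5} given controller value c in [0, 127].
        # A high c emphasises the even multiples (2f, 4f) and silences the odd ones (3f, 5f);
        # a low c does the opposite.
        return {i: 0.75 ** (i - 1) * (i % 2 - (c / 127) * (-1) ** (i - 1))
                for i in range(2, 6)}

    print(q_from_controller(0), q_from_controller(127))  # Q grows from 2 to several thousand
    print(overtone_gains(0))     # only 3f and 5f have non-zero gain
    print(overtone_gains(127))   # only 2f and 4f have non-zero gain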

2.3.2 Mapping

The two parameters will be mapped in the following two variations: in the A mapping, the modulation wheel sensor is mapped to the bandwidth parameter and the timbre is mapped to the pitch bend wheel sensor. The B mapping is the opposite. Not every participant performed both mappings; this was to prevent a possible learning effect, where participants acclimatized to one mapping in a way that could conflict with the other when changed. Thus, half of the participants executed tasks with mapping A and the other half with mapping B. The actual mapping is done in Pure Data, in the Force Ghost program. The position MIDI values of the pitch bend wheel and the modulation wheel are sent from the keyboard to the computer and received in Pure Data. The received MIDI data is correspondingly routed to the bandwidth and timbre inputs.
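In software terms the routing is a simple lookup. The following hypothetical Python sketch shows the A/B routing logic described above; the actual routing is done inside the Pure Data patch, and the names here are illustrative only.

    # Which sensor feeds which parameter in each mapping variant.
    MAPPINGS = {
        "A": {"mod_wheel": "bandwidth", "pitch_bend": "timbre"},
        "B": {"mod_wheel": "timbre", "pitch_bend": "bandwidth"},
    }

    def route(mapping, sensor, value, set_parameter):
        # Forward an incoming sensor value (0-127) to the parameter it is mapped to.
        set_parameter(MAPPINGS[mapping][sensor], value)

    # Example: with mapping A, a modulation wheel message updates the bandwidth.
    route("A", "mod_wheel", 96, lambda name, v: print(name, v))  # prints: bandwidth 96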


Figure 2. Excerpt from the Force Ghost code displaying the scaling functions of the timbre and Q factor parameters.

2.3.3 Sensors

As the parameters' potential musical functions were classified with the help of Vertegaal et al. (1996) in chapter 2.3.1, the decision of selecting suitable sensors had to be made. Today as in the past, synthesizers have often been fitted with modulation wheels and pitch bend wheels. Conventionally, the modulation wheel is situated to the left on a keyboard [9] and is mapped to the depth of an LFO [10], whose typical parameter control may be classified as a relative dynamic function. The pitch bend wheel is also usually positioned to the left of the keys, beside the modulation wheel. As the pitch bend wheel typically controls the pitch, and is temporarily used to allow generally a couple of semitones of "pitch glide", its musical function may also be classified as a relative dynamic function.

[9] For instance, see: http://www.m-audio.com/products/browse/category/keyboards-and-controllers and http://www.roland.com/categories/synthesizers/
[10] Generally, LFOs (low frequency oscillators) on synthesizers control for instance a vibrato or tremolo function: http://www.roland.com/categories/synthesizers/analog_modeling/, https://yamahasynth.com/index.php?option=com_k2&view=itemlist&layout=category&task=category&id=26&Itemid=861


Input controllers such as these were first introduced on early Buchla instruments around 1970 (Paradiso, 1997). Thus, owing to how common they are, keyboard players may be expected to have experience with these input controllers, which indicates a degree of suitability as sensors. What also has to be considered is how the parameters are intended to behave; how they are intended to change their value. This is connected to the previously made classification of the musical functions. A big part of the instrument's creative intention is to move seamlessly between noise and pure tones, and between soft and glassy timbre. The transition should thus be perceived as continuous, which rules out some sensors. The modulation wheel and the pitch bend wheel work in a rolling manner. As decided during the pre-study, rolling the wheel away from yourself increases the clarity of the tones (the bandwidth decreases) or, respectively, changes the timbre to include only the overtones of two and four times the fundamental (a "softer" sound), depending on the mapping. Conversely, rolling the wheel toward yourself makes the original sound source more apparent (the bandwidth increases) or, respectively, changes the timbre to include only the overtones of three and five times the fundamental frequency. The way the sound changed as a consequence of the movement, i.e. the polarity of the scaling, felt natural to the authors, which was the reason for these choices. The pitch bend wheel has a spring mechanism which forces it to the middle position when it is released. This is the only property separating the two sensors, as the modulation wheel stays in its released position. Often, the pitch bend wheel has a higher resolution than the modulation wheel in order to allow smoother pitch changes (Lehrman & Tully, 1993). This, however, was not the case in our investigation, where the two wheels had the same resolution [11]. The default MIDI value of the pitch bend wheel is 64 (in the middle). Both sensors are able to send values from 0 to 127.

2.4 Method

2.4.1 Testing and evaluating controllability

Guided exploration of an instrument's parameters and its mapping is a well-recognized way of presenting a parameter's range of possibilities and measuring its usability and controllability (Hunt & Kirk, 2000; Wanderley et al., 2002; Stowell et al., 2009; Marshall, 2009).

2.4.1.1 Controllability tasks

The controllability tests consisted of four tasks; two per parameter. Human test subjects listened twice to a prerecorded sound in which one note was constantly pressed and one parameter was modified over sixteen beats at a specific BPM [12], resulting in a ten second clip.

[11] The reason for this is unknown. It was discovered during coding that they had the same resolution.


The one constantly pressed note allowed the subjects to focus solely on the modulation of the parameter, without additional distractions such as playing a melody. Because of the serene nature of the instrument's sound (as perceived by the authors), a slower tempo was used (92.34 BPM). After the listening, the participants performed and recorded their attempted reproduction. After this, another listening was made and a second attempt was made and recorded. This was done until every task was completed. A metronome was provided during both listening and performing, facilitating the timing aspect. An example of a task is illustrated in figure 4. Six participants' performances are also illustrated. The rest of the tasks and the individual data are attached in appendix B.

Figure 4. An example of a controllability task (original sound at the top) with the attempted reproductions beneath, starting with three participants' attempts with mapping A (the modulation wheel to bandwidth and the pitch bend wheel to the timbre).

[12] BPM is spelled out beats per minute.


The tasks' design was inspired by tasks from earlier research (Hunt & Kirk, 2000; Marshall, 2009). They were meant to represent basic, introductory modulations that the authors regard as typical, and a good way of introducing the parameters and sensors to the participants while gathering information about controllability.

2.4.1.2 Formatting and analysis of controllability data

The data collected from the controllability tests (MIDI and audio) had to be formatted and treated in order for an analysis to be performed. A numerical analysis of the MIDI data was done. The purpose of the numerical analysis was to identify whether either of the controllers presented an advantage in recreating the originals, thus indicating a higher controllability for a given parameter.

MIDI was recorded, along with the Force Ghost audio output, into Logic Pro 9. The clips were then cut and exported manually. The start was defined as the quarter note nearest to the keypress event, and the end followed 16 beats later. To extract the controller messages from the raw MIDI data, a new Pure Data patch was created. This patch reads a MIDI file in real time (as the messages would arrive if they were being sent to an instrument) and outputs the time and controller value to a text file each time a controller event (pitch bend wheel or modulation wheel event, depending on the task) occurs. The time is measured by a counter that starts at the beginning of the MIDI file's playback and increments each millisecond. This method is equivalent to sampling the control value each millisecond. The resulting text files were then imported into MATLAB. The contents of the files were put in vectors where each entry represented a controller event. Since the text files only contained entries where a controller event occurred, the blank entries in the vectors were filled with the value of the preceding controller event. The vectors were then cropped to encompass only the last 8 seconds of the recordings (out of 10 seconds), so that the subjects were not required to have the correct starting position of the controller when the key was pressed. After reformatting the MIDI files, the Pearson correlation coefficient and the mean absolute error were used to assess the performance of each controller at each task. The Pearson correlation coefficient measures linear correlation between two variables (Sedgwick, 2012). It is invariant to location and scaling of the variables. Thus, it can be used as a measure of similarity of the shape of two curves. To compute the correlation, the MATLAB function corrcoef was used. The mean absolute error was used to measure the overall accuracy of the subjects' attempts. It is simply the average of the absolute value of the difference between the two curves at each point.
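The pipeline can be summarised in a few lines. The sketch below uses Python/NumPy rather than MATLAB, and the function names are hypothetical, but it follows the steps described above: hold-and-sample the sparse controller events once per millisecond, keep only the last eight seconds, and compute the Pearson correlation and the mean absolute error.

    import numpy as np

    def to_millisecond_curve(events, length_ms=10000, start_value=0):
        # Turn sparse (time_ms, value) controller events into one value per millisecond;
        # gaps are filled with the value of the preceding event (start_value before the first).
        curve = np.full(length_ms, float(start_value))
        value = float(start_value)
        events = sorted(events)
        j = 0
        for t in range(length_ms):
            while j < len(events) and events[j][0] <= t:
                value = float(events[j][1])
                j += 1
            curve[t] = value
        return curve

    def compare(original, attempt, keep_last_ms=8000):
        # Pearson correlation (shape similarity) and mean absolute error (overall accuracy)
        # over the last eight seconds of the ten-second clips.
        o = original[-keep_last_ms:]
        a = attempt[-keep_last_ms:]
        r = np.corrcoef(o, a)[0, 1]          # equivalent to MATLAB's corrcoef
        mae = np.mean(np.abs(o - a))
        return r, mae

    # Example with made-up event lists:
    orig = to_millisecond_curve([(0, 64), (2000, 10), (6000, 120)])
    att = to_millisecond_curve([(0, 64), (2300, 20), (6500, 110)])
    print(compare(orig, att))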


Since the original of the task pertaining to the Q factor was recorded with the pitch bend wheel, comparing the raw controller data does not allow for a fair comparison of the sensors. Since the pitch bend wheel returns to the middle position when released, the sections of the tasks where the parameter is set to a middle value will naturally be more accurate when reproduced with a pitch bend wheel. Therefore, a second round of comparisons was made for task one. In this variant, the sections of the task where the pitch bend wheel was left at the middle value were removed (this was the case for most of the first task). This edit was made by looking at the original and manually removing the sections (and the corresponding intervals in the attempts of all subjects) where the controller data was close to the middle value for more than 500 ms. In the second Q factor task, although the pitch bend wheel was never left for long (less than 500 ms) at the middle value, some of the subjects using the pitch bend wheel acted as if this were the case. Perhaps the first task set up the expectation that the wheel would stay at the center for most of the task. Since this error from the participants may signify a separate problem, it was considered of interest to also study the subjects' attempts at the second Q factor task with the effect of this mistake minimized. Thus, in a similar fashion to the first task, an edited variant was made, keeping only the sections where the parameter changes quickly. The study of these edited variants gives a more valid estimate of the controllability granted by each controller. When analyzing the data from the timbre trials, the authors erroneously thought the originals had been recorded using the modulation wheel, which does not return to a middle position. Therefore, none of the timbre trials were given edited variants. In addition, a qualitative, semi-structured interview was conducted after the completion of the tasks connected to each mapping, as is often done (Hunt & Kirk, 2000; Marshall, 2009; Stowell et al., 2009; Barbosa et al., 2011).
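The edit of the Q factor tasks can be thought of as masking out the near-middle sections of the original and applying the same mask to every attempt before recomputing the measures. The edit described above was made manually; the sketch below is only a hypothetical automated equivalent with made-up thresholds.

    import numpy as np

    def keep_mask(original, middle=64, tolerance=4, min_ms=500):
        # False wherever the original stays within `tolerance` of the middle value
        # for at least `min_ms` consecutive milliseconds; True elsewhere.
        near = np.abs(original - middle) <= tolerance
        keep = np.ones(len(original), dtype=bool)
        run_start = None
        for t, flag in enumerate(np.append(near, False)):  # sentinel closes a trailing run
            if flag and run_start is None:
                run_start = t
            elif not flag and run_start is not None:
                if t - run_start >= min_ms:
                    keep[run_start:t] = False
                run_start = None
        return keep

    # Example: mask = keep_mask(orig); kept_orig, kept_att = orig[mask], att[mask]
    # The correlation and mean absolute error are then recomputed on the kept samples.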

2.4.2 Testing and evaluating expressivity

Including the study of expressivity is an addition to the framework suggested by Wanderley et al. (2001), in which only the instrument's usability is evaluated, through simple musical tasks. However, limiting the investigation to maximally simple tasks performed on the instrument "risks compromising the authenticity of the interaction, creating situations in which the affective and creative aspects of music-making are abstracted away" (Stowell et al., 2009). The expressivity test attempted to give the participants solitude and undemanding conditions while exploring the parameter mappings freely. By letting participants complete tasks that use one parameter at a time, better performance may be achieved when the two parameters are subsequently combined in a task (Masliah & Milgram, 2000). This is applied in Force Ghost when the two parameters and the regular keys are included in the expressivity test after the controllability tests.

2.4.2.1 Expressivity task

Both audio and MIDI were also recorded for this task, which lasted ten minutes. When the time was up, a semi-structured interview was conducted. For the expressivity task, the participants were given additional instructions on how to change the octave of the keyboard. They were then instructed to play the instrument alone for ten minutes, doing whatever they pleased, and were encouraged to use and think about the parameters included in the controllability tests. The expressivity task was used to gather as much data as possible and to let the participants immerse themselves in the instrument to understand the parameter mappings further.

2.4.2.2 Treating and evaluating expressivity task data

The evaluation of the expressivity was made through a semi-structured interview. The authors took notes during the interviews, and common denominators were highlighted during evaluation. Initially, there was also an intention to extract information through the authors listening to the recorded audio, but it proved hard to interpret that data in a scientific way.

2.4.3 Semi-structured interview script for qualitative evaluation of controllability and expressivity

To get an indication of whether the participants understood the tasks, the first two questions were asked, as illustrated in figure 5. The rest of the questions attempted to extract valuable information about the participants' experience of the parameter mappings. Using a questionnaire with entirely open questions and encouraging free descriptions during the interviews was done to make the survey as unbiased as possible (Friberg, 2016).


Figure 5. The questionnaire for the semi-­structured interviews. The unaltered transcripts of the interviews are attached in appendix A.

2.5 Participant selection

The participant selection started by combining the authors' lists of acquaintances. The acquaintances were then filtered by the criterion of having at least five years of musicianship on keyboards and synthesizers. In the end, the investigation included six participants.


2.6 Test environment

The performance of the participants during the tests was recorded using Jack [13] routing from the Pure Data patches into the digital audio workstation Logic Pro, as seen in figure 6. The participants did the tests individually with the authors in a secluded area.

Figure 6. Flow scheme during the tests.

3. Procedure

The procedure of the whole investigation is presented explicitly.

3.2 Pre-study

After a literature study, a pre-study was carried out internally among the authors. During this, the sensors were chosen, the scaling of the parameters was decided, and the volume envelope was chosen, as partly mentioned in chapter 2.3.1. The controllability and expressivity tasks were created and tuned to make sure the tests were appropriate and well balanced in terms of complexity and the difference between them. This was done by the authors by actually performing with and using the keyboard and the sensors. The controllability tasks were inspired by the tasks used in previous research by Hunt and Kirk (2000) and Marshall (2009).

[13] Jack (http://jackaudio.org/) is an internal audio router.


3.3 Testing

The subjects entered the testing area and were introduced to the keyboard setup. They were informed of how many tasks would take place and of their nature. In order to obtain the most useful information from the semi-structured interviews, it was made clear that the input controllers, and their experience of the sensor controlling the parameter input, were of interest, not the performance of the participants themselves. To create a relaxed atmosphere, it was explained that any difficulties experienced would be attributed to the modulator input (Marshall, 2009). Before each task was introduced, one minute was given to explore and feel their way around the parameter that would be included in the specific task (Stowell et al., 2009; Marshall, 2009). As the controllability testing began, the participants were asked to reproduce the sounds, also with respect to timing. Each original sound was presented twice prior to their first attempt and once again for the second attempt. There was a 30 second break between each task to let the participants rest. When the tasks of the two mappings were complete, the participants were asked for their opinion on their performance and whether they understood the tasks. Afterwards, the expressivity test began. Finally, the rest of the interview was conducted.

4. Method and procedure criticism

4.1 General validity concerns

The interviews were based on open questions and conducted in a manner that encouraged the participants to contribute opinions openly, around and beyond the questions. Questionnaires and interviews like this are seldom seen in the literature in this context, due to the difficulty of representing the data in an easy-to-read and quantifiable manner (Friberg, 2016). The nature of an open investigation like this implies that the results should be viewed with a certain care, partly because of potentially compromising communication between the participants and the authors, and because not every interview was conducted in exactly the same manner. The questions posed to the participants could have been made more quantifiable, as in the questionnaire of Bornand et al. (2005). This would have facilitated clarity of data presentation, but might have given way to simplifications, distorting the original and precise thought of the participant. However, combining that method with the conducted one (using more open questions and minimizing the questions' biasing) would have resulted in "the best of both worlds".


Recording the audio from the interviews, or letting the participants write their answers themselves, would have increased the validity of the results. As the interviews were conducted, the authors put them in writing. In the questionnaires (attached in appendix A), not every question has an answer. Some of the questions resulted in a discussion that dealt with the other questions. Thus, the authors had a collective understanding of the general experience and opinions, without all of it being put in explicit writing in the questionnaires. During the interviews, segments of the recorded audio of the participants' expressivity could have been replayed in order to stimulate the discussion further, toward more specific details of the experience (Stowell et al., 2009).

4.2 General reliability concerns

As the number of participants in this survey is quite low, the reliability may be seen as low, since it greatly depends on the individual participants. The low number of controllability tasks also contributes to the reduced reliability of that specific part of the survey. A higher degree of reliability may be achieved by adding more tasks and using more participants.

4.3 Measuring controllability

Since the pitch bend wheel returns to a middle position, a trajectory made with a pitch bend wheel will be easier to reproduce with a pitch bend wheel whenever the trajectory sits at the middle position. The controllability tasks were created by the authors performing them, using the pitch bend wheel. Although a small error in controller position might not entail a significant difference in perception for a listener, it certainly makes a difference in the numerical measures used in this study to assess controllability. The advantage given to the pitch bend wheel was somewhat attenuated in the study of the edited variants of the trials (only made for the tasks pertaining to the Q factor, see method). Altogether, these points weaken the validity of our measures of controllability. Instead, the tasks could have been constructed using computer-generated modulations (Hunt & Kirk, 2000).

4.4 Concerns of design decisions made during pre-study

During the pre-study, major design decisions were made by the authors which influenced the investigation. These decisions may be supported by the thesis presented by Heron (1996), as previously mentioned. However, collaborative inquiry might have been used to a greater degree than it was, to achieve a potentially larger extent of understanding. One of the two authors could have been kept unaware of the specifics of the tasks and completed them like the rest of the participants.


This might have allowed even more immersion in the other participants' point of view, which could have resulted in additional valuable information. Due to the differences between the pitch bend wheel and the modulation wheel, it was considered whether to adapt each parameter to its respective sensor. For instance, this could have meant using only the upper half of the pitch bend sensor for the bandwidth parameter, enabling a default value at one of the extremes. This would have allowed a default value with the highest Q factor, which sounds close to a pure tone. It would have allowed the participants to play without reserving one hand for constant modulation in order to hear a clear tone. But in the end, the scaling of the parameters was decided not to relate to their mapping. This decision had the effect of not using each sensor to its strengths, which from one perspective makes for an unequal comparison. The design decisions made during the pre-study could have been reinforced as valid decisions by including external musicians in the design process, in a so-called participatory design. Instead of only inviting musicians at the evaluation stage, incorporating the end user during the development process makes the product more likely to be successful, as previously mentioned (Elblaus et al., 2012). If resources such as time and money had been greater, this might have played a less significant role than it did, as additional prototype designs and evaluations could have been made.

4.5 Miscellaneous

Every aspect of the testing was done individually with the musicians. Music is often a collaborative experience, and the experience of playing with others should not be neglected. By conducting group discussions and group playing, information might have surfaced that did not surface during the solo sessions (Stowell et al., 2009). During the controllability data treatment, MIDI files had to be collected and cut for evaluation. Although this was done carefully, the tedious nature of manually cutting and labeling 52 snippets of MIDI (4 originals and 48 attempts) makes it difficult to rule out the possibility of human error.

5. Results

5.1 Semi-structured interviews

Firstly, all the participants conveyed that they understood the given tasks. There were a number of significant observations in the conducted semi-structured interviews, which treated both the controllability tasks and the expressivity tasks. Overall, the modulation wheel was rated higher than the pitch bend sensor in the interviews, due to the fact that it kept its position when released, in contrast to the bend sensor. This applies to both mappings.


As the bandwidth has a direct impact on how distinct the tones are, it was seen as beneficial to have it mapped to the modulation wheel, where it could be left in a position that allowed playing with two hands simultaneously instead of locking one hand to the pitch bend sensor. Besides that, every participant expressed that it felt natural to have the parameters modulated by the wheel-type sensors at hand. Five participants remarked on the wheel mapping's ability to let them travel from the noisier sound to the more pure, tone-like sound. Three participants with the mapping of bandwidth to the modulation wheel said that the mapping and parameter modulation allowed the sound to move between the background and the foreground. All test subjects felt that their control over the parameters gradually increased as they gained more experience; applications of various modulations to certain ways of playing were discovered. This applied to both of the mappings. There was no "skill ceiling" instantly reached; every participant explained that the more time they spent with the instrument, the better control they achieved. The recognition from previous instruments was slight. One participant (using the mapping of timbre to the pitch bend wheel and bandwidth to the modulation wheel) felt that the timbre-mapped sensor reminded her of the controls of a tonewheel organ. The same participant felt that the bandwidth sensor reminded her of the sound and control of a subtractive synthesizer. A common denominator among the participants was that the sensors and the parameters they controlled gave leeway for expressivity. Other than the bend sensor's urge to return to its default position being perceived as annoying, the interviews exposed no general difference in experience between the sensors and the mappings. However, there was also critique of the design as a whole and of the scaling of the modulated parameters, which was decided during the pre-study. Two of the participants expressed the opinion that the effect of the timbre should have been reversed; i.e. the timbre should have become brighter as the wheel was rolled upwards, instead of the opposite. One participant expressed a wish to be able to achieve more than was allowed with the two sensors and the corresponding parameters. One participant thought the sound in the higher registers was too "bright" and "unpleasant", which resulted in that participant mostly playing the lower notes. Five participants described the instrument as very good for ambience types of sounds, with mentions such as cinematic, spa and theatrical. One of these participants also mentioned that the bassier sounds could work as a substitute for piano, strings or wind instruments in such contexts.


Various alternatives to the pitch bend wheel were suggested by two participants with the bandwidth-to-pitch-bend-wheel mapping. The suggestions consisted of the encoder [14], the slider, multiple footpedals [15], and an X/Y surface, which would allow combinations of the parameters in a possibly convenient way.

5.2 Controllability test

The following plots show, first, the correlation coefficients and, second, the mean absolute error between the originals and the subjects' attempts at reproducing them. In the illustrations, attempts made with the pitch bend wheel are marked with triangles and attempts made with the modulation wheel are marked with crosses. The numbers in the legend indicate which participant each data point belongs to.

Figure 7. Pearson's correlation of subjects' attempts to the originals for Q factor tasks.

[14] An encoder may be described as a digital equivalent to the analog potentiometer; a knob sensor.
[15] Link to an example of the type of footpedals suggested: http://www.ehx.com/products/expression-pedal


Figure 8. Pearson's correlation of subjects' attempts to the originals for Q factor tasks, edited variant.

Figure 9. Pearson's correlation of subjects' attempts to the originals for timbre tasks.


Figure 10. Mean absolute error of subjects' attempts to the originals for Q factor tasks.

Figure 11. Mean absolute error of subjects' attempts to the originals for Q factor tasks, edited variants.


Figure 12. Mean absolute error of subjects' attempts to the originals for timbre tasks.

6. Discussion

The point most clearly indicated is that both parameters, bandwidth and timbre, were preferred by the participants to be mapped to the modulation wheel rather than the pitch bend wheel. The results from the interviews suggest that the pitch bend wheel should be given a parameter whose rest value lets the sound be relatively neutral or clear with distinct tones, thus not forcing the performer into constant modulation of the wheel and reserving one hand in order to play freely. There seems to be an indication that the pitch bend wheel has a larger share of the static characteristics (as discussed in 2.2 Background theory) than the modulation wheel. As Force Ghost is an instrument with the conventional keyboard synthesizer at its core, expectations of being able to play the instrument with two hands probably exist. This expectation would colour the opinions of the participants regarding the pitch bend wheel, especially for the participants with the bandwidth mapped to that sensor. As the bandwidth has to be modulated close to its allowed extreme value if a clear tone is desired, one hand is needed to hold the sensor's position at all times. Therefore, the participants' standpoint is easy to understand.


As temporary use of one hand for modulation of parameters was observed to be accepted, mapping parameters to sensors that allow rest values is indicated to be a safe choice. Suggestions by the participants as alternatives to the pitch bend sensor were the encoder, the slider, multiple footpedals, and the X/Y surface, which all fit the aforementioned reasoning (of course, with reservation for their software implementation). Regarding the perceived unnaturalness of the timbre becoming brighter as the sensor was rolled toward the user, this might have something to do with the conventional presentation of frequencies in charts in audio effect plug-ins. Usually, up might be perceived as a kind of forward movement, like a movement to the right. As spectral charts in plug-ins, among other applications, regularly have frequencies on the X-axis increasing to the right, this may be a reason for this expectation of the sensor. We also noticed that subjects controlling the Q factor parameter with the pitch bend wheel performed slightly better as a group than those using the modulation wheel. However, this effect is attenuated by ignoring the sections in the original where the parameter was near the middle value. The data as a whole does not indicate better controllability with either of the sensors for the Q factor. This is interesting, considering that the modulation wheel was preferred in every case as a sensor for either parameter in the interviews. This suggests that the sensors are quite equivalent in terms of accuracy and usability, but that for individual musical use the non-spring mechanism is preferred. In the first timbre modulation task, four of the subjects produced a curve with a correlation coefficient close to -1 in their first attempt. This indicates that they matched the curve accurately but with inverse polarity: when the original had a high controller value, they produced a low one, and vice versa. This may be related to the complaint about the unintuitive polarity of the timbre parameter expressed in the interviews. Even after relistening to the original, three of these four subjects performed similarly and only one (#6) changed the polarity of the curve. This suggests that these subjects were not able to connect their action on the sensor to the sound. Looking at a second measure of accuracy, the mean absolute error, computed by averaging the absolute value of the difference between the originals and the respective attempts at each millisecond, we cannot say that either of the two controllers offers an advantage in parameter accuracy for either of the parameters. The idea of measuring controllability may only give, at best, an indication of how intuitive the action is versus the sound it makes. Tasks such as these might work best as a controlled introduction and exploration for the participant, as used by Stowell et al. (2009).


There are several possible factors which may have affected the performance of the participants during the controllability tasks. The ability to hold a piece of information in memory for a brief amount of time and then use it to reproduce something is one of them. Naturally, the ability to memorize is likely to differ from person to person. In an attempt to reduce this impact and keep the focus solely on the use of the sensor, the tasks were made short (ten seconds) and were repeated twice. The use of a metronome for the tasks may have had the side effect of facilitating memorization by providing milestones to which the participants could relate. To further establish the impact of memory, the questionnaire could have included a question about the participants' thoughts on how difficult it was to retain the sound and parameter evolution as they reproduced it. As the parameters were sometimes subtly modified in the tasks, participants may have had difficulties perceiving these changes. A question in the questionnaire concerning this aspect would have been valuable for validating the results. Force Ghost uses a sound source that is time-variant. As the sound source varies over time, the probability that it has exactly the same spectral characteristics during a participant's attempt as in the original sound is very low. To prevent this problem, the input signal could have been restarted for each created task and for each attempt, with the participants starting at a certain beat for synchronization. However, the impact of this is estimated by the authors to be slight, based on listening. By adding more tasks to the controllability tests, the confidence in the results might have increased. Regarding the controllability evaluation (which was done through the analysis of the Pearson correlation coefficient), MIDI value deviations may be observed that are not audible. Evaluation by human auditory comparative analysis would have worked as a competent complement, as used by Hunt and Kirk (2000). The basic idea of their analysis is that one of the authors acts as a judge, listening to the original sound and comparing it to the reproduced sound. The judge marks the reproduced sound 1-10 in three categories, based on the comparison. The three categories are, as laid out by Hunt and Kirk (2000):

- Timing accuracy (how well the timings of the sonic events match the original)
- Parameter accuracy (how near the original values the reproduced values are)
- Trajectory accuracy (how well the reproduced values move compared to the original)

Another person may then be used for a second opinion on several random samples, to verify the judgement (a sketch of how such judgements could be recorded follows below). Furthermore, human auditory analysis is commonly regarded as a necessity for feedback in musicianship education and music competitions (Hunt & Kirk, 2000).
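As a complement to the correlation analysis, the sketch below shows one way such judgements could be recorded and aggregated. The data structure and function names are hypothetical; Hunt and Kirk (2000) describe the judging procedure itself, not this particular implementation.

```python
from dataclasses import dataclass
from statistics import mean
import random

@dataclass
class Judgement:
    """One judge's 1-10 marks for a single reproduced sound."""
    timing: int      # how well the timings of the sonic events match the original
    parameter: int   # how near the original values the reproduced values are
    trajectory: int  # how well the reproduced values move compared to the original

def summarize(judgements):
    """Average each category over all judged attempts."""
    return {
        "timing": mean(j.timing for j in judgements),
        "parameter": mean(j.parameter for j in judgements),
        "trajectory": mean(j.trajectory for j in judgements),
    }

def second_opinion_sample(judgements, k=3, seed=0):
    """Pick a few random attempts for a second judge to verify."""
    return random.Random(seed).sample(range(len(judgements)), k=min(k, len(judgements)))
```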


Something that could have been looked into further is in which register the instrument was played the most. This deviates slightly from the main investigation, but it is of interest for exploring the design space further. It would have given insight into the registers in which the parameters were used the most, possibly exposing where they were most useful to the musicians. In addition, a question regarding this would have been an appropriate addition to the interviews.
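If the recorded performances were available as standard MIDI files, such a register analysis could be as simple as counting note-on events per octave, as sketched below. The use of the mido library and the file name are assumptions; the study's recordings may have been logged in another format.

```python
from collections import Counter
import mido  # third-party MIDI library, assumed here for illustration

def register_histogram(path, bin_size=12):
    """Count note-on events per octave-wide register in a recorded performance."""
    counts = Counter()
    for msg in mido.MidiFile(path):
        if msg.type == "note_on" and msg.velocity > 0:
            counts[msg.note // bin_size] += 1   # 12 semitones per bin = one octave
    return counts

# Hypothetical usage:
# for octave, n in sorted(register_histogram("participant_06.mid").items()):
#     print(f"octave starting at MIDI note {octave * 12}: {n} note-ons")
```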

7. Conclusions

From the semi-structured interviews, there are strong indications that the modulation wheel is preferred to the pitch bend wheel, independent of which parameter it controls. A sensor that gravitates towards a resting state does not seem to be suitable for controlling a parameter (scaling) that lacks what the musicians would see as a resting value. However, the data from the controllability tasks do not indicate a significant difference between the two sensors in terms of controllability of either of the two parameters.

8. Future research

By definition, the whole design space cannot be fully understood. As only a small part of the design space has been explored, much remains to be done regarding the design of Force Ghost. Future research may include further studies of ways of interacting with the parameters at hand. As noted, sensors without the pitch bend wheel's tendency to return to a default value should be investigated. The foot pedal is an interesting option, as it would leave both hands free for playing the keys. Future research could also include more participants, more tests, and the perspective of other stakeholders, as this paper focused solely on the performer's point of view.


9. References

Malloch, J., Birnbaum, D., Sinyor, E., Wanderley, M. M., 2006. Towards a new conceptual framework for digital musical instruments. [online] Available at: <http://contrarymotion.net/publications/towards-a-new-conceptual-framework-for-digital-musical-instruments/> [Accessed 15 March 2016].

Orio, N., Schnell, N., Wanderley, M. M., 2001. Input Devices for Musical Expression: Borrowing tools from HCI. [online] Available at: <http://recherche.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/chi_workshop_Final_v2ALL.pdf> [Accessed 9 March 2016].

Stowell, D., Robertson, A., Bryan-Kinns, N., Plumbley, M. D., 2009. Evaluation of live human–computer music-making: Quantitative and qualitative approaches. [online] Available at: <http://www.sciencedirect.com/science/article/pii/S107158190900069X> [Accessed 29 March 2016].

Hunt, A., Wanderley, M. M., Paradis, M., 2003. The Importance of Parameter Mapping in Electronic Instrument Design. [online] Available at: <http://www.nime.org/proceedings/2002/nime2002_088.pdf> [Accessed 29 March 2016].

Hunt, A., Kirk, R., 2000. Mapping Strategies for Musical Performance. [online] Available at: <http://www.music.mcgill.ca/~mwanderley/MUMT-615/Papers/Class10/P.HunKir.pdf> [Accessed 9 March 2016].

O'Modhrain, S., 2011. A Framework for the Evaluation of Digital Musical Instruments. Computer Music Journal, No. 1, pp. 28-42.

Barbosa, J., Calegario, F., Magalhães, F., Cabral, G., Teichrieb, V., Ramalho, G., 2011. Towards an evaluation methodology for digital music instruments considering performer's view: a case study. [online] Available at: <https://www.researchgate.net/publication/228832995_Towards_an_evaluation_methodology_for_digital_music_instruments_considering_performer%27s_view_a_case_study> [Accessed 31 March 2016].

Marshall, M. T., 2009. Physical interface design for digital musical instruments. [online] Available at: <http://search.proquest.com/docview/305109540/> [Accessed 31 March 2016].

Paradiso, J. A., 1997. Electronic Music. [online] Available at: <http://ieeexplore.ieee.org.focus.lib.kth.se/stamp/stamp.jsp?tp=&arnumber=642965&tag=1> [Accessed 31 March 2016].


Eerola, T., Friberg, A., Bresin, R., 2013. Emotional expression in music: Contribution, linearity, and additivity of primary musical cues. [online] Available at: <http://kth.diva-portal.org/smash/record.jsf?dswid=8968&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&sf=all&aqe=%5B%5D&af=%5B%5D&searchType=SIMPLE&sortOrder=author_sort_asc&onlyFullText=false&noOfRows=50&language=en&pid=diva2%3A664442&dspwid=8968> [Accessed n.d.].

Lyons, M., Fels, S., 2011. Advances in new interfaces for musical expression. [online] Available at: <http://dl.acm.org.focus.lib.kth.se/citation.cfm?doid=2077434.2077436> [Accessed 4 April 2016].

Flick, U., 2009. An introduction to qualitative research. Sage.

Elblaus, L., Hansen, K. F., Unander-Scharin, C., 2012. Artistically Directed Prototyping in Development and in Practice. Journal of New Music Research, Vol. 41, No. 4, pp. 377-387. [e-journal] Available at: <http://www.tandfonline.com/doi/abs/10.1080/09298215.2012.738233> [Accessed n.d.].

Rittel, H., Webber, M., 1973. Dilemmas in a General Theory of Planning. Elsevier Scientific Publishing Company. Available at: <http://comphacker.org/pdfs/338/Rittel.pdf> [Accessed n.d.].

Westerlund, 2009. Design Space Exploration: co-operative creation of proposals for desired interactions with future artefacts. [online] Available at: <http://www.diva-portal.org/smash/get/diva2:241661/FULLTEXT02.pdf> [Accessed n.d.].

Sedgwick, P., 2012. Pearson's correlation coefficient. British Medical Journal, Vol. 344. Available at: <http://dx.doi.org/10.1136/bmj.e4483> [Accessed 15 May 2016].

Lehrman, P. D., Tully, T., 1993. MIDI for the professional, chapter 1. [online] Available at: <https://www.kth.se/social/upload/52e576b6f276545732bb96ab/Lehrman-Tully-overview-of-midi.pdf> [Accessed 1 April 2016].

Bornand, C., Camurri, A., Castellano, G., Catheline, S., Crevoisier, A., Roesch, E., Scherer, K., Volpe, G., 2005. Usability evaluation and comparison of prototypes of tangible acoustic interfaces. [online] Available at: <ftp://ftp.infomus.org/pub/Staff/AntonioCamurri/2005-Enactive2005-TangibleAcousticInterfaces.pdf> [Accessed n.d.].


Falkenberg Hansen, K., 2016. Lecture on musical human-computer interaction and instrument design. [Lecture] (Personal communication, 15 April).

Friberg, A., 2016. Lecture on music and emotion. [Lecture] (Personal communication, 28 April).

Braun, H. J., 2000. Music and technology in the twentieth century. ca. p. 158. Johns Hopkins University Press.

Bongers, B., 2000. Physical interfaces in the electronic arts: Interaction theory and interfacing techniques for real-time performance. [online] Available at: <http://bertbon.home.xs4all.nl/downloads/IRCAM.pdf> [Accessed n.d.].

White, R. M., 1987. A Sensor Classification Scheme. [online] Available at: <http://ijlalhaider.pbworks.com/w/file/fetch/64130986/A%20Sensor%20Classification%20Scheme.pdf> [Accessed n.d.].

Vertegaal, R., et al., 1996. Towards a Musician's Cockpit: Transducers, Feedback and Musical Function. [online] Available at: <https://www.researchgate.net/profile/Roel_Vertegaal/publication/2353033_Towards_a_Musician's_Cockpit_Transducers_Feedback_and_Musical_Function/links/546b5d140cf2f5eb18091aa2.pdf> [Accessed n.d.].

Wanderley, M. M., Viollet, J. P., Isart, F., Rodet, X., 2000. On the choice of transducer technologies for specific musical functions. [online] Available at: <http://recherche.ircam.fr/anasyn/wanderle/Gestes/Externe/Wanderley_OntheChoiceFinal.pdf> [Accessed n.d.].

Masliah, M. R., Milgram, P., 2000. Measuring the allocation of control in a 6 degree-of-freedom docking experiment. [online] Available at: <http://dl.acm.org/citation.cfm?id=332403> [Accessed n.d.].

Jordà, S., 2004. Digital Instruments and Players: Part II–Diversity, Freedom and Control. [online] Available at: <https://scholar.google.se/scholar?q=Digital+Instruments+and+Players:+Part+II%E2%80%93Diversity,+Freedom+and+Control&hl=sv&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0ahUKEwj1urTk_czLAhXDJJoKHfmYB6EQgQMIGjAA> [Accessed 15 March 2016].

Heron, J., 1996. Co-operative inquiry: Research into the human condition. pp. 19 ff. London: Sage.

Nelson, H., Stolterman, E., 2003. The design way: Intentional change in an unpredictable world. Englewood Cliffs, NJ: Educational Technology Publications.


Appendix A

Controllability + Expressivity Questionnaire
Mapping group: Timbre on the modulation wheel and bandwidth on the pitch bend wheel. Name: Hugo

Controllability: Did you understand the tasks? If not, why?
Yes.
Controllability: How do you think you performed? Why?
Pitch went so-so. The mod wheel went terribly. It felt reversed at times, a bit confusing. The starting position was a problem; it was hard to know where to begin, the position of the mod wheel.
1. Did you experience a connection between your movement and the sound? If so, describe it.
Did not use the mod wheel that much. Mostly made the sound grainy. Both change the tones. Pitch: an octave higher. Mod: an octave change at the bottom extreme?
2. Did you feel you had control/precision over the instrument and the controls on the left?
After a while it felt better. It sounded better after a while. It always sounded best with pitch at the top, the clearest tone. Transposing the pitch down gave binaural beats. Down for bass.
3. Does this instrument with the controls on the left remind you of experiences with another instrument?
... No, not that I can think of.
4. How expressive did you feel you could be with the controls on the left and the instrument?
Within a certain frame, yes; I want to go further, want more. A bit too bright/high with high frequencies. Suited the lower tones better. Pulling both sensors at the same time on low tones; there is more room to do things there. Playing darkly gives a more floating feel.
5. How would you describe the influence of the mod wheel on the sound/the feeling it gave rise to?
It pitches. It changes the tone, changes the pitch register.


6. How would you describe the influence of the pitch bend wheel on the sound/the feeling it gave rise to?
7. Did you feel you became better or more confident with the instrument and the controls on the left over time?
8. Do you feel the controls on the left are useful in any way? How?
Pitch up gives the clearest tone.
9. Did you think the sensors were suited to the parameters? Why or why not?
Another control to steer those parameters? Set it on a separate knob instead of losing a hand. Fun to play. Forest and water and birds and stuff.

Controllability + Expressivity Questionnaire
Mapping group: Timbre on the mod wheel and bandwidth on the pitch bend wheel. Name: Jesper

Controllability: Did you understand the tasks? If not, why?
Yes; when the pitch bend and mod wheel were supposed to go up and down.
Controllability: How do you think you performed? Why?
The mod wheel was harder downwards. Harder to tell which direction.
1. Did you experience a connection between your movement and the sound? If so, describe it.
Yes, with both sensors.
2. Did you feel you had control/precision over the instrument and the controls on the left?
Yes. It was a bit awkward when going up. A bit more so on the pitch bend than the mod wheel.
3. Does this instrument with the controls on the left remind you of experiences with another instrument?
Reminds me a little. Of a Yamaha ES2000 pizzicato string; it reminded me of the sound.
4. How expressive did you feel you could be with the controls on the left and the instrument?
Very expressive, especially with the pitch bend. It steers how much noise comes in and out; expressive.


5. How would you describe the influence of the mod wheel on the sound/the feeling it gave rise to?
Subtle. Is it a pitch change? Feels like there is a lower frequency that is not pitched up. Pitch. Yes, it is the pitch that changes. Although a bit duller when it was pulled upwards.
6. How would you describe the influence of the pitch bend wheel on the sound/the feeling it gave rise to?
A sidechain-like effect. Less noise when going up. It does not sound exactly the same. [Note: may be due to the input signal.] The modulation went a bit fast; it should be a bit smoother. Sounds best when it is at the top. Clearer and clearer, closer and closer.
7. Did you feel you became better or more confident with the instrument and the controls on the left over time?
Yes, learned more and more.
8. Do you feel the controls on the left are useful in any way? How?
Pitch bend up pretty much all the time. The noise does not disappear completely; I like that. It just gets a bit lower. Pitch bend as a volume pedal for how much the noise should take over.
9. Did you think the sensors were suited to the parameters? Why or why not?
Actually felt that swapping them would feel better. You can leave the mod wheel wherever you want; it would be nice to be able to do that with the noise.

Controllability + Expressivity Questionnaire
Mapping group: Bandwidth on the mod wheel and timbre on the pitch bend wheel. Name: Josanna

Controllability: Did you understand the tasks? If not, why?
Yes.
Controllability: How do you think you performed? Why?
OK, it sounded roughly like the examples. 3/5.
1. Did you experience a connection between your movement and the sound? If so, describe it.
Yes.
2. Did you feel you had control/precision over the instrument and the controls on the left?
Yes, the sound changed a little in character regardless of how one played.


3. Does this instrument with the controls on the left remind you of experiences with another instrument?
The sound is a bit reminiscent of an organ in the low registers.
4. How expressive did you feel you could be with the controls on the left and the instrument?
Gliding between noise and tonal sound gave room for expression.
5. How would you describe the influence of the mod wheel on the sound/the feeling it gave rise to?
It moves between the sound of rain and tones. You get the feeling of a whistling river.
6. How would you describe the influence of the pitch bend wheel on the sound/the feeling it gave rise to?
Downwards, glassy; upwards, more like a string instrument.
7. Did you feel you became better or more confident with the instrument and the controls on the left over time?
Yes, I understood more of what they could be used for, especially the mod wheel. The pitch bend was nice in a fixed position. The changes were too subtle to be used over a short time.
8. Do you feel the controls on the left are useful in any way? How?
Yes, the mod wheel was good, felt cinematic in some way; the pitch bend was not as useful since the sound sounded best in a fixed position.
9. Did you think the sensors were suited to the parameters? Why or why not?
The mod wheel, yes; the pitch bend was a hassle since you could not leave the parameter in a fixed position. It felt sensible to have both parameters on wheels.

Controllability + Expressivity Questionnaire
Mapping group: Bandwidth on the pitch bend wheel and timbre on the modulation wheel. Name: Julia

Controllability: Did you understand the tasks? If not, why?
Yes.
Controllability: How do you think you performed? Why?
Yes, great.
1. Did you experience a connection between your movement and the sound? If so, describe it.
Tunnel, big, long. Yes.


2. Did you feel you had control/precision over the instrument and the controls on the left?
No, it changes very quickly because the resolution is so low. But had fairly good control. A pedal would perhaps have been better. Would perhaps like both to be able to stay in a fixed position. Suggestion: a round knob, or an X/Y pad would have been even better for combinations of the parameters at the same time, and [unclear].
3. Does this instrument with the controls on the left remind you of experiences with another instrument?
The sound reminds me of an orchestra hit. Yes, a crash at the same time as a tone, sort of.
4. How expressive did you feel you could be with the controls on the left and the instrument?
Yes, you could go from nice and tidy to the worst noise, subterranean. A big difference when you did things.
5. How would you describe the influence of the mod wheel on the sound/the feeling it gave rise to?
An octave is added; it changes octave.
6. How would you describe the influence of the pitch bend wheel on the sound/the feeling it gave rise to?
Noise downwards, clearer upwards. Tunnel.
7. Did you feel you became better or more confident with the instrument and the controls on the left over time?
Yes.
8. Do you feel the controls on the left are useful in any way? How?
Yes, really. Would have needed three hands.
9. Did you think the sensors were suited to the parameters? Why or why not?
Want to play with two hands. Other sensors. Two pedals? The mod wheel is better.

Controllability + Expressivity Questionnaire
Mapping group: Bandwidth on the mod wheel and timbre on the pitch bend wheel. Name: Max

Controllability: Did you understand the tasks? If not, why?
Not at first. After the first example it was fine.
Controllability: How do you think you performed? Why?
Hard to know; it is relative.


1. Did you experience a connection between your movement and the sound? If so, describe it.
Yes. After a while you get used to which movements gave what.
2. Did you feel you had control/precision over the instrument and the controls on the left?
Decent control. Would need to practise the sensors more, and combinations of them.
3. Does this instrument with the controls on the left remind you of experiences with another instrument?
Nothing with the same controls that changes the sounds in this way. Reminds me of a 90s synth with 100 sounds, noise attack, orchestra.
4. How expressive did you feel you could be with the controls on the left and the instrument?
Yes!
5. How would you describe the influence of the mod wheel on the sound/the feeling it gave rise to?
High up: cleaner. Dirtier further down (not in a bad way). A bit bigger far down.
6. How would you describe the influence of the pitch bend wheel on the sound/the feeling it gave rise to?
A phasing feeling; it closes in the sound in different ways. The waveform changes a little. Many overtones when up, sharp when pulled down.
7. Did you feel you became better or more confident with the instrument and the controls on the left over time?
Absolutely.
8. Do you feel the controls on the left are useful in any way? How?
When playing on it. Foreground/background, spreading the sound out, standing out. Pitch bend. A synth solo standing out.
9. Did you think the sensors were suited to the parameters? Why or why not?
(Would like other labels on the controls.) The pitch bend returns to the middle. It might have been comfortable not having to hold it at a certain point.

Controllability + Expressivity Questionnaire
Mapping group: Bandwidth on the mod wheel and timbre on the pitch bend wheel. Name: Wilda

Controllability: Did you understand the tasks? If not, why?


Yes.
Controllability: How do you think you performed? Why?
5/5, a bit difficult towards the end. The first two tasks were easier.
1. Did you experience a connection between your movement and the sound? If so, describe it.
Yes. The harder you pressed on the keyboard, the louder it got. The ones on the left affected the quality of the sound.
2. Did you feel you had control/precision over the instrument and the controls on the left?
Yes.
3. Does this instrument with the controls on the left remind you of experiences with another instrument?
The pitch bend reminded me of wheels on an organ, drawbars where you can raise and lower certain voices. Otherwise the sound was more generally reminiscent; it sounded like, a lot of reverb, reverby in some way. An analogue feeling, old vintage. Juno 106, subtractive synths where you can subtract noise.
4. How expressive did you feel you could be with the controls on the left and the instrument?
Very. The mod wheel could be very useful for long pad-like chords. You could gradually go from a thundery storm sound to a clean pad, a synth pad.
5. How would you describe the influence of the mod wheel on the sound/the feeling it gave rise to?
At the very bottom it was thundery, a storm sound that still had a tone; the tone became sharper the higher up I moved the wheel.
6. How would you describe the influence of the pitch bend wheel on the sound/the feeling it gave rise to?
At the top, a more metallic, steel-drum-like sound. More flute, wind instruments further down. You could give the sound another colour, character.
7. Did you feel you became better or more confident with the instrument and the controls on the left over time?
Yes. Especially once you understood their areas of use.
8. Do you feel the controls on the left are useful in any way? How?
Yes, they fitted. It would have been good if both could, well, the fact that the pitch bend went back to the middle was frustrating. Wished the mod wheel was bigger, could reach higher values. Just higher.


Spa music, ambient music. Dance music (the bass notes), as a substitute for piano, as a substitute for strings, as a substitute for wind instruments, and perhaps as a theatre effect; if something dramatic happens you can play a tone that you can then alter.
9. Did you think the sensors were suited to the parameters? Why or why not?

Appendix B

MIDI controller data from tasks

The following are the last 8 seconds of the controllability tasks.

