



Correlations Between Acoustic Features, Personality Traits, and Perception of Soundscapes

PerMagnus Lindborg*#

*Nanyang Technological University, Singapore; #KTH Royal Institute of Technology, Stockholm

    [email protected]

    ABSTRACT

The present study reports results from an experiment that is part of the Soundscape Emotion Responses (SSER) study. We investigated the interaction between psychological and acoustic features in the perception of soundscapes. Participant features were estimated with the Ten-Item Personality Inventory (Gosling et al. 2003) and the Profile of Mood States for Adults (Terry et al. 1999, 2005), and acoustic features with computational tools such as MIRtoolbox (Lartillot 2011). We made ambisonic recordings of Singaporean everyday sonic environments and selected 12 excerpts of 90 seconds duration each, in 4 categories: city parks, rural parks, eateries, and shops/markets. 43 participants rated the soundscapes according to the Swedish Soundscape Quality Protocol (Axelsson et al. 2011), which uses 8 dimensions related to quality perception. Participants also grouped 'blobs' representing the stimuli according to a spatial metaphor and associated a colour with each. A principal component analysis determined a set of acoustic features that span a 2-dimensional plane related to latent higher-level features relevant to soundscape perception. We tentatively named these dimensions Mass and Variability Focus; the first depends on loudness and spectral shape, the second on amplitude variability across temporal domains. A series of repeated-measures ANOVA showed patterns of significant correlations between perception ratings and the derived acoustic features in interaction with personality measures. Several of the interactions were linked to the personality trait Openness and to aural-visual orientation. Implications for future research are discussed.

    I. BACKGROUND

The present study is inspired by research in the fields of soundscape studies, music emotion, and acoustics. Describing everyday sonic environments, understanding how they affect people, and designing soundscapes for pleasure, improved health, and efficient communication are weighty tasks that no one can take on alone. The Positive Soundscape Project (Davies et al. 2007, 2009; Adams et al. 2008), with its broad approach to cross-disciplinary research, involved acousticians, sound artists, social scientists, psychoacousticians, and experts in physiology and neuroscience. The authors hold that the multiple methods are complementary: for example, the ecological validity of soundwalk annotations complements the accuracy of measurements in a laboratory setting. A project with a community-science and perhaps activist ambition is NoiseTube (Maisonneuve et al. 2009, 2012), aiming to focus resources that can support decision-making about the public sonic environment. The importance of soundscape research for architects and urbanists has been underlined by many authors (e.g. Cain 2007, Andringa 2010, Jennings 2009).

Nilsson studied perception of the quality of sonic environments in city parks and suburban areas in Stockholm and found that "sound source identification… within the soundscape was… a better predictor of soundscape quality than measured sound levels" (Nilsson 2007). To provide a tool for measuring the perception of urban soundscapes, in particular the influence of traffic on quality, he and collaborators developed the Swedish Soundscape Quality Protocol (SSQP; Axelsson, Nilsson & Berglund, March 2011). It consists of 5 items and was designed to enable on-site surveys where a minimal amount of time and attention can be expected from ad-hoc raters passing by.

Axelsson, Nilsson and Berglund (2010) investigated how people perceived recordings of soundscapes that had been categorised as predominantly 'technological', 'natural' or 'human'. The authors collected ratings of 50 soundscapes on 116 unidirectional scales, using adjectives (such as lively, brutal, warm…). A principal component analysis led to a solution with 3 meaningful dimensions: pleasantness (50%), eventfulness (16%) and familiarity. Hence SSQP includes an item where the quality of soundscapes is rated on 8 adjectival unidirectional dimensions, vector-added together in a 2-dimensional circumplex model. They summarised the results as: "soundscape excerpts dominated by technological sounds were mainly perceived as unpleasant and uneventful, and soundscape excerpts dominated by human sounds were mainly perceived as eventful and pleasant". Axelsson developed the analysis methods further in his thesis (2011), which presents a model where a latent variable, Information Load, is a key factor underlying aesthetic appreciation.
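The vector addition into the circumplex can be sketched as follows. This is a hypothetical illustration assuming the 8 adjectives sit at 45-degree intervals, with 'pleasant' on the positive Pleasantness axis and 'eventful' on the positive Eventfulness axis; the angle assignment is our assumption, not taken from the paper.

```python
import numpy as np

# Assumed circumplex layout: adjectives at 45-degree intervals (our assumption).
ADJECTIVES = ["pleasant", "exciting", "eventful", "chaotic",
              "annoying", "monotonous", "uneventful", "calm"]
ANGLES = np.deg2rad([0, 45, 90, 135, 180, 225, 270, 315])

def circumplex_coords(ratings):
    """Vector-add 8 adjective ratings into (Pleasantness, Eventfulness)."""
    r = np.asarray(ratings, dtype=float)
    pleasantness = float(np.sum(r * np.cos(ANGLES)))
    eventfulness = float(np.sum(r * np.sin(ANGLES)))
    return pleasantness, eventfulness

# A soundscape rated high on 'pleasant' and 'calm', low elsewhere, should land
# in the pleasant/uneventful quadrant:
p, e = circumplex_coords([1.0, 0.2, 0.1, 0.0, 0.0, 0.1, 0.3, 0.9])
```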

A pioneering work on perceptual-emotional qualities in music was made by Wedin (1972), leading over the past decade to a wealth of research and computational methods. Examples include CUEX, focussing on onset detection in music (Friberg et al. 2007, 2012); MIRtoolbox, with a broad range of spectral and other features (Lartillot et al. 2008, Lartillot 2011); and CataRT, running in real-time as an integrated system for analysis and performance (Schwarz). For an overview, see the MIREX webpages. Much of the knowledge gained in music can be applied to soundscapes.

It seems obvious that people perceive sounds in different ways, but what kinds of patterns are involved? Vuoskoski and Eerola (2011, 2012) investigated individual differences in emotional processing, specifically the role of personality and mood in music perception and preference ratings. They hypothesised that both personality and mood would contribute to the perception of emotions in trait- and mood-congruent manners, and that mood and personality would also interact in producing affect-congruent biases. The authors investigated how mood may moderate the influence of personality traits on


emotion perception in excerpts of film music which had been evaluated in a pilot experiment according to perceived basic emotion in five categories (anger, fear, happiness, sadness and tenderness). They concluded that "the degree of mood-congruence in the emotion ratings is at least to some extent moderated by personality traits". The idea behind the authors' analysis method was to parcel out the variability of short-term mood swings from that of the more persistent personality traits. Among other things, they found significant correlations between ratings of perceived happiness in the music and vigorous mood state, interacting with extravert personality. They also found a correlation between vigour and happiness ratings that increased with increasing extraversion.

To gauge their subjects' mood, Vuoskoski used the Profile of Mood States in a version adapted for use with adults (POMS; Terry et al. 1999, 2003). It is a questionnaire with a single instruction: "Mark the answer which best describes how you feel right now", followed by 24 adjectives. The subject answers by marking a point on a 5-point Likert scale anchored by "Not at all" and "Extremely well". The adjectives include "muddled", "alert", "nervous" and so forth. A score is calculated for each of 6 mood dimensions: Anger, Confusion, Depression, Fatigue, Tension and Vigour. POMS has been reported to have good concordance with other measurement instruments (Morfeld et al. 2006). Mood states are considered relevant for the current or same-day emotional state.

Personality traits are considered to be stable over longer durations, even over life spans. The Big Five (John & Srivastava 1999, McCrae & Costa 1997) is a well-established model. For our study, we used the Ten-Item Personality Inventory (TIPI), a "light-weight version" developed by Gosling et al. (2001, 2005). TIPI is headed by a single self-reflecting statement, "I see myself as…", and then lists 10 pairs of adjectives, such as "anxious, easily upset" or "extraverted, enthusiastic". The subject answers by marking a point on a 7-point Likert scale anchored by "Disagree strongly" and "Agree strongly". TIPI produces a score in 5 personality dimensions, named Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional Stability. The last is simply the reverse of the more common Neuroticism, part of the OCEAN models. Gosling has shown that TIPI has good construct validity when compared with both the 44-item BFI (used by Vuoskoski) and larger instruments. It is less specific but has an advantage when experiment designs allot little time.
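As an illustration of how such a two-items-per-trait instrument is scored, here is a minimal sketch. The item ordering and pairing below are hypothetical, not the published TIPI key; the reverse-coding rule (8 minus the response on a 1-7 scale) is the standard convention.

```python
# Hypothetical TIPI-style scoring: each trait is the mean of one regular item
# and one reverse-coded item on a 1..7 scale. Item order is illustrative only.
def reverse(x):
    """Reverse-code a response on a 1..7 Likert scale."""
    return 8 - x

def score_tipi(responses):
    """responses: 10 ratings (1..7), ordered (regular, reversed) per trait."""
    traits = ["Extraversion", "Agreeableness", "Conscientiousness",
              "EmotionalStability", "Openness"]
    scores = {}
    for i, trait in enumerate(traits):
        regular = responses[2 * i]
        reversed_item = reverse(responses[2 * i + 1])
        scores[trait] = (regular + reversed_item) / 2
    return scores

s = score_tipi([6, 2, 5, 3, 7, 1, 4, 4, 6, 2])
```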

The Soundscape Emotion Responses (SSER) study is part of research aiming to chart people's responses to everyday soundscapes in different modalities: perceptual, physiological, movement, colour association and verbal commentary. Our research is localised to Singapore, a fast-developing city where people's general attention to the quality of sonic environments has low priority. The long-term aim is to provide knowledge as a resource for architects, urban planners, and decision-makers. One area of application is to contribute to improved learning efficiency in local schools. The singular study in a Singaporean context related to this matter (Nyuk & Wy, 2003) calls for further research into soundscape perception in an urban, tropical environment.

In short, the aim of our present study has been to investigate the interaction between psychological and acoustic features in the perception of soundscapes, using established analytical tools.

    II. METHOD

A. Soundscape Stimuli

We made ambisonic recordings of Singaporean public spaces using a TetraMic (#2144) onto a Sound Devices 788T mobile recording device. For inclusion in the SSER study, soundscapes had to be sufficiently long for the psycho-physiological study (not part of the present text) and have high intra-stimulus homogeneity. The collection as a whole had to be large and varied enough to represent a meaningful sample of Singaporean environments, in some sense "everyday". The choice was also influenced by whether a soundscape was understandable as representing a physical environment in the absence of all other sensory information. The size of the set was limited so that the experiment could be completed in one hour. The final selection consisted of 12 excerpts of 90 seconds duration, in four "a priori" categories roughly dividing the set into rural and urban parks, eateries, and places to shop.

In parallel with the audio recordings, we took SPL measurements at several of the locations with an Extech 407790 using the LeqP(30s) unweighted setting. The B-format files were transformed into binaural (HRTF KEMAR1) with Harpex-B for the perceptual ratings using studio-quality headphones. Acoustic feature computation was performed on the W (mono omni) channel at +3 dB. LeqA and other loudness measures were calculated from on-site LeqP when available, or by comparing the dBFS level to that of a recording with known LeqP. See Table 1 below for the main features of the set of soundscapes used as stimuli in the present study.
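The dBFS-comparison step amounts to simple level arithmetic, sketched below. This is our own illustration, with a hypothetical function name and made-up values: if a reference recording with a known on-site Leq reads a certain dBFS level, another recording made with identical gain settings can be assigned an estimated Leq by applying the dBFS offset.

```python
def estimate_leq(target_dbfs, reference_dbfs, reference_leq):
    """Estimate Leq (dB SPL) of a recording from its dBFS level,
    given a reference recording with known on-site Leq and identical gain."""
    return reference_leq + (target_dbfs - reference_dbfs)

# Reference clip: measured 73 dB LeqP on site, reads -18 dBFS in the file.
# Target clip reads -12 dBFS, i.e. 6 dB hotter, so its estimated Leq is 79 dB.
leq = estimate_leq(-12.0, -18.0, 73.0)
```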

    Table 1. Overview of the SSER soundscapes.

name      | description                                                                                                        | LeqA
market    | Little India Market, large & old-style, covered roof, medium crowded weekday afternoon, butchers chopping meat     | 76
hawker    | Queen Street Block 270 Foodcourt, large & worn-down, covered roof, very crowded, fans, scraping plates             | 73
construct | Queen Street, small concrete & grass park, benches with 5 pax, 1 child playing, diesel generator nearby            | 77
café      | Café "Food for Thought", Waterloo Street, aircon, chairs scratching floor, church bells in distance                | 69
bolly     | Bollywood Veggies, tropical park, no people weekday afternoon, airplanes & diesel pump in distance                 | 52
night     | Little India small park near Bagdad Street, benches with 2 pax, some passers-by, bar music in distance             | 55
resto     | Bussorah Road street-side restaurants, very crowded Friday night, dense slow-moving traffic                        | 69
shop      | Golden Mile Mall, mainly Thai shops, steady stream of people near escalators, child bouncing a ball, dense traffic in distance | 73
oriol     | Sungei Buloh nature reserve, mangrove, no people in early morning, 2 orioles singing                               | 56
water     | Sungei Buloh nature reserve, water sounds, small waves, no people, facing Johor Bahru (city) in distance           | 58
crowd     | VivoCity Mall outside Golden Village Cinemas, near escalators, extremely crowded Saturday night                    | 84
people    | VivoCity Mall rooftop open area towards Sentosa, crowded, groups of young people laughing, chatting                | 74


B. Participants

Participants (N=43) were university students in Singapore (N=32) and Norway (N=11), currently enrolled in music or sound-related courses, who took part in the study as part of course requirements.

For the present study, we developed a screen-based interface in MaxMSP, running on individual computers in a school lab with up to 18 participants at a time. Neither lab (Singapore or Norway) was acoustically isolated, but both were generally suitable for sound work. There was no loud intermittent sound disturbance at any time (e.g. rain or construction noises). The computer sound output had been adjusted to a predetermined level (three clicks below maximum), the same for all participants. Identical studio-quality circum-aural headphones were used, but no measure of the actual SPL in the ear canal could be taken. Neither were we able to conduct a test to gauge whether any hearing impairment was present.

Participants were given verbal information about the experiment before starting, and the same text was displayed on the computer screen. Participants completed three tasks: provide information about themselves, rate each soundscape, and group the 12 soundscapes by similarity.

C. Procedure

In the first part, participants anonymously filled out forms for a) general participant data (GPD): age, gender, handedness, language and race (5 items); b) activities, i.e. the typical number of hours per day & night spent resting, sleeping, working or studying, making music, doing visual arts, watching TV or playing video games, doing sports, or socialising (7 items); c) the relative importance they accorded the five senses; d) the Ten-Item Personality Inventory (TIPI) as above, but with adjectives in randomised order; and e) the Profile of Mood States for Adults (POMS), as above, but with adjectives in randomised order. The participants were free to complete the questions at their own pace, typically taking 10-12 minutes for this part.

In the second part, the participants donned studio-quality headphones (and thus became raters). There were two tasks: 'grouping' the 12 soundscapes, and 'rating' each soundscape.

The interface for the grouping task consisted of a 'white square' of apparent screen size very close to 100 mm x 100 mm, containing 12 anonymous, at first grayish 'blobs'. By clicking on one, the associated soundscape started playing at a random position in the soundfile (smoothly looping at the end of file). The blob could be dragged to a position anywhere in the white square; see Figure 2. The written instructions were: "1) click on a blob to listen to its soundscape; 2) double-click to open, and rate how you perceive the soundscape; 3) drag blobs around to group similar soundscapes next to each other." No further explanation was given of how to interpret the word "similar", even though some participants asked.

By double-clicking a blob, the user could open a 'rating' interface for that soundscape, as seen in Figure 1. The participants entered ratings by adjusting horizontal continuous sliders with an apparent screen length very close to 100 mm, marked by labels as in SSQP. 3 items were employed:

"To what extent do you presently hear the following 5 types of sounds?" [Traffic noise, Fan noise, Other noise, Sounds from human beings, Natural sounds] The order of the 5 types/categories was vertically randomised for each rater and soundscape, and the sliders had equidistant labels [Do not hear at all, A little, Moderately, A lot, Dominates completely]. In the analysis, we refer to the results as a 5-dimensional variable 'content'.

"Overall, how would you describe the present surrounding sound environment?" The slider had equidistant labels [Very good, Good, Neither good nor bad, Bad, Very bad]. In the analysis, we refer to the results as a variable 'overall'.

"To what extent do you agree with the 8 statements below on how you experience the present surrounding sound environment?" [pleasant, exciting, eventful, chaotic, annoying, monotonous, uneventful, calm] The order of the 8 adjectives was vertically randomised for each rater and soundscape, and the sliders had equidistant labels [Agree completely, Agree largely, Neither agree nor disagree, Disagree largely, Disagree completely].

    Figure 1. The ‘rating’ task interface.


SSQP has 2 more items, but as they aim to capture the relationship between soundscape and visual elements of the landscape, they were not employed in this study.

Instead, the rater was asked to "Associate the soundscape with a colour:" using a Hue-Saturation-Luminosity (HSL) colour picker. We were inspired to include colour by Bresin (2005). Whereas he used a set of predetermined colour patches, our interface had a swatch allowing quasi-continuous choice of colours. When all perceptual features had been rated, this colour became visible in the white-square 'grouping' interface (see Figure 2), and when all soundscapes had been rated and grouped (which typically took 50 minutes), the second part was completed, and with it the experiment as a whole.

Finally, the software collected some behind-the-scenes statistics of rater behaviour. The position in the soundfile at the moment a slider was set was registered, because we suspected that a marked event in a soundscape could cause raters to focus on it; if significant, this would lead us to question intra-stimulus homogeneity. The software also calculated the total amount of time in seconds that a rater spent listening to each soundscape. We speculated that this could serve as a measure of attention.

Figure 2. The 'grouping' task interface. Ratings for two soundscapes have been fully completed.

    III. RESULTS

A. Participants

The mean age of the 43 participants was 22 years, with most Norwegians being 19 and Singaporeans being between 19 and 26 years old. 33 participants were women (8/11 Norwegians, 25/32 Singaporeans), but the gender imbalance between groups was not significant (χ²(42) = 6.23, p = 1). Nevertheless, results that depend upon gender should be interpreted carefully. We will discuss four aspects of the participants' profiles: activities, senses, personality traits and mood, and argue that the sample is useable for the analyses undertaken.

A one-way ANOVA with group (Singaporean or Norwegian) as the independent variable against all other participant data showed that the two groups differed significantly at the two-tailed alpha = 0.05 level in three regards: age (F(1,26) = 38.2, p = 0.000); the amount of music-making (F(1,26) = 11.9, p = 0.003); and the amount of time spent watching TV or playing video games (F(1,26) = 6.80, p = 0.019). The large difference in music-making is explained by the fact that all the Norwegian participants were students at a conservatory. The difference in watching (Singaporeans twice as many hours as the Norwegians) might be explained by the fact that a large portion of the Singaporean participants were students at a school predominantly for the visual arts. On a typical day, the participants spent 7.9 hours resting, 5.5 hours doing work or study, and 3.8 hours socialising. Sports occupied the participants for 1.3 hours. Doing visual arts, i.e. drawing or painting, clocked in at less than one hour, though a few Singaporeans spent up to 6 hours daily; however, the group mean difference was not significant (F(1,26) = 4.07, p = 0.061). As expected, participants self-reported sight as their foremost sense, followed by hearing and touch, then taste and smell. The latter pair correlated strongly, at r = 0.66, in line with e.g. (Lindborg 2010a).

More importantly, the ANOVA revealed that there was no significant group difference in regard to any of the personality (TIPI) or mood (POMS) measures; the closest miss was for Emotional Stability (F(1,26) = 2.15, p = 0.16). This lends support to the assumption that TIPI and POMS are useable in the following analysis. Figure 3 shows the TIPI means with confidence intervals, between groups as well as with normative data from (Gosling 2003).

    Figure 3. Barplots of TIPI with normative data.

Within TIPI, no correlation was significant, and the mean of pairwise correlations was 0.071. The low value indicates that the 5 dimensions are reasonably orthogonal in our sample.

[Figure 3 compares mean TIPI scores (95% confidence limits, scale 0-7) on Openness, Conscientiousness, Extraversion, Agreeableness and Emotional Stability for Singaporeans (N=32), Norwegians (N=11), and normative data (N=1813).]


with event density, which detects relatively slow-rate amplitude variation. We tentatively label it Variability Focus. Variability refers to changes, i.e. rhythm and variation across loudness and pitch domains, and Focus refers to the time domain of the listening context, i.e. from slow and global to fast and local. We may think of sounds with high Variability Focus as sizzling or whizzing, and sounds with low Variability Focus as thumping or booming.

Between these two new dimensions appear, quite neatly, two pairs of acoustic features measuring opposing constructs. CminusA estimates the energy difference between C- and A-weighted Leq. We may reason that this difference should be smaller for sounds of smaller Mass, because their energy is spread out and resembles the C-weighting curve, while at the same time the relative dominance of low frequencies gives room for slow periodicity, that is, lower Variability Focus. N10m90 measures the relative amount of shorter, louder (more massive) sounds against the background. We may speculate that sound events "stick out" from the background when they have higher pitch, and hence a higher zero-cross count. Tempo quite naturally covaries with low-timescale Variability Focus, but may also be more prevalent, and easier for the algorithm to detect, for louder (more massive) sounds. Finally, rolloff quite naturally covaries with both spectspread and zero-cross count, as all 3 measures are likely to increase for sounds with a lot of high-frequency energy.
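An N10-minus-N90 feature of the kind described above can be sketched as a percentile-level difference (our own reconstruction; the authors' exact implementation is not given): the level exceeded 10% of the time captures loud foreground events, while the level exceeded 90% of the time captures the steady background.

```python
import numpy as np

def n10_minus_n90(levels_db):
    """Difference between the level exceeded 10% of the time (foreground)
    and the level exceeded 90% of the time (background)."""
    l10 = np.percentile(levels_db, 90)  # exceeded 10% of the time
    l90 = np.percentile(levels_db, 10)  # exceeded 90% of the time
    return float(l10 - l90)

# A steady background at 55 dB with occasional 75 dB events: the feature is
# positive, and bounded by the 20 dB span of the data.
levels = np.array([55.0] * 90 + [75.0] * 10)
d = n10_minus_n90(levels)
```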

We tentatively adopt the Mass - Variability Focus plane for acoustic features, and calculate coordinates for the 12 soundscapes.
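The coordinate calculation can be sketched as a standardise-then-PCA step. This is a minimal illustration with made-up data; the paper's analysis was computed over its full acoustic feature set.

```python
import numpy as np

# Illustrative feature matrix: 12 soundscapes x 4 acoustic features
# (e.g. loudness, spectral shape, CminusA, N10m90) -- values are made up.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 4))

# Standardise each feature, then compute PCA via SVD of the centred data.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
coords = Xs @ Vt[:2].T              # 12 x 2 coordinates in the 2D plane
explained = (S ** 2) / np.sum(S ** 2)  # variance share per component
```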

D. Repeated-Measures Analysis of Variance

The dependent variables have been discussed: overall, timespent, Pleasantness, Eventfulness, pos.d2m, and col.d2m. Looking at the correlation matrix in Table 3, we suspected that analyses of overall and Pleasantness would be likely to show similarities.

    Table 3. Correlations between dependent variables.

              overall  Pleasantness  Eventfulness  pos.d2m  col.d2m
Pleasantness    0.79
Eventfulness   -0.29     -0.29
pos.d2m         0.09      0.11        -0.12
col.d2m         0.22      0.22        -0.11        0.07
timespent      -0.06     -0.01         0.03        0.05    -0.02

For the independent variables describing psychological features, we included aural-visual orientation, TIPI (5 dimensions) and POMS (6 dimensions), and for the independent variables describing stimulus features, the derived acoustic components Mass and Variability Focus. We are aware that employing 11 measures for 43 participants risks overfitting; however, the fact that TIPI and POMS are well-established instruments makes their inclusion reasonable. The results in Table 4 include interactions with p


From the ANOVA results we can make a number of inferences. Note that we are only discussing interaction effects. In the plots below we have centered all variables and divided the participants into three groups with different levels on the independent rater feature. For example, the blue line refers to the 33% with the lowest vigour, the gray to the middle-vigour 33%, and the red to the 33% with the highest vigour score.
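The tertile grouping behind these plots can be sketched as follows. This is our own illustration with synthetic data and a hypothetical function name: participants are split into thirds on a trait score, and the correlation between an acoustic dimension and a rating is computed within each third.

```python
import numpy as np

def tertile_correlations(trait, x, y):
    """Split observations into trait tertiles; correlate x with y per group."""
    trait, x, y = map(np.asarray, (trait, x, y))
    lo, hi = np.percentile(trait, [33.3, 66.7])
    groups = {"low": trait <= lo,
              "mid": (trait > lo) & (trait < hi),
              "high": trait >= hi}
    return {name: float(np.corrcoef(x[mask], y[mask])[0, 1])
            for name, mask in groups.items() if mask.sum() > 1}

# Synthetic data with a negative x-y relation by construction:
rng = np.random.default_rng(1)
trait = rng.normal(size=120)
x = rng.normal(size=120)
y = -0.6 * x + rng.normal(scale=0.5, size=120)
r = tertile_correlations(trait, x, y)
```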

    1) Overall and Pleasantness

People who were more open-minded perceived larger quality differences in terms of soundscape Mass, but smaller quality differences in terms of Variability Focus. Those who paid more attention to the sense of hearing generally perceived larger quality differences. The same situation holds for the compound rating of Pleasantness, as expected from Table 3.

See Figure 8 for a plot of the interaction between soundscape Mass and Openness in their correlation with Pleasantness ratings. To get a high score on TIPI Openness, the respondent would tend to see herself as more open to new experiences, more complex, less conventional and less uncreative. To explain the results, could it be that Openness acts as a moderator in the relationship between soundscape Mass and the perception of Pleasantness? The same question applies to overall quality.

The interaction effects involving mood states are more difficult to interpret.

Figure 8. Interaction plot of Pleasantness ~ Mass : Openness. Openness tertiles: high (23%, r = -0.70), mid (60%, r = -0.69), low (16%, r = -0.59).

    2)  Eventfulness

People who paid more attention to the sense of hearing perceived larger differences in Eventfulness in terms of sound Mass. Since our definition of Mass is based on acoustic feature detection of (low-frequency) foreground events, this result confirms that aurally oriented people are better able to tell the difference. See Figure 9 for an interaction plot.

In Eventfulness, open-minded people perceived smaller differences in terms of soundscape Mass. This could be explained by the fact that the two dimensions are (theoretically) orthogonal, and we are looking at a very small effect.

Finally, high Conscientiousness, Emotional Stability and vigour were all associated with more discriminating ratings towards soundscapes when spaced along the Mass dimension.

Figure 9. Interaction plot of Eventfulness ~ Mass : aural-visual. Aural-visual tertiles: high (26%, r = 0.68), mid (30%, r = 0.68), low (44%, r = 0.40).

    3)  Blob Position Spread and Colour Spread 

People who scored low on Openness showed a tendency to make larger differences in the amount of spreading the blobs out in the 'white-square' interface as a response to soundscape Mass. In parallel, the same pattern applies to their broader usage of the colour palette. To get a low score on TIPI Openness, the respondent would tend to see herself as less open to new experiences, less complex, more conventional and more uncreative. It is remarkable that such a self-image is associated with a larger spread in terms of blob position and colours.

    4) Time Spent Listening 

In a similar way, people scoring low on Openness showed a tendency to make larger differences in the amount of time spent listening in regard to Variability Focus. One may speculate that they were "spreading out" their time in the same way they spread out blobs and colours. See Figure 10 for an interaction plot.

    IV. CONCLUSION

We used the Ten-Item Personality Inventory, the Profile of Mood States protocol, and a simple measure of aural-visual orientation as psychological features of the raters. Our results from the ratings of soundscape perception along the 8 adjectival dimensions of the Swedish Soundscape Quality Protocol lend support to the 2-dimensional model proposed by Axelsson, Nilsson & Berglund. We found patterns in the way raters made colour associations to soundscapes, and in the way they grouped soundscapes according to a spatial metaphor.



We used principal component analysis to determine a set of acoustic features that span a 2-dimensional plane related to latent higher-level features specifically relevant to soundscape perception. We tentatively named these new dimensions Mass and Variability Focus; the first depends primarily on loudness and spectral shape, the second on the relative prominence of amplitude variability at either end of the spectrum.

Figure 10. Interaction plot of Timespent ~ Variability Focus : Openness. Openness groups: high (56%, r = -0.08), mid (0%, r = NA), low (44%, r = -0.22).

A series of ANOVA revealed patterns of significant correlations between perception ratings and the derived acoustic features in interaction with personality measures. Several of the interactions were linked to the personality trait Openness and others to aural-visual orientation; Conscientiousness and Emotional Stability were also represented.

Cross-correlation and analysis of mean pairwise correlation indicated that the TIPI dimensions were relatively independent and more reliable than the POMS measures. The interaction results involving personality traits were relatively straightforward to interpret, while those with mood were not. We conclude that personality traits are more promising in regard to soundscape perception studies.

When it comes to the perceptual ratings as a whole, we aim to pursue the analysis by using multidimensional scaling (MDS) of overall (1D), blob position (2D), colour (3D), content (5D) and quality (8D). These 5 dissimilarity matrices can be cross-correlated to reveal further patterns in the way participants use space, colour and semantic descriptions of soundscapes.
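The planned MDS step can be sketched with classical (Torgerson) scaling. This is our own minimal illustration using hypothetical blob positions, not the authors' analysis code: a pairwise dissimilarity matrix is double-centered and eigendecomposed to recover a low-dimensional configuration.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n dissimilarity matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Hypothetical blob positions for 12 soundscapes in the 100 mm white square;
# Euclidean distances between them form the dissimilarity matrix.
rng = np.random.default_rng(2)
pos = rng.uniform(0, 100, size=(12, 2))
D = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
emb = classical_mds(D, k=2)
```

Because the distances here are Euclidean and 2-dimensional, the embedding reproduces them exactly up to rotation and translation.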

When it comes to the acoustic features, it remains to be shown whether soundscape perception can be adequately described with a 2-dimensional model, perhaps with dimensions such as the proposed Mass and Variability Focus. In future work, we will certainly need larger numbers of soundscape stimuli, perhaps of shorter duration, and a larger group of rater-participants. It is also possible to investigate computational methods for feature selection optimisation, as discussed in (Eerola, Lartillot & Toiviainen 2009), who suggest using


    Projection to Latent Structure (PLS) regression in this kindof situation. PLS is a multivariate regression whereby a

     predictor-variable space is projected onto a space withsmaller dimensionality. The problem with ‘numbercrunching’ methods in general is that the explanatorydimensions are hard to grasp, and it becomes a challenge toexpress their meaning verbally in simple terms.
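The core of PLS can be illustrated with a minimal one-component, NIPALS-style extraction: unlike PCA, the latent direction is chosen to maximise covariance with the response, not just predictor variance. This is an illustrative toy, not the pipeline proposed by Eerola et al., and the data are synthetic.

```python
import numpy as np

def pls1_component(X, y):
    """First PLS component for predictors X and response y: the unit
    weight vector w maximising covariance between the score t = Xw and y,
    plus the regression coefficient of y on that score."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc                  # covariance-maximising direction
    w /= np.linalg.norm(w)
    t = Xc @ w                     # scores: the latent predictor
    b = (t @ yc) / (t @ t)         # regression of y on the score
    return w, t, b

# Toy data: 30 observations, 5 predictors, one response
rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -0.5]) + rng.normal(scale=0.2, size=30)
w, t, b = pls1_component(X, y)
print(w.shape)  # (5,): one interpretable weight per predictor
```

The weight vector w is exactly where the interpretability problem noted above arises: each latent dimension is a blend of all predictors, and its verbal meaning must be read off from the pattern of weights.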

    APPENDIX

    SSER soundscape stimuli and raw response data are available at http://www.permagnus.net.

    REFERENCES

    Adams, M., Bruce, N., Davies, W., Cain, R., Carlyle, A., Cusack, P., Hume, K., Jennings, P. & Plack, C. (2008). “Soundwalking as methodology for understanding soundscapes”. In Proceedings of the Institute of Acoustics Spring Conference 2008 – Widening Horizons in Acoustics, Reading, UK, April 2008, pp. 552-558.

    Andringa, Tjeerd C. (2010). “Soundscape and core affect regulation”. Proceedings of Inter-Noise 2010, Portugal.

    Axelsson, Östen (2011). Aesthetic Appreciation Explicated. PhD Thesis, Stockholm University.

    Axelsson, Östen, Nilsson, Mats E. & Berglund, Birgitta (2010). “A principal components model of soundscape perception”. Journal of the Acoustical Society of America, 128 (5), November 2010.

    Bodin, Lars-Gunnar (2012). “Meaning and Meaningfulness in Electroacoustic Music”. Keynote address to the Conference of Electroacoustic Music Studies (EMS) 2012, Stockholm.

    Bresin, Roberto (2005). “What is the color of that performance?”.International Computer Music Conference (ICMC), Barcelona,2005.

    Cain, R., Jennings, P., Poxon, J. & Scott, A. (2009). “Emotional dimensions of a soundscape”. In Proceedings of Inter-Noise 2009, 23-26 August, Ottawa, Canada.

    Cain, R., Jennings, P., Adams, M., Bruce, N., Carlyle, A., Cusack, P., Davies, W., Hume, K. & Plack, C. (2008). “SOUND-SCAPE: A framework for characterising positive urban soundscapes”. In Proceedings of Acoustics 08 – Euronoise, the European conference on noise control, Paris, France, June 2008, pp. 1019-1022.

    Cain, R. & Jennings, P. (2007). “Developing best practice for lab-based evaluations of urban soundscapes”. In Proceedings of Inter-Noise 2007, Istanbul, August 2007.

    Davies, W., Adams, M., Bruce, N., Cain, R., Carlyle, A., Cusack, P., Hume, K., Jennings, P. & Plack, C. (2007). “The Positive Soundscape Project”. In Proceedings of the 19th International Congress on Acoustics, Madrid, September 2007.

    Davies, William J., Adams, Bruce, Marselle, Cain, Jennings, Poxon, Carlyle, Cusack, Hall, Hume & Plack (2009). “The positive soundscape project: A synthesis of results from many disciplines”. Proceedings of Inter-Noise 2009, Canada.

    Eerola, T., Lartillot, O. & Toiviainen, P. (2009). “Prediction of multidimensional emotional ratings in music from audio using multivariate regression models”. 10th International Society for Music Information Retrieval Conference (ISMIR 2009).

    Friberg, A., Schoonderwaldt, E. & Juslin, P. N. (2007). “CUEX: An algorithm for automatic extraction of expressive tone parameters in music performance from acoustic signals”. Acta Acustica United with Acustica, 93, 411-420.

    Genesis S.A. (2009). Loudness Toolbox, for MatLab. http://www.genesis.fr (accessed 14 May 2012).

    Gosling, Samuel D., Rentfrow, Peter J. & Swann Jr., William B. (2003). “A very brief measure of the Big-Five personality domains”. Journal of Research in Personality, 37, 504-528.





    Jennings, P. & Cain, R. (2009). “A framework for assessing the change in perception of a public space through its soundscape”. In Proceedings of Inter-Noise 2009, 23-26 August, Ottawa, Canada.

    John, Oliver P. & Srivastava, Sanjay (1999). “The Big Five Trait Taxonomy: History, Measurement, and Theoretical Perspectives”. Chapter 4, pp. 102-38 in Handbook of Personality: Theory and Research. 2nd edition. Pervin, Lawrence A. & John, Oliver P. (Eds). The Guilford Press, 1999.

    Lartillot, Olivier, Eerola, Tuomas, Toiviainen, Petri & Fornari, Jose (2008). “Multi-feature modeling of pulse clarity: Design, validation, and optimization”. International Conference on Music Information Retrieval, Philadelphia, 2008.

    Lartillot, Olivier (2011). MIRtoolbox 1.3.2, for MatLab. http://www.mathworks.com/matlabcentral/fileexchange/24583-mirtoolbox (accessed 14 May 2012).

    Lindborg, PerMagnus (2010b). “Aural and Visual Perceptions of aLandscape”. Unpublished pilot study.

    Lindborg, PerMagnus (2010a). “Perception of emotion portrayal in cartoons by aurally and visually oriented people”. Proceedings of the 11th International Conference on Music Perception and Cognition (ICMPC11), University of Washington, USA. Australia/USA: Causal Productions.

    Maisonneuve, Nicolas, Stevens, Matthias, Niessen, Maria E., Hanappe, Peter & Steels, Luc (2009). “Citizen Noise Pollution Monitoring”. Proceedings of the 10th International Digital Government Research Conference.

    Maisonneuve et al. (2008-11). NoiseTube. http://www.noisetube.net (accessed 14 May 2012).

    McCrae, Robert R. & Costa, Paul T. (1999). “A Five-Factor Theory of Personality”. Chapter 5, pp. 139-53 in Handbook of Personality: Theory and Research. 2nd edition. Pervin, Lawrence A. & John, Oliver P. (Eds). The Guilford Press, 1999.

    MIREX, The Music Information Retrieval Evaluation eXchange. Webpages at http://www.music-ir.org/mirex/wiki/MIREX_HOME (accessed 14 May 2012).

    Morfeld, Matthias, Petersen, Corinna, Krüger-Bödeker, Anja, Mackensen, Sylvia von & Bullinger, Monika (2006). “The assessment of mood at workplace - psychometric analyses of the revised Profile of Mood States (POMS) questionnaire”. Psychosoc Med 2007; 4: Doc06. Published online in May 2007 and available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2736534/ (accessed 14 May 2012).

    Nilsson, Mats (2007). “Soundscape quality in urban open spaces”. In Proceedings of Inter-Noise 2007, Istanbul, Turkey.

     Nygren, Paul (2009). Loudness calculation according to ITU-R BS.1770-1, for MatLab.

    Wong, Nyuk Hien & Jan, Wy Leng Seow (2003). “Total building performance evaluation of academic institution in Singapore”. Building and Environment, 38 (2003), 161-176. Elsevier Science Ltd.

    Schwarz, Diemo (2004). Data-Driven Concatenative Sound Synthesis. PhD Thesis in Acoustics, Computer Science, Signal Processing Applied to Music, Université Paris 6 - Pierre et Marie Curie.

    Schwarz, Diemo (2008). “Principles and Applications of Interactive Corpus-Based Concatenative Synthesis”. JIM 2008.

    Terry, P. C., Lane, A. M. & Fogarty, G. J. (2003). “Construct validity of the POMS for use with adults”. Psychology of Sport and Exercise, 4 (2), 125-139.

    Terry, Peter C., Lane, Andrew M., Lane, Helen J. & Keohane, Lee (1999). “Development and validation of a mood measure for adolescents”. Journal of Sports Sciences, 17 (11), 861-872.

    Vuoskoski, Jonna K. & Eerola, Tuomas (2011). “The role of mood and personality in the perception of emotions represented by music”. Cortex, 47, 1099-1106.

    Vuoskoski, Jonna K. & Eerola, Tuomas (2011). “Measuring music-induced emotion: A comparison of emotion models, personality biases, and intensity of experiences”. Musicae Scientiae, 15, 159.

