
    MUSIC THAT LISTENS TO WHAT'S GOING TO HAPPEN: INTERNET-ENHANCED, SELF-ADAPTING SOUNDSCAPES

    Andrea Cera, Composer

    Conservatorio "C. Pollini", Padova

    Accademia di Belle Arti di Brera, Milano

    ABSTRACT

    In this text, I describe a project which brings together three elements: 1) real-time soundscape analysis and imitation; 2) the aesthetics of elevator music, ambient music and musique d'ameublement; 3) the Internet as a way to find and analyze sounds that will be heard in the immediate future (from a static point, or from a moving point). I explored the first two elements in several works: Reactive Ambient Music (2005), Nature (2005), Undertones (2004/2006) and Dueling Zombies (2006/2007).

    1. SELF-ADAPTING SOUNDSCAPES

    The starting point of this project is to develop technologies to create (in real time) an intelligent audio counterpoint to noise pollution (one of the pioneers in this field is Achim Wollscheid; another source of inspiration is the work of Tristan Jehan). The resulting soundscape is a mixture of real and artificial sounds, the latter carefully hidden, in order to simulate the presence of some sort of order in the chaotic turbulence of a given state of noise pollution. The system is composed of:

    - a "center" (where we are living)
    - a "computing engine" (the computer for analysis and sound production)
    - a "speaker system" (the way we diffuse the sounds produced by the "computing engine")
    - one or more "listening points" (the devices that record sound and transfer it to the "computing engine"; they have to be placed far from the "center").
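    The four roles above can be sketched as a minimal simulation loop. This is an illustrative sketch only: all class and method names (ListeningPoint, ComputingEngine, capture, step, and so on) are my own stand-ins, not part of the project described here, and the "analysis" is reduced to mirroring an input level.

```python
class ListeningPoint:
    """A remote microphone that captures audio for the engine."""
    def __init__(self, name, distance_from_center_m):
        self.name = name
        self.distance_from_center_m = distance_from_center_m

    def capture(self):
        # Stand-in for real audio capture: returns a dummy RMS level.
        return 0.5


class SpeakerSystem:
    """Diffuses whatever the computing engine produces."""
    def __init__(self):
        self.played = []

    def play(self, sound):
        self.played.append(sound)


class ComputingEngine:
    """Analyzes incoming frames and produces a response to diffuse."""
    def __init__(self, listening_points, speakers):
        self.listening_points = listening_points
        self.speakers = speakers

    def step(self):
        levels = [lp.capture() for lp in self.listening_points]
        # Placeholder "analysis": respond with the mean incoming level.
        response = sum(levels) / len(levels)
        for sp in self.speakers:
            sp.play(response)
        return response


# One listening point on the roof, one speaker system in the house.
engine = ComputingEngine([ListeningPoint("roof", 0.0)], [SpeakerSystem()])
print(engine.step())  # 0.5
```

    In a real deployment the capture and analysis steps would run on live audio streams; the point of the sketch is only the separation of roles: listening points far from the center feed a single engine, which drives the speaker systems.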

    Figure 1. The house with one ear.

    First example: a house is the "center". A "computing engine" (a computer with a Max/MSP [or other software] patch), linked to several "speaker systems", analyzes sounds coming from a "listening point" (a microphone on the roof).

    Figure 2. The intelligent walkman.

    Second example: my head is the "center". A "computing engine" (a Max/MSP [or other software] patch running on a PDA or laptop), linked to a "speaker system" (headphones), analyzes sounds coming from a "listening point" (an external microphone clipped to the headphones).

    2. STRATEGIES

    Up to now, I have programmed two kinds of "computing engines":

    1) COUNTERPOINT
    - time counterpoint: the computer fills the gaps between two occurring sound events.
    - frequency counterpoint: the computer fills the frequency zones which are not present in the occurring sound event.

    2) AUGMENTATION
    - the computer "augments" an occurring sound event with another sound event.

    I am working on a third strategy:

    3) MORPHING
    - the computer creates rhythmical "holes" in the frequency content of some music being played, according to the sound events occurring outside. The music we want to listen to will be slightly morphed with the sounds from outside.
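    The two counterpoint analyses can be illustrated with a small sketch. This is not the actual patch: the function names, the band width, and the thresholds are my own assumptions, and a naive DFT stands in for whatever spectral analysis the real engine uses. Time counterpoint looks for quiet gaps in an amplitude envelope; frequency counterpoint looks for coarse spectral bands left empty by the incoming sound.

```python
import cmath
import math

def time_counterpoint(envelope, threshold=0.1):
    """Indices where the incoming sound is quiet: the gaps the
    engine would fill with its own material (time counterpoint)."""
    return [i for i, level in enumerate(envelope) if level < threshold]

def frequency_counterpoint(frame, sr, band_hz=100.0, threshold=1.0):
    """Lower edges (Hz) of coarse bands carrying little energy in the
    incoming frame: the zones the engine would fill (frequency
    counterpoint). Uses a naive DFT for self-containment."""
    n = len(frame)
    mags = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]      # magnitude spectrum, bins 0..n/2
    empty = []
    lo = 0.0
    while lo < sr / 2:
        energy = sum(mags[k] for k in range(len(mags))
                     if lo <= k * sr / n < lo + band_hz)
        if energy < threshold:
            empty.append(lo)
        lo += band_hz
    return empty

# A pure 50 Hz tone occupies only the 0-100 Hz band; the remaining
# bands are the frequency zones the engine is free to fill.
sr = 800
tone = [math.sin(2 * math.pi * 50 * t / sr) for t in range(sr)]
print(frequency_counterpoint(tone, sr))   # [100.0, 200.0, 300.0]
```

    In practice both analyses would run continuously on short frames, and the engine would synthesize material constrained to the reported gaps and bands.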

    3. INTERNET ENHANCEMENT

    The use of the Internet could enhance this technology. For me, an audio stream is interesting to use if it allows us to:

    - [SPACE] link places / people that are impossible to physically link (ex. linking people that cannot move; using triangulations to determine the location of specific sounds in a city; linking places that are acoustically or visually interesting to compare, but physically too far apart, like three caves in three parts of the world...)
    - [TIME] move sounds or images faster than is physically possible (ex. anticipating the traffic noise that will arrive in a specific area; ignoring the local time around the world...)

    In my project, audio streams from places relatively far from the "center" would allow the computer(s) to know what kind of sound will be heard in the "center" in the near future.

    Figure 3. The house with many ears.

    Third example: if the "center" is a house, a few "listening points" could be placed 1 km away, in different locations. The sound of cars they record could be sent over the Internet, and then back to the "computing engine", before the cars actually come near the house. In that way, the "computing engine" will anticipate the nature of the sounds to be counteracted, providing a more intelligent reaction. The "listening points" could also be used to calculate triangulations, giving information about the precise location and speed of the sounds to be counteracted.
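    The warning the engine gains from a remote listening point is straightforward to estimate. The sketch below is purely illustrative: the function names and the assumed network delay are mine, and a real system would estimate speed acoustically or by triangulation rather than from two idealized pass times.

```python
def speed_from_two_points(gap_m, t_first_s, t_second_s):
    """Estimate a source's speed from the times it passes two
    listening points separated by gap_m along its path."""
    return gap_m / (t_second_s - t_first_s)

def lead_time_s(distance_m, speed_m_s, network_delay_s=0.2):
    """Seconds of advance warning the engine gets when a remote
    listening point hears a source before it reaches the center.
    network_delay_s is an assumed cost of streaming the audio
    over the Internet."""
    return distance_m / speed_m_s - network_delay_s

# A car passing two microphones 100 m apart, 7.2 s apart, moves at
# about 13.9 m/s (50 km/h); a listening point 1 km from the house
# then gives the engine roughly 72 s to prepare its reaction.
v = speed_from_two_points(100.0, 0.0, 7.2)
print(round(lead_time_s(1000.0, v), 1))  # 71.8
```

    Even with a generous network delay, a kilometer of distance buys the engine tens of seconds, which is what makes "listening to what's going to happen" plausible at all.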

    Figure 4. The pre-cog.

    Fourth example: the Internet could be used to determine the position and the trajectory of someone carrying a wearable system, and walking towards a jammed intersection. The information gathered by GPS would allow the "computing engine" to be prepared for an increasing amount of overall noise to counterbalance.
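    The trajectory prediction in the fourth example reduces to estimating a time of arrival from successive GPS fixes. The sketch below is a minimal illustration under strong assumptions (straight-line motion at the last observed speed, positions already projected to meters); the function name and parameters are mine, not part of the described system.

```python
import math

def eta_seconds(positions, intersection, dt_s=1.0):
    """Rough time-to-arrival at a noisy intersection, from the last
    two GPS fixes of a walker. positions are (x, y) pairs in meters,
    sampled dt_s seconds apart; assumes straight-line motion at the
    last observed speed."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    speed = math.hypot(x1 - x0, y1 - y0) / dt_s
    if speed == 0:
        return float("inf")    # not moving: no predicted arrival
    remaining = math.hypot(intersection[0] - x1, intersection[1] - y1)
    return remaining / speed

# Walking at 1.5 m/s toward an intersection 30 m away: the wearable
# "computing engine" has about 20 s to prepare for the rising noise.
print(eta_seconds([(0.0, 0.0), (1.5, 0.0)], (31.5, 0.0)))  # 20.0
```

    The estimate is what lets the engine ramp its counterbalancing material in anticipation, rather than reacting only once the noise has arrived.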

    4. CONCLUSION

    Many Augmented Reality projects have explored the fuzzy space between the categories of "natural" / "artificial" and "real" / "virtual". Their existence reminds us that what we call "reality" is a construction: our mind creates the forms, the objects, and the reality that we actually experience. Following this suggestion, it is probably more interesting to associate the notions of "quietness" and "noise pollution" with conditions of our attention system, rather than with external states of things (natural environments vs. artificial environments). This project deals with a slightly paradoxical architecture of time, space and memory. Its aim is to generate a "counter audio reality", where artificial sounds help us to find the natural qualities (and even the beauty?) hidden behind our everyday experience of deteriorated soundscapes.

    5. REFERENCES

    [1] Aristotle. Poetics, Italian translation by Manara Valgimigli, Bari, Laterza, 1988.

    [2] Cera, A. "Noir Miroir. Ambiguïtés topographiques, sociales et interactives de la musique", Circuit, musiques contemporaines 17 (3), pp. 29-38, 2007.

    [3] Jehan, T. Creating music by listening, PhD diss., Massachusetts Institute of Technology, 2005.

    [4] Lanza, J. Elevator Music, University of Michigan Press, 2004.

    [5] Manovich, L. The language of new media, Massachusetts Institute of Technology Press, 2001.

    [6] Schafer, R. Murray. The tuning of the world, Philadelphia, University of Pennsylvania, 1980.

    [7] Tiffin, J. - Terashima, N. Hyperreality: Paradigm For The Third Millennium, New York, Routledge, 2001.

    [8] Wollscheid, A. Selected Works, Frankfurt, Selektion, 2001.

    [9] Wollscheid, A. "Does the Song Remain the Same?", Site of Sound: of architecture and the ear, ed. B. LaBelle and S. Roden, pp. 5-10, Los Angeles, Errant Bodies Press, 1999.

    Published in the Proceedings of the International Computer Music Conference (ICMC 2008), 25-29 August 2008.