
    SPAA: AN AGENT BASED INTERACTIVE COMPOSITION

    Michael Spicer

    Singapore Polytechnic

    School of Info-Communications Technology

    ABSTRACT

    An interactive composition for flute and computer, entitled SPAA, is presented. The piece uses an interactive composition environment in which an intelligent agent realises the computer part. This agent plays back samples of the live flute, applying appropriate transformations, in order to achieve a predetermined musical structure. The agent decides which samples to play, and what signal processing to apply, by analysing its recent output combined with the live flute part.

    1. INTRODUCTION

    SPAA is the first piece in a suite of interactive compositions featuring interactions between synthetic performers and an improvising flutist. (The flute was chosen because I play flute and it has a pure tone that I thought would be easy to analyze; other instruments could obviously be used as well.) The synthetic performer is implemented as an autonomous agent written in C++; SPAA stands for Signal Processing Autonomous Agent. The design of this agent is loosely derived from the basic agent design used in two other goal-driven, agent-based interactive composition environments I have built, AALIVENET [1] and AALIVE [2]. The biggest difference is that instead of controlling a MIDI synthesizer, as in the previous two systems, the agent in SPAA manipulates samples of the live flute performance. This is not a purely free-form improvisation environment: the overall form of the piece is predetermined, which provides a set of goal states to guide the agent in its decision-making process. The live performer and agent performer combine to try to realize this form as closely as possible. The synthetic performer may cooperate with (reinforce) or interfere with (contradict) the live performer in order to achieve the goals as they change throughout the duration of the piece. My motivation for this piece was an interest in the interplay between the live flute player and the agent as they try to realize the form.

    2. STRUCTURAL OUTLINE OF THE PIECE

    The structure of the piece is articulated as a set of goal states, indicating how various musical dimensions change over the course of the piece. These dimensions include:

    • Average pitch
    • Average note rate
    • Timbre (amount of high frequencies present)
    • Average loudness

    The states are stored in an array of C++ objects that encapsulate a four-dimensional vector. The performance of the flute is periodically analyzed, as is the output of the agent. An error is calculated from the values of these four dimensions, and the agent then chooses which buffer to play so as to minimize the magnitude of this error.
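    The paper does not give the class definitions, so the following is a minimal sketch of how a goal state and the error calculation might look. All names (FeatureVector, errorMagnitude) are hypothetical; only the four dimensions and the "goal minus combined analysis" subtraction come from the text.

```cpp
// Sketch of a goal state as a four-dimensional feature vector.
// Names are hypothetical; the paper only states that goal states
// are C++ objects encapsulating a four-dimensional vector.
#include <cmath>

struct FeatureVector {
    float pitch;     // normalized average pitch
    float noteRate;  // normalized average note rate
    float timbre;    // normalized amount of high frequencies
    float loudness;  // normalized average loudness
};

// Magnitude of the error between a goal state and the sum of the
// analyses of the live input and the agent's own output.
float errorMagnitude(const FeatureVector& goal,
                     const FeatureVector& live,
                     const FeatureVector& agent)
{
    float dp = goal.pitch    - (live.pitch    + agent.pitch);
    float dn = goal.noteRate - (live.noteRate + agent.noteRate);
    float dt = goal.timbre   - (live.timbre   + agent.timbre);
    float dl = goal.loudness - (live.loudness + agent.loudness);
    return std::sqrt(dp*dp + dn*dn + dt*dt + dl*dl);
}
```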

    3. AGENT DESIGN

    The agent is based on a sample playback system, which could be considered a descendant of the tape-delay/digital-delay looping systems that have been commonly used over the last forty years. The structure of the agent is shown in Figure 1 below.

    Figure 1. Overall structure of the SPAA agent.

    The agent has a number of C++ objects (currently ten, but the final version will have many more), each containing a buffer of audio input from the live flute performance lasting about two seconds. The agent program, invoked by the PortAudio callback function, chooses which buffer to play and how the audio in each buffer is processed during playback. These decisions are made by comparing the combination of the last live sample the agent has heard (analyzed) and its own output with the current goal state.
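    As an illustration of where the agent program hooks in, here is a hedged sketch of a PortAudio callback driving the agent. The V19 callback signature is shown; the 2005 implementation may have used an earlier PortAudio API, and the Agent type and its method are hypothetical stand-ins.

```cpp
// Sketch: PortAudio invokes this callback in its own audio thread;
// the agent records the live input and renders its chosen buffer.
#include <portaudio.h>
#include <cstring>

// Minimal stand-in for the agent; the real one holds ~10 audio
// buffers plus analysis state (all names here are hypothetical).
struct Agent {
    void process(const float* in, float* out, unsigned long frames) {
        // 1. append `in` to the buffer currently being recorded
        // 2. play back the buffer chosen to minimize the goal error,
        //    applying any DSP (waveshaping, ring modulation, ...)
        std::memcpy(out, in, frames * sizeof(float)); // placeholder
    }
};

static int paCallback(const void* input, void* output,
                      unsigned long frameCount,
                      const PaStreamCallbackTimeInfo*,
                      PaStreamCallbackFlags, void* userData)
{
    Agent* agent = static_cast<Agent*>(userData);
    agent->process(static_cast<const float*>(input),
                   static_cast<float*>(output), frameCount);
    return paContinue;
}
```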

    The agent's percepts are derived from analysis of the incoming audio stream. Each of the C++ objects containing the audio also has methods for this analysis and attributes to store the results. The buffers are filled in the PortAudio callback function, which runs in its own thread; the analysis is done asynchronously in the main thread. To perform the analysis, the buffer is divided into a number of windows, each 1024 samples long. The R.M.S. and an F.F.T. of each window are calculated, and from these a normalized measure of the average amplitude, average pitch, average amount of high frequencies present, and number of note attacks can be derived. These values are stored in a four-dimensional vector that characterizes the flute performance stored in the buffer.
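    The analysis code is not shown in the paper; the sketch below illustrates the windowing loop under stated assumptions. A naive DFT stands in for whatever FFT the system actually uses, the spectral cutoff for "high frequencies" is an assumption, and only the loudness and timbre dimensions are filled in.

```cpp
// Sketch of per-buffer analysis: divide into 1024-sample windows,
// compute RMS per window, and estimate high-frequency content.
#include <cmath>
#include <vector>

struct FeatureVector { float pitch, noteRate, timbre, loudness; };

// Naive DFT magnitude, used here only for self-containment; a real
// implementation would use an FFT library. Returns |X[k]|, k=0..n/2.
std::vector<float> magnitudeSpectrum(const float* x, int n)
{
    std::vector<float> mag(n / 2 + 1);
    for (int k = 0; k <= n / 2; ++k) {
        float re = 0.0f, im = 0.0f;
        for (int i = 0; i < n; ++i) {
            float ph = -2.0f * 3.14159265f * k * i / n;
            re += x[i] * std::cos(ph);
            im += x[i] * std::sin(ph);
        }
        mag[k] = std::sqrt(re * re + im * im);
    }
    return mag;
}

FeatureVector analyzeBuffer(const std::vector<float>& buf)
{
    const int W = 1024;                       // window length (from the paper)
    const int numWindows = static_cast<int>(buf.size()) / W;
    float rmsSum = 0.0f, highFreqSum = 0.0f;

    for (int w = 0; w < numWindows; ++w) {
        const float* win = buf.data() + w * W;

        float sq = 0.0f;
        for (int i = 0; i < W; ++i) sq += win[i] * win[i];
        rmsSum += std::sqrt(sq / W);

        // Fraction of spectral energy above an (assumed) cutoff bin,
        // used as the "amount of high frequencies" measure.
        std::vector<float> mag = magnitudeSpectrum(win, W);
        float total = 1e-9f, high = 0.0f;
        for (std::size_t k = 0; k < mag.size(); ++k) {
            total += mag[k];
            if (k > mag.size() / 4) high += mag[k];   // assumed cutoff
        }
        highFreqSum += high / total;
    }

    FeatureVector f{};
    f.loudness = rmsSum / numWindows;       // average amplitude
    f.timbre   = highFreqSum / numWindows;  // average HF content
    // pitch and noteRate would come from spectral peaks and the
    // amplitude envelope respectively; omitted in this sketch.
    return f;
}
```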

    The analysis result is stored in the same form (a four-dimensional vector) as the goal states used to specify the desired evolution of the piece, so the current error is simple to calculate: the sum of the analysis results for the most recent input and the currently playing buffer is subtracted from the goal state. The buffer with the smallest-magnitude error vector is chosen as the next playback buffer.
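    In code, this selection step might look like the following sketch, which reuses the hypothetical errorMagnitude() helper from the earlier goal-state sketch:

```cpp
// Sketch of buffer selection: pick the buffer whose playback would
// minimize the magnitude of the goal-state error. candidateFeatures
// holds the stored analysis vector of each of the agent's buffers.
#include <cstddef>
#include <limits>
#include <vector>

struct FeatureVector { float pitch, noteRate, timbre, loudness; };

// As defined in the Section 2 sketch: |goal - (live + candidate)|.
float errorMagnitude(const FeatureVector& goal,
                     const FeatureVector& live,
                     const FeatureVector& candidate);

std::size_t chooseBuffer(const FeatureVector& goal,
                         const FeatureVector& liveInput,
                         const std::vector<FeatureVector>& candidateFeatures)
{
    std::size_t best = 0;
    float bestErr = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < candidateFeatures.size(); ++i) {
        float err = errorMagnitude(goal, liveInput, candidateFeatures[i]);
        if (err < bestErr) { bestErr = err; best = i; }
    }
    return best;   // index of the next playback buffer
}
```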

    One area currently being explored is the use of simple real-time DSP techniques to reduce the errors in the timbral and average-note-rate dimensions. Waveshaping and amplitude modulation/ring modulation can cheaply add more high-frequency components, and low-frequency amplitude modulation with suitably shaped ramp waveforms can create the effect of more note attacks.
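    To illustrate how cheap such a transformation is, here is a sketch of ring modulation applied to a playback buffer. The 1 kHz modulator and 44.1 kHz sample rate are assumptions, not values from the paper.

```cpp
// Sketch of ring modulation: multiplying each sample by a sinusoid
// shifts energy to sum and difference frequencies, adding
// high-frequency content at the cost of one multiply per sample.
#include <cmath>

void ringModulate(float* samples, int n,
                  float modFreqHz = 1000.0f,   // assumed modulator
                  float sampleRate = 44100.0f) // assumed sample rate
{
    const float twoPi = 6.2831853f;
    for (int i = 0; i < n; ++i)
        samples[i] *= std::sin(twoPi * modFreqHz * i / sampleRate);
}
```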

    4. IMPLEMENTATION

    The system is written in C++ and was developed on a G4 Macintosh PowerBook using Xcode. In order to make the program as portable as possible, I have used no platform-specific APIs: PortAudio handles the audio I/O, OpenGL is used for the display, and GLUT handles the user interface (keyboard, mouse and menus). All of the analysis of the input buffers is done in the GLUT idle function. Some global boolean variables act as semaphores to prevent the analysis and callback threads from interfering with each other; so far this seems to work reliably.
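    The flag handshake could be sketched as follows. std::atomic is used here purely for illustration (the 2005 implementation predates C++11, so the original presumably used plain globals), and all names are hypothetical.

```cpp
// Sketch of the flag-based handshake between the PortAudio callback
// (producer) and the main-thread analysis (consumer). std::atomic is
// a modern substitute for the paper's plain global booleans.
#include <atomic>

std::atomic<bool> bufferReady{false};   // set by the audio callback
std::atomic<bool> analysisBusy{false};  // checked by the callback
                                        // before refilling the buffer

// Called from the GLUT idle function in the main thread.
void idleAnalysis()
{
    if (bufferReady.load() && !analysisBusy.load()) {
        analysisBusy.store(true);
        // ... run the windowed RMS/FFT analysis on the filled buffer ...
        bufferReady.store(false);
        analysisBusy.store(false);
    }
}
```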

    In order to get a clean signal without a lot of spill, I am close-miking the flute with a dynamic microphone. It is important to set the microphone gain correctly, so that the amplitude measurements correspond appropriately with the system's goal states.

    5. FUTURE WORK AND CONCLUSION

    The current state of the software (June 2005) is at the proof-of-concept stage. All the parts are in place and working, but they are quite crude. Much work needs to be done on the analysis stage to extract higher-level knowledge from the raw input signal. The agent function could be improved by adding some classification capability, the DSP routines used to enhance the sample playback are very rudimentary, and the user interface is not very friendly. Even so, the system as it stands is usable and shows the potential of the agent approach for building interactive composition systems.

    6. REFERENCES

    [1] Spicer, M.J. "AALIVENET: An Agent Based Distributed Interactive Composition Environment", Proceedings of the International Computer Music Conference, Miami, USA, 2004.

    [2] Spicer, M.J., Tan, B.T.G. and Tan, C.L. "A Learning Agent Based Interactive Performance System", Proceedings of the International Computer Music Conference, pp. 95-98, San Francisco: International Computer Music Association, 2003.
