lamberto coccioli

on music and beauty

Tag: live electronics

Seminar on live electronics at the Royal Academy of Music

Philip Cashian, Head of Composition at the Royal Academy of Music, kindly invited me to give a seminar to the students of his department. The title I chose for the talk today was “A new approach to composing music with live electronics”. I gave an overview of live electronics in practice, and of the challenges and frustrations that often accompany performances involving technology. Referring to my experience with Luciano Berio’s musical actions with live electronics (Outis, Cronaca del Luogo), I remarked on the sad absence of these seminal works from the repertoire today and outlined the challenges posed by technology in performing works created only 15-20 years ago. I went on to present the philosophy of the Integra project and its role in developing the Integra Live software, with the intention of addressing the transience and unfriendliness of live electronic systems developed using programming languages like Max.

Showing Integra Live in action, I was able to demonstrate how the software and its different views try to mimic the creative process and workflow of a composer: from an initial exploratory, imaginative phase (Module View), to a more structured stage where events start being organised in time (Arrange View), to a rehearsal and finally a performance phase (Live View), where things are fixed and what matters most is reliability and control of every relevant aspect of the performance.

I hope I conveyed to the students my salient point: always ask yourself why you should use technology, and if you do, make sure it is born of your musical ideas and is an integral part of your musical thinking. I very much enjoyed the interaction with them: they were curious and lively, and asked interesting questions – among others, about the future of Integra Live in a hypothetical post-coding world, and – this one more technical – about using MIDI notes to control partials in the various spectral modules of the software, highlighting the need for a built-in MIDI note to frequency converter in all spectral modules.  At the end of the seminar Philip took a straw poll among the students and the overwhelming majority voted in favour of trying Integra Live in their own music. Not bad!
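For the record, the conversion the students were asking about is a one-liner. Here is a minimal sketch in Python – not Integra Live code, just an illustration – of the standard equal-tempered mapping, assuming A4 = MIDI note 69 = 440 Hz, the kind of helper a spectral module could build in:

    def midi_to_frequency(note, a4=440.0):
        """Convert a MIDI note number to a frequency in Hz (12-tone equal temperament)."""
        return a4 * 2.0 ** ((note - 69) / 12.0)

    # Example: the first eight partials of the harmonic series above middle C (MIDI 60)
    fundamental = midi_to_frequency(60)            # about 261.63 Hz
    partials = [fundamental * k for k in range(1, 9)]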

The problem with MaxMSP

I had my first glimpse of what was then simply called Max in 1994, when my good friend Diego Dall’Osto introduced me to the software, but I only started using it in 1996, while working at Centro Tempo Reale in Florence. At first, like so many other composers, I was completely taken by the power and beauty of a programming language that allowed me to work in a graphical environment and test the results on the fly, without having to compile the code. Moreover, I had two wonderful mentors, Nicola Bernardini and Alvise Vidolin. They gave me generous advice and help, so that I was soon able to develop my own patches without any prior programming skills.

Soon, though, a number of issues with Max started to emerge, and in various ways they are still unresolved ten years later. To be fair, many of these issues depend on the way Max, now MaxMSP, is used, but I still find it surprising that David Zicarelli and his company have not acted more energetically to adapt the software to the needs of the growing Max community. I will look at MaxMSP from the two angles that interest me most, usability and sustainability, but first I will try to answer the question of who this software is actually written for.

I think that the main problem with MaxMSP is that it sits in a sort of no man’s land between programming languages and software applications. It is too cumbersome and prescriptive as a programming language, but it lacks the user interface and the consistent set of tools that we usually associate with commercial software packages. It may be countered that this perceived weakness is in fact the main strength of MaxMSP: it gives total freedom to artists and musicians, so that they can develop their own interactive set-ups without the rigid constraints of commercial software but also without the need to become programmers. My opinion is that in the long term, and looking at the way MaxMSP has become the de facto standard for performing music with live electronics, the problem has only become more acute.

Composers who get past MaxMSP’s rather steep learning curve greedily embrace the programme and start developing their patches, either from scratch or using existing objects and libraries written by members of the community. In the first case they often end up with very inefficient and buggy patches; in the second they create many dependencies, limiting the portability and sustainability of their work. Max is great at two things – experimenting with your ideas and prototyping virtual set-ups – but as soon as you enter production mode, it becomes quite unfriendly. There is a historical reason for this: Max was first developed at IRCAM, an institution characterised by a rather rigid separation between composers and music technology assistants. The idea was that composers dealt with the creative part, while the assistants provided a human interface to the technology tools. This meant that the code was looked after by the technologists, and composers didn’t need to engage directly with it. A big institution like IRCAM also ensured the long-term preservation of the works, by employing assistants to maintain and upgrade the patches as needed.

This initial dichotomy is part of MaxMSP’s genetic code: the software is used mainly by composers and artists, but is written for programmers. This is why I find it difficult to identify the target audience of the software: it is too complex and difficult to learn to be mastered fully by artists, but its true potential is wasted in the hands of programmers, who will also complain that as a development platform MaxMSP lacks many important features. In fact, I have yet to find a good application built with MaxMSP. So it looks like the MaxMSP target user is either a highly talented composer-technologist, equally versed in computer programming and music composition, or a creative team supplying the required skills. Not surprisingly, MaxMSP is frequently found in higher education.

Let’s look now at MaxMSP from the usability perspective. MaxMSP provides quite a number of graphic objects out of the box, and has recently added powerful new functions, like Java support, the ability to encapsulate/de-encapsulate patches and the ability to create and save prototypes [template patches that can be reused everywhere]. Nevertheless, the actual user interface is entirely the responsibility of the user – there are no standard graphical user interface models or templates. The result is that a given patch – say a sound spatializer – can be realised in many different ways, each one providing a very different user experience. Testing and comparing patches is thus made very difficult, as the same spatializer engine can be visualised omitting certain parameters altogether or hiding them in remote subpatches. Sharing patches, or having your live electronics performed by someone else, is also compromised, since every user builds their patches according to their personal needs and taste. If you add the fact that MaxMSP has no easy way of commenting or documenting patches, you can see how hard it can sometimes be to reconstruct the signal and control flow in a complex patch, even for the person who wrote it!

It is probably from the sustainability point of view that MaxMSP fares worst. The software gives artists and musicians the impression of being in control, but in fact locks them into a closed system that is difficult to scale, adapt or maintain over time. I’m talking here mainly from the perspective of live electronics concert performance, the kind of mission-critical application where everything has to work as planned. My experience over the years is that in order to work properly a MaxMSP patch has to be tweaked or rewritten every year or so, especially if external dependencies are included in the patch. In some cases, objects and libraries are not upgraded when the software is, and an alternative must be found or developed from scratch. Conflicts between objects with the same name can also prevent patches from functioning properly.

As I said, MaxMSP is an invaluable tool for trying out ideas, experimenting and prototyping, but it falls short on usability and sustainability, the two areas that matter most for a creative, musical use of the software and for the long-term preservation and maintenance of patches and the artistic works that depend on them. MaxMSP remains the first choice for musicians working with live electronics, but I think I have identified a gap that needs to be filled if we really want to empower musicians and offer them more accessible tools for interacting with technology.

Organised Sound article

Modernising musical works involving Yamaha DX-based synthesis: a case study

This article, written in collaboration with Jamie Bullock, has been published in Organised Sound, issue no. 5, 2006.

We describe a new approach to performing musical works that use Yamaha DX7-based synthesis. We also present an implementation of this approach in a performance system for Madonna of Winter and Spring by Jonathan Harvey. The Integra Project, “A European Composition and Performance Environment for Sharing Live Music Technologies” (a three-year co-operation agreement part-financed by the European Commission, ref. 2005-849), is introduced as a framework for reducing the difficulties of modernising and preserving works that use live electronics.

Download the Organised Sound article.

ICMC 2005 Harvey paper

Modernising live electronics technology in the works of Jonathan Harvey

This paper was written together with Jamie Bullock and presented at the 2005 International Computer Music Conference in Barcelona. Here follows the abstract:

Many twentieth century works composed for instruments and live electronics are seldom performed due to their use of near obsolete technology. Some performing bodies avoid such works because the necessary technology is either unavailable or too expensive to hire. This paper describes the current status of a project to modernise the technical aspects of Jonathan Harvey’s works in order to increase the likelihood of performance and improve longevity. The technical and ideological implications of the project are discussed in the context of a broader need for the preservation of contemporary works involving technology. The use of open source software and standard protocols is proposed as a way of reducing technological obsolescence. New solutions for two of Harvey’s works are proposed, and discussed in relation to the problems encountered with the project so far. Conclusions are then drawn about the current status of the project and its implications for further work.

Download the ICMC paper.

Carnegie Hall lecture

Notes for a talk I gave on works by Berio and Maderna.

Pre-concert Lecture, The Luciano Berio – Tempo Reale Workshop
Carnegie Hall, New York, October 1997

The three compositions that are going to be played tonight are of a very different nature: a new work for two soloists and orchestra, Alternatim; a short piece for small instrumental ensemble, Serenata per un satellite; and Ofanim, a long composition for two children’s choirs, a female voice, two instrumental groups and live electronics. Those of you who are used to coming to this hall will notice the differences: we don’t want to show off the technology, but you will nonetheless see a certain number of loudspeakers all around the place, plus a – quite well hidden, I must say – mixing desk in the parquet. I will explain in a short while what the purpose of those devices is.

Tonight we will take part, in fact, in three different musical experiences: one – listening to Serenata per un satellite by Bruno Maderna – where the musical gestures are fixed, and repeated at leisure by the performers; another – Alternatim by Luciano Berio – where two solo instruments, a clarinet and a viola, generate the orchestral landscape from their melodies and figurations, amplifying and giving more resonance and greater scope to their solo discourse; and a third – Ofanim, again by Berio – where technology is employed to amplify and clarify the complex texture of sounds, and to place the bold musical gestures of the score in a completely new acoustical scenario.

Why am I talking of gestures, of musical gestures? This is a very important point, and one that I’d like to stress, because it’s very close to the heart of Luciano Berio’s musical thought. When an instrument plays, or a voice sings, it makes gestures. The physical act that translates the written page into sounds is heavy with meaning for us, and a deep knowledge of the meaning of a musical gesture is necessary if we want to control it, to master the rhetoric of the instrumental tradition without being mastered by it. Luciano Berio has always been well aware of the powerful meaning of our musical tradition, be it Western classical, folk or ethnic, and he knows how to cope with it, and how to use it for his artistic purposes. This is why so often his music reaches the audience with great, immediate force, without compromising a rich and complex musical language. It really works on two layers: one of strong communication through the subtle control of every possible musical gesture, the other of a composite musical fabric where both instrumental and formal experiments are carried out in depth. In other words, an abstract approach and a concrete one meet in a multi-layered musical experience.

This in part is true of the work of Bruno Maderna too, whose Serenata per un satellite, in the version realised by Paul Roberts, will be played tonight.
Serenade – it’s a rather uncommon term for a composition of contemporary music, especially in the Fifties, when the titles were more like Structures, Mutations, Kontrapunkte: very hard titles, in a way, for a music that didn’t want to compromise with the emotions and, for that matter, with the past. In fact, Berio too wrote a Serenade during the Fifties, and so did Maderna, before writing the one we’ll listen to tonight. Maderna wrote four Serenatas during his life, and the last one, called Juilliard Serenade, was composed for the famous New York music school in 1971.

It is interesting to quote what Berio himself said on the subject of Serenade: “In the Fifties the composers were deeply involved with the search for structural references and a new serial order; the face of music was always grouchy. Bruno Maderna’s Serenata and mine were the first to come out after the war. They seem to me the first examples in which serial music becomes more relaxed and shows a less severe aspect.”

It was a sign that something had changed in the music of those years, something that gave way to a happier, less abstract approach to composition. These words portray both men, really, and allow us to understand one of the strong elements always present in Berio’s music: its lack of ideological “partis pris”, of prejudices of every sort. Music, according to Berio – and to Maderna too (he was a child prodigy, playing the violin at La Scala when he was 7 years old and conducting his first concert at 8) – can only be approached by taking into proper account the fact that it has to be performed, and listened to. Too abstract an approach, too scientific, simply will not work. In music, more so than in any other art form, the abstract and the practical meet.

In music the concrete is the idea, and vice versa. In the musical experience there’s always a drama hidden beneath the surface. The players, through their musical gestures, convey to the audience the ideas of the composer, adding a theatrical dimension to the music – that is to say, the performance. Not to be aware of this, as a composer, can impoverish the music, depleting it of a fundamental dimension that will always be there, even if the composer ignores it.

Going back to our serenades, it is clear then that such an old-fashioned title was a kind of provocation against the clichés of new music. Maderna was well aware of this. He was a witty man, as Serenata per un satellite shows. The idea of the piece is to have a set of different musical phrases or figures that can be played in any order: together, in small groups, or one instrument at a time. There is complete freedom in the construction of the piece, then, but the notes cannot be changed. This is typical of Maderna’s approach to what is called aleatoric technique in music, a technique that John Cage used intensively throughout his life. Maderna always wrote down the notes, and the freedom he left the performers was always confined to the order of the events, their duration, or their superimposition.

Serenata per un satellite is also really a conductor’s game, a piece that Pierre Boulez would have liked. There are a number of musical figures that can be played freely by any instrument – ad libitum, as it were – and the conductor is a bit like the co-ordinator of this musical traffic, starting and stopping the players. The phrases to be played are all presented in a beautiful manuscript page written by Maderna himself, where they all interweave and bend in every direction. The interplay of these lines makes up the piece, a witty and intelligent musical joke, that in the hands of good and inspired performers like tonight’s can become a small masterwork full of humour.

Alternatim refers in its title to a technique of medieval music, commonly found in the European tradition until the fifteenth century. Guillaume Dufay, among many others, employed it in his motet written for the consecration of the church of Santa Maria del Fiore in Florence. The technique consisted of alternating polyphony and monody, soloists and organ. Here we have two solo players, a viola and a clarinet, and an orchestra with strings, brass and winds, but no percussion. Two questions immediately come to mind: first, how does this work relate to the tradition of the concerto – or double concerto, to be precise – and second, what is the relationship between the two soloists and the orchestra?

The answer to the first question lies in the very nature of the classical concerto, seen as a display of instrumental virtuosity and intelligence, and always very homogeneous in character. As Berio himself says, there is no longer a way to establish homogeneity of meaning between one or more soloists and a mass of musicians of different density or nature – such as existed in Baroque, Classical and Romantic concertos, when the “individual” and the “mass” could practically say the same thing despite their completely different densities and acoustic characters. Today the relationship between soloist and orchestra is a problem that must be solved anew every time, and the word concerto can be taken only as a metaphor.

These are bold statements, and not everybody will agree with them. It is indeed possible to write a concerto today, but the composer has to take into account that the reassuring unity of the classical concerto is lost forever. Berio’s answer to this dilemma is to make the soloist or soloists – as is the case in Alternatim – the starting point of the work, from which the whole musical journey originates. In other words, the musical lines played by the soloists engender – create, in a way – the musical functions of the whole orchestra.

The choice of the clarinet and the viola is a telling one: they are the real chameleons of the orchestra, and better than other instruments they can act as a link between different instrumental families. Like many contemporary composers, Berio has never been interested in the instrumental families we find in orchestration textbooks, but has always explored what we could call sound families, families that underline analogies between instruments that are normally very distant from one another. The clarinet and the viola are probably the most useful instruments for an exploration of sound families, given their different registers and their not too specific or confined sound. Think of a violin, or a piano, or even an oboe, and you’ll see how difficult it is to find similarities with instruments of other families. It is possible, yes, but the viola and the clarinet have many more options for interacting with other members of the orchestra.

As I said before, the lines played by the soloists are the starting point of the piece. We could define those lines as melodies, but the term “line” is less charged with meaning, and probably better explains their role in generating different musical events for the orchestra. We could think of the line as a kind of complex melody – and a melody is rich and interesting when it implies many different musical functions.

Let’s consider Bach’s music for solo cello, solo violin or solo flute, as an example. In those works a melody always implies a strong polyphonic texture, as the instrument jumps from one register to another, carrying on different independent lines and at the same time merging them into a single one. On the other hand, think of the importance of the theme during the Classical and Romantic eras. A musical theme shaped and ordered all the material of a sonata or a symphony movement. Melody in classical music has always hidden many powerful functions affecting all the elements of composition. That is why a melody written today needs to carry the same range of different musical functions.

In Alternatim one of the basic areas of investigation is the relationship between the soloists and the orchestra, that is to say how a monodic line transforms itself into a polyphonic texture, into a complex musical fabric. If we take a quick look at the other works written by Berio for solo instruments and orchestra, we always find different solutions to this challenging problem. I’ll point out two examples: the series of the Chemins, where the original Sequenza for solo instrument is transcribed – projected – into the orchestral field, and the radical solution of Coro, where there are forty voices and forty instruments, and every single instrument is coupled with a different singer. In Alternatim we have this beautiful melodic line, which starts with a series of leaps of a fourth, both perfect and augmented, an interval that comes back all the time in the course of the work. Why this insistence?

The historical importance of the interval of a fourth cannot be overstated: it is like a bridge that links the oldest European music, the music of the Middle Ages, with the music of the beginning of our century, primarily Debussy and Schoenberg, but also Stravinsky and Scriabin. In between we have the supremacy of the classical tonal language, based on the interval of a third, as in a major or minor chord of the scale. The relationship with musical history in Berio is never an innocent one: if he chooses to work with certain elements it is because he wants to bring to the surface their hidden power, and make them react with other, rather different, musical objects. In a way this is an approach very dear to another great composer of our century, Igor Stravinsky, but Berio extends the scope of this musical investigation further and reaches new territories.

This initial melodic line is an ever-present element of Alternatim, and comes back always different, but always recognisable. This gives me the opportunity to spend a few words on the fertile idea of redundancy in music, an idea that I’m sure Berio has spent some time investigating. Repetition, the coming back of the same element, is a very strong feature of music of all times, and a fundamental way of communicating musical ideas. It has everything to do with perception, and the way we listen to music. Contemporary composers shunned for a long time the very idea of repetition. Without repetition, though, there can be no comprehension. This is especially true of music that is not written using the tonal language, that powerful – still powerful – tool for giving the ear guidance. Redundancy, the repetition of the same musical element, be it a line, a series of chords or a rhythmic pattern, becomes a way of helping the listener grasp the musical thought of a composer.

At the beginning of this discussion I mentioned the machines in the hall. I will now explain why we have filled Carnegie Hall with these big, black loudspeakers, and what their purpose is. But let me first say a word on the relation between technology and music. Music made with electro-acoustic machines and devices has been around for more than forty years now, and Berio himself, as he said to those of you who were taking part in the workshop this afternoon, started writing electronic music during the Fifties. Yet I think that a fundamental difference exists between that period and today. In those years it was the composers who started experimenting, and they created the demand for new machines to realise their musical experiments. In a way, musical thought guided the birth of new machines and their characteristics, so that there was a situation comparable to the introduction of new instruments during the preceding centuries. For quite a long time now, on the contrary, probably since the end of the Seventies, electronic machines have become an incredibly useful tool for commercial music and have ceased to be under the direct influence of musical thought. In fact, it is musical thought that has started to run after new technologies, trying to cope with the startling number of new machines coming out all the time.

Structure of the piece: alternating great density and calmer, harmonically static moments.

Electronics in the piece: harmonizers, delay, spatialization.

Functions: amplification; clarifying the harmonic structure; emphasising the structure; extending the expressive range through amplitude and density.

Ofanim is a piece that can be played in many different spaces, and every time we perform it in a different space the music changes, according to many different factors. And every time we learn more about the relationship of sound with space. It is really a work in progress, but one whose many faces, corresponding to its successive performances, convey – we hope – always the same musical meaning. Because, I say it once again, live electronics technology should always be part of a wider musical vision, and it should really act as an amplifier – in every direction – of a musical meaning that, even if only sketched, has to be there already, in the score.

© 1997 Lamberto Coccioli

Leverhulme application

We have just submitted a funding application to the Leverhulme Trust for a five-year research plan at UCE Birmingham Conservatoire. Together with Jamie Bullock, we have identified usability and sustainability as the key areas of development in the field of live electroacoustic music research in the foreseeable future.

Download the Leverhulme Research Leadership Award statement of research.

The Dancing Pig

Shortly after arriving in Birmingham I was involved in the realisation of The Dancing Pig, in collaboration with Mark Lockett, a wonderfully polymorphous musician, and Roy Kwabena, then Birmingham’s poet laureate, and a brilliant story-teller originally from Trinidad.

The Dancing Pig is an Indonesian story about two curious children and a witch. I worked on a live electronics setup to transform Roy’s narrating voice and some of the gamelan instruments, to give the story and the music an eerie, otherworldly quality.

Integra, a novel approach to music with live electronics

Anders Beyer invited me to write an article on the Integra project for Nordic Sounds. Here it is. Read on or download the magazine issue.

Integra, a novel approach to music with live electronics

A desire to empower composers and performers to work with live electronics technology in a musical and user-friendly way is at the heart of the Integra project, an international collaboration of research centres and new music ensembles supported by the European Commission. Thanks to a programme of interrelated activities along the three main axes of research, creation and dissemination, Integra seeks to initiate a widespread change of perception towards technology among all the professional actors involved in contemporary music creation and diffusion in Europe.

Integra started taking shape during many long and inspired telephone conversations that I had with Luca Francesconi, the renowned Italian composer and professor of composition at the Malmö Academy of Music, in September 2004. Luca must also be credited with the project name – Integra – a simple and powerful way to remind us of our real focus: the integration of artistic and scientific elements in the creation and performance of music with technology. After agreeing on the project structure and strategy, Richard Shrewsbury (formerly project administrator of Connect, another large European music project) and I started to establish a network of partner institutions, and we completed the final application in October 2004.

While drafting the project, we set out to find concrete answers to pragmatic issues. Inevitably, we ended up making strong assumptions about the philosophical and aesthetic implications of technology in music. The fundamental issue with technology lies in its unlimited potential and its self-replicating nature: technology is inherently meaningless. If we are going to use it in music we will have to ask ourselves some hard questions. Why do we need it? How can it be musical? How can it be controlled?

In order to be harnessed, technology should be brought back to a human dimension, and considered just like another musical instrument – a polymorphous one, to be sure, but still an instrument – that we can learn and play. To achieve this, Integra aims to simplify live electronics technology, and to establish a standard vocabulary to describe it. The word “standard” is often disliked, but we should not forget that the musical instruments employed in our concerts are themselves “standard”, in fact quite limited ones: nevertheless, they allow the transmission of an extremely complex and diversified musical message.

Integra is not alone in this effort towards more user-friendly technology, although it is only recently that usability, good interface design and a preoccupation with how humans interact with machines have started to appear in technology products. Sadly, as far as the history of music technology is concerned, we are still living in the colonisation phase. I like to compare our current experience with the Wild West: new territories are conquered every day, there are no common laws, and survival depends on individual initiative. And we are all still digging in search of that elusive gold mine. This explains the proliferation of do-it-yourself systems over the past three decades, when each work, even by the same composer, required a different technological setup (hardware, software, or both). The often poor documentation of the electronic parts and the rapid obsolescence of the original hardware and software have prevented the adoption of a core repertoire of works using live electronics in mainstream concert programmes.

True to its name, Integra brings together research centres (the scientific group) and new music ensembles (the artistic group): two often very different worlds, with different agendas and priorities, will share their experience and work together. This is possibly the single most important aspect of Integra: all the activities of the project are designed to allow the findings of the scientific group to feed back into the events organised by the artistic group, and vice versa.


Research

The research activities will cover two main areas: the modernisation of works that use obsolete technology, and the development of a new software-based environment for the composition and performance of music with live electronics.

These two activities are closely related: during the first year of the project the research centres will transfer the technology of around thirty works, chosen together with the artistic ensembles for their musical and historical relevance. The transferred music will include works by Gérard Grisey, Jonathan Harvey, Tristan Murail and Arne Nordheim, among others. This migration process will mainly consist of adopting standard software-based solutions in order to emulate the original setup faithfully and overcome the inherent problems of accessing and maintaining old equipment. Most of the migrated works will quickly find a place in the repertoire of the artistic members of the project, and, it is hoped, of many other contemporary music ensembles around the world.

The knowledge and experience acquired in this vast migration exercise will be used as one of the two starting points for the development of the Integra environment, the other being the feedback from the ten composers who will receive the Integra commissions. By combining the lessons of tradition with the requirements of contemporary creation, we will ensure that the Integra environment is flexible and robust, building an ideal bridge between past and present technology.

Usability and sustainability are the key words here. The Integra environment will be easy to use, and first and foremost a musical tool for composing and performing with electronics; it will also define a new vocabulary to represent electronic events in a standard, software- and platform-independent way, to ensure their long-term maintenance and survival. In more detail, the environment will be composed of four distinct elements:
1. Database – The back-end of the environment, a standard online database to store modules, performance data and documentation, initially for each transferred and commissioned work.
2. Namespace – An OSC-compliant (Open Sound Control) Integra XML namespace to represent and share all live electronics data among the various elements of the Integra environment.
3. Interface – The front-end of the environment, an intelligent graphic user interface designed around the needs of musicians and for maximum ease of use.
4. Engine – The actual DSP engine of the environment, an extended collection of analysis, synthesis, processing and control software tools.

The concept underlying our modular approach is the representation of the audio network, the control network and their behaviour over time independently from any specific implementation. In other words, we propose a higher-level description of live electronics that can stay the same while technology changes.
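To give a feel for what such a higher-level description might look like – this is only an illustrative sketch, and the module names, tags and attributes are hypothetical, not the actual Integra namespace or schema – here is a small Python example that describes a tiny audio network (two imaginary modules, one connection, a couple of parameters) and serialises it to XML with the standard library:

    import xml.etree.ElementTree as ET

    # An implementation-independent description of a small live electronics setup:
    # nothing here refers to Max, Pd or any other DSP engine.
    modules = [
        {"id": "gran1", "class": "GranularDelay", "params": {"density": "0.4", "pitch": "1.0"}},
        {"id": "rev1",  "class": "Reverb",        "params": {"decay": "2.5"}},
    ]
    connections = [("gran1.out", "rev1.in")]

    work = ET.Element("work", {"name": "Example Piece"})
    for m in modules:
        module = ET.SubElement(work, "module", {"id": m["id"], "class": m["class"]})
        for name, value in m["params"].items():
            ET.SubElement(module, "param", {"name": name, "value": value})
    for source, target in connections:
        ET.SubElement(work, "connection", {"from": source, "to": target})

    print(ET.tostring(work, encoding="unicode"))

Because the description never mentions a particular engine, the same file could in principle drive different implementations as the underlying technology changes.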


Creation

Ten European composers will receive Integra commissions, with each new music ensemble commissioning two composers from other European countries. The recipients of the first five commissions are Malin Bång (Sweden), Natasha Barrett (UK/Norway), Andrea Cera (Italy), Tansy Davies (UK) and Juste Janulyte (Lithuania). These five composers will be writing for small chamber ensemble (from three to five players) and live electronics. The works will be premiered between January and September 2007.

The second set of commissions, for large ensemble and live electronics, will be announced at the end of November 2006. These works will be premiered between January and July 2008. Mixed-media interaction will be encouraged, as well as site-specific performance events.

Integra will retain exclusive rights to the performance of the commissioned works for three years after their premieres, thus enabling every ensemble to perform all the works commissioned by the other ensembles.
Each composer and the performers involved in the piece will work with a research centre to produce the electronics. This collaboration, spread over two visits (four for the larger works), will allow the composer to work with the tools being developed for the Integra environment. The feedback from the composers will be used to help design tools that are intuitive, powerful, and above all musical.


Dissemination

The success of the Integra environment will be measured by its public support and widespread adoption by composers and performers in Europe and around the world. We aim to build a community of musicians and researchers to look after Integra once it reaches the end of its official life in September 2008. To achieve this ambitious goal we are devoting considerable effort to creating a network of institutions and individual contacts. We are also keen to establish links with ongoing projects in related areas (digital content preservation and storage, Human-Computer Interaction, etc.), promoting standards and ensuring interoperability between Integra and other related applications.

In rough numbers, during the life of the project we will be delivering: thirty individual training sessions on live electronics technology for the commissioned composers (each lasting three days); forty individual training sessions for the performers of the new music ensembles (some of these sessions will overlap, to allow composers and performers to work together on the commissioned works); and a minimum of fifteen concerts and performance events, featuring the commissioned works and many transferred works from the existing repertoire.

We will run open workshops before the concerts for local musicians and composers, and produce an innovative DVD documenting the Integra activities and presenting the Integra environment through practical demos. The DVD will be distributed to all new music actors in Europe. We hope that the Integra environment will become a de facto standard for the preservation, composition and performance of music with live electronics. If the project is successful, the repertoire of European contemporary music ensembles will grow accordingly and performances of music with live electronics will become more frequent, while forgotten works using obsolete technology will once again become active agents in our musical life. Integra will also contribute to the creation of a new breed of highly mobile professional musicians: empowered by light, accessible and reliable technology, they will be able to travel and perform around Europe with their expanded repertoire, helping to bring down the barriers that still today prevent many musicians from using technology in the first place.


Fact Box

Integra – A European Composition and Performance Environment for Sharing Live Music Technologies is a €1,035,048, three-year cooperation agreement part-financed by the European Commission through the 2005 call of the Culture 2000 programme [ref 2005-849]. Started in September 2005, Integra is led by UCE Birmingham Conservatoire in the United Kingdom. The project partners are:

New Music Ensembles

Ensemble Ars Nova, Malmö
Athelas Sinfonietta, Copenhagen (co-organiser)
Birmingham Contemporary Music Group, Birmingham
BIT20 Ensemble, Bergen (co-organiser)
Court-circuit, Paris (co-organiser)

Research Centres
CIRMMT, McGill University, Montreal
Krakow Academy of Music, Krakow
La Kitchen, Paris
Lithuanian Academy of Music and Theatre, Vilnius
Malmö Academy of Music, Malmö (co-organiser)
NOTAM, Oslo
SARC, Queen’s University, Belfast

Association of European Conservatoires

The composers commissioned so far are:
Malin Bång, Sweden (Athelas Sinfonietta)
Natasha Barrett, Norway (Ensemble Ars Nova)
Andrea Cera, Italy (Court-circuit)
Tansy Davies, United Kingdom (BIT20 Ensemble)
Juste Janulyte, Lithuania (Birmingham Contemporary Music Group)

www.integralive.org

Lamberto Coccioli, an Italian composer currently working as Head of Music Technology at Birmingham Conservatoire, is Integra’s Project Manager.

Music as Memory conference

Geir Johnson, Artistic Director of Ultima, the Oslo Contemporary Music Festival, invited me to give a talk on “Music and Technology: past, present and future” at the Music as Memory conference on Friday 6 October 2006. The conference was part of this year’s Ultima Festival. I enjoyed listening to Geir’s profound, personal talk introducing the conference, and to the very interesting talks given by Stein Henrichsen (BIT20 Ensemble and Opera Vest), Luca Francesconi, Lasse Thoresen and Asbjörn Schaathun. Asbjörn gave an entertaining definition of a “perfect” creative person, obtained by combining the different talents and characters of the four Beatles. The notes for my talk follow below.

Music and Technology – past, present and future

Music as memory
The relationship with tradition, the interaction of current artistic trends with the past, is a central aspect of music making. Thanks to technology, composers, performers, indeed all music actors like you – artistic directors and so on – have direct access to a wealth of resources that extends both on the horizontal, geographical plane and along a very long vertical axis towards the past. This three-dimensional, always-available, on-demand mapping of human creativity in the arts is an unprecedented feat that demands a complete rethinking of our relationship with the past, with musical tradition.

The same technology that allows us to explore and appropriate the musical universe in space and time has altered our perception of the world in many ways. Digital technology allows anything. When everything is equally available, what is the aesthetic, artistic value of a choice? How do we establish a dialogue with tradition in the current situation?
We live in the age of technology, after science, history, philosophy, religion.
Technology is overwhelming. Also, technology is never neutral. How can we make sense of it? By bringing it back to a human dimension. How?

The Humanist Challenge:

  1. the Jungian, ethnomusicological alternative – the consolation of archetypes
  2. the ethical alternative – music with a message
  3. the gestural alternative – music with the body

    I’ll try to give an answer to these questions later on in my talk, from the point of view of music technology.

    The past
    Music as memory – from the perspective of music with technology the challenge is to allow music to become memory in the first place. Preservation of interactive, real-time, live electronics works is a daunting task that has to be tackled in a novel way.
    [If we look at electroacoustic music, the situation is comparatively quite good. Once the original analogue supports have been converted to digital, preservation is ensured. True, the passage from analogue to digital is difficult, and many works from the 50s and 60s were conceived with the idiosyncrasies of early recording, mixing and diffusion equipment in mind [another example of non-neutral technology!]. Those peculiarities became an essential part of the work, as has been shown time and again in the works of Berio, Stockhausen, Schaeffer, etc.]
    To get back to live electroacoustic music, as we should call live electronics, the obvious problem is the short life of technologies, hardware and software: their rapid change, the commercial – hence temporary – nature of many of the devices used, and the overall lack of documentation from composers and interpreters. To maintain a piece using obsolete technology is very difficult, sometimes downright impossible. If a piece is no longer performed, it ceases to be an active agent in cultural and musical life. This is too bad.

    The role of the research centres
    IRCAM – it has certainly helped to shape the contemporary music scene, and its contribution cannot be played down. Boulez managed to create something that lasted and thrived for many years, expanding in more directions as time went by. If we compare this with the UK experience or the Italian experience, for example, IRCAM has been an outright success. In the UK, the efforts to create a National Centre for Electronic Music were never taken seriously by the government, and in Italy a place like Centro Tempo Reale in Florence, founded by the late Luciano Berio, never took off properly, and was widely regarded as just Berio’s own technology plaything – of course this says more about the difference in character between the two composers (Boulez and Berio) than about local obstacles to achieving a similar goal.

    Nevertheless, as a big and growing institution IRCAM has suffered from many organisational and structural problems, which have often become artistic problems, like the establishment of a quickly recognisable IRCAM style – again, technology, the means of producing music with technology, is never neutral, but affects every aspect of the creative compositional process.

    Computers are not neutral tools. Software and hardware impose their own architecture. As any composer who works with technology will tell you, when you are working on a new piece and sit down at the computer to do any of a number of things (analyse, design, edit and mix sounds, prepare your own compositional material using algorithms, note generators, etc., put together your performance environment and so on), your frame of mind changes, and you have to adapt and limit your thought processes to those that the machine, the software you’re using, will allow. It is all very well to say that if you need to alter the software you are interacting with, or if you are unhappy with it, you can modify it or write your own – in practice you can’t transform yourself into a programmer. Quite apart from the vast amount of time that would be needed, if you do it you will have to distance yourself even further from your musical mind, the one that initially triggered the need.

    So again, technology is not neutral. And if a composer works on the technology with a musical assistant, this creates another layer between the musical mind and the machine, another constraint. (Critique of IRCAM) At IRCAM and elsewhere, the main policy for many years has been to support composers through musical assistants, acting as a filter between the composer and the machine. It is no wonder that the software developed at IRCAM has never reached the simplicity of use, the fluidity, that one would expect from an institution with such great minds and resources behind it. The maintenance and documentation of the software has also always been very patchy. Obviously a pachydermic institution like IRCAM has a built-in inertia that makes change difficult, but creating simple tools for musicians has never been one of IRCAM’s priorities.

    Maintenance and preservation are also thorny issues, and with the Integra project we are trying to find a possible solution to them.

    The present
    Technology is the beast. From a philosophical standpoint, music technology offers a very exciting challenge: artists working with computers, altering the code, hacking it to realise things that were not planned by the software designers, fulfil the historical role of art – disrupting received knowledge, reordering and reassembling the symbols and objects of our society in an original, critical way. But can we apply to technology the same concepts that worked for art in the past? I doubt it.

    Technology is so embedded in our lives, yet we still feel distant from it. It’s here but it’s not here. We think we have the philosophical tools to dominate it, to explain and describe technology, but in reality we don’t. Technology is a self-feeding monster: what can be realised will be realised. There is no goal in technology, no purpose; everything is outside our human horizon of meaning. We are now learning to find a new vocabulary to deal with this monster, but it is early days.

    Technology has no memory; technology has no meaning. Why technology in music? We have to humanise it, and adopt standards. The fundamental issue with technology lies in its unlimited potential and its self-replicating nature: technology is inherently meaningless. If we are going to use it in music we will have to ask ourselves some hard questions. Why do we need it? How can it be musical? How can it be controlled? In order to be harnessed, technology should be brought back to a human dimension, and considered just like another musical instrument – a polymorphous one, to be sure, but still an instrument – that we can learn and play. To achieve this, we should simplify music technology and establish a standard vocabulary to describe it.

    The word “standard” is often disliked, but we should not forget that the musical instruments employed in our concerts are themselves “standard”, in fact quite limited ones: nevertheless, they allow the transmission of an extremely complex and diversified musical message.

    Integra is not alone in this effort towards more user-friendly technology, although it is only recently that usability, good interface design and a preoccupation with how humans interact with machines have started to appear in technology products. Sadly, as far as the history of music technology is concerned, we are still living in the colonisation phase. I like to compare our current experience with the Wild West: new territories are conquered every day, there are no common laws, and survival depends on individual initiative. And we are all still digging in search of that elusive gold mine.

    This explains the proliferation of do-it-yourself systems over the past three decades, when each work, even by the same composer, required a different technological setup (hardware, software, or both). The often-poor documentation of the electronic parts and the rapid obsolescence of the original hardware and software have prevented the adoption of a core repertoire of works using live electronics in mainstream concert programmes.

    Design – usability = make it simple! We need to trade the technological DIY approach (temporary, non-standard, often undocumented) for a user-centred approach, to ensure more performance opportunities and long-term preservation. Standards and limitations in technology can be an incentive for creativity.

    The future
    Integra project

    Integra environment outline

    Integra namespace – a class hierarchy describing all the modules, parameters and functionalities, including time. Built-in inheritance. Everything is an object. The namespace is OSC-compliant for interaction with other software, network performance, etc., but not internally. OSC is one of the possible implementations of the Integra namespace model.
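    Purely to illustrate the idea – the class names, paths and values below are hypothetical, not the real Integra namespace – here is a minimal Python sketch of an object hierarchy with built-in inheritance, where every node can render its address as an OSC-style path:

        class Node:
            """Base class: everything in the namespace is an object with a name and a parent."""
            def __init__(self, name, parent=None):
                self.name = name
                self.parent = parent

            def address(self):
                # Build an OSC-style path by walking up the hierarchy, e.g. /work/rev1/decay
                if self.parent is None:
                    return "/" + self.name
                return self.parent.address() + "/" + self.name

        class Parameter(Node):
            """A controllable value; inherits the addressing behaviour from Node."""
            def __init__(self, name, value, parent=None):
                super().__init__(name, parent)
                self.value = value

        class Module(Node):
            """A processing module: a named collection of parameters."""
            def __init__(self, name, parent=None):
                super().__init__(name, parent)
                self.parameters = {}

            def add_parameter(self, name, value):
                self.parameters[name] = Parameter(name, value, parent=self)
                return self.parameters[name]

        work = Node("work")
        reverb = Module("rev1", parent=work)
        decay = reverb.add_parameter("decay", 2.5)
        print(decay.address())   # -> /work/rev1/decay

    Only the outward-facing layer would need to translate such addresses into actual OSC messages; internally, as the notes above suggest, the model stays independent of any particular protocol.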

    Database [PostgreSQL]
    contains
    •Integra modules [the building blocks of the system]
    •Composition metadata [documentation on the work, the composer, the technical setup, etc.]
    •Composition performance data [control and audio network and signal flow, behaviour in performance]

    All data is encoded in XML format. All the XML files that constitute a work can be downloaded and will automatically generate modules and connections in the GUI.

    GUI [any graphic library; prototype realised with Max/MSP using a custom graphic library]
    •interface for the musician, fine-tuned for the three main modes of use: composition, rehearsal, live performance
    •modular and powerful: everything is an object, and every object can interact with everything else
    •extremely user-friendly
    •uses new paradigms to represent concurrent timelines [IanniX]
    •generates XML files
    •talks to the engine in real time
    •visualises processes in real time

    Engine [any DSP application supporting the Integra namespace]
    •runs Integra modules

Mixtur by Stockhausen at the South Bank

Down to London to listen to Mixtur and say hello to Thierry Coduys, responsible for the electronics together with Sound Intermedia (Ian Dearden and David Sheppard).

Mixtur is the daddy of live electronics… a late discovery for me. Some awkward moments (a funny trombone glissando up and down a perfect fourth that comes from nowhere, the long pauses) but beautiful complex timbres especially in the lowest register for cello, double bass and contra-bassoon. Conceptually it was fantastic in 1967, and it still retains some of that aura, although the music has aged. And the performance with the reversed order of the sections, thankfully played in the first half of the concert, just doesn’t work.


