lamberto coccioli

on music and beauty

Tag: live electronics

Seminar on live electronics at the Royal Academy of Music

Philip Cashian, Head of Composition at the Royal Academy of Music, kindly invited me to give a seminar to the students of his department. The title I chose for today's talk was “A new approach to composing music with live electronics”. I gave an overview of live electronics in practice, and of the challenges and frustrations that often accompany performances involving technology. Referring to my experience with Luciano Berio’s musical actions with live electronics (Outis, Cronaca del Luogo), I remarked on the sad absence of these seminal works from today's repertoire and outlined the challenges posed by technology in performing works created only 15-20 years ago. I went on to present the philosophy of the Integra project and its role in developing the Integra Live software, whose intention is to address the short life span and unfriendliness of live electronics systems developed in programming languages like Max.

Showing Integra Live in action, I was able to demonstrate how the software and its different views try to mimic the creative process and workflow of a composer: from an initial exploratory, imaginative phase (Module View), to a more structured stage where events start being organised in time (Arrange View), to rehearsal and finally performance (Live View), where everything is fixed and what matters most is reliability and control over every relevant aspect of the performance.

I hope I conveyed to the students my salient point: always ask yourself why you should use technology, and if you do, make sure it is born of your musical ideas and is an integral part of your musical thinking. I very much enjoyed the interaction with them; they were curious and lively, and asked interesting questions – among others, about the future of Integra Live in a hypothetical post-coding world, and (this one more technical) about using MIDI notes to control partials in the software's various spectral modules, highlighting the need for a built-in MIDI-note-to-frequency converter in all spectral modules. At the end of the seminar Philip took a straw poll among the students, and the overwhelming majority voted in favour of trying Integra Live in their own music. Not bad!
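For reference, the MIDI-note-to-frequency mapping the students asked about is the standard equal-temperament formula, f = 440 × 2^((n − 69)/12), with A4 assigned to MIDI note 69. A minimal sketch in Python – not Integra Live code, just an illustration of what such a built-in converter would compute:

```python
def midi_to_freq(note: float, a4: float = 440.0) -> float:
    """Convert a MIDI note number to a frequency in Hz
    (12-tone equal temperament, A4 = MIDI note 69)."""
    return a4 * 2.0 ** ((note - 69) / 12.0)

print(midi_to_freq(69))            # A4 → 440.0
print(midi_to_freq(81))            # A5, one octave up → 880.0
print(round(midi_to_freq(60), 2))  # middle C → 261.63
```

Feeding the result of this conversion to a spectral module's partial-frequency parameter is all the requested feature would need to do.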

The problem with MaxMSP

I had my first glimpse of what was then called simply Max in 1994, when my good friend Diego Dall’Osto introduced me to the software, but I only started using it in 1996, while working at Centro Tempo Reale in Florence. At first, like so many other composers, I was completely taken by the power and beauty of a programming language that allowed me to work in a graphical environment and test the results on the fly, without having to compile the code. Moreover, I had two wonderful mentors, Nicola Bernardini and Alvise Vidolin. They gave me generous advice and help, so that I was soon able to develop my own patches without any prior programming skills.

Soon, though, a number of issues with Max started to emerge, and in various ways they are still unresolved ten years later. To be fair, many of these issues depend on the way Max, now MaxMSP, is used, but I still find it surprising that David Zicarelli and his company have not acted more energetically to adapt the software to the needs of the growing Max community. I will look at MaxMSP from the two angles that interest me most, usability and sustainability, but first I will try to answer the question of who this software is written for.

I think that the main problem with MaxMSP is the fact that it sits in a sort of no man’s land between programming languages and software applications. It is too cumbersome and prescriptive as a programming language, but it lacks the user interface and consistent set of tools that we usually associate with commercial software packages. It may be retorted that this perceived weakness is in fact the main strength of MaxMSP: giving total freedom to artists and musicians so that they can develop their own interactive set-ups without the rigid constraints of commercial software, but also without the need to become programmers. My opinion is that in the long term, and looking at the way MaxMSP is now the de facto standard for performing music with live electronics, the problem has become more acute.

Composers who get past MaxMSP’s rather steep learning curve greedily embrace the program and start developing their patches, either from scratch or using existing objects and libraries made by members of the community. In the first case they often end up with very inefficient and buggy patches; in the second they create many dependencies, limiting the portability and sustainability of their work. Max is great at two things – experimenting with your ideas and prototyping virtual set-ups – but as soon as you enter production mode, it becomes quite unfriendly. There is a historical reason for this: Max was first developed at IRCAM, an institution characterised by a rather rigid separation between composers and music technology assistants. The idea was that composers dealt with the creative part, while the assistants provided a human interface to the technology tools. This meant that the code was looked after by the technologists, and composers didn’t need to engage directly with it. Also, a big institution like IRCAM ensured the long-term preservation of the works, by employing assistants to maintain and upgrade the patches as needed.

This initial dichotomy is part of MaxMSP’s genetic code: the software is used mainly by composers and artists, but is written for programmers. This is why I find it difficult to identify the target audience of the software: it is too complex and difficult to learn to be mastered fully by artists, but its true potential is wasted in the hands of programmers, who will also complain that as a development platform MaxMSP lacks many important features. In fact, I have yet to find a good application built with MaxMSP. So it looks like the target MaxMSP user is either a highly talented composer-technologist, equally versed in computer programming and music composition, or a creative team supplying the required skills between them. Not surprisingly, MaxMSP is frequently found in higher education.

Let’s look now at MaxMSP from the usability perspective. MaxMSP provides quite a number of graphic objects out of the box, and has recently added powerful new functions, like Java support, the ability to encapsulate and de-encapsulate patches, and the ability to create and save prototypes [template patches that can be reused everywhere]. Nevertheless, the actual user interface is entirely the responsibility of the user – there are no standard graphical user interface models or templates. The result is that a given patch – say a sound spatializer – can be realised in many different ways, each one providing a very different user experience. Testing and comparing patches is thus made very difficult, as the same spatializer engine can be presented omitting certain parameters altogether, or hiding them in remote subpatches. Sharing patches, or having your live electronics performed by someone else, is also compromised, since every user builds their patches according to their personal needs and taste. If you add the fact that MaxMSP has no easy way of commenting or documenting patches, you can see how hard it can sometimes be to reconstruct signal and control flow in a complex patch, even for the person who wrote it!

It is probably from the sustainability point of view that MaxMSP fares worst. The software gives artists and musicians the impression of being in control, but in fact locks them into a closed system that is difficult to scale, adapt or maintain over time. I’m talking here mainly from the perspective of live electronics concert performance, the kind of mission-critical application where everything has to work as planned. My experience over the years is that in order to keep working properly a MaxMSP patch has to be tweaked or rewritten every year or so, especially if it includes external dependencies. In some cases, objects and libraries are not upgraded when the software is, and an alternative must be found or developed from scratch. Conflicts between objects with the same name can also prevent patches from functioning properly.

As I said, MaxMSP is an invaluable tool for trying out ideas, experimenting and prototyping, but it falls short of usability and sustainability requirements – the two areas that matter most for a creative, musical use of the software and for the long-term preservation and maintenance of patches and the artistic works that depend on them. MaxMSP remains the first choice for musicians working with live electronics, but I think I have identified a gap that needs to be filled if we really want to empower musicians and offer them more accessible tools for interacting with technology.

Leverhulme application

We have just submitted to the Leverhulme Trust a funding application for a 5-year research plan at UCE Birmingham Conservatoire. Together with Jamie Bullock, we have identified usability and sustainability as the key areas of development in live electroacoustic music research for the foreseeable future.

Download the Leverhulme Research Leadership Award statement of research.

Music as Memory conference

Geir Johnson, Artistic Director of Ultima, the Oslo Contemporary Music Festival, invited me to give a talk on “Music and Technology: past, present and future” at the Music as Memory conference, on Friday 6 October 2006. The conference was part of this year’s Ultima Festival. I enjoyed listening to Geir’s profound, personal talk introducing the conference, and to Stein Henrichsen (BIT20 Ensemble and Opera Vest), Luca Francesconi, Lasse Thoresen and Asbjörn Schaathun, who also gave very interesting talks. Asbjörn offered an entertaining definition of a “perfect” creative person, obtained by combining the different talents and characters of the four Beatles. The notes for my talk follow below.

Music and Technology – past, present and future

Music as memory
The relationship with tradition – the interaction of current artistic trends with the past – is a central aspect of music making. Thanks to technology, composers, performers, indeed all musical actors like you – artistic directors and so on – have direct access to a wealth of resources that extends both on the geographical, horizontal plane and along a very long vertical axis towards the past. This three-dimensional, always-available, on-demand mapping of human creativity in the arts is an unprecedented feat that demands a complete rethinking of our relationship with the past, with musical tradition.

The same technology that allows us to explore and appropriate the musical universe in space and time has altered our perception of the world in many ways. Digital technology allows anything. When everything is equally available, what is the aesthetic, artistic value of a choice? How do we establish a dialogue with tradition in the current situation?
We live in the age of technology, after the ages of science, history, philosophy and religion.
Technology is overwhelming. It is also never neutral. How can we make sense of it? By bringing it back to a human dimension. How?

The Humanist Challenge:

  1. Jungian, ethnomusicological alternative – the consolation of archetypes
  2. ethical alternative – music with a message
  3. gestural alternative – music with the body

    I’ll try to give an answer to these questions later on in my talk, from the point of view of music technology.

    The past
    Music as memory – from the perspective of music with technology, the challenge is to allow music to become memory in the first place. The preservation of interactive, real-time, live electronics works is a daunting task that has to be tackled in a novel way.
    [If we look at electroacoustic music, the situation is comparatively good. Once the original analogue supports have been converted to digital, preservation is ensured. True, the passage from analogue to digital is difficult, and many works from the 50s and 60s were conceived with the idiosyncrasies of early recording, mixing and diffusion equipment in mind [another example of non-neutral technology!]. Those peculiarities became an essential part of the work, as has been shown time and again in the works of Berio, Stockhausen, Schaeffer, etc.]
    To get back to live electroacoustic music – as we should call live electronics – the obvious problems are the longevity of hardware and software technologies, their rapid change, the commercial, hence temporary, nature of many of the devices used, and the overall lack of documentation from composers and interpreters. Maintaining a piece that uses obsolete technology is very difficult, sometimes downright impossible. And if a piece is no longer performed, it ceases to be an active agent in cultural and musical life. This is too bad.

    The role of the research centres
    IRCAM – it has certainly helped to shape the contemporary music scene, and its contribution cannot be played down. Boulez managed to create something that has lasted and thrived for many years, expanding in new directions as time went by. If we compare this with the UK or the Italian experience, for example, IRCAM has been an outright success. In the UK, the efforts to create a National Centre for Electronic Music were never taken seriously by the government, and in Italy a place like Centro Tempo Reale in Florence, founded by the late Luciano Berio, never properly took off, and was widely regarded as just Berio’s own technology plaything – though of course this says more about the difference in character between the two composers (Boulez and Berio) than about local obstacles to achieving a similar goal.

    Nevertheless, as a growing big institution IRCAM has suffered from many organisational and structural problems, which have often become artistic problems, like the establishment of a quickly recognisable IRCAM style – again, technology, the means of producing music with technology, is never neutral, but affects every aspect of the creative compositional process.

    Computers are not neutral tools. Software and hardware impose their own architecture. As any composer who works with technology will tell you, when you are working on a new piece and sit down at the computer to do any of a number of things (analyse, design, edit and mix sounds; prepare your compositional material using algorithms, note generators, etc.; put together your performance environment; and so on), your frame of mind changes, and you have to adapt and limit your thought processes to those that the machine, the software you are using, will allow. It is all very well to say that if you need to alter the software you are interacting with, or if you are unhappy with it, you can modify it or write your own – but in practice you cannot transform yourself into a programmer. Quite apart from the vast amount of time that would be needed, if you do it you will have to distance yourself even further from your musical mind, the one that triggered the need in the first place.

    So again, technology is not neutral. And if a composer works on the technology with a musical assistant, this creates another layer between the musical mind and the machine, another constraint. (Critique of IRCAM) At IRCAM and elsewhere, the main policy has for many years been to support composers through musical assistants, acting as a filter between the composer and the machine. It is no wonder that the software developed at IRCAM has never reached the simplicity of use, the fluidity, that one would expect from an institution with such great minds and resources behind it. The maintenance and documentation of the software have also always been very patchy. Obviously, a pachydermic institution like IRCAM has a built-in inertia that makes change difficult, but creating simple tools for musicians has never been one of IRCAM’s priorities.

    Maintenance and preservation are also thorny issues. We are trying to find a possible solution to these issues with the Integra project.

    The present
    Technology is the beast. From a philosophical standpoint, music technology offers a very exciting challenge: artists working with computers, altering the code, hacking it to realise things that were not planned by the software designers, fulfil the historical role of art: disrupting received knowledge, reordering and reassembling the symbols and objects of our society in an original, critical way. But can we apply to technology the same concepts that worked for art in the past? I doubt it.

    Technology is so embedded in our lives, and yet we still feel distant from it. It’s here but it’s not here. We think we have the philosophical tools to dominate technology, to explain and describe it, but in reality we don’t. Technology is a self-feeding monster: whatever can be realised will be realised. There is no goal in technology, no purpose; everything is outside our human horizon of meaning. We are now learning to find a new vocabulary to deal with this monster, but it is early days.

    Technology has no memory; technology has no meaning. Why, then, technology in music? We have to humanise it, and adopt standards. The fundamental issue with technology lies in its unlimited potential and its self-replicating nature: technology is inherently meaningless. If we are going to use it in music we will have to ask ourselves some hard questions. Why do we need it? How can it be musical? How can it be controlled? In order to be harnessed, technology should be brought back to a human dimension and considered just like another musical instrument – a polymorphous one, to be sure, but still an instrument – that we can learn and play. To achieve this, we should simplify music technology and establish a standard vocabulary to describe it.

    The word “standard” is often disliked, but we should not forget that the musical instruments employed in our concerts are themselves “standard”, in fact quite limited ones: nevertheless, they allow the transmission of an extremely complex and diversified musical message.

    Integra is not alone in this effort towards more user-friendly technology, although it is only recently that usability, good interface design and a concern for how humans operate have started to appear in technology products. Sadly, as far as the history of music technology is concerned, we are still living in the colonisation phase. I like to compare our current experience to the Wild West: new territories are conquered every day, there are no common laws, and survival depends on individual initiative. And we are all still digging in search of that elusive gold mine.

    This explains the proliferation of do-it-yourself systems over the past three decades, when each work, even by the same composer, required a different technological setup (hardware, software, or both). The often-poor documentation of the electronic parts and the rapid obsolescence of the original hardware and software have prevented the adoption of a core repertoire of works using live electronics in mainstream concert programmes.

    Design – usability = make it simple! We need to trade the technological DIY approach (temporary, non-standard, often undocumented) for a user-centred approach, to ensure more performance opportunities and long-term preservation. Standards and limitations in technology can be an incentive for creativity.

    The future
    Integra project

    Integra environment outline

    Integra namespace – a class hierarchy describing all the modules, parameters and functionalities, including time. Built-in inheritance. Everything is an object. The namespace is OSC-compliant for interaction with other software, network performance, etc., but not internally: OSC is just one possible implementation of the Integra namespace model.
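To illustrate the idea of a class hierarchy where everything is an object and entities are addressable through OSC-style paths, here is a hypothetical sketch in Python. This is not Integra code: the class names (`Node`, `Module`) and the example module and parameter names are invented for the illustration.

```python
class Node:
    """Base class: every entity in the namespace is an object with a path."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}

    def path(self):
        # Build an OSC-style address by walking up the hierarchy,
        # e.g. /work1/delay1/delayTime
        if self.parent is None:
            return "/" + self.name
        return self.parent.path() + "/" + self.name

class Module(Node):
    """A processing module; parameters inherit path-building from Node."""
    def add_parameter(self, name, value):
        param = Node(name, parent=self)
        param.value = value
        self.children[name] = param
        return param

# A work containing one (hypothetical) delay module with one parameter:
work = Node("work1")
delay = Module("delay1", parent=work)
delay_time = delay.add_parameter("delayTime", 0.5)
print(delay_time.path())  # → /work1/delay1/delayTime
```

Because the addresses are plain OSC-compatible strings, an external application could set `delayTime` over the network, while internally the same hierarchy can be traversed as ordinary objects.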

    Database [PostgreSQL]
    contains
    • Integra modules [the building blocks of the system]
    • Composition metadata [documentation on the work, the composer, the technical setup, etc.]
    • Composition performance data [control and audio network and signal flow, behaviour in performance]

    All data is encoded in XML format. All the XML files that constitute a work can be downloaded and will automatically generate modules and connections in the GUI.
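As a sketch of how such XML files could generate modules and connections in the GUI, a loader might work as below. The element and attribute names in the sample are invented for illustration; the actual Integra schema may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical work description; the real Integra XML schema may differ.
SAMPLE = """
<composition name="Example">
  <module id="granulator1" class="Granulator"/>
  <module id="reverb1" class="Reverb"/>
  <connection from="granulator1.out" to="reverb1.in"/>
</composition>
"""

root = ET.fromstring(SAMPLE)
# Each <module> element would become a module instance in the GUI...
modules = {m.get("id"): m.get("class") for m in root.iter("module")}
# ...and each <connection> a patch cord between module ports.
connections = [(c.get("from"), c.get("to")) for c in root.iter("connection")]
print(modules)      # {'granulator1': 'Granulator', 'reverb1': 'Reverb'}
print(connections)  # [('granulator1.out', 'reverb1.in')]
```

The point of the design is that the XML is the single portable representation of a work: downloading the files is enough to reconstruct both the visual patch and the signal flow.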

    GUI [any graphics library; prototype realised with Max/MSP using a custom graphics library]
    • interface for the musician, fine-tuned for the three main modes of use: composition, rehearsal, live performance
    • modular and powerful: everything is an object, and every object can interact with everything else
    • extremely user-friendly
    • uses new paradigms to represent concurrent timelines [Iannix]
    • generates XML files
    • talks to the engine in real time
    • visualises processes in real time

    Engine [any DSP application supporting the Integra namespace]
    • runs Integra modules

Mixtur by Stockhausen at the South Bank

Down to London to listen to Mixtur and say hello to Thierry Coduys, responsible for the electronics together with Sound Intermedia (Ian Dearden and David Sheppard).

Mixtur is the daddy of live electronics… a late discovery for me. There are some awkward moments (a funny trombone glissando up and down a perfect fourth that comes from nowhere, the long pauses) but also beautiful complex timbres, especially in the lowest register for cello, double bass and contrabassoon. Conceptually it was fantastic in 1967, and it still retains some of that aura, although the music has aged. And the performance with the sections in reverse order, thankfully played in the first half of the concert, just doesn’t work.

© 2017 lamberto coccioli
