on music and beauty


Seminar on live electronics at the Royal Academy of Music

Philip Cashian, Head of Composition at the Royal Academy of Music, kindly invited me to give a seminar to the students of his department. The title I chose for the talk was “A new approach to composing music with live electronics”. I gave an overview of live electronics in practice, and of the challenges and frustrations that often accompany performances involving technology. Referring to my experience with Luciano Berio’s musical actions with live electronics (Outis, Cronaca del Luogo), I remarked on the sad absence of these seminal works from the repertoire today and outlined the challenges posed by technology in performing works created only 15-20 years ago. I went on to present the philosophy of the Integra project and its role in developing the Integra Live software, with the intention of addressing the caducity and unfriendliness of live electronic systems developed using programming languages like Max.

Showing Integra Live in action, I was able to demonstrate how the software and its different views try to mimic the creative process and workflow of a composer: from an initial exploratory, imaginative phase (Module View), to a more structured stage where events start being organised in time (Arrange View), to a rehearsal and finally a performance phase (Live View), where things are fixed and what matters most is reliability and control of every relevant aspect of the performance.

I hope I conveyed to the students my salient point: always ask yourself why you should use technology, and if you do, make sure it is born out of your musical ideas and is an integral part of your musical thinking. I very much enjoyed the interaction with them: they were curious and lively, and asked interesting questions, among others about the future of Integra Live in a hypothetical post-coding world, and – this one more technical – about using MIDI notes to control partials in the various spectral modules of the software, highlighting the need for a built-in MIDI note to frequency converter in all spectral modules. At the end of the seminar Philip took a straw poll among the students and the overwhelming majority voted in favour of trying Integra Live in their own music. Not bad!
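
For the record, the conversion the students were asking about is the standard equal-temperament mapping, with MIDI note 69 fixed at A = 440 Hz. The little Python sketch below only illustrates what such a built-in converter would compute; it is not part of Integra Live.

```python
def midi_to_freq(note, a4=440.0):
    """Convert a MIDI note number to a frequency in Hz (12-tone equal temperament).
    Note 69 is A4; each semitone multiplies the frequency by 2**(1/12)."""
    return a4 * 2 ** ((note - 69) / 12)

print(midi_to_freq(60))   # middle C, ~261.63 Hz
print(midi_to_freq(69))   # A4, 440.0 Hz
```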

Aluna electronics

This is the latest version of the live electronics of Aluna. It has been tested on a MacBook Pro with macOS High Sierra 10.13.6 and Max version 8.1.4.

Download the Aluna electronics zip archive

Top-level patcher: Aluna_performance.maxpat
Electronics tested with Max 8.1.4

All files, including Aluna_performance.maxpat, are contained in the Max8 folder.

External dependencies:

1. tap.vocoder~
from TapTools by Timothy Place, version 4 beta 2 26 April 2013
source: https://github.com/tap/TapTools/releases

2. analyzer~
by Tristan Jehan, 64 bit version by Volker Böhm 22 June 2018
source: http://vboehm.net/downloads/

3. Lcatch, Lscale, Ltocoll
by Peter Elsea, from the Lobjects collection, 24 February 2015
source: http://peterelsea.com/lobjects.html

4. vdb~ and envfollo-float~
abstractions by Benjamin Thigpen, part of the “bennies” collection, IRCAM Forum distribution
source: https://forum.ircam.fr

Audio files:

blood_orch.aif
blood_viola.aif
ocean_breakers.aif
ocean_calm.aif
ocean_waves.aif
oceanvocoder.aif
viola.aif (MIDI simulation of the solo viola part)

LC, 25 June 2020

Aluna in depth

Aluna is a 30′ work for viola, ensemble and live electronics written for renowned viola virtuoso Rivka Golani. In Aluna, instrumentation, form, harmonic structure, and the use of technology all derive from a single, powerful model: the rich and complex creation myth of the Kogi, an indigenous community living on the slopes of the Sierra Nevada de Santa Marta in northern Colombia.

The starting idea for Aluna was a ritual sequence where a female figure would lead a group of followers through a transformative process. From a musical perspective the solo viola would represent the leading female figure, the ensemble her people, and the electronics the transformative tools. Conceptually the music of the solo viola generates all the musical material of the piece, while the electronics setup allows the soloist to transform her own sound and the sound of the ensemble through her musical gestures.

Gerardo Reichel-Dolmatoff was the first anthropologist to disentangle the complexities of the Kogi mythology and to notice the uniqueness of their culture in the wider perspective of South American ethnology. The Kogi creation myth is based on a sequence of nine worlds, representing both a chronological series of events and the permanent structure of their universe. A graphical representation of the latter would consist of two conical shapes joined at the base, with the smaller worlds at the top and the bottom, and at the centre the largest, the fifth world, our world. Accordingly, Aluna is composed of nine sections of related lengths. An introduction (“Sea”) and a conclusion (“Blood”) frame the nine “Worlds”. Each of the “worlds” links to the following one through a short “weaving” section.

The importance of weaving in the Kogi culture cannot be overemphasised. Every aspect of their physical and spiritual life is dominated by the act of weaving, and the creation myth as a whole can be visualised as a giant spindle spinning around an imaginary axis passing through the highest peak of the Sierra Nevada.

The music of Aluna is derived entirely from an interwoven fabric of nine harmonic fields, built from nine intervals of a perfect fifth. This harmonic structure expands and contracts according to the form of the piece, but maintains its ritual, static character throughout.
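
As a purely illustrative aside – not a description of the actual fields used in Aluna, which I am not reproducing here – one naive way of building nine pitch-class fields out of stacked perfect fifths could look like this in Python:

```python
def harmonic_fields(root=0, n_fields=9, n_fifths=9):
    """Build n_fields pitch-class sets, each a stack of n_fifths perfect fifths
    (7 semitones), folded into one octave. Hypothetical illustration only."""
    fields = []
    for k in range(n_fields):
        start = (root + 7 * k) % 12      # hypothetical choice: each field starts a fifth higher
        fields.append(sorted({(start + 7 * i) % 12 for i in range(n_fifths)}))
    return fields

for i, field in enumerate(harmonic_fields(), 1):
    print(f"field {i}: {field}")
```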

For Aluna I developed an interface written in MaxMSP. It performs a realtime spectral analysis and resynthesis of the viola sound, and spatialises the resulting FFT bins individually. The trajectories reflect the weaving paradigm, both in space and in frequency. The generative nature of the viola is reflected in how spectral and amplitude data from the instrument are used to control and transform the sound of the orchestra through a bank of dynamic resonant filters, spectral delays and a convolution engine.
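
To make the idea of spatialising individual FFT bins more concrete, here is a minimal Python sketch of the principle – analysis, per-bin panning, resynthesis. It is not the Max patch used in Aluna, and the sinusoidal “weaving” trajectory is only a placeholder:

```python
import numpy as np

def spatialise_bins(frame, n_fft=1024):
    """Toy per-bin spatialisation: analyse a mono frame, give every FFT bin its
    own stereo position, and resynthesise a two-channel frame."""
    window = np.hanning(n_fft)
    spectrum = np.fft.rfft(frame * window)                       # analysis
    n_bins = spectrum.shape[0]
    pan = 0.5 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, n_bins))  # placeholder "weaving" trajectory
    left = np.fft.irfft(spectrum * np.sqrt(1 - pan), n=n_fft)    # resynthesis, left channel
    right = np.fft.irfft(spectrum * np.sqrt(pan), n=n_fft)       # resynthesis, right channel
    return np.stack([left, right])

# Example: one frame of a 440 Hz test tone standing in for the live viola input.
t = np.arange(1024) / 44100
stereo_frame = spatialise_bins(np.sin(2 * np.pi * 440 * t))
```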

The problem with MaxMSP

I had my first glimpse of what was then called simply Max in 1994, when my good friend Diego Dall’Osto introduced me to the software, but I only started using it in 1996, while working at Tempo Reale in Florence. At first, like so many other composers, I was completely taken by the power and beauty of a programming language that allowed me to work in a graphical environment and test the results on the fly, without having to compile the code. Moreover, I had two wonderful mentors, Nicola Bernardini and Alvise Vidolin. They gave me generous advice and help, so that I was soon able to develop my own patches without any prior programming skills.

Soon, though, a number of issues with Max started to emerge, and in various ways they are still unresolved ten years later. To be fair, many of these issues depend on the way Max, now MaxMSP, is used, but I still find it surprising that David Zicarelli and his company have not acted more energetically to adapt the software to the needs of the growing Max community. I will look at MaxMSP from the two angles that interest me most, usability and sustainability, but first I will try to answer the question of who this software is written for.

I think that the main problem with MaxMSP is the fact that it sits in a sort of no man’s land between programming languages and software applications. It is too cumbersome and prescriptive as a programming language, but it lacks the user interface and the consistent set of tools that we usually associate with commercial software packages. It may be objected that this perceived weakness is in fact the main strength of MaxMSP: it gives total freedom to artists and musicians, so that they can develop their own interactive set-ups without the rigid constraints of commercial software, but also without the need to become programmers. My opinion is that in the long term, and looking at the way MaxMSP has become the de facto standard for performing music with live electronics, the problem has become more acute.

Composers who get past MaxMSP’s rather steep learning curve greedily embrace the programme and start developing their patches, either from scratch or using existing objects and libraries by members of the community. In the first case they often end up with very inefficient and buggy patches; in the second they create many dependencies, limiting the portability and sustainability of their work. Max is great at two things – experimenting with your ideas and prototyping virtual set-ups – but as soon as you enter production mode it becomes quite unfriendly. There is a historical reason for this: Max was first developed at IRCAM, an institution characterised by a rather rigid separation between composers and music technology assistants. The idea was that composers dealt with the creative part, while the assistants provided a human interface to the technology tools. This meant that the code was looked after by the technologists, and composers didn’t need to engage directly with it. A big institution like IRCAM also ensured the long-term preservation of the works, by employing assistants to maintain and upgrade the patches as needed.

This initial dichotomy is part of MaxMSP’s genetic code: the software is used mainly by composers and artists, but is written for programmers. This is why I find it difficult to identify the target audience of the software: it is too complex and difficult to learn to be mastered fully by artists, but its true potential is wasted in the hands of programmers, who will also complain that as a development platform MaxMSP lacks many important features. In fact, I haven’t yet found a good application built with MaxMSP. So it looks like the target MaxMSP user is either a highly talented composer-technologist, equally versed in computer programming and music composition, or a creative team supplying the required skills. Not surprisingly, MaxMSP is frequently found in higher education.

Let’s look now at MaxMSP from the usability perspective. MaxMSP provides quite a number of graphic objects out of the box, and has recently added powerful new functions, like Java support, the ability to encapsulate and de-encapsulate patches, and the ability to create and save prototypes (template patches that can be reused everywhere). Nevertheless, the actual user interface is entirely the responsibility of the user – there are no standard graphical user interface models or templates. The result is that a given patch – say a sound spatializer – can be realised in many different ways, each one providing a very different user experience. Testing and comparing patches is thus made very difficult, as the same spatializer engine can be visualised omitting certain parameters altogether or hiding them in remote subpatches. Sharing patches, or having your live electronics performed by someone else, is also compromised, since every user builds their patches according to their personal needs and taste. If you add the fact that MaxMSP has no easy way of commenting or documenting patches, you see how hard it can sometimes be to reconstruct signal and control flow in a complex patch, even for the person who wrote it!

It is probably from the sustainability point of view that MaxMSP fares worst. The software gives artists and musicians the impression of being in control, but in fact locks them into a closed system that is difficult to scale, adapt or maintain over time. I’m talking here mainly from the perspective of live electronics concert performance, the kind of mission-critical application where everything has to work as planned. My experience over the years is that in order to keep working properly a MaxMSP patch has to be tweaked or rewritten every year or so, especially if external dependencies are included in the patch. In some cases objects and libraries are not upgraded when the software is, and an alternative must be found or developed from scratch. Conflicts between objects with the same name can also prevent patches from functioning properly.
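
As an aside, keeping track of those external dependencies can at least be partly automated. The rough Python sketch below lists the object names used in a patch; it assumes the JSON-based .maxpat format introduced with Max 5, and its list of built-in objects is deliberately tiny, so treat it as a starting point rather than a tool:

```python
import json
import sys
from pathlib import Path

# Deliberately tiny, illustrative list of built-in objects; a real check would need the full set.
BUILTINS = {"cycle~", "dac~", "adc~", "gain~", "metro", "route", "pack", "unpack"}

def objects_in_patch(patch_path):
    """Collect the names of objects instantiated in a .maxpat file.
    Assumes the Max 5+ JSON format, where object boxes carry a 'maxclass'
    of 'newobj' and their creation arguments in a 'text' field."""
    patch = json.loads(Path(patch_path).read_text())
    names = set()

    def walk(node):
        if isinstance(node, dict):
            if node.get("maxclass") == "newobj" and "text" in node:
                names.add(node["text"].split()[0])   # first token is the object name
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(patch)
    return sorted(names - BUILTINS)

if __name__ == "__main__":
    for name in objects_in_patch(sys.argv[1]):
        print(name)
```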

As I said, MaxMSP is an invaluable tool for trying out ideas, experimenting and prototyping, but falls short of usability and sustainability requirements, the two areas that matter most for a creative, musical use of the software and for the long-term preservation and maintenance of patches and the artistic works that depend on them. MaxMSP remains the first choice for musicians working with live electronics, but I think I have identified a gap that needs to be filled if we really want to empower musicians and offer them more accessible tools for interacting with technology.

Intercos


In 2000 AGON invited me to compose the music and sound effects for a six-room interactive installation to be realised at Bologna’s COSMOPROF international cosmetics fair. Intercos, the company that commissioned the installation, wanted to create an immersive environment around a specific narrative.
Memory, Garden, Irony, Seduction, Laziness and Future were the six themes, one for each room.

The music I wrote consisted of six loops of the same length, one for each room. The sense of continuity was given by a common harmonic structure shared by all the loops, while rhythms and instrumentation were very different, according to each room’s theme. Various models of interaction were implemented in the installation, the most interesting one being the Future room, where the movement of the visitors’ hands affected both the visuals projected on the facing wall and the sounds diffused by the loudspeakers.
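
Purely as a hypothetical Python sketch of the kind of relationship involved – the actual sensors and parameter names used in the installation are not documented here – hand position could drive one visual and one audio parameter each:

```python
def map_hand_position(x, y):
    """Hypothetical mapping: normalised hand coordinates (0..1) drive the
    brightness of the projection and the cutoff of a lowpass filter."""
    brightness = y                   # raising the hand brightens the image
    cutoff_hz = 200.0 + x * 4800.0   # left-to-right movement opens the filter
    return {"brightness": brightness, "cutoff_hz": cutoff_hz}

print(map_hand_position(0.25, 0.8))
```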

I collaborated on this installation project with Michele Tadini, Paolo Solcia and Andrea Taglia from AGON.

Organised Sound article

Modernising musical works involving Yamaha DX-based synthesis: a case study

This article, written in collaboration with Jamie Bullock, was published in Organised Sound, issue no. 5, 2006.

We describe a new approach to performing musical works that use Yamaha DX7-based synthesis. We also present an implementation of this approach in a performance system for Madonna of Winter and Spring by Jonathan Harvey. The Integra Project, “A European Composition and Performance Environment for Sharing Live Music Technologies” (a three-year co-operation agreement part-financed by the European Commission, ref. 2005-849), is introduced as a framework for reducing the difficulties of modernising and preserving works that use live electronics.
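
For context, the Yamaha DX7 builds its sounds from six sine-wave operators arranged in modulation “algorithms” (technically phase modulation). The Python fragment below is only a two-operator toy illustrating the principle; it is not a recreation of any patch from Madonna of Winter and Spring:

```python
import numpy as np

def fm_tone(freq=220.0, ratio=2.0, index=3.0, dur=1.0, sr=44100):
    """Minimal two-operator FM pair: a modulator at freq * ratio phase-modulates
    a carrier at freq. The real DX7 chains six operators per algorithm."""
    t = np.arange(int(dur * sr)) / sr
    envelope = np.exp(-3.0 * t)                               # crude exponential decay
    modulator = index * np.sin(2 * np.pi * freq * ratio * t)  # modulating operator
    return envelope * np.sin(2 * np.pi * freq * t + modulator)

tone = fm_tone()   # a bell-like test tone, nothing more
```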

Download the Organised Sound article

ICMC 2005 Harvey paper

Modernising live electronics technology in the works of Jonathan Harvey

This paper was written together with Jamie Bullock and presented at the 2005 International Computer Music Conference in Barcelona. Here follows the abstract:

Many twentieth century works composed for instruments and live electronics are seldom performed due to their use of near obsolete technology. Some performing bodies avoid such works because the necessary technology is either unavailable or too expensive to hire. This paper describes the current status of a project to modernise the technical aspects of Jonathan Harvey’s works in order to increase the likelihood of performance and improve longevity. The technical and ideological implications of the project are discussed in the context of a broader need for the preservation of contemporary works involving technology. The use of open source software and standard protocols is proposed as a way of reducing technological obsolescence. New solutions for two of Harvey’s works are proposed, and discussed in relation to the problems encountered with the project so far. Conclusions are then drawn about the current status of the project and its implications for further work.

Download the ICMC paper.

Carnegie Hall lecture

Notes for a talk I gave on works by Berio and Maderna.

Pre-concert Lecture, The Luciano Berio – Tempo Reale Workshop
Carnegie Hall, New York, October 1997

The three compositions that are going to be played tonight are of a very different nature: a new work for two soloists and orchestra, Alternatim; a short piece for small instrumental ensemble, Serenata per un satellite; and Ofanim, a long composition for two children’s choirs, a female voice, two instrumental groups and live electronics. Those of you who are used to coming to this hall will notice the differences: we don’t want to show off the technology, but you will nonetheless see a certain number of loudspeakers all around the place, plus a – quite well hidden, I must say – mixing desk in the parquet. I will explain to you in a short while what the purpose of those devices is.

Tonight we will take part, in fact, in three different musical experiences: one – listening to Serenata per un satellite by Bruno Maderna – where the musical gestures are fixed, and repeated at leisure by the performers. Another – Alternatim by Luciano Berio – where two solo instruments, a clarinet and a viola, generate the orchestral landscape from their melodies and figurations, amplifying and giving more resonance and greater scope to their solo discourse. And a third – Ofanim, again by Berio – where technology is employed to amplify and clarify the complex texture of sounds, and to place the bold musical gestures of the score in a completely new acoustical scenario.

Why am I talking of gestures, of musical gestures? This is a very important point, and one that I’d like to stress, because it is very close to the heart of Luciano Berio’s musical thought. When an instrument plays, or a voice sings, it makes gestures. The physical act that translates the written page into sounds is heavy with meaning for us, and a deep knowledge of the meaning of a musical gesture is necessary if we want to control it, to master the rhetoric of the instrumental tradition without being mastered by it. Luciano Berio has always been well aware of the powerful meaning of our musical tradition, be it Western classical, folk or ethnic, and he knows how to cope with it, and how to use it for his artistic purposes. This is why his music so often reaches the audience with great, immediate force, without compromising a rich and complex musical language. It really works on two layers: one of strong communication through the subtle control of every possible musical gesture, the other of a composite musical fabric where both instrumental and formal experiments are carried out in depth. In other words, an abstract approach and a concrete one meet in a multi-layered musical experience.

This in part is true of the work of Bruno Maderna too, whose Serenata per un satellite, in the version realised by Paul Roberts, will be played tonight.
Serenade – it’s a rather uncommon term for a composition of contemporary music, especially in the Fifties, when the titles were more like Structures, Mutations, Kontrapunkte: very hard titles, in a way, for a music that didn’t want to compromise with the emotions and, for that matter, with the past. In fact Berio too wrote a Serenade during the Fifties, and so did Maderna, before writing the one we’ll listen to tonight. Maderna wrote several Serenatas during his life – four in all – and the last one, called Juilliard Serenade, was composed for the famous New York music school in 1971.

It is interesting to quote what Berio himself said on the subject of Serenade: “In the Fifties the composers were deeply involved with the search for structural references and a new serial order; the face of music was always grouchy. Bruno Maderna’s Serenata and mine were the first to come out after the war. They seem to me the first examples in which serial music becomes more relaxed and shows a less severe aspect.”

It was a sign that something had changed in the music of those years, something that gave way to a happier, less abstract approach to composition. These words portray both men, really, and allow us to understand one of the strong elements always present in Berio’s music: its lack of ideological “partis pris”, of prejudices of every sort. Music, according to Berio and to Maderna too (he was a child prodigy, conducting his first concert at eight and playing the violin at La Scala when he was seven years old), can only be approached by taking into proper account the fact that it has to be performed, and listened to. Too abstract an approach, too scientific, simply will not work. In music, more so than in any other art form, the abstract and the practical meet.

In music the concrete is the idea, and vice versa. In the musical experience there is always a drama hidden beneath the surface. The players, through their musical gestures, convey to the audience the ideas of the composer, adding a theatrical dimension to the music – that is to say, the performance. Not to be aware of this, as a composer, can impoverish the music, deplete it of a fundamental dimension that will always be there, even if the composer ignores it.

Going back to our serenades, it is clear then that such an old-fashioned title was a kind of provocation against the clichés of new music. Maderna was well aware of this. He was a witty man, as Serenata per un satellite shows. The idea of the piece is to have a set of different musical phrases or figures that are to be played in any order, together, divided into small groups, or one instrument at a time. There is complete freedom in the construction of the piece, then, but the notes cannot be changed. This is typical of Maderna’s approach to what is called aleatoric technique in music, a technique that John Cage used intensively during his whole life. Maderna always wrote down the notes, and the freedom he left the performers was always confined to the order of the events, their duration, or their superimposition.

Serenata per un satellite is also really a conductor’s game, a piece that Pierre Boulez would have liked. There are a number of musical figures that can be played freely by any instrument – ad libitum, as it were – and the conductor is a bit like the co-ordinator of this musical traffic, starting and stopping the players. The phrases to be played are all presented on a beautiful manuscript page written by Maderna himself, where they interweave and bend in every direction. The interplay of these lines makes up the piece, a witty and intelligent musical joke that in the hands of good and inspired performers like tonight’s can become a small masterwork full of humour.

Alternatim takes its title from a technique of medieval music, commonly found in the European tradition until the fifteenth century. Guillaume Dufay, among many others, employed it in his motet written for the opening of the church of Santa Maria del Fiore in Florence. The technique consisted in alternating polyphony and monody, soloists and organ. Here we have two solo players, a viola and a clarinet, and an orchestra with strings, brass and winds, but no percussion. Two questions immediately spring to mind: first, how does this work relate to the tradition of the concerto – or double concerto, to be precise – and second, what is the relationship between the two soloists and the orchestra?

The answer to the first question lies in the very nature of the classical concerto – seen as a display of instrumental virtuosity and intelligence, and always very homogeneous in its nature. As Berio himself says, there is no longer a way to establish homogeneity of meaning between one or more soloists and a mass of musicians of different density or nature – such as existed in Baroque, Classical, and Romantic concertos, when the “individual” and the “mass” could practically say the same thing despite their completely different densities and acoustic characters. Today the relationship between soloist and orchestra is a problem that must ever be solved anew, and the word concerto can be taken only as a metaphor.

These are bold statements, and not everybody will agree with them. Well, it is indeed possible to write a concerto today, but the composer has to take into account that the reassuring unity of the classical concerto is lost forever. Berio’s answer to this dilemma is to make the soloist – or the soloists, as is the case in Alternatim – the starting point of the work, from which the whole musical journey originates. In other words, the musical lines played by the soloists engender – create, in a way – the musical functions of the whole orchestra.

The choice of the clarinet and the viola is a telling one: they are the real chameleons of the orchestra, and better than other instruments they can act as a link between different instrumental families. Like many contemporary composers, Berio has never been interested in the instrumental families we find in orchestration textbooks, but has always explored what we can call sound families, the families that highlight analogies between instruments that are normally very distant from each other. The clarinet and the viola are probably the most useful instruments for an exploration of sound families, given their different registers and their not too specific or confined sound. Think of a violin, or a piano, or even an oboe, and you’ll see how difficult it is to find similarities with instruments of other families. It is possible, yes, but the viola and clarinet have many more choices for interacting with other members of the orchestra.

As I said before, the lines played by the soloists are the starting point of the piece. We could define those lines as melodies, but the term “line” is less charged with meaning, and probably better explains their role in generating different musical events for the orchestra. We could think of the line as a kind of complex melody – and a melody is rich and interesting when it implies many different musical functions.

Let’s consider Bach’s music for solo cello, solo violin or solo flute as an example. In those works a melody always implies a strong polyphonic texture, as the instrument jumps from one register to another, carrying different independent lines and at the same time merging them into a single one. On the other hand, think of the importance of the theme during the Classical and Romantic eras. A musical theme shaped and ordered all the material of a sonata or symphony movement. Melody in classical music has always hidden many powerful functions affecting all the elements of composition. That is why a melody written today needs to have the same range of different musical functions.

In Alternatim one of the basic areas of investigation is the relationship between the soloists and the orchestra, that is to say how a monodic line transforms itself into a polyphonic texture, into a complex musical fabric. If we take a quick look at the other works written by Berio for solo instruments and orchestra, we always find different solutions to this challenging problem. I’ll point out two examples: the series of the Chemins, where the original Sequenza for solo instrument is transcribed, projected, into the orchestral field; and the radical solution of Coro, where there are forty voices and forty instruments, and every single instrument is coupled with a different singer. In Alternatim we have this beautiful melodic line that starts with a series of leaps of a fourth, both perfect and augmented, an interval that comes back all the time in the course of the work. Why this insistence?

The historical importance of the interval of a fourth cannot be overstated: it is like a bridge that links the oldest European music, the music of the Middle Ages, with the music of the beginning of our century, primarily Debussy and Schoenberg, but also Stravinsky and Scriabin. In between we have the supremacy of the classical tonal language, based on the interval of a third, as in a major or minor chord. The relationship with musical history in Berio is never an innocent one: if he chooses to work with certain elements it is because he wants to bring to the surface their hidden power, and make them react with other, rather different, musical objects. In a way this is an approach very dear to another great composer of our century, Igor Stravinsky, but Berio extends the scope of this musical investigation further and reaches new territories.

This initial melodic line is an ever-present element of Alternatim, and comes back always different, but always recognisable. This gives me the opportunity to spend a few words on the fertile idea of redundancy in music, an idea that I’m sure Berio has spent some time investigating. The repetition, the coming back of the same element, is a very strong feature of music of all times, and a fundamental way of communicating musical ideas. It has everything to do with perception, and the way we listen to music. Contemporary composers for a long time shunned the very idea of repeating anything. Without repetition, though, there can be no comprehension. This is especially true of a music that is not written using the tonal language, that powerful – still powerful – tool for giving the ear guidance. Redundancy, the repetition of the same musical element, be it a line, a series of chords, or a rhythmic pattern, becomes a way of helping the listener grasp the musical thought of a composer.

At the beginning of this discussion I mentioned the machines in the hall. I will now explain why we have filled Carnegie Hall with these big, black loudspeakers, and what their purpose is. But let me first say a word on the relation between technology and music. Music made with electro-acoustic machines and devices has been around for more than forty years now, and Berio himself, as he said to those of you who were taking part in the workshop this afternoon, started writing electronic music during the Fifties. Yet I think that a fundamental difference exists between that period and today. In those years it was the composers who started experimenting, and they created the demand for new machines to realise their musical experiments. In a way, musical thought guided the birth of new machines and their characteristics, so that there was a situation comparable to the introduction of new instruments during the preceding centuries. Nowadays, on the contrary, and for quite a long time now – probably since the end of the Seventies – electronic machines have become an incredibly useful tool for commercial music and have ceased to be under the direct influence of musical thought. In fact, it is musical thought that has started to run after new technologies, trying to cope with the startling number of new machines coming out all the time.

Structure of the piece: alternating great density and calmer moments – static, harmonically static.

Electronics in the piece: harmonizers, delay, spatialization.

Functions: amplifying, clarifying the harmonic structure, emphasising the structure, and amplifying the expressive range, through amplitude and density.
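
Purely as an illustration of one of the treatment families listed in these notes – and emphatically not Berio’s actual Ofanim setup – a bare-bones feedback delay looks like this in Python:

```python
import numpy as np

def feedback_delay(dry, sr=44100, delay_s=0.35, feedback=0.4, mix=0.5):
    """Toy mono feedback delay: wet[n] = dry[n] + feedback * wet[n - d].
    A stand-in for the far richer harmonizer/delay/spatialization chain of Ofanim."""
    d = int(delay_s * sr)
    wet = np.asarray(dry, dtype=float).copy()
    for n in range(d, len(wet)):
        wet[n] += feedback * wet[n - d]
    return (1 - mix) * np.asarray(dry, dtype=float) + mix * wet

click = np.zeros(44100)
click[0] = 1.0
echoes = feedback_delay(click)   # an impulse followed by decaying repeats
```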

Ofanim is a piece that can be played in many different spaces, and every time we perform it in a different space the music changes, according to many different factors. And every time we learn more about the relationship of sounds with space. It is really a work in progress, but one whose many faces, corresponding to its successive performances, always – we hope – convey the same musical meaning. Because, I will say it once again, live electronics technology should always be part of a wider musical vision, and it should really act as an amplifier – in every direction – of a musical meaning that, even if only sketched, has to be there already, in the score.

© 1997 Lamberto Coccioli

Leverhulme application

We have just submitted to the Leverhulme Trust a funding application for a five-year research plan at UCE Birmingham Conservatoire. Jamie Bullock and I have identified usability and sustainability as the key areas of development in the field of live electroacoustic music research for the foreseeable future.

The Dancing Pig

Shortly after arriving in Birmingham I was involved in the realisation of The Dancing Pig, in collaboration with Mark Lockett, a wonderfully polymorphous musician, and Roy Kwabena, then Birmingham’s poet laureate, and a brilliant story-teller originally from Trinidad.

The Dancing Pig is an Indonesian story about two curious children and a witch. I worked on a live electronics setup to transform Roy’s narrating voice and some of the gamelan instruments, to give the story and the music an eerie, otherworldly quality.
