on music and beauty

Tag: MaxMSP

Seminar on live electronics at the Royal Academy of Music

Philip Cashian, Head of Composition at the Royal Academy of Music, kindly invited me to give a seminar to the students of his department. The title I chose for the talk was “A new approach to composing music with live electronics”. I gave an overview of live electronics in practice, and of the challenges and frustrations that often accompany performances involving technology. Referring to my experience with Luciano Berio’s musical actions with live electronics (Outis, Cronaca del Luogo), I remarked on the sad absence of these seminal works from today’s repertoire and outlined the challenges posed by technology in performing works created only 15-20 years ago. I went on to present the philosophy of the Integra project and its role in developing the Integra Live software, with the intention of addressing the caducity and unfriendliness of live electronic systems developed using programming languages like Max.

Showing Integra Live in action, I was able to demonstrate how the software and its different views try to mimic the creative process and workflow of a composer: from an initial exploratory, imaginative phase (Module View), to a more structured stage where events start being organised in time (Arrange View), to a rehearsal and finally a performance phase (Live View), where everything is fixed and what matters most is reliability and control over every relevant aspect of the performance.

I hope I conveyed to the students my salient point: always ask yourself why you should use technology, and if you do, make sure it is born out of your musical ideas and is an integral part of your musical thinking. I very much enjoyed the interaction with them; they were curious and lively, and asked interesting questions, among others about the future of Integra Live in a hypothetical post-coding world and – this one more technical – about using MIDI notes to control partials in the various spectral modules of the software, highlighting the need for a built-in MIDI note to frequency converter in all spectral modules. At the end of the seminar Philip took a straw poll among the students and the overwhelming majority voted in favour of trying Integra Live in their own music. Not bad!
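
For context, the conversion the students were asking about is the standard equal-temperament mapping f = 440 × 2^((n − 69) / 12). A minimal sketch of what such a built-in converter would compute (the function name and the 440 Hz reference for A4 are my own illustrative choices):

    # Standard equal-temperament MIDI-note-to-frequency conversion.
    # A4 (MIDI note 69) is assumed to be tuned to 440 Hz.
    A4_NOTE = 69
    A4_FREQ = 440.0

    def midi_to_freq(note: float) -> float:
        """Return the frequency in Hz of a (possibly fractional) MIDI note."""
        return A4_FREQ * 2.0 ** ((note - A4_NOTE) / 12.0)

    # Example: middle C (MIDI note 60) driving one partial of a spectral module
    print(round(midi_to_freq(60), 2))  # 261.63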

Aluna electronics

This is the latest version of the live electronics of Aluna. It has been tested on a MacBook Pro with macOS High Sierra 10.13.6 and Max version 8.1.4.

Download the Aluna electronics zip archive

Top-level patcher: Aluna_performance.maxpat
Electronics tested with Max 8.1.4

All files, including Aluna_performance.maxpat, are contained in the Max8 folder.

External dependencies:

1. tap.vocoder~
from TapTools by Timothy Place, version 4 beta 2, 26 April 2013
source: https://github.com/tap/TapTools/releases

2. analyzer~
by Tristan Jehan, 64-bit version by Volker Böhm, 22 June 2018
source: http://vboehm.net/downloads/

3. Lcatch, Lscale, Ltocoll
by Peter Elsea, from the Lobjects collection, 24 February 2015
source: http://peterelsea.com/lobjects.html

4. vdb~ and envfollo-float~
abstractions by Benjamin Thigpen, part of the “bennies” collection, IRCAM Forum distribution
source: https://forum.ircam.fr

Audiofiles:

blood_orch.aif
blood_viola.aif
ocean_breakers.aif
ocean_calm.aif
ocean_waves.aif
oceanvocoder.aif
viola.aif (MIDI simulation of the solo viola part)

LC, 25 June 2020

The problem with MaxMSP

I had my first glimpse of what was then called simply Max in 1994, when my good friend Diego Dall’Osto introduced me to the software, but I only started using it in 1996, while working at Tempo Reale in Florence. At first, like so many other composers, I was completely taken by the power and beauty of a programming language that allowed me to work in a graphical environment and test the results on the fly, without having to compile the code. Moreover, I had two wonderful mentors, Nicola Bernardini and Alvise Vidolin, who gave me generous advice and help, so that I was soon able to develop my own patches without any prior programming skills.

Soon, though, a number of issues with Max started to emerge, and in various ways, they are still unresolved ten years later. To be fair, many of the issues depend on the way Max, now MaxMSP, is used, but I still find it surprising that David Zicarelli and his company have not acted more energetically to adapt the software to the needs of the growing Max community. I will look at MaxMSP from the two angles that interest me most, usability and sustainability, but first I will try to answer the question of whom this software is written for.

I think that the main problem with MaxMSP is the fact that it sits in a sort of no man’s land between programming languages and software applications. It is too cumbersome and prescriptive as a programming language, but it lacks the user interface and the consistent set of tools that we usually associate with commercial software packages. It may be retorted that this perceived weakness is in fact the main strength of MaxMSP: it gives total freedom to artists and musicians, so that they can develop their own interactive set-ups without the rigid constraints of commercial software but also without the need to become programmers. My opinion is that, with MaxMSP now the de facto standard for performing music with live electronics, the problem has only become more acute in the long term.

Composers who get past MaxMSP’s rather steep learning curve greedily embrace the programme and start developing their patches, either from scratch or using existing objects and libraries by members of the community. In the first case they often end up with very inefficient and buggy patches; in the second they create many dependencies, limiting the portability and sustainability of their work. Max is great at two things – experimenting with your ideas and prototyping virtual set-ups – but as soon as you enter production mode it becomes quite unfriendly. There is a historical reason for this: Max was first developed at IRCAM, an institution characterised by a rather rigid separation between composers and music technology assistants. The idea was that composers dealt with the creative part, while the assistants provided a human interface to the technology tools. This meant that the code was looked after by the technologists, and composers didn’t need to engage with it directly. A big institution like IRCAM could also ensure the long-term preservation of the works, by employing assistants to maintain and upgrade the patches as needed.

This initial dichotomy is part of MaxMSP’s genetic code: the software is used mainly by composers and artists, but is written for programmers. This is why I find it difficult to identify the target audience of the software: it is too complex and difficult to learn to be mastered fully by artists, but its true potential is wasted in the hands of programmers, who will also complain that as a development platform MaxMSP lacks many important features. In fact, I have yet to find a good application built with MaxMSP. So it looks like the target MaxMSP user is either a highly talented composer-technologist, equally versed in computer programming and music composition, or a creative team supplying the required skills between them. Not surprisingly, MaxMSP is frequently found in higher education.

Let’s look now at MaxMSP from the usability perspective. MaxMSP provides quite a number of graphic objects out of the box, and has recently added powerful new functions, like Java support, the ability to encapsulate and de-encapsulate patches, and the ability to create and save prototypes (template patches that can be reused anywhere). Nevertheless, the actual user interface is entirely the responsibility of the user – there are no standard graphical user interface models or templates. The result is that a given patch – say a sound spatializer – can be realised in many different ways, each one providing a very different user experience. Testing and comparing patches is thus made very difficult, as the same spatializer engine can be visualised omitting certain parameters altogether or hiding them in remote subpatches. Sharing patches, or having your live electronics performed by someone else, is also compromised, since every user builds their patches according to their personal needs and taste. Add to this the fact that MaxMSP has no easy way of commenting or documenting patches, and you see how hard it can sometimes be to reconstruct the signal and control flow in a complex patch, even for the person who wrote it!

It is probably from the sustainability point of view that MaxMSP fares worst. The software gives artists and musicians the impression of being in control, but in fact locks them into a closed system that is difficult to scale, adapt or maintain over time. I am talking here mainly from the perspective of live electronics in concert performance, the kind of mission-critical application where everything has to work as planned. My experience over the years is that, in order to keep working properly, a MaxMSP patch has to be tweaked or rewritten every year or so, especially if it includes external dependencies. In some cases objects and libraries are not upgraded when the software is, and an alternative must be found or developed from scratch. Conflicts between objects with the same name can also prevent patches from functioning properly.

As I said, MaxMSP is an invaluable tool for trying out ideas, experimenting and prototyping, but it falls short on usability and sustainability, the two areas that matter most for a creative, musical use of the software and for the long-term preservation and maintenance of patches and of the artistic works that depend on them. MaxMSP remains the first choice for musicians working with live electronics, but I think I have identified a gap that needs to be filled if we really want to empower musicians and offer them more accessible tools for interacting with technology.

David’s Garden

Together with trombonist David Purser and composer-technologist Jonathan Green, I have been working since late 2005 on a user-friendly Max/MSP environment that lets performers improvise with technology. To capture performance data we have been using a microphone and a flexion sensor, which measures the angle of the player’s arm to track the position of the trombone slide.
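
A minimal sketch of the kind of sensor-to-slide mapping this involves, assuming a flexion sensor that delivers a normalised reading; the calibration values and the quantisation into seven positions are illustrative, not the actual patch logic:

    # Map a raw flexion-sensor reading to a trombone slide position (1-7).
    # SENSOR_MIN/SENSOR_MAX are hypothetical calibration values, measured
    # with the slide fully in and fully out.
    SENSOR_MIN = 0.12
    SENSOR_MAX = 0.87

    def slide_position(raw: float) -> int:
        """Quantise a raw sensor reading to one of seven slide positions."""
        x = min(max(raw, SENSOR_MIN), SENSOR_MAX)          # clamp
        norm = (x - SENSOR_MIN) / (SENSOR_MAX - SENSOR_MIN)  # normalise 0..1
        return 1 + round(norm * 6)                          # scale to 1..7

    print(slide_position(0.5))  # e.g. position 4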

Touch

Touch, for piano and live electronics

Touch represents an experiment in transparent electronics. I wanted to create a performing environment where technology expands, magnifies and projects the musical gestures of the performer. Four identical musical objects are observed from different points of view, each time using a different transformation tool to emphasise a different aspect of piano playing: touch, resonance, and the harmonic and melodic dimensions.

The performer is completely in control of the technology, and external intervention is kept to a minimum. I achieved this by creating an interface that reacts to the nuances of musical performance in a very subtle way. Specific attention has been given to the detection of the attack of the piano sound: each time, it is the pianist’s touch that triggers the whole transformation process.
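
A minimal sketch of this kind of attack detection, assuming a simple envelope follower with a fixed threshold; the coefficients are illustrative, and the actual patch relies on a dedicated attack detector (bonk~, listed among the dependencies below):

    # Toy onset detector: envelope follower with a fixed trigger threshold.
    def detect_attacks(samples, threshold=0.2, decay=0.999):
        """Return sample indices where an attack (onset) is detected."""
        env = 0.0
        armed = True
        onsets = []
        for i, s in enumerate(samples):
            env = max(abs(s), env * decay)  # fast attack, slow decay
            if armed and env > threshold:
                onsets.append(i)    # here the patch would fire the process
                armed = False       # ignore the body of the note
            elif env < threshold * 0.5:
                armed = True        # re-arm once the sound has decayed
        return onsets

    # Example: a single burst at sample 100 in otherwise near-silence
    sig = [0.0] * 1000
    sig[100] = 0.9
    print(detect_attacks(sig))  # [100]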

Premiered by Laure Pinsmail in 2002 at the Recital Hall, Birmingham Conservatoire. Live recording of the performance given by Jonathan Powell in 2005.

* * *

This is the latest version of the live electronics of Touch. It has been tested on a MacBook Pro with macOS High Sierra 10.13.6 and Max version 8.1.4.

Download the Touch electronics zip archive

Top-level patcher: Touch.maxpat
Electronics tested with Max 8.1.4
Created a Max collective: Touch.mxf

All files, including Touch.maxpat, are contained in the Max8 folder.

External dependencies:

  1. tap.shift~
    from TapTools by Timothy Place, version 4 beta 2, 26 April 2013
    source: https://github.com/tap/TapTools/releases
  2. iana~ and add_synth~
    by Todor Todoroff, ARTeM (Art, Recherche, Technologie et Musique), maintained by IRCAM, MaxSoundBox version 03-2018
    source: https://forum.ircam.fr/projects/detail/max-sound-box/
  3. bonk~ v1.5
    by Miller Puckette, port by Ted Apel and Barry Threw, 64-bit version by Volker Böhm, 22 June 2018
    sources: https://github.com/v7b1/bonk_64bit-version, http://vboehm.net/downloads/
  4. vdb~
    abstractions by Benjamin Thigpen, part of the “bennies” collection, IRCAM Forum distribution
    source: https://forum.ircam.fr

Audiofiles:

piano_resonance.aif

LC, 7 June 2020

Flectar

Flectar, a Latin word meaning “to bend”, is dedicated to David Purser, whose help was invaluable during both the conception and the writing of the work. In Flectar we set out to explore how the physical gestures of the trombone player – and in particular the movements of the arm that change the slide position – can be made to control the electronic transformations of the instrument’s sound in a subtle and musical way. The trombone becomes a sort of hyper-instrument reverberating in space, with the performer in control of shaping and projecting the sound all around the audience.

Flectar is in four parts. In parts 1 and 3 a series of cues corresponds to individual electronic events. In parts 2 and 4 a verbal description identifies the link between the performer’s gesture and the resulting sound. In most cases the position of the slide, whether or not combined with sound attacks, controls the triggering of electronic events or the nature of the transformation. It is therefore very important to always use the slide positions indicated in the score.
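
A minimal sketch of the gesture-to-event logic described above; the position/cue pairings here are invented for illustration only, the real mappings being those indicated in the score:

    # Toy gesture-to-event mapping: trigger an electronic event when the
    # slide reaches a given position, optionally gated by a sound attack.
    CUES = {
        # slide position: (event name, requires a simultaneous attack?)
        3: ("open delay line", False),
        6: ("freeze spectrum", True),
    }

    def on_gesture(slide_position: int, attack_detected: bool):
        if slide_position in CUES:
            event, needs_attack = CUES[slide_position]
            if not needs_attack or attack_detected:
                print(f"trigger: {event}")

    on_gesture(3, False)  # trigger: open delay line
    on_gesture(6, False)  # nothing happens: attack required
    on_gesture(6, True)   # trigger: freeze spectrum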

First performance by David Purser on 19 January 2005 at the Recital Hall, Birmingham Conservatoire.

* * *

Technical requirements for the performance

The performance of Flectar requires a person to operate the computer and control the sound diffusion.

Computer (Mac or PC) running Max software
2 in/8 out audio interface
Kroonde Gamma wireless UDP sensor interface with flexion sensor (now discontinued; equivalent systems may be used with minor modifications to the Max/MSP patch, as sketched after this list)
1 miniature microphone, DPA 4061 or equivalent
Reverb unit
6-point sound diffusion system with 6 speakers: front L/R 1-2, sides L/R 3-4, rear L/R 5-6.
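
As a minimal sketch of what a replacement for the Kroonde receiver might look like, assuming a hypothetical system that sends one plain-text flexion value per UDP datagram (the port number and packet format are illustrative assumptions; the Kroonde's own protocol differs):

    import socket

    # Hypothetical replacement receiver: listen for UDP datagrams each
    # carrying one plain-text flexion value in the range 0..1.
    PORT = 9000  # illustrative port, not the Kroonde's

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))

    while True:
        data, _addr = sock.recvfrom(64)
        flexion = float(data.decode().strip())
        print(f"flexion sensor: {flexion:.3f}")  # forward to the patch here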
