Prototype Music | Saxophone & PRiSM Musical Gesture Recogniser (MGR)

12 December 2023

Introduction

Earlier in 2023, PRiSM and the RNCM School of Composition presented two exciting and experimental new works for saxophone and electronics, joining forces with guest saxophonist David Zucchi to showcase new musical gesture recognition technology that PRiSM has been developing (with software development by PRiSM Research Software Engineer Dr Hongshuo Fan). Led by Dr Robert Laidlow (PRiSM Associate, Jesus College, University of Oxford) and Professor David De Roure (PRiSM Technical Director), the project builds on an earlier PRiSM collaboration with Professor George Lewis (Columbia University), exploring the possibilities of a ‘machine’ that listens to live musicians and determines when they are playing specific musical ideas.

In this PRiSM blog, Laidlow reflects on his experience leading the project and working alongside two RNCM student composers, Songhao Yao and Eve Vickers, and on how working with this emerging technology has opened up new avenues for future research and creative practice.


Prototype Music | Saxophone & PRiSM Musical Gesture Recogniser (MGR)

By Robert Laidlow

I first saw this emerging PRiSM Musical Gesture Recogniser (MGR) technology in November 2022, in prototype form, used for George Lewis’ PRiSM Residency project Forager. In short, the software “listens” to live musicians and determines when they are playing specific musical ideas. It then responds to this in real-time.
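The details of the MGR implementation are not covered here, but the basic idea it describes — matching what a musician plays against predefined gesture templates and triggering a response in real time — can be sketched in a few lines of Python. This is a toy illustration only; every name in it is hypothetical, and the real MGR works on live audio rather than symbolic note data:

```python
# Illustrative sketch only: a toy "gesture recogniser" that watches a
# stream of MIDI note numbers and fires a callback when the most recent
# notes match a registered gesture template. All names are hypothetical;
# the actual PRiSM MGR analyses live audio and is far more sophisticated.
from collections import deque


class ToyGestureRecogniser:
    def __init__(self, buffer_size=16):
        self.notes = deque(maxlen=buffer_size)  # rolling window of recent notes
        self.gestures = {}  # name -> (template tuple, callback)

    def register(self, name, template, callback):
        """Register a gesture (a sequence of notes) and a response to trigger."""
        self.gestures[name] = (tuple(template), callback)

    def on_note(self, midi_note):
        """Feed one incoming note; fire any gesture it completes."""
        self.notes.append(midi_note)
        recent = tuple(self.notes)
        fired = []
        for name, (template, callback) in self.gestures.items():
            if len(recent) >= len(template) and recent[-len(template):] == template:
                callback(name)
                fired.append(name)
        return fired


# Usage: respond when the player completes a rising three-note figure.
mgr = ToyGestureRecogniser()
mgr.register("rising_third", [60, 62, 64],
             lambda name: print(f"gesture '{name}' detected -> cue electronics"))
for note in [59, 60, 62, 64]:
    mgr.on_note(note)
# prints: gesture 'rising_third' detected -> cue electronics
```

The interesting design questions the composers raise later in this post — multiphonics, deliberate ambiguity, creative failure — are exactly the points where a naive exact-match scheme like this one breaks down.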

I was very keen to investigate its musical possibilities further and incorporate it into my own upcoming work for solo saxophonist David Zucchi. I see this technology as an important tool for making electronics more reactive, intuitive, and performer-like in a concert setting. After some discussion, we decided the best course of action would be to invite some RNCM composition students to explore the technology with me, resulting in new pieces of music and, crucially, a deeper understanding of this technology’s place within a compositional process.  

Team, Technology, Timeline 

One of my favourite parts of how PRiSM operates is how open and cross-disciplinary its projects end up being. This project was no different.

On the composing side, we had current PRiSM doctoral researcher Songhao Yao, MMus student Eve Vickers, and me, a former PRiSM doctoral researcher. On the technological side, we worked closely with PRiSM Research Software Engineer Dr Hongshuo Fan (also a composer) and PRiSM Technical Director Professor David De Roure. Finally, we had our performer David Zucchi, who has a wealth of experience in performing all kinds of contemporary music and improvisation. The team encompassed a very wide range of experience, expertise, and skill, and this range was a major contributor to the project’s ongoing success.

With the team sorted, we arranged a relatively long timeline for the project. We first met in January, with a final concert planned for June 2023. This six-month creative period allowed the RNCM composers not only to work extremely closely with David Zucchi, but also to feed back their thoughts and requests for the software to the technical side, forming an iterative collaboration. During this period, they came up with many important questions and ideas which arose from their own compositional processes but have gone on to become major considerations for the technology moving forward. These include how the technology might recognise multiphonics (more than one note at once from an instrument that is traditionally monophonic) and the possibility of incorporating randomness and failure for creative purposes.

The Pieces 

Songhao and Eve each presented a substantial, individual work for saxophone and electronics at the end of this process.

Songhao’s piece, “I have lost something”, creates and inhabits an eerie, twilight world for the soloist. The electronics part, triggered by musical gesture recognition, amplifies and distorts non-musical sounds emerging from the soloist, such as breathing and air tones.

The soloist employs a vast range of extended techniques alongside these gestures, resulting in what I felt was a fascinating interplay between the electronic sounds and the human player – you are never sure what is imitating what, or what is responding to what.

Eve’s piece, “Variations on a Flickering Light”, focusses on the grey areas within the software, where it’s not quite sure whether a gesture has been played or not.

Composer Eve Vickers rehearsing with saxophonist David Zucchi and Hongshuo Fan.

It’s clear that this technological element should be considered a feature, not a bug, and should be preserved as the software develops. It has allowed Eve to create a quietly volatile piece of music, aided by her approach to solo writing which gave David Zucchi a large amount of interpretative freedom. 

What’s Next? 

Both works were a great success and contributed enormously to the development of the software, which was still in prototype form when the performances took place. I am currently working on completing my piece “Content” for David Zucchi and electronics, taking into account everything we have learned throughout this experience. In my own work, I am imagining areas of the music as hidden behind locked doors. David Zucchi can perform musical “keys” to open these doors – and it is these “keys” that the software listens to, responding by unveiling what is behind. We’d also like to find out how this constantly evolving software can interact with ensembles, and with many musicians playing together – stay tuned for more updates on that!
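As a closing thought experiment, the “locked doors” idea can be modelled as a simple mapping from recognised “key” gestures to hidden sections of electronics. The sketch below is purely illustrative — all names are hypothetical and this is not the actual software behind “Content”:

```python
# Hypothetical sketch of the "locked doors" idea: each door hides an
# electronics section and opens only when its musical "key" gesture
# is heard. Names are invented for illustration.
class LockedDoors:
    def __init__(self, doors):
        # doors: mapping of key-gesture name -> hidden section name
        self.doors = dict(doors)
        self.opened = []  # sections unveiled so far, in order

    def hear_gesture(self, gesture):
        """Unveil the section behind the door this gesture unlocks, if any."""
        section = self.doors.pop(gesture, None)  # each door opens once
        if section is not None:
            self.opened.append(section)
        return section


# Usage: two doors, each keyed to a different performed gesture.
piece = LockedDoors({"key_A": "drone_texture", "key_B": "granular_echoes"})
piece.hear_gesture("key_A")  # opens the first door
```

In this framing, the gesture recogniser supplies the `hear_gesture` events, and the musical form emerges from the order in which the performer chooses to play the keys.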