What is PRiSM SampleRNN?
PRiSM SampleRNN is a project centred on the development of prism-samplernn, a computer-assisted compositional tool released on GitHub in June 2020 as part of PRiSM Future Music #2, and PRiSM’s first major contribution to the field of Machine Learning. The software generates new audio by ‘learning’ the characteristics of an existing corpus of sound or music. Changing the parameters of the algorithm, and how the dataset is organised, significantly alters the output, making these choices part of the creative process. The generated audio can be used directly in a composition, or to inform notated work to be played by an instrumentalist. Development of the software is funded by Research England’s Expanding Excellence in England (E3) fund.
The software was developed in response to work by Dr Sam Salem, PRiSM Lecturer in Composition. For his piece Midlands (2019), Salem made field recordings whilst walking 120km of the River Derwent. These materials were used to synthesize new sounds with WaveNet, a deep-learning algorithm, but the slowness of the workflow made it difficult to explore the full possibilities of the technique (documented in the PRiSM blog post A Psychogeography of Latent Space). An alternative, SampleRNN, offered faster generation, but the existing code was broken, with outdated dependencies. PRiSM Research Software Engineer Dr Christopher Melen therefore undertook a completely new implementation of the SampleRNN architecture¹, written to work with the latest versions of Python and TensorFlow (documented in the PRiSM blog post A Short History of Neural Synthesis).
The release of PRiSM SampleRNN was accompanied by a model developed by Salem using data from the RNCM’s world-class archive of choral music. Since then, it has developed into one of PRiSM’s major projects, bringing together practitioners across a diverse range of disciplines and fields of study, and illustrating PRiSM’s core research concerns of collaborative and interdisciplinary effort. It is currently being used in projects by composers, musicians and technologists across the globe. A free and open-source project, the latest release can be downloaded from the software development platform GitHub, and the technique has been made accessible through a number of online resources and performances demonstrating this creative application of AI, informing technology researchers, other arts practitioners, educators and the general public.
1 The original SampleRNN architecture was described in the paper SampleRNN: An Unconditional End-to-End Neural Audio Generation Model (2017). That implementation was based on Python 2 and Theano, a library for performing fast computations involving matrices, and formed the basis for the Dadabots’ famous Relentless Doppelganger livestream on YouTube (https://www.youtube.com/watch?v=MwtVkPKx3RA). Since both Python 2 and Theano are officially deprecated, it was decided that PRiSM would offer its own implementation, based on Google’s popular Machine Learning library TensorFlow 2.
prism-samplernn code on GitHub
prism-samplernn Google Colab Notebook
prism-samplernn is open source and free to use under an MIT licence, with copyright retained by RNCM PRiSM. We ask that the software and funding be referenced in work as follows:
Led by Sam Salem and Christopher Melen
Initiated by Sam Salem
prism-samplernn code by Christopher Melen
A PRiSM Collaboration also involving David De Roure, Marcus du Sautoy, and Emily Howard.
The RNCM Centre for Practice & Research in Science & Music (PRiSM) is funded by the Research England fund Expanding Excellence in England (E3).
PRiSM Blog Posts
An Encounter with the Artificial by José del Avellanal Carreño 26 Feb 2021
José del Avellanal Carreño writes about ‘speak, sing…’, his collaboration with BCMG clarinettist Oliver Janes, incorporating material created using PRiSM SampleRNN to manipulate improvisations provided by Oliver.
A Psychogeography of Latent Space by Dr Sam Salem 15 June 2020
An artistic reflection on composition in the age of Machine Learning, by Dr Sam Salem. Published to coincide with the release of PRiSM SampleRNN as part of PRiSM Future Music #2.
A Short History of Neural Synthesis by Dr Christopher Melen 22 May 2020
In this first instalment on AI and music, Dr Christopher Melen, PRiSM’s Research Software Engineer, introduces neural synthesis, an exciting technique that features in the artistic work of the PRiSM team. Published to coincide with the release of PRiSM SampleRNN as part of PRiSM Future Music #2.
THIS IS FINE by Sam Salem
Performed by Alice Purton. Mixed by Sam Salem. Mastered by Richard Scott.
Featured on The New Unusual by Distractfold, released March 12, 2021
shield by Emily Howard
Written for the Piatti String Quartet in 2021. Performance postponed due to COVID.
Speak Sing by José del Avellanal Carreño
A collaboration with clarinettist Oliver Janes, using PRiSM SampleRNN in the compositional process. This is the first work from an ongoing partnership between PRiSM and the Birmingham Contemporary Music Group (BCMG). See excerpts from the rehearsal, and the recorded premiere, in José del Avellanal Carreño’s PRiSM blog post about using PRiSM SampleRNN and the collaboration.
Ireland: A Dataset by Jennifer Walshe
Walshe asked PRiSM to use Irish traditional sean-nós singing to train a model which would produce AI-generated material. This was then learned by ear by the experimental vocal group Tonnta (Robbie Blake, Elizabeth Hilliard, Bláthnaid Conroy Murphy and Simon MacHale) for the performance, which also included AI interpretations of Enya, Riverdance and the Dubliners. Music critic Alex Ross wrote: “The results are at once nonsensical and oddly charming: Walshe seems to be suggesting that randomization can restore mystery to traditional material.” (Jennifer Walshe’s Sublime Chaos, The New Yorker, October 19, 2020.)