Infinite Remix

26 May 2023

PRiSM Writer in Residence Series: Abi Bliss

A headshot of a person with short dark hair wearing glasses.

In March 2021 we welcomed two PRiSM Writers in Residence, Abi Bliss and Leo Mercer. We introduced them and their projects in our earlier post, 2021 PRiSM Writers and Scientists in Residence.

Over the past two years, we have worked closely with each of them, resulting in two series of creative and critical outputs exploring the collaborative relationships between humans and technology.

To wrap up her PRiSM Residency, Abi has recently released a podcast entitled Infinite Remix: A Journey into AI Music. Expanding on her earlier PRiSM Blog, Experiencing AI in Music (August 2021), it features her in conversation with people who are working with cutting-edge machine learning technology to produce sounds never heard before.

This PRiSM Blog is Abi's reflective commentary on that journey as a whole: the making of the podcast, and her responses to the aesthetic and wider questions posed by AI music, informed by her time embedded within the PRiSM community since 2021.


Infinite Remix

By Abi Bliss

If I were being topical – or the kind of ‘topical’ that’s already three weeks out of date as I write this and even more so by the time you read it – I would open this blog with the results of giving an AI chatbot (e.g. Google’s Bard or OpenAI’s ChatGPT) a prompt:

Write a 100-word introduction to a blog piece about Abi Bliss’s podcast Infinite Remix summarising its exploration of AI in music. Please praise the producer’s thoughtful approach, the eloquence of the interview subjects and include at least one original joke referencing a Kraftwerk song.

Bard – an open-access chatbot released by Google in March 2023 – responding to the above prompt

But the quest for topicality is a dangerous one when writing about the ever-evolving world of AI and machine learning, so I’ll leave you to imagine whether the results would be precis or prevarication and whether ‘Computer Love’ would triumph over ‘We Are The Robots’.

The fact that I can mention GPT chatbots and not expect too many blank looks proves how much AI has advanced into the public consciousness since I started my PRiSM residency in 2021.

In the visual realm, too, there is much greater awareness of the startling realism – or enticing surrealism – of images generated by neural networks such as DALL-E and Stable Diffusion and the possibilities and complications they present.

The past few years have brought dadabots’ endless streams of generated death metal, free jazz and other frenetic styles, as well as Holly Herndon’s AI voice tool Holly+.

Yet to many people outside of academia or specific sub-genres, AI music remains mostly hypothetical, with the last advance in the field to really capture imaginations being the launch of OpenAI’s Jukebox in 2020.

AI-generated image from DeepAI’s text-to-image generator, using the prompt: “Abi Bliss trying to make a podcast about AI music”

Dezgo (a text-to-image generator powered by Stable Diffusion) responding to the prompt: “A zombie Frank Sinatra singing on a television pop show in front of an audience of cheering robots”

Despite the more excitable claims made for it, Jukebox’s use of neural synthesis to generate audio in the styles of popular artists hasn’t yet taken over the pop world. For now, it’s a cool party trick – or maybe more of a Halloween mask, given the zombie-like quality of its lo-fi renderings of the likes of Prince and Frank Sinatra.

As you’ll hear in the podcast, generating a static image or collections of letters and punctuation that humans can interpret as carrying meaning presents far less of a challenge than doing the same to sound moving through time.

For all the technological advances since 2016, when WaveNet and then SampleRNN made synthesising usable chunks of raw audio a possibility, AI music that uses neural synthesis is still a work in progress.
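To give a rough sense of why audio is the harder target, here is a minimal back-of-envelope sketch in Python. The sample rate, clip length, image size and token count are my own illustrative assumptions, not figures from the podcast:

```python
# Back-of-envelope comparison (figures are illustrative assumptions):
# how many values must a generative model produce for a short clip of
# raw audio, versus one image or a paragraph of text?

SAMPLE_RATE = 44_100   # CD-quality samples per second of mono audio
SECONDS = 10

audio_samples = SAMPLE_RATE * SECONDS   # values in 10 s of raw audio
image_pixels = 512 * 512                # pixels in one 512x512 image
text_tokens = 200                       # rough length of a paragraph

print(f"10 s of 44.1 kHz audio: {audio_samples:,} samples")
print(f"One 512x512 image:      {image_pixels:,} pixels")
print(f"A paragraph of text:    {text_tokens:,} tokens")

# Sample-level autoregressive models such as SampleRNN emit these values
# one at a time, each conditioned on the ones before it - so audio's
# sheer sequence length, plus the need for coherence across timescales,
# makes it a far harder target than a static image or a short text.
```

Even a ten-second clip runs to hundreds of thousands of sequential values, which is one intuition for why neural audio synthesis has lagged behind image and text generation.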

But tape music, analogue synthesisers and sampling were all once in their infancy too, and people still pushed against their limitations to create music no one had heard before.

Neural synthesis audio of the kind currently generated by PRiSM’s computers using Christopher Melen’s PRiSM SampleRNN code is messy and mercurial, with a haunted, distant quality that lends itself to all manner of psychoacoustic interpretations. The day when it can perfectly imitate existing sounds may come.

But for everyone I spoke to, what it does now is arguably more exciting. I was keen for Infinite Remix to look at the creative possibilities machine learning offers beyond the human-shaped suit that so many are trying to fit it into, and in the podcast Sam Salem, Vicky Clarke and Melanie Wilson all provide different and insightful takes on how recognising the character and quirks of machine learning on its own terms has challenged and enhanced their individual artistic practices.

Away from the cutting edge of raw audio generation, AI is infiltrating music in other ways.

One area where a host of companies are keen to entice users is the use of machine learning algorithms to efficiently analyse and generate royalty-free tracks to feed the ever-hungry online content creation industry. The implications for composers who currently write such off-the-shelf music are clear. But for others, maybe AI isn’t the adversary it might appear – at least, that’s the opinion of musician Neil Campbell, who reflects upon his experience of being commissioned by one such startup, Mubert, for a generative stream of his signature sounds.

Ultimately, among those I spoke to, anxieties about whether AI could replace musicians were outweighed by the feeling that its existence only reaffirmed the most human qualities of music – while at the same time offering creators and listeners alike a chance to interrogate deeply held but not always questioned beliefs about beauty, originality and the limits of our minds.

The ethics and economic implications of AI in music – from dataset bias to questions of copyright and consent – would fill several more podcasts alone; I explore a few of the issues in my 2021 Wire feature on the subject. As would a deeper dive into the theory underpinning machine learning and where the technology could still advance, which Christopher Melen ably summarises for listeners. To somewhat misuse the term, my time spent observing the happenings at PRiSM and encountering a range of unique and informed perspectives generated a kind of latent space of possible approaches.

With its focus upon the creative aspects of working with AI music, Infinite Remix is just one of the outputs that could have arisen. It’s also a very linear way to narrate a multifaceted story containing a plurality of issues and views; as a next step I’m keen to revisit the podcast format and explore ways that I could better reflect AI’s fragmented, unpredictable qualities while retaining my editorial voice and some degree of coherence.

But in the meantime, I hope you enjoy hearing what I learned about creating and experiencing AI music from the humans who make it.

Listen to Infinite Remix


Acknowledgement

This residency, and its resulting work, are supported by PRiSM, the Centre for Practice & Research in Science & Music at the Royal Northern College of Music, funded by the Research England fund Expanding Excellence in England (E3).

RNCM50 logo