Earlier this month, the second-ever AI Song Contest took place in Liège, Belgium. The competition, modeled on the Eurovision Song Contest, explores the potential of artificial intelligence in music production. And it raises the question: is this technology capable of making music with the same creativity as humans?
An AI Song
The winning entry at the AI Song Contest was Listen to Your Body Choir, produced by M.O.G.I.I.7.E.D., a California-based collaboration of musicians, scholars and AI researchers. The song begins with a soft female voice and slow piano music. But then the voice quickly becomes distorted and jerky as a trippy beat kicks in. The lyrics include phrases like, “I wanna be on time, up there in the clouds” and “Do the cars come with pushups?”
The song’s production team took inspiration from Daisy Bell, a song composed by Harry Dacre in 1892. In 1961, it became the first song ever sung by a computer using speech synthesis. The team wrote, “We wanted to capture the sweetness and inmacy [sic] of Daisy Bell, along with the strange juxtaposition and tension between human and machine that we needed to embrace to be able to write this song.”
The juxtaposition in their case is the addition of AI as co-writer of Listen to Your Body Choir. The team used AI to generate lyrics that “continued” on from Daisy Bell, and to create a melody derived from the song. The team then recorded their singer, Brodie, singing Daisy Bell and trained an AI model solely on that recording. According to the producers, this didn’t provide enough data for the AI to actually learn the song. But the resulting sound bites and samples “sound kind of like Brodie and have a hint of the feeling of Daisy Bell”.
Why Use AI in Music Production?
At the AI Song Contest, co-organized by Wallifornia MusicTech, an entertainment and technology hub based in Belgium, and DeepMusic.ai, an American company fostering collaborations between musicians and AI researchers, M.O.G.I.I.7.E.D.’s entry impressed the judges. The panel praised the “rich and creative use of A.I. throughout the song.”
Listen to Your Body Choir was one of 38 entries in the song contest. These songs are all the fruit of AI technology intervention into the music-making process. Teams or individuals used deep-learning neural networks — computing systems loosely modeled on the workings of a human brain — to analyze vast amounts of music data. The AI subsequently generated melodies, chord sequences, drum beats, lyrics and, in some cases, vocals.
While AI has generated enthusiasm in the art world, including for art restoration, it hasn’t yet hit the mainstream in music-making. (That said, AI does play a fundamental role in music discovery, such as analyzing listeners’ preferences and curating recommendation playlists.) Some previous uses include generating “new” tracks from artists like Amy Winehouse, Mozart and Nirvana by feeding their back catalogs into a deep-learning neural network.
One main limitation appears to be singing. AI’s “voice” is awkwardly robotic and auto-tuned. As such, the vocals in Listen to Your Body Choir, for example, were not produced with AI.
Instead, AI has been appreciated for generating serendipitous accidents. Justin Shave, creative director of the Australian music and technology company Uncanny Valley and winner of last year’s AI Song Contest, said, “You can feed some things into an A.I. or machine-learning system and then what comes out actually sparks your own creativity. You go, ‘Oh my god, I would never have thought of that!’ And then you riff on that idea.”
The Emotional Range of AI
At the moment, AI-assisted songs are at an experimental stage. They are unlikely to become popular hits, but they are thought-provoking: they make us question whether it is possible for non-human entities to produce something as creative and emotional as music.
One of the AI Song Contest entries, produced by South Korean team H:Ai:N, homed in on this question. The ballad Han takes its name from a unique Korean word for sorrow, rage and regret. The team describes the word as “intertwined with the trials and sorrows during Korean history, such as the pain of division.” H:Ai:N trained AI with a broad range of data, from traditional ballads to ancient poetry to K-pop. Their aim was to “try to expand the emotions of ‘Han’ with AI, by reproducing the spirit of the Korean people.” As the team wrote, “Human creativity can also be said to [use] data driven thinking produced by personal experience. Therefore, artificial intelligence can mimic it, and provide more creative sources for humans.”
However, while AI was used to generate lyrics, the team “recomposed the lyrics that were thought to be meaningful and matched it with the storyline.” As such, human sensitivity was still required to evoke emotion. Speaking to Science Focus, Professor Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London, said, “lyrics for songs are typically based on people’s life experiences, what’s happened to them. People write about falling in love, things that have gone wrong in their life or something like watching the sunrise in the morning. AIs don’t do that.”
There have been other AI music experiments, including one imagining new songs by late popular musicians, which received mixed reviews. For the moment, though, it looks like current musicians, including Grimes, can rest easy: we haven’t yet arrived at the day when AI will “make musicians obsolete”.
Featured image: Possessed Photography on Unsplash