Artificial intelligence creates a dilemma for musicians. On the one hand, it could help them develop as artists; on the other, it could seriously damage their livelihoods.
Both possibilities are evident in the ways musicians are already using technology, says Martin Clancy, founder of AI:OK, an Irish initiative to promote the ethical use of artificial intelligence in the music industry.
He splits AI tools into two categories. The first is generative, which, through applications such as Suno and Udio, can create lyrics, melodies, vocals and even complete songs almost instantly, prompted by lyrical themes and music styles that users suggest. The second is complementary, which enhances musicians’ work through tools for mixing, mastering, session-player emulation and stem separation (which splits a recorded song into vocals, guitar, drums and so on, enabling users to remove individual components of the track).
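(For the technically curious, stem separation is no longer exotic: open-source tools can do it in a few lines of code. Below is a minimal sketch in Python using Deezer’s freely available Spleeter library – one illustrative option, not a tool named by Clancy – where song.mp3 stands in for any recording.)

# A minimal stem-separation sketch using Deezer's open-source Spleeter.
from spleeter.separator import Separator

# Load the pretrained four-stem model: vocals, drums, bass and "other"
# (guitars, keyboards and everything else).
separator = Separator("spleeter:4stems")

# Write one audio file per stem into output/song/, so a producer can
# mute, remix or replace any single component of the track.
separator.separate_to_file("song.mp3", "output/")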
These tools, Clancy says, “are now standard in the creative workflow, especially for younger or independent artists. Apple’s Logic Pro is a digital audio workstation that comes with four AI-powered session players and stem separation, and is free on all new Mac computers. BandLab, which is used by over 100 million people, opens with a Create a Song with AI button. Another tool, Voice-Swap, allows producers to legally re-sing demos using approved, royalty-sharing artist voice models.”
Suno and Udio have gained tens of millions of users in the past 18 months, Clancy says. “That’s because the subscription model is cheap – for about $10 per month, Suno offers the user the potential to create 500 complete songs.”
What does this fully AI-generated music sound like? One example is Carolina-O, an Udio-created homage to the writer Ernest Hemingway. Another is Verknallt in einen Talahon, which was the first AI-generated song to become a hit in Germany (where its problematic lyrics made a lot of people “feel somewhat queasy”, according to one report).
AI systems create music by automatically extracting vast amounts of musical data from websites and other online sources – known as scraping – then analysing and emulating it. Ethically speaking, they should emulate other people’s music only with consent, drawing on licensed or self-owned material.
“The artist or rights holder should be credited and paid, and the AI use should be disclosed to listeners,” says Clancy, who began his long career in music as a member of the band In Tua Nua in the 1980s. “Unethical use of AI would be music which is patterned or trained on scraped catalogues and publicly available data without permission.”
Artists now using generative AI in an ethical way include Holly Herndon, a Berlin-based American composer who creates music using Max, a visual programming language that lets users build customised instruments and vocal processes. Taryn Southern created her album I Am AI using several artificial-intelligence-based tools. The veteran musician and producer Brian Eno’s approach to creativity, Clancy says, is driven by curiosity and a commitment to experimentation.
Are any Irish musicians following Eno’s lead? “There is a noticeable gap in artists doing anything interesting with this,” Clancy says. “That’s surprising and concerning, but it could be a 1975 moment, like it was before punk rock came along to shake things up. So far I’m not seeing it happening, yet I sense people are beginning to realise the possibilities.”
Eno coined the term “generative music”, says Clancy. “But he wasn’t speaking about it in terms of AI systems – more in the areas of chance, randomness, order and disarray. He views the recording studio as a musical instrument, as opposed to how most of us see it, as a technological processing plant.”
Eno, Herndon and Southern use AI in principled and intelligent ways, valuing consent, creativity and copyright, Clancy says.
Other creations have taken a different route, including Heart on My Sleeve, an AI-generated song from April 2023 that was written and produced by a TikTok user known only as Ghostwriter977 and features vocals that sound remarkably similar to those of Drake and the Weeknd.
Both artists are hugely popular: the former has sold more digital singles in the US than any other artist; the latter set a record in 2024 as the artist with the most songs to have more than a billion streams on Spotify.
Universal Music Group, to whose Republic Records label Drake and the Weeknd are signed, filed takedown notices with multiple online platforms within two weeks of Heart on My Sleeve’s release – by then the song was already a sizeable viral success, with more than 600,000 streams on Spotify, 275,000 views on YouTube and 15 million views on TikTok.
For every use of technology that prompts a moral or legal dilemma, there is another with a more welcome outcome, such as the “Abbatars” that stand in for the Swedish pop stars at the Abba Voyage show in London.
The 3D projections of Agnetha Fältskog, Björn Ulvaeus, Benny Andersson and Anni-Frid Lyngstad are generated using motion-capture and machine-learning processes created by Industrial Light & Magic. The visual-effects company, which was founded by the film-maker George Lucas in 1975, put the four musicians in motion-capture suits, then used 160 cameras to film their movements and facial expressions. It fed its five weeks of data into a series of processing and modelling systems to create the digital (and de-aged) versions of the band in their 1979 heyday.
Abba Voyage, which cost about €165 million to create (and also involved the work of 140 animators at Industrial Light & Magic), is a playful and transparent development of the live-concert experience, Clancy says. “The technology dazzles but the event is firmly in service of nostalgia and showmanship, and it also employs a 10-piece live band. It’s a recent example of how the marriage of live music and AI can work.”
Clancy also points to virtual concerts by late artists, such as Tupac Shakur’s appearance at Coachella in 2012 and the more recent hologram shows featuring Roy Orbison, Whitney Houston and Elvis Presley, as further examples of AI establishing itself in popular culture.
Artificial intelligence is quickly becoming part of our daily lives. “AI isn’t just a new sound,” Clancy says. “It’s a new infrastructure that is baked into pretty much all forms of our products and services, which makes it intuitively personal.”
Our smartphones are crammed with forms of AI that we already take for granted, such as Apple’s Siri, Amazon’s Alexa, Google Assistant, predictive text, facial recognition, customer-service chatbots, banking apps and Google Maps. (Possibly less usefully, Mercedes-Benz and will.i.am have developed Sound Drive, an AI-powered in-car entertainment system that will remix your tunes and create “musical expressions” of your acceleration, braking and steering.)
“The idea of human beings viewing AI technology as, possibly, an existential threat to their existing work, but also saying, ‘Let’s do something interesting with it,’ is important,” Clancy says. “That, however, takes an imaginative leap.”
This won’t happen of its own accord. Clancy hopes that AI:OK’s “literacy programme”, a first-step educational tool based on the recommendations of the Government’s Irish Artificial Intelligence Advisory Council, will help, for example, to create accelerator programmes that provide artificial-intelligence start-ups with funding, resources and mentoring.
Clancy understands why some people are apprehensive about artificial intelligence. “But the one thing you can’t do is to think you can stop it,” he says. “The positive argument, the positive message, is that AI is just a new technological development. It’s business as usual, so don’t worry.”