AI’s Emergence on Streaming

It’s become a fairly common occurrence for music listeners: you’re listening to a playlist on Spotify in the background while doing chores or working out, and you hear a song that sounds oddly familiar yet alien. The feeling is enough to make you stop what you’re doing and check who the artist is, and of course it’s one you’ve never heard of. You go to the artist’s page and find three or four singles with generic covers and maybe a few thousand monthly listeners. Why does this artist sound so similar to some of your favorites and appear alongside them in playlists, yet have no identifiable personality or presence? You’ve most likely stumbled on a batch of AI-generated songs, and their hold on the streaming world is only growing.

Fakery is nothing new to streaming. Bots that inflate streaming numbers have been in use for years. Spotify has even been known to hire producers to create songs it can use to fill “chill” and “lofi” playlists, allowing it to keep 100% of the revenue on those streams. In both of those cases, however, there is at least some human involvement. Even though Spotify is cheating other artists out of revenue with these commissioned songs, it is at least paying a real artist to create them. The introduction of AI into the ecosystem allows for a scenario with almost no humans at all, where AI-generated songs can be streamed by bots for small but purely passive profits.

Recently, Spotify removed tens of thousands of songs created with the AI service Boomy from its platform. Boomy has claimed its users have generated around 14 million songs, nearly 14% of all recorded music, which makes the number of songs removed a small sliver of what is currently on streaming. The songs were removed not because of their origin, or even because they may violate copyright law, but because Universal Music Group flagged them for using bots to boost streaming numbers. AI songs occupy a gray area of copyright law, where there is little precedent for what counts as plagiarism. When a song is generated to sound extremely similar to an existing artist but doesn’t use their exact voice or exact chord patterns, how can you legally prove it is plagiarizing an existing work?

Although a music industry giant like UMG can get some songs removed, independent artists may find it next to impossible to do the same when their songs are plagiarized. In a related series of incidents, some independent artists recently found their songs reuploaded to Spotify by another user who had slightly altered the speed or pitch. Local Milwaukee band Social Caterpillar went through this scenario earlier this year, and it took weeks to get Spotify to take the stolen tracks down. If blatantly stolen songs are such a hassle to remove, how difficult will it be for independent artists to remove songs created with AI to sound like them? With only vague legal protections in place, and without the massive legal team of a major label, where will they even start the removal process, and what happens if Spotify simply says no?

Though AI is still a ways away from matching the creativity of a human artist, it is good enough to create songs that can fill the gaps in streaming playlists. Streaming profits for artists are already laughable, and AI is slicing them even thinner by cutting into listening time that would otherwise go to those artists’ music. We are entering dangerous territory, where merely adequate generated songs can slowly begin to replace interesting and creative work from real artists. It’s important to recognize this issue and start working to provide protections for human artists before too much irreversible damage is done.
