Introducing Endel: the First-Ever Algorithm to Sign a Record Label Contract

There was a bit of news last week that may have been overlooked. Warner Music Group announced its roster of newly signed artists for the coming year. The list included the usual staples – Coldplay, Madonna, and so on – but one signee stood apart from the rest: an algorithm. And as far as news goes, no one batted an eye.

That’s right. Warner Music has signed a 20-album distribution and publishing contract with Endel, a tech startup whose core algorithms collect sounds and noises based on user settings, time, and location, and generate ambient soundscapes to accompany you at any time of day.

Not only that, its ambitions stretch even further. The app can already respond to heart-rate data, and the company plans to take note of driving patterns and the rhythms of other daily activities to generate a soundscape playlist based on how your day went.

Endel, however, insists it has no malicious intent – it doesn’t plan on taking jobs from songwriters or musicians. But this is a debate that resurfaces every few years: will entertainment eventually replace humans with generative algorithms? And not just in music, but in TV and film as well?

Nonetheless, it’s always fun (yet scary) to talk about. The idea is nothing new; people have been throwing around the notion of algorithm-based entertainment for decades now. But each time the topic comes up, it’s framed as a matter of “when” it might happen. (The last time I remember it surfacing was when the Tron remake announced it would feature an artificially generated younger Jeff Bridges.)

The financial incentives are pretty clear – if a film calls for the next Marlon Brando, why go looking for the next Brando when you can build an artificial intelligence that mimics Brando’s every move and inflection? Or if an algorithm can map out the ingredients of a “successful” song, why not just let it churn out several songs to the same formula?

So, for the purposes of this article, let’s play along. Yes, A.I. will have (or has already achieved) the ability to craft what we consider a “good” song based on the data it analyzes in other “good” songs. But when we embrace the idea of algorithm-generated music, we are implying that the creative abilities of humans are limited. And that’s a scary thought: that humans are unable to transcend their own limitations and must instead rely on a machine to create art for them. That’s not what art is.

A machine will soon be able to map out an emotional song and reproduce the moods and palettes of other such emotional songs. But that’s not why we love particular songs. When we think of the power of “Smells Like Teen Spirit,” we’re not moved by how well it’s made or crafted, but by Kurt Cobain snarling the lyrics in his crude voice. When I hear “The Star-Spangled Banner,” I think of Hendrix shredding and setting his guitar on fire. I think of Nick Cave running atop the hands of his audience during “Stagger Lee,” balancing himself among them. No machine-made song, however well crafted or arranged, will ever match the prowess of artists ascending above their own capabilities to make something truly remarkable.

So, in the end, yes, artificial intelligence will eventually be able to make a good song. But it will never make a great one. It won’t have the balls.
