7 March 2024 to 13 December 2023
¶ Talking to Robots About Songs and Memory and Death (PopCon 2024) · 7 March 2024 essay/listen
Talking to Robots About Songs and Memory and Death
Infinite Archives, Fluctuating Access and Flickering Nostalgia at the Dawn of the Age of Streaming Music
(delivered at the 2024 Pop Conference)
Let me tell you how it used to be. Songs were written and sung and recorded, but then they were encased in finite increments of plastic, and our control over our ability to hear them relied on each of us, sometimes in competition, acquiring and retaining these tokens. The scarcity of particular plastic could shroud songs in selective silence. A basement flood could wash away music.
Imagine, instead, a shared and living archive. Music, instead of being carved into inert plastic, is infused into the frenetic dreams of angelic synapses. Every song is sung at once in waiting, and needs only your curious attention to summon it back into the air. Nothing, once heard, need ever be lost. The rising seas might drive us to higher ground, but our songs watch over us from above.
When I proposed this talk, Spotify held 368,660,954 tracks from 61,096,319 releases, and I could know that because I worked there. The servers of streaming music services are unprecedented cultural repositories, diligently maintained and fairly well annotated. We pour our love into them, and in return we can get it back any time we want.
That's the techno-utopian version, at least. In the real-life version, the angels are only robots, and the robots aren't even actual robots. The infinite generosity of technology is constrained by relentless pragmatic contingencies: corporations, laws, contracts, stockholders, greed. All those songs are there, technically poised, but whether we are allowed to hear them depends on layers of human abstraction and distraction. This is what people mean when they object to streaming as renting the things that you love. The erratic logistics of music licensing control whether any given song is permitted to escape from the streaming servers. Licensing, in turn, is permuted by artists and labels and distributors and streaming services, and then again by the borders of countries and the passage of time. The song you want to hear again is still there. But that may not be enough.
"Renting the things you love" sounds bad. But so too, I think, does "purchasing the things you love". I don't philosophically need or want my love to be materialized in a form for which I have to transact, and which I then have to store. I want to be able to recall joys effortlessly. The system model of instant magical recall, which is an illusion that streaming can sustain under conducive network conditions, is what I think we want, what music wants. If renting is reliable, maybe it's fine. But how reliable is it?
If you don't work for a streaming service, you can only really assess this by anecdote. Joni Mitchell objected to Spotify's podcast deal with Joe Rogan, and revoked its rights to her whole catalog. Because rights are complicated, though, it didn't entirely work. When I proposed this talk, there was one Joni Mitchell song still accessible on Spotify in the US, a stray copy of "A Case of You" from a random compilation released in roundabout evasion by some label other than hers. If you didn't know this context, you would have no immediate way to tell Joni wasn't an emerging artist with just the one complicated, hopeful first single so far. A complicated hopeful first single with 103,102,704 plays, apparently, so you might wonder a little bit. Promising, I think. I'd like to hear more.
Since then, the license police caught up to that rogue compilation, and "A Case of You" is gone again. As of my drafting of this talk, Joni Mitchell's Spotify catalog was a 10-song 1970 BBC live album, and a single pointlessly overbearing cover of "River" by somebody else that was gamed onto Joni's Spotify page by the trick of labeling it as a classical composition, which causes Spotify to treat its composer as one of its primary artists. If the only artists with the power to withhold their songs were ones of Joni's stature, that would actually be fairly manageable. The plastic tokens of Blue are not scarce or expensive. If only artists had the power to withhold songs, actually, this would be a conversation about art and the limits of authorial control, and whether Joni is allowed to come take your copy of Blue away from you if you listen to Joe Rogan.
If you do work for a streaming service, though, and you can manage not to resign in protest of anything it does that you disagree with, then you don't have to rely on anecdote, you can use data. So I did. I ran a historical analysis of all post-release licensing gaps in song availability from 2015 to 2023, both overall and aggregated by licensor. In practice, it turns out, almost all songs available today have been available for streaming continuously since release. There are a handful of licensors whose tracks are routinely retracted, and there are good reasons for this, which I'm not allowed to tell you, but I can reassure you that those are not the tracks we're worried about. Actual licensing gaps for actual songs with actual audiences are, statistically speaking, vanishingly rare. I made a nice graph of this.
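For the programmers in the room: the shape of that analysis, reconstructed from memory with invented field names rather than anything I can still show you, was roughly this.

```python
# A minimal sketch of the gap analysis described above; the record
# fields and names here are hypothetical, not anybody's real schema.
from collections import defaultdict
from datetime import date
from typing import NamedTuple

class AvailabilityWindow(NamedTuple):
    track_id: str
    licensor: str
    start: date          # when the track became streamable
    end: date | None     # None means still available

def post_release_gaps(windows: list[AvailabilityWindow]) -> dict[str, int]:
    """Count post-release availability gaps, aggregated by licensor.

    A 'gap' is any interval after a track's first availability during
    which it was not streamable at all. The overall count is just
    sum(result.values()).
    """
    by_track: dict[str, list[AvailabilityWindow]] = defaultdict(list)
    for w in windows:
        by_track[w.track_id].append(w)

    gaps_by_licensor: dict[str, int] = defaultdict(int)
    for track_id, ws in by_track.items():
        ws.sort(key=lambda w: w.start)
        covered_until = ws[0].start
        for w in ws:
            if w.start > covered_until:          # a hole in coverage = a gap
                gaps_by_licensor[w.licensor] += 1
            covered_until = date.max if w.end is None else max(covered_until, w.end)
    return dict(gaps_by_licensor)
```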
If you work for a streaming service, however, you can also get laid off by that streaming service, which I also did. When this happens you have a surprise 10-minute call with an HR rep you've never seen before at 9:15 on a Monday morning, and then your laptop is remote-locked and you don't have those graphs any more. The robots are not allowed to talk to me now. Who will sit with them when they are sad? The problem with externalizing our memories and our note-taking into the cloud isn't technological reliability, it's control. The problem with renting the things you love is not the fragility of the things, it's the morally unregulated fragility of the relationship between you and the corporate angels.
We'll be OK without that graph. It was not, shall we say, the "A Case of You" of data graphics. The things that really belonged to Spotify, Spotify can keep. The governance models for modern corporations are still painfully primitive. We understand that local democracies and a little bit of international law are a better model than crusader feudalism for communities of place, and I feel like it's morally apparent that corporations, as communities of purpose, ultimately deserve the same models and protections. If you move away from a city, you're still allowed to come visit. I should probably be allowed to visit my graphs. I like to imagine Joni ripping copies of her own CDs and adding them to Spotify as local files just as a jurisdiction flex.
My listening, on the other hand, is my own. Consumer protections are slightly more advanced than employee protections, so you can request your complete listening history from Spotify any time you want. For much of the decade I spent working at Spotify, though, I also maintained an abstruse weekly annotated-playlist series called New Particles, so I have my own record, not just of what I heard, but what I cared about. Over the course of 454 weeks, I cared about 35,900 tracks by 13,951 different artists. This is small for data, but big for anecdotes. What I find, going through it, is that almost every week beyond the recent past has at least one song that is now, or at least currently, unavailable. Some of the earliest lists from 2015 are missing 3 or 4, but by 2017 and 2018 it's usually 1, plus or minus 1.
Counting is not an emotional exercise, though, and all interesting music-data experiments begin with some kind of counting but don't stop there. So I went through the playlists I was listening to in my birthday week each year, cross-checking the specific tracks that had gone gray in Spotify, to see if I could tell a) how missing they really are, and b) how much I care. This is mostly what my job at Spotify was like, too: short bursts of math, and then the long curious process of trying to understand the significance of the resulting numbers. And I did consistently say that I would be doing this even if they weren't paying me.
From my March 31st 2015 list I am missing the song "Kranichstil" by the Ukrainian/German rapper Olexesh. His albums before and after seem to all be available for streaming still. This one isn't, but the song is easily found on YouTube. It's still sinuous and boomy and great.
2016: I'm missing "Rolling Stone" by the Pennsylvania emocore duo I Am King. They're still putting out sporadic emo covers of pop songs, which is one of my numerous weaknesses. "Rolling Stone" was an original, and I admit I don't remember it super-well, so maybe the version that is currently available on Spotify is different from the unavailable one I liked in 2016, but it's definitely close enough.
2017: "Por Amor" by the Chilean modern-rock band Lucybell. I had the single of this on my playlist, and you can't play that any more, but the slightly longer version is still the first track on the readily-available album Magnético, and still sounds like a stern Spanish arena-rock transformation of a New Order song.
2018: The whole album MASSIVE by the K-Pop boy-band B.A.P. is unavailable, but the song I liked, the cartoonish rap-rock rant "Young, Wild and Free", was originally on a 2015 EP, which is still available.
2019: The trap-metal noise-blast "HeavyMetal!" (no space between the words, exclamation point at the end) by 7xvn (spelled with the number seven, then x-v-n) is off of Spotify, but you can still find it on Soundcloud, which in this case feels about right.
2020: A gothic metalish song called "Menneisyyden Haamut" by the Finnish band Alter Noir. Their Spotify page is empty now, and if you Google this song, the results are the orphaned Spotify page, two links from their own Facebook page to that empty Spotify page, and then my playlist that I put the song on. I sent myself an email to see if I knew what the story is with this, but I haven't heard back from myself yet.
2021: "Fuck You Nnb" by lieu. I am old and do not know what "Nnb" stood for, but I do know that lieu was supposedly a 15-year-old kid deliberately switching between distributors so their songs would end up strewn across disconnected artist identities. Perfect public memory of what we thought was a good idea when we were 15 is not necessarily a civic virtue. In some cases forgetting is probably the right way to remember.
2022: ANISFLE were an ornate Japanese rock band, or at least a heavily embroidered impression of one. Their Spotify profile is blank, their web store is empty, I guess something catastrophic happened to them. But there are still a few of their videos on YouTube, and they're still ridiculous and magnificent.
2023: The only new thing I loved last April 4th that didn't even survive for a year was a maskandi song called "UYASANGANA YINI" by uMjikelwa. It seems to still be available on Apple Music. One of his other albums is on Spotify, and I will be completely honest that although I adore maskandi and follow hundreds of maskandi artists to make sure I have a constant supply of new maskandi to listen to, I usually pick one random song from each album and I do not pretend I can really tell them apart. If you snuck into my web archives and swapped this for anything similar, I would almost certainly not notice.
I think I can live with that much loss. Individual human obstruction occludes individual archives, but the network of archives, from the well-regulated to the unruly, tends to route around suppression. It's hard to make everybody forget.
And meanwhile, my database memory is far, far better than my brain memory. How many of those 13,951 artists could I list without looking? Some. Lots, but not most. But this is how I live, now. How old was my kid when we had the birthday party where their best friend's brother fell on a brick and had to be taken to the ER? I don't remember, but I can look through Google Photos and find it by the pictures we took before the panic. Which China Miéville book did I read first? I don't remember, but I bet I can find the email I wrote you right afterwards. Or maybe I sent it from a work address and so I can't.
So yes, our technically perfect externalized memories are imperfected by our insistence on staging them behind our contentious and fluctuating rules. We produce a compromised projection of our archives by fighting over their access controls. Our human systems hold back our information systems.
But I think we'd rather have that than the other way around. If my record store, in 1989, had made a ridiculous deal with Joe Rogan and Joni had pulled her whole divider out of the M bins, we would have had no collective recourse. We could check the used stores, but who gets rid of Joni Mitchell albums? Recovering from this, later, would require re-shipping a case of Blue to, oh, Canada? And everywhere else. The grayed-out tracks on Spotify playlists are more like the coy ropes across the wine shelves in Whole Foods on Sunday in blue-law states. Not only are we ready when the laws and processes finally relent, but we are reminded, every moment until then, how arbitrary and ridiculous it is that they still have not.
What would better laws and processes involve? What we need here, I think, is a legal and syntactical structure for asserting music rights as layers, starting with the artist. Right now, each licensor of a recording makes a deal with its artist, with terms and dates, but then turns around and sends the streaming services only enough data to assert that licensor's own isolated claims. If licensors were required to pass along both the span of their claim, and the underlying artist ownership to which the rights will subsequently revert, then royalty attribution could fluctuate without affecting availability. And by the way, while we're building that, we could also use the same structure to embed the composition rights with the recording rights, eliminating the completely insane indirection in which the publishing rights for streaming songs have to be re-asserted separately by writers and then rediscovered separately by collecting societies.
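To be concrete about what I mean by "layers", here is a toy sketch of the kind of record I'm imagining; the field names are mine, hypothetical, and correspond to no existing delivery spec.

```python
# A hypothetical sketch of layered rights metadata; none of these field
# names come from any real delivery format or standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LicensorClaim:
    licensor: str
    start: date
    end: date | None = None      # None = open-ended claim

@dataclass
class Recording:
    isrc: str
    artist: str                       # underlying owner; rights revert here
    composition_writers: list[str]    # publishing embedded with the recording
    claims: list[LicensorClaim] = field(default_factory=list)

def royalty_recipient(rec: Recording, on: date) -> str:
    """Whoever holds an active claim gets the recording royalties on
    this date; otherwise they revert to the artist."""
    for c in rec.claims:
        if c.start <= on and (c.end is None or on <= c.end):
            return c.licensor
    return rec.artist
```

In a structure like this, the availability of the recording never depends on which claim happens to be active; only the routing of the money does.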
If this last idea appeals to you so much that you would like to read it again in print, it also appears in my upcoming book, titled You Have Not Yet Heard Your Favorite Song: How Streaming Changes Music, which comes out in June on Canbury Press. A book is another kind of externalized memory. It's good to remember how we thought things were. In my case I wrote this one while I was working at Spotify, but not because I was working at Spotify, and at least I got laid off in time to edit a bunch of present- and future tenses into the past before they were printed on paper. Memory, too, is a system: of layers and contingencies and adaptation and revelation. Underneath, somewhere, there's always love. We fall in love three minutes at a time, and we might forget the songs but we won't forget the falling.
Meanwhile, we improve the world when we can, with whatever tools and influence we are currently allowed. When we can't, we try to preserve its potential in hiding, if not in angelic invulnerability, then at least above the water line. We leave the robots on guard, not because we trust them, but because it makes them feel useful and we don't have the heart to tell them that they aren't real. We let new songs invoke the ones we're not supposed to hear. We name our loss, and we try to not die before the day when we're allowed to remember everything again.
If you are interested in hearing me speak, in person, and you live in LA, you have two opportunities coming up.
On Friday, March 8, I will be at UCLA speaking about human complications of simple numbers in music data, as part of a day-long Music + Data Symposium. The event is free, but seating is limited and you should register in advance.
On Saturday, March 9, at 9:00am, I will be at USC speaking about infinite archives, fluctuating access and flickering nostalgia at the dawn of the age of streaming music, as part of the three-day Pop Conference. This too is free but limited, and you should register in advance.
If you aren't near LA and just want to hear my voice, I was also on an episode of the new NeuralZen Venture Podcast. I don't know that anything I said there qualifies as Neural, Zen or relevant to venture capital, but it's a good medley of a bunch of things I have been saying over and over to many people in the last few weeks as I talk to them about the state of music and what my next steps in it might be.
¶ Lotteries We All Lose · 11 February 2024 listen/tech
The systemic moral imperative seeks the distribution of power over its concentration, and thus the reduction of inequities of power. Money is usually a good proxy for power, so it's tempting to regard any redirection of money to the preexistingly unwealthy as moral. But this is both a dangerous conflation of cause and effect, and an attractive nuisance of potentially misleading measurement.
In fact, the most common nominal redistributions of money in a functionally self-defending power-structure are likely to be ones that specifically do not meaningfully distribute power. Capitalism's idea of charity is billionaires bestowing heroically magnanimous gifts. The recipients of this benevolence do benefit from it, but they do not generally become independently powerful themselves as a result. And one of capitalism's favorite forms of structural redistribution of money is the lottery. Lotteries, by which I mean all general systems that assign selective benefits to a minority of the disempowered via processes that are either literally random or effectively random because they are out of the recipients' control, transfer money without conferring agency. Government lotteries usually compound this flaw by appealing to the disempowered and thus acting as a regressive tax, as well.
Jackpot-weighted lotteries, like Mega Millions and Powerball, have one more trick, which is that their biggest prizes can only be portrayed as redirecting money to the unwealthy by disingenuously selective definitions. Any individual jackpot winner is almost certain to have been among the unwealthy before their windfall, so any economic metrics that attribute the win to the collective unwealthy will look superficially progressive. But of course the actual effect is that the winner is moved from the category of the unwealthy to the ranks of the wealthy, at least nominally. The collective state of the unwealthy is unchanged. The power of billionaires is not threatened by the anointment of one more, particularly if the new one gets money without any of the other entitlements that usually help the rich stay rich, and is thus likely to either fall back out of the category of the wealthy by their own mismanagement, or at least spend their money on predictable signifiers of wealth and thus offer no systemic disruption.
A lottery is an algorithm, and of course the same moral calculus applies to all algorithms, particularly ones that operate directly as social or cultural systems. A music-recommendation algorithm is systemically moral if it reduces inequities of power among listeners and artists. Disproportionately concentrating streams among the most popular artists is straightforwardly regressive, but distributing streams to less popular artists is not itself necessarily progressive. A morally progressive algorithm distributes agency: it gives listeners more control, or it encourages and facilitates their curiosity; it helps artists find and build community and thus career sustainability. Holistically, it rewards cultural validation, and thus shifts systemic effects from privilege and lotteries towards accessibility and meritocracies.
The algorithms I wrote to generate playlists for the genre system I used to run at Spotify were not explicitly conceived as moral machines, but they inevitably expressed things I believed by virtue of my involvement, and thus were sometimes part of how I came to understand aspects of my own beliefs. They were proximally motivated by curiosity, but curiosity encodes an underlying faith in the distribution of value, so systems designed to reflect and magnify curiosity will tend towards decentralization, towards resistance against the gravity of power even if they aren't consciously counterposed, ideologically, against the power itself. The premise of the genre system was that genres are communities, and so most of its algorithms tried to use fairly simple math to capture the collective tastes of particular communities of music fans.
The algorithm for generating 2023 in Maskandi, for example, compared the listening of Maskandi fans to global totals in order to find the new 2023 songs that were most disproportionately played by those people.
Or, to phrase this from the world into streaming data, rather than vice versa, there is a thing in the world called Maskandi, a fabulously fluttery and buoyant Zulu folk-pop style, and there is an audience of people for whom that is what they mean when they say "music", and their collective listening contains culturally unique collective knowledge. Using math to collate that collective knowledge can allow us to discover the self-organization of music that it represents. If we do this right, we do not need to rely on individual experts approximating collective love with subjective opinions. If we do this right, we support a real human community's self-awareness and power of identity in a way that it cannot easily support itself. There's no magic source of truth about what "right" consists of, which is the challenge of the exercise but also exactly why it's worthwhile to attempt. For 12 years I spent most of my work life devising algorithms like this, running them, learning how to cross-check the cultural implications of the results, and then iterating in search of more and better revealed wisdom.
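Stripped of all scale and safeguards, the math was not much more than a ratio; this toy version, with invented counts and a made-up threshold, captures the shape of it.

```python
# A toy version of the genre-listening comparison described above; the
# counts, threshold and function name are invented for illustration.
def disproportionate_tracks(
    fan_plays: dict[str, int],      # plays of each new track by Maskandi fans
    global_plays: dict[str, int],   # plays of the same tracks by everyone
    min_global: int = 1000,         # ignore tracks too small to judge
) -> list[tuple[str, float]]:
    scored = []
    for track, g in global_plays.items():
        if g < min_global:
            continue
        f = fan_plays.get(track, 0)
        scored.append((track, f / g))   # share of the track's listening from fans
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

The hard and interesting parts are everything around that division: deciding whose listening counts as fandom, and checking whether the top of the resulting list looks like Maskandi to the people who love it.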
In general, I found that collective listening knowledge is not especially elusive or cryptic. Streaming is not inherently performative, so most people listen in ways that seem likely to be earnest expressions of their love. That love can be collated with very simple math. Simple math that produces specific results is good because it's easy to adjust and evaluate. You might argue, I suppose, that simple math, by virtue of its simplicity, does not establish competitive advantages. If music services all have the same music, and music players all have the same basic controls, then services are differentiated by their algorithms, and more complex algorithms are harder for competitors to replicate.
I offer, conversely, the rueful observation that in the last 12 years no other major music service has developed a cultural taxonomy of even remotely the same scale as the genre system we built at the Echo Nest and Spotify, while all of them have implemented versions of opaque personalization based on machine learning. ML recommendations are an arms-race with only temporary advantages. The machines don't actually learn, they always start over from nothing. ML engineers, too, can be trained from nothing or bought from other industries, without needing special love. But machines that do not run on love will not produce it.
In particular, ML algorithms tend to drift towards lottery effects. Vector embeddings, even if they are trained on human cultural input like playlist co-occurrence, tend to introduce non-cultural computational artifacts by their nature. And thus we get things like this set of music my Spotify daylist recently gave me:
You don't need to hear the music behind these images to guess that it's mostly aggressive metalcore, but if you happen to know a lot about metalcore you could also notice that you probably have not heard of most of these bands. I am not a big fan of this very specific niche of metal, personally, which is the first thing wrong with this set as a personalized result for me. Bad results aren't disturbing because they're bad. Algorithms don't always work, for many reasons.
But as I scanned through these songs, I couldn't help noticing that they all sounded very similar. And as I poked through the artist links, trying to understand what this set of bands represents, I quickly realized that it doesn't. These bands are not all from any one place, they do not appear together on any particular playlists, their fans do not also like each other. They are not collectively part of a real-world community. Many of them have fewer than 100 monthly listeners, sometimes a lot fewer, and thus probably do not even individually represent real-world communities. They do appear to be real bands, rather than opportunistic constructs or AI interpolations, and in general they aren't bad examples of this kind of thing.
But they didn't end up on my list by merit or effort. They ended up here because Spotify uses ML techniques to group songs by acoustic characteristics, and this is one of the inputs into the vector embeddings that produce recommendations for daylist, Discover Weekly and other ML-driven personalized playlists. Acoustic similarity isn't completely random on the level of Powerball, but it's not a cultural meritocracy, and it's not a model for giving artists or listeners agency. Picking unknown artists out of the vast unheard tiers of streaming music is not an act of cultural incubation or stewardship, it's a mechanism of control. There are thousands of bands who sound like this. If you are one of the almost-thousands who are not randomly on my list, there's no action you can take to change this. If any one band ever gets famous this way, and statistically this is bound to happen rarely but eventually, you can be pretty sure we'll hear about it in self-congratulatory press releases that do not feature everyone else left behind. One exception doesn't change the rules. Lottery exposure offers a fleeting illusion of access, but if you didn't build it, you can't sustain it, either. You might hope, if you are in one of these lucky bands that reached me, that millions of not quite metalcore fans also got sets like this on a Friday afternoon, but two Friday afternoons later these bands are still obscure, still isolated. Losing lottery tickets do not make you luckier, but worse, lucking into more listeners this way doesn't give you an audience with any unifying rationale or presence, or a community to join. You can't learn from randomness, you can only hold still and hope it somehow picks you again.
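Mechanically, there's no mystery in this kind of retrieval. A reductive sketch, with invented vectors standing in for whatever a real embedding pipeline produces, looks like this; notice that nothing in it knows or cares whether any two of these bands have ever shared a listener.

```python
# A reductive sketch of acoustic-similarity retrieval; the vectors and
# catalog are invented, and real embedding pipelines are fancier, but
# the cultural blindness is the same.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 0.0 if norm == 0 else dot / norm

def nearest_by_sound(seed: list[float],
                     catalog: dict[str, list[float]],
                     n: int = 10) -> list[str]:
    """Return the n tracks whose acoustic vectors are closest to the seed.
    Nothing here consults playlists, scenes, geography or fans."""
    ranked = sorted(catalog, key=lambda t: cosine(seed, catalog[t]), reverse=True)
    return ranked[:n]
```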
This is exactly what the power-structure wants: listeners holding still to see what daylist tells them to listen to on Friday afternoon, artists holding still hoping to be chosen. Measure this control by money and it looks virtuous, taking a few streams from the most saturated songs and sprinkling them sparingly across the thirstiest. Measure it by alleviated thirst, though, and it evaporates. Or, rather, it condenses, but only into the reservoirs of the machine itself. Audit the beneficiaries and you might find that they aren't even random. ML's idea of the distribution of power is enough unpredictability to distract from its own motivations. My idea of the future of music is not a chaos engine printing rigged lottery tickets that mostly don't even pay for themselves. It's a future that we build. It's a future we could build faster with better tools, and algorithms can be those tools. But only if they are handed to us, with intelligible instructions, as we are in productive motion. Only if they are designed not to give us each little jolts of seemingly new power for which we can yearn, but to give all of us, together, currents of shared power with which our yearning can be expressed and redeemed.
¶ Algorithms and Humility (and All the Days the Music Doesn't Die) · 3 February 2024 listen/tech
When you go to an artist's page on Spotify, there's a big Play button at the top. This seems reasonable enough. Playing their music isn't necessarily what you want to do, but it's one of the most likely things. What does it mean, exactly, to "Play an artist", as opposed to playing a particular release? Hit the button and pay attention to the track-sequence you get, and you can quickly figure out what Spotify has chosen to make it do, which is that it plays the artist's 10 Popular tracks in descending popularity order. After that it gets a tiny bit trickier to follow, because it goes through the artist's releases, and those releases are listed right there on the artist page, but the playback order usually doesn't match the display order. But poke around and you'll find that the playback order matches the Discography order (what you get to via the "Show all" link next to the list of "Popular releases"), which is reverse-chronological in principle, although release-dates are a contentious data-field so good luck with that.
This is reasonable behavior, not least because it's explainable, but it's not always the greatest listening experience. What you probably want, I think, if you just hit Play without picking your own starting point, is a sampler of the artist's songs. Their 10 most popular songs are a subset, but not always a great sample. They might all come from the same album, they might include multiple versions of the same song, they might include intros or interstitial tracks that don't actually make sense on their own. And a reverse-chron trudge through literally all the artist's releases, after those first 10 popular tracks, is not a "sample" at all.
This bothered me, so at one point pretty early in my long time at Spotify I spent a little while seeing if I could devise an algorithm to create a better sample-order. It wasn't especially complicated, but it tried to diversify the selection by album, and to group song-versions in order to understand singles as part of their album's eras, and not play the same real-world song over and over due to minor variations. It rarely produced the same summary of an artist's career that a knowledgeable human fan would have, because it didn't have any real cultural insight to work with, but it did a decently non-idiotic job for most artists. I felt pretty good about claiming that it was a better default introduction to an artist than playing the 10 most popular tracks and then every single release.
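Reconstructed from memory and simplified to the point of caricature, with hypothetical field names and thresholds, the idea was something like this.

```python
# A simplified, from-memory sketch of the sample-order idea; the fields,
# thresholds and helpers are hypothetical, not anybody's actual code.
import re
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    album: str
    popularity: int
    duration_sec: int
    available: bool

def base_title(title: str) -> str:
    """Collapse 'Song - Live', 'Song (Remastered 2003)' etc. onto 'song'."""
    return re.split(r"\s*[-(\[]", title.lower())[0].strip()

def sample_order(tracks: list[Track], limit: int = 20) -> list[Track]:
    usable = [t for t in tracks if t.available and t.duration_sec >= 60]
    # keep only the most popular version of each real-world song
    best: dict[str, Track] = {}
    for t in usable:
        k = base_title(t.title)
        if k not in best or t.popularity > best[k].popularity:
            best[k] = t
    # group by album, most popular first within each album
    by_album: dict[str, list[Track]] = defaultdict(list)
    for t in sorted(best.values(), key=lambda t: t.popularity, reverse=True):
        by_album[t.album].append(t)
    # round-robin across albums so the sample doesn't camp on one record
    order, queues = [], list(by_album.values())
    while queues and len(order) < limit:
        queues = [q for q in queues if q]
        for q in queues:
            if q and len(order) < limit:
                order.append(q.pop(0))
    return order
```

None of this requires machine learning or cultural omniscience; it mostly requires not playing the listener the same real-world song four times in a row.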
That wasn't what we ended up doing with the idea, though. Invisible improvements are unglamorous. Instead, it became A Product. That product was the "This Is" artist-playlist series. And because Products make Claims, this new playlist series got an ambitious tagline: "This is [artist name]. The essential tracks, all in one playlist."
Here, apropos of today's anniversary of The Big Bopper's untimely death, are the contents of This Is The Big Bopper:
You can see, I think, that the execution has not quite lived up to the premise. The algorithm has done its best to vary the order of nominal source albums, but The Big Bopper didn't make any albums while he was alive, so all of these are actually posthumous compilations. He didn't record very much, period, so in an attempt to make a playlist that isn't just his two hits, the rules have picked a bunch of tracks that aren't even available for streaming, including a couple of sub-1:00 news clips that we are probably happy to be forced to skip, and a very dubiously misspelled "It's the Thruth, Ruth" that probably shouldn't have been released in the first place. But even without those, it makes little musical sense to describe this set as "all the essential tracks". Most of these are no more "essential" than the others, and the official a-side of his first single, "Purple People Eater Meets the Witch Doctor" ("Chantilly Lace" was the nominal b-side of this), is missing.
As an unseen track-order for a sampler, though, this isn't terrible. It improves on the default play-button behavior by not playing the same songs 3 or 4 times each, at least. I'm pretty sure my original version of this algorithm had a duration-filter that would have eliminated the news clips, and an availability filter that would have blocked the Thruth. The algorithm, itself, was a small useful thing that improved the world a little bit. That's all, as its author, I ever claimed about it.
The claims we make, about our algorithms, are a different thing from what they are. I was not in charge of the claims Spotify ended up attaching to this one. I believe that algorithmic intermediation of culture should be done with relentless humility and care. This is not the attitude generally adopted by tech-product marketing. "All the essential tracks" is a more compelling premise than "a slightly better sample-order", for sure. I wouldn't have used it, because the algorithm doesn't deliver it. Marketing doesn't care.
Does it matter? In this case, maybe it doesn't matter a lot. In truth there's probably only one "essential" Big Bopper song, and it's "American Pie". You've achieved a minimally acceptable cultural literacy if you know what Don McLean's memorial is about, and extra credit if you can hum "La Bamba" and any Buddy Holly song that isn't actually by Weezer. The Big Bopper is, sadly, a lot more famous for dying in a plane crash than he is for anything he sang. If you hit his Play button and get "Chantilly Lace", that's already more than most people know.
The This Is series has gone on to be pretty popular. It's exciting to get a This Is playlist, as an artist, because it suggests that you have "essential" tracks. But that, too, is a marketing claim with no inherent grounding. The criteria for generating them are logistic, not cultural, and the thresholds have been adjusted downwards over time. I have one, and my music is as non-essential as you can get without employing AI. Illusory validation caters to vanity, and subtly devalues actual validation.
Taken in collective aggregate, these tech-marketing tendencies to oversell the significance of algorithms, and in particular the hubris in making cultural claims about the results of mostly-uncultural computation, are a sort of pervasive reverse-gaslighting, substituting brightly confident light where it should be modestly dim. And every little cognitive dissonance like this that we accept erodes either our actual awareness of misrepresented reality, or our trust in systems, or both.
But here's the thing. At the end, there's still music. The algorithms have no soul for music to save. We do. Our machines can only gaslight us if we grant them authority. So don't. They serve at our pleasure, but sometimes they work. You don't have to trust them to cherish them when they help. The past doesn't always organize itself, and math and patterns of our listening can tell us things we only almost already knew. Here's another of my attempts to put songs in algorithmic order:
This one tries to re-center the universe of music on any individual artist of your choosing, and then follow a vague spiralish pattern outwards in every direction at once. If we start with The Big Bopper, does it reconstruct the Music that Died that day? I don't know, I wasn't even born yet. But this math started from The Big Bopper and rediscovered Buddy Holly and Ritchie Valens without knowing it should, so that's an interesting start. Is it "canonical"? No, of course not, the title is my rueful joke, and there's a note at the bottom that explains what I'm attempting. If you think algorithms themselves are the problem, I'm definitely part of it. I believe in attempts. If I had written the blurb for This Is, it would probably have said "An earnest algorithmic attempt at finding the maybe-essential tracks." Marketing doesn't talk that way. It isn't earnest, and it certainly isn't self-aware of how earnest it isn't.
But where self-awareness is systemically missing, we can sometimes reintroduce it ourselves. Not always, but sometimes. We don't have to let overselling trick us into thinking every oversold thing underperforms. We don't have to let premature marketing hubris scare us away from experimentation and helpful progress. Defuse their claim of essentiality with a now-knowing smirk, and those This Is playlists can be interesting. This may not be a canonical path, but it might still take us somewhere. That's something. We can let it be enough. Let algorithms work when they work for us, and fail cheerfully when they don't, and this will be yet another of all the days that music doesn't die.
¶ We Will Know Ourselves by Our Love; We Will Know You By What You Let Go · 1 February 2024 listen/tech
Collective listening is a cultural investment. Collected listening data can be valuable for music streaming services' selfish business purposes, of course, but it's generated by music and listeners, and should be valuable to the world and to music first.
It was my job, for a while, to try to turn music-listening data into cultural knowledge. My opinion, from doing that, is that there are four fundamental kinds of socially valuable music-cultural knowledge that can be learned, with a little attentive work but no need for inscrutable magic, from listening.
The first is popularity. The most fundamental change in our knowledge about music and love, from the physical era to the streaming era, is that we now know what every listener plays, instead of only what they buy. In its simplest form this produces playcounts, and thus the most basic form of streaming transparency and accountability is showing those playcounts. Streaming services have to track plays for royalty purposes, obviously, but music accounting is done by track, and cultural accounting is done by recording and song. At a minimum, we consider the single and the reappearance of that same exact audio on the subsequent album to be one cultural unit, not two, and thus want to see the total plays for both tracks combined in both places. Most major current services do this adequately, albeit at different levels of precision (and one major service glaringly does not display playcounts at all). But really, as people we know that the live version of a song is the same song as the studio version, and if we ask each other what the most popular song on a live album is, we do not mean which of those literal live recordings has been played the most, we mean which of those compositions has been conjured into the air the most across all its minor variations. So far no service has attempted to show this human version of popularity in public, although probably all of them have some internal representation of the idea for their own purposes. (I have worked on various logistical and cultural issues around song identity and disambiguation over the course of my time in music data, but never on the actual mechanics of music recognition, ala Shazam.)
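A minimal sketch of that roll-up, assuming a hypothetical mapping from track IDs to canonical song IDs; building and maintaining that mapping is the actual hard part:

```python
from collections import Counter

def cultural_playcounts(track_plays, track_to_song):
    """Roll track-level playcounts up to song-level ones, so a single and
    its identical album reappearance count as one cultural unit.
    `track_to_song` is a hypothetical mapping from track IDs to a
    canonical song (or recording) ID."""
    songs = Counter()
    for track_id, plays in track_plays.items():
        songs[track_to_song.get(track_id, track_id)] += plays
    return songs

# e.g. the single and the album track share one song ID:
plays = {"single_track": 75_000_000, "album_track": 20_000_000}
mapping = {"single_track": "song_1", "album_track": "song_1"}
print(cultural_playcounts(plays, mapping))  # Counter({'song_1': 95000000})
```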
The second kind of knowledge, derived from the first, is currency. We would like to know, I think, what music people are playing "now". Ariana Grande's new song is currently hotter than her old ones, even though it is nowhere near the total playcount of the old ones yet. This can be calculated with windows of data-eligibility, or by prorating plays by age, and most major services do some version of this, but only share it selectively. Spotify, for example, uses an internal version of currency to select and rank an artist's 10 most "Popular" tracks, but only those 10, and the only numbers you actually see there and elsewhere in the app are the all-time playcounts. I worked on a currency algorithm at the Echo Nest, before we were acquired by Spotify, but it's hard to do this very well without actual listening data, and the one Spotify had already devised from better data, without us, produced better results without being any more complicated.
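One simple way to compute something like currency is to prorate plays by age with an exponential half-life. The parameters and data shapes here are illustrative, not anyone's production values:

```python
from datetime import date

def currency(plays_by_day, today=None, half_life_days=30):
    """Age-prorated playcount: each day's plays are discounted by an
    exponential half-life, so recent listening dominates."""
    today = today or date.today()
    score = 0.0
    for day, plays in plays_by_day.items():
        age_days = (today - day).days
        score += plays * 0.5 ** (age_days / half_life_days)
    return score

# A new song with modest plays can outrank an old hit with a huge total:
new_song = {date(2024, 1, 30): 2_000_000, date(2024, 1, 31): 2_500_000}
old_hit = {date(2020, 6, 1): 300_000_000}
print(currency(new_song, today=date(2024, 2, 1)) > currency(old_hit, today=date(2024, 2, 1)))  # True
```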
The third kind of knowledge, moving a big step beyond basic transparency, is similarity. Humans listen to music non-randomly, and thus the patterns of our listening encode relationships between songs and between artists. Most current services have some notion of song similarity for use in song radio and other song-level recommendations, and also some notion of artist similarity for behind-the-scenes use in artist radio and more explicit use as some kind of exploratory artist-level navigation ("Related Artists", "Similar Artists", "Fans Also Like", etc.).
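A toy illustration of where that signal comes from, comparing artists by the overlap of the listeners who play them; real systems weight, prune and normalize far more carefully, but the core idea fits in a few lines:

```python
from collections import defaultdict
from math import sqrt

def artist_similarity(listening):
    """Cosine similarity over listener sets: two artists are similar to
    the extent that the same people play both of them."""
    fans = defaultdict(set)
    for listener, artist in listening:
        fans[artist].add(listener)
    artists = list(fans)
    sims = {}
    for i, a in enumerate(artists):
        for b in artists[i + 1:]:
            overlap = len(fans[a] & fans[b])
            if overlap:
                sims[(a, b)] = overlap / sqrt(len(fans[a]) * len(fans[b]))
    return sims

plays = [("u1", "Buddy Holly"), ("u1", "Ritchie Valens"),
         ("u2", "Buddy Holly"), ("u2", "Ritchie Valens"),
         ("u3", "Buddy Holly"), ("u3", "Morbid Angel")]
print(artist_similarity(plays))
```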
I worked on multiple generations of these algorithms in my 12 years at the Echo Nest and then Spotify, and as of my departure in December 2023 the dataset for the "Fans Also Like" lists you see on artist pages in the Spotify app was my personal work. In my time there I had many occasions to compare competing similarity algorithms, both in and out of music, and in a better world less encumbered by petty confidentiality clauses, I would cheerfully bore you with the tradeoffs between them at great length. In my experience simple methods can always beat complicated methods because they're so much easier to evaluate and improve, and time spent refining the inputs is usually at least as productive as tweaking the algorithms themselves, but much less appealing in engineering terms. I consider the calculated similarity network of ~3 million Spotify artists, as I left it, to be a historically monumental achievement of collective listening made mostly possible by streaming itself, but having had to do a lot of internal lobbying on behalf of the musical cogency of similarity results over the years, I am forced to concede that my personal stubbornness is more relevant than any one individual ought to be in this process. Spotify still has my code, but stripped of my will and belief I'm not sure it will thrive or even survive. My individual layoff doesn't necessarily express a Spotify corporate opinion on any larger subject, but it's hard to deny that if Spotify cared, organizationally, about giving the assisted self-organization of the world's listening back to the world, my individual production role in this specific form of it would have been a trivial and uncontroversial excuse for not letting me go. If they give up on this whole feature as a result of one person's absence, it will be a tragic and unforced loss for everybody.
The fourth key form of music knowledge, moving up one more level of abstraction from pairwise similarity, is genre. Genres are the vocabulary by which we understand and discuss music, and genres as communities are the way in which music clusters together in the world. Genres are communities of artists and/or listeners and/or practice, and usually some combination of all three. AI music will be meaningless and inherently point-missing if it attempts to apply sonic criteria without any references to communities of creation or reception, and it will turn out to be just one more non-scary new tool in the long history of creative tools if it ends up rooted in how communities sing to themselves about their love. There is no "post-genre" music future, or at least no non-nihilistic one, because music creates genres as it goes.
There are three ecosystemic ways to approach the data-modeling of musical genres: you can let artists self-identify, you can crowd-source categorization from listeners, or you can moderate some combination of those inputs with human expertise.
Two of those ways don't work. Artists self-identify aspirationally, not categorically. If you try to make a radio station of all the rappers who describe themselves as simply "hip hop", you will get a useless pool of 75,000 artists from which most will never be selected. Listeners, conversely, describe music contextually, so two different listeners' "indie pop" playlists may be using the phrase "indie pop" in totally unrelated ways, and thus may have no cultural connection at all. But motivated humans, especially if they know some things about music and are willing to learn more, can mediate these difficulties and channel noisy signals into guided and supervised extrapolations.
You might expect that a global music-streaming service, in recognition of its dependence on music and thus its responsibility to steward music culture, would have a large dedicated team working constantly on systematic, culturally-attuned genre-modeling. Spotify did not. It had editors making playlists, which is sometimes a form of genre curation and sometimes is not. It had ML engineers trying to find correlations between words in playlist titles and tracks, despite playlist titles very much not being a track-tagging interface at all, never mind a genre-categorization tool. It had a handful of people doing specific genre-curation work, mostly on our own initiative because we knew it was worthwhile. And it had me maintaining the genre system, with all its algorithms and all its curation tools. I invented the system (at the Echo Nest, before we were even acquired), I ran it, I supervised it, I tweaked it, I defended it, I believed in it, I helped people apply it to other music and business problems. I had a Slack trigger on the word "genre", so you could summon me from anywhere in Spotify by just typing it. The system grew from hundreds of genres to thousands. My own personal site, everynoise.com (which also predated the Spotify acquisition), was a way to share a sprawling holistic view of it that would never have made sense inside a black-and-green Spotify window or even a white Rdio window before that. I never managed, in ten years of trying, to get genres integrated into the actual daily Spotify music experience (I wanted there to be a list of Fans Also Like genres on artist pages right under the list of Fans Also Like artists; both of these are forms of cultural context and collective knowledge), but I know, from years of emails and stories and other people's independent enthusiasm (including, only shortly before the layoffs, this one in The Pudding, which said "an always-updating catalog of 6,000 genre is groundbreaking" with unfortunate foreshadowing) that I wasn't the only person who understood the value of this whole earnest and unruly and seemingly-endless project.
Will I be proven wrong about the "endless" part? Here, again, we cannot simply conclude that Spotify does not care about genres and music culture because I got laid off. The code remains. Some of the other people who did genre-curation work are still there. Spotify could just keep the internal system running, even if nobody but me would have the inclination or expertise to improve it any further. And maybe they will. I hope they will. It doesn't cost much in computing terms. Spotify is the world's most dominant music-streaming service and genres are how music evolves and exists. Surely one cares about the other.
But if they cared, and one person in a still-8000-person company is basically the smallest practical unit of care, keeping me around would have been self-evidently worthwhile. The genre system wasn't even the only thing I did. The genre system and Fans Also Like weren't even the only things I did. The genre system and Fans Also Like and Wrapped weren't even the only things I did. The public toys I made were the tiniest fraction of my work. If everything I did do wasn't enough, maybe they don't care, and maybe all these things will be unceremoniously abandoned.
But what comes from us, and is made out of our love, of course we can and will rebuild over and over. Spotify is not the only collector of collective listening. These were not the first attempts to connect artists through their shared fans, or to model the genres into which we assemble, and they were never going to be the last. Maybe we will look back on these meager, patchwork networks of only 3 million artists, and only 6000 genres, like we keep the absurdly self-important book reports our kids wrote when they were 9. We are proud of their care and their ambition, not their page-counts. We remember what they dreamed of becoming, and then we hug the people they are in the midst of becoming, and then we think about what we are going to do and become tomorrow.
¶ bemused complicated partial credit layoff friday morning · 26 January 2024 listen/tech
The glum Digital Music News headline reads "Spotify Daylist is Blowing Up – Too Bad the Creator Was Laid Off", and although I haven't specifically talked to the person who came up with Daylist since the layoffs, I don't think they were affected in this round. The explanation in the body of the story is a little more specific:
This is mostly true in what it actually says. I wasn't the only person working on the Spotify genre-categorization project, but I started it, I ran it, I wrote all of its tools and algorithms, and I worked on many applications of it to internal problems and app features. Without me it probably will not survive. And that genre system is one of the ingredients that feeds into Daylist.
The DMN piece is derived from an earlier article at TechCrunch, where the assertion is more carefully phrased: "Spotify's astrology-like Daylists go viral, but the company's micro-genre mastermind was let go last month". And more carefully reported:
...
The "look no further" flourish is misguided, since I didn't curate every individual genre myself, and maybe didn't personally configure any of the ones they cite. We did not make up the name "egg punk", either.
USA Today, drawing from both of these stories, kept the plot twist out of the headline ("How to find your Spotify Daylist: Changing playlists that capture 'every version of you'") and saved it for a rueful final paragraph:
The judicious "help" there is fair enough. And as none of these say, in addition to working on genres I was also a prolific source of this kind of internal personalization experiment, and thus part of an environment that encouraged it.
Daylist itself was absolutely not my doing, though. You'd have to ask its creator about their influences, but so far I haven't seen Spotify give public named credit for the feature, and in a period of sweeping layoffs, in particular, I encourage you to take note of the general corporate reluctance to acknowledge individual work. But while we're at it, I did not have anything to do with Discover Weekly, nor did anybody from the Echo Nest, which was the startup whose acquisition brought me to Spotify and which I did not found. These are not secret details, and a reporter could easily discover them by asking questions. None of the three people who wrote those three articles about Daylist talked to me before publishing them.
And although the Daylist feature itself is charming and viral, and I support its existence, it also demonstrates three recurring biases in music personalization that are worth noting for their wider implications.
The most obvious one is that Daylist is based explicitly on the premise that listening is organized by, or at least varies according to, weekdays and dayparts. It is not the first Spotify feature to stipulate this idea, and clearly there are listeners for which it is relevant. But I think both schedule-driven and the similar activity-driven models of listening (workout music, study music, dinner music..) tend to encourage a functional disengagement from music itself. Daylist mitigates this by describing its daypart modes in mostly non-functional terms, including sometimes genres and other musical terminology, and of course you aren't required to listen to nothing but Daylist and thus it isn't obliged to provide all important cultural nutrients. But the eager every-few-hours updating does make a more active bid for constant attention than most other personalization features. Discover Weekly and Release Radar are only weekly, and short. Daily Mix is only (roughly) daily, although it's both endless and multiple. I don't think the cultural potential of having all the world's music online is exactly maximized by encouraging you to spend every Tuesday afternoon the same way you supposedly always have.
The second common personalization bias in Daylist is that it manifestly draws from a large internal catalog of ideas, but you have no control over which subset you are allowed to see, and there is no way to explore the whole idea-space yourself. This parsimonious control-model is not at all unique to Spotify, but it's certainly pervasive in Spotify personalization features, from the type and details of recommendations you see on the Home page to the Mixes you get to the genre and mood filters in your Library. Daylist's decisions about your identity are friendly but unilateral. It's not a conversation. To its credit, Daylist is the first of these features that explains its judgments in interactive form, so you can tap a genre or adjective and see what that individual idea attempts to represent. But this enables only shallow exploration of the local neighborhood of the space. There's still no way to see a complete list of available terms or jump to a particular one even if you somehow know it exists. Obviously everynoise.com demonstrates my strong counterbias towards expansive openness and unrestricted exploration, but one might note that even after 10 years of me working on this genre project at Spotify, there's no place other than my own personal site to see the whole list of genres.
And the third common personalization bias demonstrated unapologetically by Daylist is the endemic tech-company fondness for unsupervised machine learning over explicit human curation. As you can see for yourself by comparing the "genre" mixes you find through Daylists with the corresponding genre pages on everynoise, the genre system is only one of Daylist's inputs. All the non-genre moods and vibes in Daylist obviously come from a different system, but even the genre terms are also filtered through other influences. I did help with those other systems, too, creditwise, but I didn't invent and wasn't running them.
Nor, honestly, do I trust them. You will learn to trust or distrust your own Daylists, if you spend time listening to them or even just inspecting them, but if you follow conversations about them online to get a wider sample than just your own, you will quickly find that they do not always make sense. Mine, right now, claims to be giving me japanese metal and visual kei, but much of it is actually idol rock and a mysterious number of <100-listener Russian metalcore bands that I have never played and which have no evident connection to bands I have. The "Japanese Metal" mix is mostly Japanese, but only sporadically metal. The "Visual Kei" mix is mostly Japanese, and does contain some visual kei, but you'd have to already know what visual kei is to pick those songs out. The "Laptop" mix opens with Morbid Angel's "Visions from the Dark Side", a song that not only was not made on a laptop (to put it mildly), but which narrowly predates the commercial availability of laptops entirely.
The genre system was not error-proof, either. But it was built on intelligible math, it was overseen by humans, and those humans had both the technical tools and moral motivation to fix errors. We did not have a "laptop" genre because "laptop" is not a community of artists or listeners or practice, but if we had, and the system had put Morbid Angel on it, I would have stopped all other work until I was 100% confident I understood why such an egregious error had happened and had taken actions to both prevent that error from recurring and improve the monitoring processes to instill programmatic vigilance against that kind of error.
But once you commit to machine learning, instead of explicit math, you mostly give up on predictability. This doesn't prevent you from detecting errors, but it means you will generally find it hard to correct errors when you detect them, and even harder to prevent new ones from happening. The more complicated your systems, the weirder their failure modes, and the weirder the failures get, the harder it is to anticipate them or their consequences. If you delegate "learning" to machines, what you really mean is that you have given up on the humans learning. The real peril of LLM AI is not that ChatGPT hallucinates, it's that ChatGPT appears to be generating new ideas in such a way that it's tempting to think you don't need to pay people to do that any more. But people having written is why ChatGPT works at all. If generative AI arrives at human truths, sometimes or ever, it's because humans discovered those truths first, and wrote them down. Every problem you turn over to interpolative machines is a problem that will never thereafter be solved in a new way, that will never produce any new truths.
The problem with a music service laying off its genre curator is not the pettiness of firing the person responsible for a shiny new brand-moment. I was responsible for some previous shiny new brand-moments, too, as recently as less than a week before the layoff, but not this one, and mere ungratefulness is sad but not systemically destabilizing. Daylist was made by other people, and will be maintained by other people. The problem is that I insisted on putting human judgment and obstinate stewardship in the path of demand-generation, and if that isn't enough to keep you from getting laid off from a music-streaming company, it's hard to imagine anybody else having the idiotic courage to keep trying it.
¶ New Releases by...Something · 25 January 2024 listen/tech
Whatever my job is or, currently, isn't, I still really want to know about new music, and there were exactly two excellent methods for this and I made them both and I'm not allowed to use either one any more. So I keep poking at approaches to this problem using public tools, and the Spotify API is still the best of these. I've gone so far as to implement a version of the obviously-intractable brute-force approach of getting a genre's artists, expanding to all those artists' Fans Also Like artists (refcounting along the way), and then getting the catalog for each artist in the resulting longer list in order to filter their releases by release-date.
This does sort of work, eventually. It's not especially convincing for coverage, because the public API only exposes 20 Fans Also Like artists, where the internal Spotify datasource behind that (at least as long as they keep maintaining the similar-artist system I wrote) had up to 250 for each seed artist. And it takes just about forever, because it requires thousands of individual queries, and even with only one instance of it running my API key quickly hits its rate-limit and gets throttled to wait in between calls.
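For the curious, the brute-force loop looks roughly like this against the public Web API; the genre-search seeding, the date cutoff and the rate-limit handling are all illustrative simplifications, not my actual tooling:

```python
import time
from collections import Counter
import requests

API = "https://api.spotify.com/v1"

def get(session, path, **params):
    # Naive throttling: if we get a 429, wait as long as Spotify asks.
    while True:
        r = session.get(f"{API}{path}", params=params)
        if r.status_code == 429:
            time.sleep(int(r.headers.get("Retry-After", "1")))
            continue
        r.raise_for_status()
        return r.json()

def genre_new_releases(token, genre, since="2024-01-11"):
    """Brute force: seed artists from a genre search, expand through each
    one's public related-artists list (20 entries each) with refcounting,
    then scan every candidate's releases for recent dates. Slow by design."""
    s = requests.Session()
    s.headers["Authorization"] = f"Bearer {token}"
    seeds = get(s, "/search", q=f'genre:"{genre}"', type="artist", limit=50)["artists"]["items"]
    refs = Counter(a["id"] for a in seeds)
    for artist in seeds:
        for rel in get(s, f"/artists/{artist['id']}/related-artists")["artists"]:
            refs[rel["id"]] += 1
    releases = []
    for artist_id, _ in refs.most_common():
        albums = get(s, f"/artists/{artist_id}/albums", include_groups="album,single", limit=50)
        # release_date precision varies (year, month or day), so this
        # string comparison is only approximately right
        releases += [a for a in albums["items"] if a["release_date"] >= since]
    return releases
```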
As I have noted, Spotify could mostly fix this problem by enabling genre: filtering in the /search API when searching for "albums" (which actually means releases despite my 10 years of trying to persuade a nominal music service to take the difference between singles and albums seriously), since this API already has a tag:new filter for getting new releases (from the last 2 weeks, which is also kind of arbitrary but at least means the last full release-week is always completely covered). There's already internal data for artists' "extended" genres, which is the extrapolated version using collective artist similarity. Or at least there is if they keep maintaining the genre system I wrote.
You can see exactly how viable this is, if you're curious and not unmanageably triggered by a thing that takes the shape of our loss without salving it, because the search API does already allow release searches to be filtered by label: the same way it could allow searching by genre. Any API app could do this, there's nothing special about my access or techniques here. But I looked and didn't find one, so I made it. This is what my job was like, too, and apparently I was literally correct when I used to say that I'd be doing it even if they weren't paying me.
Thus: New Releases by Label for, e.g., a list of 58 metal-related labels.
The chances are decent that if too many people try this at once it will slow down or die, too, but for each label it requires as few as two queries: one to get that label's new releases (in pages of 50 if a single label has more than 50 new releases from the last two weeks), and then at least one follow-up query (in pages of 20) to get those albums' tracks. This is reasonable overhead.
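In rough code, the per-label work looks like this sketch: the search call pages through the label's tag:new results, and the follow-up /albums calls return full album objects, which are what carry both the label field and the track lists. The token is assumed, and the label string has to match the label field exactly, as discussed below.

    # A sketch of the two-step per-label pattern: one search for the label's
    # new releases (pages of 50), then batched album lookups (20 ids per call).
    import requests

    API = "https://api.spotify.com/v1"
    HEADERS = {"Authorization": "Bearer ..."}  # assumed token

    def new_releases_for_label(label):
        album_ids, offset = [], 0
        while True:
            r = requests.get(f"{API}/search", headers=HEADERS, params={
                "q": f'label:"{label}" tag:new',  # exact label string, last two weeks
                "type": "album", "limit": 50, "offset": offset})
            r.raise_for_status()
            page = r.json()["albums"]
            album_ids += [a["id"] for a in page["items"]]
            if not page["next"]:
                break
            offset += 50
        albums = []
        for i in range(0, len(album_ids), 20):  # /albums takes at most 20 ids per call
            r = requests.get(f"{API}/albums", headers=HEADERS,
                             params={"ids": ",".join(album_ids[i:i + 20])})
            r.raise_for_status()
            albums += r.json()["albums"]
        return albums

    for album in new_releases_for_label("AFM Records"):
        print(album["release_date"], album["name"], "by", album["artists"][0]["name"])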
Labels are no direct substitute for genres, obviously, not least because if you care about music you need not also care about labels or whether artists are even on one.
And even if you do care about labels, label data is messy. It's something of a stretch to call it "label data", in the current music-distribution ecosystem. There's a text field for "label", and humans type stuff into it. If the humans doing a given bit of typing are diligent, and none of the other undiligent humans stray into the diligent namespaces by accident or nefarity, then you can kind of pretend it's data. I spent a while in my former job trying to do slightly better than this by aggressively normalizing name-variations and algorithmically distinguishing between actual labels and whatever words people who aren't on labels would type into that box, with some success:
I see there that my past-job self could have combined "Hell's Headbangers" and "Hells Headbangers Records" by removing apostrophes, which either didn't occur to me or caused more problems than it solved, and I no longer remember which and can't check.
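For illustration, the flavor of that normalization was something like this toy version, which is not the actual internal logic, just the general idea:

    # A toy label-name normalizer: lowercase, drop apostrophes, strip generic
    # suffix words, collapse the rest to single spaces.
    import re

    def normalize_label(name):
        n = name.lower()
        n = re.sub(r"['\u2019]", "", n)                        # "hell's" -> "hells"
        n = re.sub(r"\b(records|recordings|music)\b", "", n)   # strip generic suffix words
        return re.sub(r"[^a-z0-9]+", " ", n).strip()           # collapse everything else

    # both variants mentioned above collapse to "hells headbangers" under this scheme
    assert normalize_label("Hell's Headbangers") == normalize_label("Hells Headbangers Records")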
There are, though, many labels that exist to release a certain kind of music according to some kind of unifying principle, and those principles tend to align with genres, or more accurately tend to be part of the social structure that builds music-based communities, which are what I usually mean when I talk about genres. So this approach is wildly incomplete, but seems at least potentially helpful to me. You can try it with some labels you like, and see if it helps you, too.
The one small catch with this is that the API label filter is very literal. You have to know the exact way the label you're looking for is typed in the "label" fields. And, inconveniently, that label field does not actually appear in the Spotify app.** What you see instead are the copyright and publishing credits, at the bottom of an album's tracklist like this:
You might guess that the "label" here could be either "AFM Records" or "Soulfood Music Distribution GmbH", and as it turns out "AFM Records" is right, but it didn't have to be either of those and guessing is tedious anyway.
But I have a thing for exercising my petty annoyances about how to display albums, already, so if you look up an artist in the everynoise research tools, you can now see each release's label next to its release date.
But looking labels up album by album is tedious, too. The one automated tool left in my new-release workflow is Release Radar, which provides a subsistence level of new-release awareness if you take the time to follow all the artists you know you care about. And I have a thing for exercising my petty annoyances about how to display playlists, too, so I added a label column to it, which you can even click on to sort or group a whole playlist by label:
And if your playlist happens to represent a subset of new releases you know you care about, look at the bottom of that page for a little helpful link to feed all the labels from a given playlist back into New Releases by Label.
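In public-API terms, that label-gathering step is roughly this sketch: page through the playlist's tracks, batch the album IDs into /albums lookups (the full album objects carry the label field), and group. The playlist ID and the token here are placeholders.

    # A sketch of grouping a playlist's releases by label.
    from collections import defaultdict

    import requests

    API = "https://api.spotify.com/v1"
    HEADERS = {"Authorization": "Bearer ..."}  # assumed token

    def playlist_album_ids(playlist_id):
        ids, url = [], f"{API}/playlists/{playlist_id}/tracks?limit=50"
        while url:
            r = requests.get(url, headers=HEADERS)
            r.raise_for_status()
            page = r.json()
            for item in page["items"]:
                track = item.get("track")
                if track and track.get("album", {}).get("id"):
                    ids.append(track["album"]["id"])
            url = page["next"]
        return list(dict.fromkeys(ids))  # dedupe, keep playlist order

    def playlist_labels(playlist_id):
        by_label = defaultdict(list)
        ids = playlist_album_ids(playlist_id)
        for i in range(0, len(ids), 20):  # /albums takes at most 20 ids per call
            r = requests.get(f"{API}/albums", headers=HEADERS,
                             params={"ids": ",".join(ids[i:i + 20])})
            r.raise_for_status()
            for album in r.json()["albums"]:
                by_label[album.get("label") or "(no label)"].append(album["name"])
        return by_label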
Obviously it would be better if there was also a link to find new releases from this playlist's genres, instead of just its labels, and of course that's what this link used to do. And could do, again, in a better future.
We will get better futures. That is, we'll get them if we build them, and we will build them, one way or another, because it's too annoying not having them.
** 1/30: An alert reader notes that the label actually is available in the Spotify app, not on the release itself but in the Song Credits dialog for any of its songs, at the bottom labeled "Source".
¶ Individual initiative, corporate inertia and good bad advice · 17 December 2023
I did my job with love and belief. This was always obviously risky. I had no illusions that the love was returned, or that it is even possible for a public corporation of non-trivial scale to behave in human ways at all.
It's easy to find cogent advice for how to modulate your emotional attachment to your job, so that losing it is not like losing a part of yourself. I cheerfully recommend ignoring this. The world is better if everybody does their jobs with love and commitment. You definitely want everybody who does a job that affects you to do it with love, so lead by example. This method will result in sporadic stabs of excruciating pain, but the parts of you you lose to jobs regrow quickly, even if you sometimes have to rub some numbing ointment on the wounds for a while.
The advice industry, like most modern industries, encodes a sneaky bias to sustain corporatism by implicitly casting individual adaptation as the only medium for change. Good luck finding books that somberly advise corporations on how to encourage a dangerous reliance on unsupervised individual inspiration. I have to get a new job now, or something of the sort, but that's only one small problem to solve. The company I no longer work at has to sort through a thousand things I used to do, almost all of which I did because I thought they ought to get done, and then other people came to depend on them because I was right. A large company would probably never hire someone into a role with this little structure, but the startup where I began was inherently based on the individual efforts of its founders and other early employees, and once we were acquired I just kept doing the job my way and it took 10 years for somebody to stop me.
This is problematic, clearly. A company needs to be able to treat its employees as interchangeable and expendable, both individually and collectively. It needs to be able to periodically lay off 17% of its workforce to cut its margin overhead by 1% and temporarily boost its stock price by 5%, without having to endure existential upheaval to its ongoing business processes. It needs to be able to double and redouble recklessly in size for the same dubious market reasons, without those people all piling up in the lobby where their chaos is visible from the street. Both expansions and contractions are actions of corporate musculature, flexed as much for show as for motive.
The key to these flexibilities, as we have understood at least since Henry Ford, is to formalize the operational roles so that their function in the overall system is symbolic and anonymous. As long as people are just units inserted into well-defined slots, the machinery doesn't need to care who they are.
But the resulting machine, because it operates symbolically on abstract definitions, cannot readily adapt. It needs to "innovate", we know, but if you bring a new idea to a well-organized unit in a corporate machine, it will efficiently reroute you into a well-defined pipeline for queuing up potential future input for consideration two quarters from now, because the defining quality of a well-organized unit in a well-organized organization is that it already has its next six months of work fully prioritized in alignment with established corporate goals.
Whereas if you came to me with a new idea, or an unanswered question whose answer might suggest new ideas, or a problem that might be solved by something I already figured out, I would listen to you, and ask some questions, and gradually pivot my body towards my keyboard as we talked until eventually I started typing. Sometimes, after a minute of this, I would say "Sorry, I'm still listening, but give me five minutes and let me see what I can figure out." Occasionally I'd have to say "This is interesting, but it's kind of complicated. Can I poke at it a bit and get back to you tomorrow?" I could do new things because my inquiries weren't prescribed. I was prepared to solve unexpected problems because I spent most of my time unexpecting things and seeing where that took me.
There are, of course, books about corporate agility. There are ways to keep the latency for change to smaller increments than quarters or even months. But none of them advise you to find individual people who happen to be able to do pertinent unique work on the fly, just because they have the right combination of skills and knowledge and stubbornness. You can't sell a book of methodology in which a crucial step is "Luck into anomalous contributors". Anomalies are exactly what prudent processes attempt to preclude.
But everybody is better off if companies ignore this caution with the same exuberant disregard as people doing their jobs with inadvisable devotion. The most transformational human ideas begin in individual hearts, whatever gantlets of brainstorming and strategic opportunity-analysis they subsequently have to run. Spotify was more right, I think, to tolerate my curiosities and experiments for 10 years than they were to finally give up on them out of exasperation or ignorance. Spotify, like probably every other interesting company, only exists because a few people once had unruly unsupervised impulses that the better-organized status quo couldn't accommodate. The secret truth of business advice is that it's mostly about how to grimly extract residual value from the luck you already had, and the unearned love you were already unguardedly given, because there's really no method for making more of it.
¶ The problem with new releases · 14 December 2023 listen/tech
There were a lot of things on Every Noise at Once that updated daily or weekly, and some of them will lose value only slowly now that they can't be updated. The one that loses almost all of its value at once is the weekly New Releases by Genre.
There were no new-release features in Every Noise at Once at its literal outset, pre-Spotify when it was powered by second-hand Echo Nest data and the Rdio API. I found a 2014-07-30 message from me on I Love Music sharing the URL of the first single-page version of a Spotify new-release list, but the earliest capture in the Internet Archive is from 2014-09-24, when Maroon 5's "Maps" was the top single of the week, and the righthand column listed "all 3304 releases this week". I called this version the Spotify Sorting Hat, because certain things hadn't happened yet.
By 2019 that version became untenable, and I rewrote it from scratch to separate the genre collation from the raw list, and introduce more control so you had a prayer of finding the subset of music you actually cared about. That version is dead now without the internal Spotify feeds that provided its data.
You might imagine that there would be alternatives by now, but there aren't any on the same scale. Spotify has by far the best API for this kind of idea, but it isn't quite set up to provide any of the three things that a serious new-release tool should offer:
- "all" new releases: in truth I started imposing thresholds on what the Sorting Hat would include before switching to NRbG, and NRbG never showed literally everything, either. But it showed a lot. The Spotify API for searching allows you to specify "tag:new" and get only things released in the last two weeks, but you can only get 50 of them at a time, and only 1000 total, sorted by popularity (there's a sketch of this coverage wall after the list). Most weeks there are more than 100,000.
- new releases by genre: you can filter by genre in the Spotify API, but only in artist searches. And you can only use "tag:new" in album searches. So currently they can't be combined. Unlike the all-release issue, this one would be a reasonable feature request for the API, as it fits the existing usage-models and wouldn't be particularly onerous to support. Assigning artist-level genres to albums can get existential if you think too hard about it, but if you stick to the idea that genres are communities, then calling an album "atmospheric black metal" is shorthand for saying that it's an album made by a band that is part of the atmospheric black metal community, which makes sense even if the particular album is acoustic folk pastiches, and in the new-release case gets it to the audience that wants to know about that album, so it's fine.
- discovery: the more complicated thing NRbG did was to try to distribute new releases by bands who aren't really the canonical representatives of any genre to the genres whose fans would be their most likely audiences. This absolutely can't be done using the existing API, for the same reasons that you can't extract a "full" list. You can get the 20 most similar artists for any artist, but for matching unknown artists to genres you need to go in the other direction, finding the 100s of artists whose Fans Also Like lists include the known artists in a genre. But adding this feature to the API wouldn't be much harder than adding the genre filter itself.
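To make the first of those limitations concrete, here is what walking the public tag:new results looks like in a minimal Python sketch; you run out of pageable results around the 1,000-item offset ceiling, against a typical week of 100,000+ releases. The token is assumed, as ever.

    # A sketch of the coverage wall: tag:new album search pages out at ~1,000 results.
    import requests

    API = "https://api.spotify.com/v1"
    HEADERS = {"Authorization": "Bearer ..."}  # assumed token

    def reachable_new_releases():
        albums, offset = [], 0
        while offset < 1000:  # search will not page past roughly 1,000 results
            r = requests.get(f"{API}/search", headers=HEADERS, params={
                "q": "tag:new", "type": "album", "limit": 50, "offset": offset})
            r.raise_for_status()
            page = r.json()["albums"]
            albums += page["items"]
            if not page["next"]:
                break
            offset += 50
        return albums  # the popularity-sorted top slice of the week, nothing more

    print(len(reachable_new_releases()), "releases reachable, out of a week's 100,000+")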
Absent those features, the Spotify API can't be used to build this, and so far all the other services are even farther from being able to provide the tools for it. NRbG worked because the end results weren't confidential, just inaccessible, and I could solve the inaccessibility by running a set of carefully interlocking internal queries. Now I can't.
I don't really know how we can do new-release discovery now, without this. We can go back to the human, community-based modes of knowledge we used to use, of course: mailing lists, discussion forums, blogs, playlists maintained by individual experts. One genre at a time, these ways are usually better than queries, detailed and contextual and exultory. But they can't be aggregated the way data can. You can keep track of a genre or two or five this way, but not 20. Not 100. I've been monitoring hundreds of genres every week, for years. Now I am as lost as you.
¶ The beginning of the past · 13 December 2023
Between the Echo Nest and then, via acquisition, Spotify, I spent 12 years doing a slowly mutating job of trying to use data and math and computers to help all the world's music self-organize. It seems to be the unanimous opinion of people who send me nice notes on email and Twitter and LinkedIn that I did valuable things at Spotify and from Spotify, and that laying me off was some combination of corporate error and public tragedy. I don't think this is merely kindness. Over that time I created or improved a lot of things by direct individual effort, including Daily Mix, This Is artist playlists, Fans Also Like, a genre system, fraud and abuse detection, many pieces of Spotify Wrapped, more internal tools and analytics and prototypes than you can probably imagine, and a public web-temple to music exploration and the discovery of joy.
I am aware, of course, that people telling me they appreciate what I did is a clear and heartening demonstration of empathetic selection bias. If you didn't care about my work, then it isn't news that I'm not going to be doing it, and doesn't require your comment. It's tempting to imagine that there's somebody at Spotify who actually disagrees with this, and has been waiting for years for an opportunity to replace my uncooperative insistence on using math to make musical sense with something more acquiescent, willing to say "content" instead of "music" and celebrate 0.05% average-metric nudges without asking to see the distributions under the averages and stop posing moral objections to profit-margin KPIs.
But probably it's far worse than that: There was no enemy, there was no purpose. I didn't lose a heroic battle, I lost a meaningless lottery. A no-warning 1500-person layoff probably cannot be done "well". I see co-workers, also laid off, who had been at Spotify for 12, 13, 14 years, and who thus must have been there in the basement with Daniel and Martin at the beginning. If there is anybody who can take a big company back to its resourceful small-company past-life, it's the people who were literally part of it. Surely you don't lay off the people with the very qualities you're supposedly trying to recapture unless you genuinely can't help it. I did a lot more things inside Spotify than things you could see from outside, and the pragmatic corporate arguments against laying me off needn't have invoked the public good at all. Public loss is collateral damage from capitalism operating for capital's sake.
Meanwhile, here is the situation: everynoise.com is cut off from data updates, and I expect this will not change. The processes I left running are still running, so the missing data is probably all waiting in dark staging servers, wondering when it will finally be summoned into the light. It won't. The Approaching Worms of Xmas will never reach it this year. 2023 Around the World, my deliberate celebration of full calendar years, will have to be gallingly content with 11-month provisional results. Anything static will remain, in its current state.
My automated playlists, on the other hand, get updated through Spotify-internal systems, and are still operating. I think it's likely that they'll be spared for the holidays, but if you care about any of those, you should take any further updates as gifts. At best, nobody at Spotify will bother to figure out how my automation actually functions, and everything will be left running until they're ready to turn the whole system off again with one big switch. At worst, tomorrow something will break that nobody knows how to fix or even debug, and that will be it. I don't normally claim that fault-tolerant engineering is one of my core competencies, so it will be a minor triumph if my automation survives long enough to get killed.
I have some time to find a new job, or at least a plan for ongoing health-insurance coverage. My belief in the promise of streaming music is a function of music and humanity, not of Spotify, so certainly my first inclination is to find another way of contributing to its expanding fulfillment of that promise. But of course there's also a part of my brain that occasionally mutters "Um, climate change?" I also have an idea for a second book, which I was going to work on over the holidays, except that I didn't anticipate having to spend some of that time changing present tenses to past in my first book, which still has to survive the next six months of routine cosmic weirdness before it finally exists.
The job I've been doing, because I did it with personal goals, affected a lot more than my nominal work-hours, and getting myself to stop trying to do it is harder than remote-locking my work laptop, and a lot more complicated. Urges will have to be channeled somewhere. I will probably need a new way to think about my music-listening, and maybe new tools to replace the ones I lost, and I've never been able to listen to music without also writing about it for very long, so I imagine there might be a new form of that, too. But probably not this week. For now I'm going to put Hitsujibungaku on repeat, and try to let the blurry futures resolve a little. I feel basically OK about the last 12 years, I think. They are not invalidated by their sudden end. But I want the next 12 to be better.
A few of those nice notes that were written (or ranted) in public:
Every Noise at Once Shuts Down? at Kill the DJ
The Day Music Neutrality Died (a bit) at flyctory.
The 6000 Musical Tribes at The Limited Times, which is a translation of Las 6.000 tribus musicales at El País.
Spotify Fired the Wrong Person at Venture Music.
Continue Everynoise at community.spotify.com.