4 June 2006 to 20 April 2006
One of the BarCamp principles is that if you aren't learning or contributing, you should get up and go somewhere else. So today, day 2 of BarCampBoston, I have gotten up and not gone anywhere, and am sitting at home in Cambridge with an IRC window open just in case something mind-blowing happens in Maynard and somebody there thinks to type about it. I'd guess there were something like 150 people who showed up for day 1, and IRC reports estimate more like 30 who returned for day 2, so I don't appear to be alone in my decision.
Actually, there's probably nothing really un about this unreport, and for me there wasn't enough un about this nominal unconference. The reason I don't go to a lot of regular technology conferences is that I find that they too often consist of a series of sessions driven by over-specific presentations that are, at best, distantly related to some topic in which I am interested. My best hope is usually that they will inspire some discussion that wanders closer, but the structure usually almost guarantees that the presentations will be too long, and the discussions (if they happen at all) too short.
BarCamp was basically just like this. I went to some sessions I liked, but all of these ended just when they were getting going, if not sooner. I went to some I didn't, and these went on too long, or without viable alternatives. At least on day 1, the meeting spaces were far from the staging center, and even farther from each other, making it hard to do anything but pick a room and hope for the best. Day 1 was over-scheduled, day 2 under-scheduled, and in the absence of any compensating plan, this became self-reinforcing: I'm sure I wasn't the only person who moved their presentation from day 2 to day 1 in the fear that if I gave it on day 2 there'd be no audience, and then once I'd given it there was one less reason to come back on day 2 myself.
I can think of two obvious ways to combat this structural problem. One is to have better presentations. I don't know any shortcut to this. The longcut is to have the presentations solicited, proposed, submitted, judged and endorsed ahead of time. This could probably be done in a democratic, ad hoc, self-organizing, non-authoritarian, grassrooted, BarCampy way, but it would be hard, and I don't think that approach takes any fundamental advantage of the unique nature of BarCamp.
The other approach is to acknowledge explicitly that unless the presentations are exceptional, the real value is much more likely in the discussions. This, conversely, is exactly in keeping with the ad hoc, participatory nature of BarCamp. So my proposal for the next BarCampBoston is that it be mostly, or perhaps entirely, about discussions. Forget 30-minute presentations, which usually produce the kind of information quantity and density you could just as well read on the web anyway, and produce way too much scheduling churn and running around.
Instead, organize everything into 90-minute discussion groups with 30-minute blocks in between for logistics. So 10-11:30, 12-1:30 (over lunch), 2-3:30 and 4-5:30. For each discussion group there should be:
- A framing topic, ideally in the form of a theoretically answerable question, like "What is the killer app for the semantic web?", or "What is the next step towards practical distributed identity-management?"
- 3-6 participants prepared to contribute 5-minute demos of (or talks about) directly relevant work, so that the discussion has both concrete references and personal investments.
- A moderator, who is in charge of nudging the conversation out of ruts and back from digressions when necessary, and helping spot the right places to insert the demos. One of the contributors can also serve as moderator, or the moderator can be somebody else.
- Some idea, explicitly stated if it isn't adequately implied by the topic and the list of demos, of the expected context or background of discussion participants. It's particularly worthwhile to distinguish between introductions ("What is the semantic web and why might I care?: An invitation for web developers discovering data structure.") and working groups ("From DTDs to RDFS to OWL: How do you decide how much ontology is worth modeling?")
- Few enough other people that all of you can actually have a discussion. No more than 12 total people who plan on talking, probably, including the moderator and the contributors. And additional silent observers only if the space can still accommodate everyone comfortably for 90 minutes.
- Comfortable, discussion-suited spaces, ideally with net connections and presentation screens, and double-ideally arranged so that they feel connected, and there's a central common area from which everything else can be staged. Nobody should end up sitting somewhere bored because it's too hard to figure out where else they could be.
This can all be totally self-organizing, but most (if not all) of it should be self-organizing in advance: at least the topics, demos, moderators and contexts; and any amount of schedule- and space-assignment will only help. Among other virtues, laying out the options in advance allows participants to anticipate a good experience by actually planning it, and allows the group to potentially consolidate less popular topics and split (or clone) more popular ones. Clone with impunity, in fact. In a self-organizing conference, the participants are generally going to be self-motivated and self-filtered, so the chance of a large group having too much to say is far higher than the chance of splitting it in two and finding that either half runs out of ideas.
And if you can manage to have the unconference be self-reorganizing on the fly, then fantastic. Whether you plan for this or not is a question of your optimism. It sounds cool to say that you'll leave the 4:00 time-slot open until 3:30, for example, so that sessions inspired by earlier developments can spontaneously materialize. I bet it generally won't work, though. At most I might leave some of the spaces in a time-slot unbooked so that a planned session that overflows can be split or cloned on the fly. But ideally I'd state even this possibility ahead of time, so that the potential second session can be provisionally self-organized with a moderator and a share of the contributors. Remember that it's always easier to not do something you were prepared to do than it is to do something you weren't prepared to do.
The one other hope I have for future BarCamps and related gatherings (Geek Dinners, WebInno, RSS Alley experiments...) is that we find a way to get a little more participation from non-startups. I don't want to crowd out the self-employed, the unemployed and the aspiringly employed, but there are times when I feel like this is the Boston Technology Underdogs Club, rather than any kind of representative sample of the real community of people interested in new technology. Credit to Monster for hosting, but the people in the orange shirts weren't participating so much as ushering, and even if they had been, when the next largest concern represented is Tabblo, you know we're missing some people.
¶ the islands and bays are for paddlers · 29 May 2006
¶ public and private · 24 May 2006
Two notes of rather different kinds:
- I'll be at BarCampBoston in Maynard on June 3/4. I haven't been to any unconferences before, so I don't have any guess at how successful this one might be, nor even exactly what the criteria for success are, but it's free and it's experimental, and if it turns out to be interesting I'll be sure to brag about having been at the first one.
- Bethany has finally given in and started a blog, called rantum scoot, and her desire for feedback has overcome her public reticence, so she said it's OK for me to mention it now.
¶ Aspects and Orders · 20 May 2006 essay/tech
Although I'm not at all sure this is factually fair, I have begun to mentally, and maybe emotionally, blame Flickr for what feels to me like a plague of subject-oblivious square photo-cropping.
I should admit, I guess, that when it's me operating the camera, I'm a pretty extreme horizontalist. I'm happiest at about 3:1. In a tool universe built around 4:3, though, this is kind of a pain in the ass. I could mask my camera's LCD for 3:1 feedback, but then the picture is really too small to work from. For online display I have to assume 4:3-ish frame spaces, so 3:1 images end up in practice being shorter instead of wider, which is unsatisfying. And digital cameras will have to pack a lot more pixels into 4:3 sensors before I'll be informationally content to throw away more than half of them. And my obsessive preference for aspect-ratio consistency in exhibition sets means that I would usually rather stick to 4:3 for everything than mix in the occasional 3:1 where I spot an opportunity despite the obstacles.
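(That "more than half" is straightforward arithmetic, not exaggeration: a full-width 3:1 crop out of a 4:3 frame keeps only 4/9 of the area. A two-line check, if you don't trust my fractions:)

    # Cropping a 4:3 frame to 3:1 at full width keeps a strip
    # 4 units wide by 4/3 tall out of the original 4-by-3 frame.
    kept = (4 * (4 / 3)) / (4 * 3)
    print(kept)  # 0.444... -- well under half the sensor's pixels survive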
So I understand, of course, the value of square-cropping in any content-neutral photo-showing application. It's possible to do an attractive job of mixing aspect-ratios, but it's exponentially easier to do an even more attractive job of displaying consistent aspect-ratios. Cropping 4:3s and 3:4s to 1:1 symmetrically is technically trivial, and although it's aesthetically unreliable in the abstract, the vast majority of amateur photographs are center-weighted, so it usually turns out OK. Actually, the vast majority of amateur photographs are also probably framed too widely, so a little universal symmetrical cropping almost certainly improves more Flickr pictures than it damages.
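And the symmetric crop really is trivial. Here's one way it could be done, sketched with the Python Imaging Library; the helper name is mine, and this is an illustration, not anything Flickr actually runs:

    # A sketch of symmetric center-cropping to 1:1 with the Python
    # Imaging Library; works the same for 4:3, 3:4, or anything else.
    from PIL import Image

    def center_crop_square(im):
        w, h = im.size
        side = min(w, h)                  # the shorter dimension wins
        left = (w - side) // 2
        top = (h - side) // 2
        return im.crop((left, top, left + side, top + side))

    # e.g. center_crop_square(Image.open("photo.jpg")).save("square.jpg")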
So cropping all pictures to squares for thumbnailing makes perfect sense as a Flickr design decision. It simplifies away arguably the biggest visual design problem in mass photo display. If you're looking at somebody else's photographs, it's easy to fall into assuming they are square, so any weirdness in framing you're likely to implicitly attribute to the photographer. The same applies to your own photographs unless you've spent some time seriously considering the originals, and the more you use Flickr, the more it is the way you consider your own photographs en masse.
But if your exhibited photographs are usually going to be approached through thumbnail galleries (the prevalence of which Flickr has also hugely influenced), and the thumbnails are usually going to be squares, it will simplify the rest of the experience if your photographs actually are square. I don't know if any digital cameras are already shipping with built-in square-cropping modes, but I expect those to start appearing very soon if they haven't already. The more square photos people have, the more display-tools will cheat and optimize for them, and the more incentive there will be to be square.
But square is a bad base ratio for photography, at least if by "photography" we mean people taking pictures of things people see, for other people to later share (or imagine) the experience of seeing. We see our world horizontally. Our eyes are side-by-side, our lives are gravity-flattened, our emotional landscapes are literal landscapes as often as metaphorical. My 3:1 fetish may be extreme, but I'm pretty sure that if you take photographs on their own terms, humans instinctively respond more positively to wide aspect-ratios. The standard terms are actually telling: "landscape" refers to the subject of the picture, "portrait" to the act of picturing it. We can appreciate photographs in all sorts of shapes, but we can empathize with seeing most readily when the shared vision is the shape of experienced vision.
So this self-reinforcing dependent vogue for square photography is, I think, a machine gain and a human loss. Worse, it's a sparkly machine-gain that humans are lining up to lose. Machine gains are almost always sparkly, if only because it's far easier to polish a working machine than it is to figure out the machine you should have built instead, or admit that it was better, even if it was harder, to do something by hand. And we form machine-polishing clubs, and start companies to make machine polish, and open shops to sell it, and years go by before we stop and think about the flaws in which we've become invested.
So too with this idiotic chronology-switchback setup we've temporarily settled on for blog formats. The right way to read incremental written forms, beyond any vague doubt, is in serial order. You start at the top of the first entry, you read to the bottom, and then the top of the next entry should follow the bottom of the previous one. Thousands of years of usability research has validated this basic design.
All of which was summarily and obliviously ignored by the original engineers of HTML and web browsers, with the result that they neglected to provide a simple and reliable mechanism for one absolutely essential bit of visual behavior: a fixed identity header and independently scrollable/pageable content. Without this, a designer of serial content can have identity reinforcement (come into the page at the top) or currency (come into it in the middle, where the new content starts), but not both. And since they didn't build in any meaningful tools for handling the user-subjectivity of "current", identity basically wins by forfeit.
The reverse-chron blog format is a sparkly-machine solution to this problem. It puts the newest entry next to the identity, thus at least superficially addressing both goals at once. For every other purpose, though, it's actively reader-hostile. If the entries form any kind of overall narrative, you have to read it in a painful zig-zag. If you are following a blog and miss a single update, you have to use the same awful up-and-down to find where you left off, read down, scroll back up, read down, repeat. This is bad.
But it's bad in what has become an established way, so even if you don't believe the alternatives are worse on their own terms, they almost certainly become worse in public practice. As with square photographs, we make our tools in the easiest shapes, and then we accommodate their limitations, and then we hone them to perfect their limits, and then we forget that this is not how we wanted to live.
Next time you make a crude tool, don't polish it, and don't accommodate its limitations. Use it the way you wish it worked, pay attention to how that hurts, and then throw it away and try to make the next crude, unsparkly tool so that using it doesn't make the tool better, it makes us better.
The Boston-based photo-sharing site/service Tabblo just opened for public beta-testing, and I did a writeup about it for BostonWTF.com.
¶ Jim · 5 May 2006
Jim Moran completed his life last night. His doctors said he had four months left last July, and he decided to take ten instead. He was entirely himself right up to the very last days, and died at home with people who love him. Metastasizing cancer is an awful way to end, but now I know how much grace and dignity can be imposed on even that much awfulness.
I don't think Jim ever did use the walker they brought over for him a couple weeks ago, and I think he only agreed to the wheelchair that arrived last week because he could push it around with his eyes closed.
We opened the windows one day last weekend when it was nice out, so he could hear the birds in the trees outside his second-story window. I thought I spotted him inching the chair towards the door. I told him I was sorry that the wheelchair couldn't take him outside. He didn't say anything for almost long enough that I wondered if he'd heard. But then he opened one eye a little, peered at me skeptically, and slowly closed it again.
And then he said "What makes you think it won't take the stairs?"
¶ CD death-watch update 2 · 28 April 2006
Credit to both Metropolis and Nonesuch for not only responding to my emails, but promptly and uncomplainingly sending replacement copies of the Gary Numan and Stephin Merritt albums. Double credit to Metropolis for not minding that I had abused my first copy of Jagged fairly badly, trying out various supposed copy-protection circumvention techniques. Double credit to Nonesuch for not minding that I had returned my two copies of Showtunes, and they were thus sending me not so much a replacement as a free copy for my trouble.
Of course, both new CDs have exactly the same problem as the old CDs. They're defective pressings, not defective individual discs. Triple demerits to both labels for not having figured this out by now. Or for not admitting having figured it out, if in fact they have.
I'm not sure if I think it represents inspired design, cynical design, or inspiredly cynical design, but I've been watching the iTunes Music Store with alternately grim and admiring fascination since they started allowing user comments. I thought it was a pretty terrible idea for them from the beginning. Apple has a strong corporate personality, and one of the only strongly positive corporate personalities, and diluting it with inevitably idiotic "reviews" seemed self-evidently awful to me. Amazon has had some success creating a body of mildly thoughtful annotation, but Amazon's music department is still oriented around selling CDs, so there's some remote hope that a person writing about an album may actually have heard it.
In the iTMS, however, not only are individual tracks sometimes more popular than whole albums, but even more significantly, it is central and trivial to listen to samples of all the material, which leads directly to the most uselessly write-only of all possible user contributions, the snap rating of albums based on first impressions of fragments of songs, by people who haven't even heard the whole thing once. It's debatable what function reviews have at all in the previewable era, but if they still have any role, surely it's to report back from extended experience with the music. If I can listen to the fragments myself, I don't need a second-hand version of that experience.
But the self-ratcheting structure of Apple's comment system applies some brilliant/crass social judo to automatically marginalize the crap. Under each comment is the telling question "Was this review helpful? Yes/No", and then the main page for each album shows only the first few reviews (6 if there's no "editorial" review, and only 3 if there is) as sorted by helpfulness. The all-reviews page begins with one recent review, to encourage new input, and then again defaults to sorting by helpfulness. Thus the farther a review falls down the helpfulness sort, the fewer people will even see it, and the less likely it is that it will be able to move back up. The worst crap, at least, sinks.
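If it helps to see the ratchet spelled out, here's roughly the selection logic as I read it from the outside; the Review fields and the yes-minus-no helpfulness score are my guesses, not anything Apple has published:

    # A sketch of the review-ordering behavior described above.
    # The data shape and the helpfulness score are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Review:
        text: str
        posted: int        # when it was submitted
        helpful_yes: int   # "Was this review helpful?" Yes votes
        helpful_no: int    # ... and No votes

    def helpfulness(r):
        return r.helpful_yes - r.helpful_no

    def album_page_reviews(reviews, has_editorial_review):
        # Main album page: only the first few, sorted by helpfulness.
        limit = 3 if has_editorial_review else 6
        return sorted(reviews, key=helpfulness, reverse=True)[:limit]

    def all_reviews_page(reviews):
        # All-reviews page: one recent review first (to encourage new
        # input), then everything else by helpfulness; the crap sinks.
        if not reviews:
            return []
        newest = max(reviews, key=lambda r: r.posted)
        rest = [r for r in reviews if r is not newest]
        return [newest] + sorted(rest, key=helpfulness, reverse=True)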