furia furialog
22 March 2006 to 5 March 2006
In the three-plus years that I've had my PowerBook, I've ripped thousands of my own CDs into iTunes to listen to them. The only one I couldn't rip was an Australian release by David Bridie that I digitized as audio because I didn't feel like sending it back to Australia.  

But in the last week I've hit two more. I tried two different copies of Stephin Merritt's Showtunes, thinking the first one was just defective, but the PowerBook just kicked both of them out as if they weren't audio discs. The packaging didn't indicate any sort of copy-protection, and Nonesuch seems like an odd offender, but my wife's PowerBook (different model, different OS version) just ejected both copies, too. I was irritated enough to take the second copy of Showtunes back to the record store and wade through their reluctance to get a refund. Merritt has too many tracks to digitize and index by hand.  

In exchange I bought the second Sounds album, and the new Gary Numan record. But the Gary Numan album has the same problem, again on both machines. Put it in, listen to some fruitless whirring, watch it get ejected as unreadable. Two solid hours of Mac troubleshooting changed nothing. Both machines can still read other new and old CDs and DVDs fine. It's a pretty strange coincidence if these discs are normal but both of our machines happen to have started having this very particular trouble at the same moment.  

But it's also a pretty strange coincidence if I've just happened to hit two of the three computer-unreadable CDs I've ever encountered (out of well over a thousand I've ripped) right in a row. I've written Numan's label (Metropolis) to see what they have to say for themselves. My fear, I guess, is that some new idiot manufacturing process has just come on line, and half the new music I try to buy from now on is going to do this. My fear is that I'll have to switch to stealing all my music simply in order to be able to hear it.
Before Saturday, my longest single run was about 8.5 miles. Saturday I ran 12.2. I'm a record-keeper, so there's a certain inherent appeal for me in getting to update one, but in neither of these particular cases was distance the independent variable, so it's not really a significant statistic. Plus, I run 6 miles routinely and comfortably, so I figured I could at least survive a 12-mile run.  

I didn't just survive, though. I only planned to do 6 miles, but it was sunny and in the 60s in Boston, and the Charles was lined with people gamely not focusing on the possible climatic implications of Spring in Boston in early March, and it just felt good to stay out there longer, so I did. My usual pace for a relaxed 6 miles is around 7:30/mile, and I finished 12.2 in a steady 7:36/mile, feeling fine.  

But that's still not the real triumph. The real triumph is that today, Monday, two days later, I did my routine 6-mile run right back on schedule. It was still routine, still comfortable, 7:32/mile, and afterwards I still feel fine.  

There are the things you think you could do, and the things you've done once and think you could do again, and then there are the things that are simply now within you.
Even the most movie-addicted people I know don't usually take the time to re-watch movies with the DVD commentary track on. I do, but I was a filmmaking major in college, and have a really high tolerance for obsessive introspection in any format.  

But if you haven't seen the movie Spy Kids, go watch it, especially if you haven't seen it because it's a kids' movie and you aren't a kid. It is a kids' movie, but in a way that you should still be a kid.  

And then, if you haven't seen Spy Kids 2, go watch that, especially if you haven't seen it because it's a kids' movie (see above), and especially if you haven't seen it because it's a sequel. It is a sequel, but it's the kind of sequel that the first movie makes wonderfully possible, not cynically expedient.  

And then watch Spy Kids 2 again with Robert Rodriguez's commentary on. It's not laborious annotation, it's an accidental evangelist's 97-minute exhortation to creativity and experimentation and simplicity and complexity, and not only makes me love the movies more, but makes me happier and more excited about art, craft, technology, freedom and everything I haven't tried yet.
Anyone who wants to have more reason to feel involved with arbitrary MLS games this year should go sign up for MFLS, the best and longest-running MLS fantasy league.  

The roster deadline for week 1 is 3pm Eastern, April 1 (and remember that Daylight Saving Time starts on April 2 this year).
"I've been sad with my magnitude lately, what and you."  

I know what it's insinuating, of course, but sometimes poetry writes itself into our deceits while we aren't watching them closely enough. Our powers are so great, and yet here we are, lately as ever, sad with our magnitudes.
I'm not sure I've seen more than a dozen music videos in my life that seemed to me like independently worthwhile demonstrations of human creativity. As a field of marketing ingenuity and technical innovation, music video is astonishingly rich, but as an art it's a horrific disaster. At this point the most I really hope to get out of a video is some more visceral impression of how an artist from some culture other than mine fits into theirs, and/or crosses out into mine.  

But I just watched HIM's DVD of 1997-2003 videos, in search of little more than some shots of their Finland, and am adding "Right Here in My Arms" to my not-formally-compiled but I'm sure pathetically short list of internally brilliant music videos. It's brilliant enough that I can tell you what's brilliant about it without you having seen it. Structurally it's mostly a performance video, staged inside and outside of a room whose walls are one-way mirrors transparent from the outside in. The band is inside the room. A girl is outside. She is watching the band, and sees them watching her back. They see and are watching nothing but themselves. She presses herself against her side of the glass, while Ville Valo writhes against his reflection. He doesn't know she's there, and she doesn't know he doesn't know.  

Thus her contact is a delusion, and an artifice, but her experience of her contact is real. The band's performance is a ballet of narcissism, but it is narcissism as a proxy for empathy. The box is how art works. The box is, in fact, what differentiates art from conversation.
As I organize my job opportunities and meta-opportunities I've inevitably found myself writing a large number of variations on the same basic summary sentences about my professional interests and experience. The most succinct explanation of my "field" I've been using is "social information systems", which I like for its deliberate ambiguity as to whether I mean information systems with social components or systems for explicitly social information, since in general the more data and the more people involved, the more interested I am, and I'm most interested when there's an apparently unmanageable overload of both.  

The problem with a compact and expressive phrase like "social information systems" is that if you use it confidently enough, anybody who doesn't already think they know what it means will assume it means something specific that they don't like (otherwise obviously they would know about it already). You have to be able to put everything in some context you can count on your audience knowing, knowing they know, and knowing they like. So I end up talking a lot about "information technology", on the theory that anybody I'm likely to work for or with knows what "information" and "technology" are and thinks they're important. The "technology" part conveys that I'm not looking for a job as a data proofreader, and the "information" part helpfully suggests that I don't know anything about metallurgy or processor cooling.  

The problem with "information technology" as a phrase, however, is that it's so similar to "Information Technology", which has come to refer exclusively to that subset of information-related technology that can be hoarded by gnomes and used to build dank, gloomy catacombs where information nobody wants goes to molder and hide from the sad people who are cursed to need it. This is not exactly my niche. And worse, perhaps, information technology is not a goal, it's at best a genre of tool, and if I think goals are more important than tools, I should have some better way of talking about what I think the information technology is for, or should be. Plus I've been reading a lot of bravely humane Theodore Sturgeon stories, and I think he'd say that displacing the oppressively dreary IT, as an acronym, is worth a quest in itself.  

The main thing I'm after, always, is understanding. And since understanding is usually a process, not a point, it's more accurate to say that my preoccupation is understanding-seeking. This is probably not a usable resumé phrase in the current software business, as it sounds dubiously spiritual, and this is an audience spooked by Faith-Based Initiatives and Creationism into taking "spiritual" as the opposite of "rational". And "rational" is the same as "logical", and obviously software is quintessentially logical.  

But then, churches aren't built by devoutly instantiated reverence, they're nailed together with hammers like anything else. The difference between good software (when it ever so occasionally exists) and bad software (the rest of the time) is not usually in execution, but in inspiration. Design, as I use the word "design", is at most 20% rational (and "usability" is rarely more than 20% of that). The rest of it is spiritual, or moral, or conceptual, or whatever you want to call the part of the process in which you decide what kind of story you are helping people tell themselves about themselves, or tell others about how we share the burdens and potential of our nature.  

So I'd love to see a world where I could write in my resumé that I do US, and have people know that Understanding-Seeking is a real and definitively pragmatic discipline. I'd love to not have to explain that nudging widgets into line is a part of the software creation process I sometimes do personally in the same way that an architect who really cares about a building will still be there the day they're putting the sinks into the bathrooms, trying to think of an even better spot on the wall to hang the hand-dryers: not because that's what "architecture" means, but because if the architecture was done well, by the time you're doing the bathrooms the hand-dryers are the largest problem that wasn't already solved long ago, and it's the architect's job to keep working on the largest remaining problems until the building is done. I'd love to not have to explain to anybody that "fixing our usability" is a way of saying "we're too committed to our big mistakes to dream of anything better than doing our bad job a fraction more efficiently". I'd love to feel like I can say to a VP of Engineering that design is a moral act, and know that I'm just reiterating something they already know, that of course engineering is the art of our holding actions against entropy, and US is why it's worth so much elaborate effort to buy ourselves these moments of time in which to live.
A conversation, from the point of view of any one participant, has these three elements:  

- the source: the other participants, either as individuals or as some collective they form for the purpose of this conversation  

- the context: as simple as an explicit topic, as subtle as an implicit one, as complex as a network of human relationships  

- your role: sometimes you are yourself, sometimes you are acting in a defined capacity, sometimes you participate in a conversation as part of a collective (like an audience)  

All conversations have these same fundamental elements. At the moment, our electronic conversations are subdivided and segregated according to ultimately trivial subclasses (personal email, IM, mailing lists, newsletters, feeds, web sites themselves, innumerable other variations on alerts), and the useful tools for managing conversations are scattered across similarly (arbitrarily) segmented applications.  
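To make the structural claim concrete, here is a minimal sketch (all the names and data are hypothetical, chosen only for illustration): if source, context, and role are first-class fields, the channel becomes just one more attribute to filter on, rather than the wall between separate applications.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one structure for every conversation,
# regardless of which channel (email, IM, feed, list) carries it.
@dataclass
class Conversation:
    source: str          # the other participants, individual or collective
    context: str         # topic, implicit subject, or relationship network
    role: str            # "self", a defined capacity, or part of an audience
    channel: str         # email, RSS, mailing list... a trivial subclass
    messages: list = field(default_factory=list)

conversations = [
    Conversation("Mom", "family news", "self", "email"),
    Conversation("MLSnet front page", "soccer", "audience", "rss"),
    Conversation("dev mailing list", "bug triage", "member", "list"),
]

# Because the elements are shared, one filter works across all channels:
def by(element, value, convs):
    return [c for c in convs if getattr(c, element) == value]

print([c.source for c in by("channel", "rss", conversations)])
```

The point of the sketch is only that nothing about source, context, or role depends on the delivery mechanism, so any view or rule defined on those elements applies to every channel at once.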

The conversation tool I really want will be built on the structural commonality of conversations, instead of the disparities. I want to apply email rules to RSS feeds, feed mechanics to mailing lists, contact lists as view filters, identities as task organizers, Growl formats as cell-phone notification styles. I want my email correspondence with my mom to be at least as rich as my soccer "correspondence" with the MLSnet front page, and I want my conversation with MLSnet to be as malleable as my own iTunes song status.  

I want, I think, for all my conversations to be structurally equal. I want to be able to look at them organized by any of those elements: by source, by context or by my role, and by the unions and intersections thereof. I want to see conversations in context when there's context, and I want there to more often be far more context than a list. I want to be able to understand my side of my conversations in aggregate, and for the memberships of my conversations to be effortlessly expandable, and for my computers to remember things my head forgets, even when I can't remember whether I knew them to begin with.  

I want, of course, the whole internet redesigned with this desire at its core, but I also want approximations of this in the meantime. I want Apple Mail or GMail to stop acting like somebody invented "mailing lists" and "address books" yesterday morning. Or I want Shrook or BlogBridge to speak POP3 and IMAP as fluently as RSS and OPML. Or I want Safari to look like Apple solving a problem from their own principles rather than letting somebody else define the form of the answer, or Flock aspiring to be a social application rather than just a browser with a few extra menu commands. I want Agenda and Magellan back now that I finally have enough information to justify them.  

I want us to have the conversations we could have if we didn't spend so much time just trying to keep track of what morbidly little we've already managed to say.
The main human information goal is understanding. Or wisdom, depending on your precise taxonomy. But either way, searching is plainly a means, not an end, and the current common incarnation of Search, which involves arbitrarily flattening a content space into a set of independent and logically equivalent Pages and then filtering them based on the presence or absence of words in their text, not only isn't an end, but is barely even a means to a means. This form of two-dimensional, context-stripping, schema-oblivious, answer-better-already-exist-somewhere searching is properly the very last resort, and it's a grotesque testament to the poverty of our information spaces that at the moment our last resort is often our only resort.  

The first big improvement in searching is giving it schema awareness. I doubt the people behind IMDb spend much time thinking about themselves as technology visionaries, but IMDb Search is a wildly instructive model of what is not only possible but arguably almost inevitable if you know something about the structure of your data. IMDb presents both the search widgetry and the answers in the vocabulary of the data-schema of movies and the people who work on them, not in "keywords" and "pages", and understands intimately that in IMDb's information-space search exists almost exclusively for the purpose of finding an entry point at which to start browsing. You go to Google to "look for something", you go to IMDb to "look something up"; the former phrase implies difficulty and disappointment in its very phrasing, the latter the comfortable assumption of success.  

On the web at large, of course, there is no meaningful schema, and it's impossible to make any simplifying assumptions about the subject matter of your question before you ask it. It is more productive to search in IMDb than with Google not because IMDb's searching is better, but because its data is better. But this does not even fractionally exonerate Google, or anybody else who is currently trying to solve an information problem by defining it as a search problem. They're all data problems. Google has the hardest version of this problem, since they don't directly control the information-space they're searching, but they have more than enough power and credibility to lead a revolution if they can muster the vision and organization. And anybody building an "enterprise" search tool has no such excuse; the enterprise does control their information-space, at least out to the edges where it touches the public space, and every second that can be invested in improving the data will be at least as productive as an hour sunk into flatly searching it.  
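The difference is easy to demonstrate with a toy example (the data and function names here are invented for illustration, not IMDb's actual implementation): flat search returns undifferentiated hits, while schema-aware search returns an entry point for browsing.

```python
# Illustrative contrast: flat keyword search vs. schema-aware
# search over the same (invented) records.
films = [
    {"title": "Spy Kids", "year": 2001, "director": "Robert Rodriguez"},
    {"title": "Spy Kids 2", "year": 2002, "director": "Robert Rodriguez"},
    {"title": "El Mariachi", "year": 1992, "director": "Robert Rodriguez"},
]

# Flat search: every record is a bag of words, every hit is equivalent.
def flat_search(term):
    term = term.lower()
    return [f for f in films
            if term in " ".join(str(v) for v in f.values()).lower()]

# Schema-aware search: the question names a field of the schema, and
# the answer is a browsable structure (here, a filmography), not a
# pile of logically equivalent pages.
def by_director(name):
    return sorted(f["title"] for f in films if f["director"] == name)

print(len(flat_search("rodriguez")))    # undifferentiated hits
print(by_director("Robert Rodriguez")) # an entry point for browsing
```

Both functions touch the same data; only the second one knows what the data is about, which is exactly the difference between "looking for something" and "looking something up".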

So if I worked for a Searching company right now, I'd start madly redefining ourselves tomorrow. We are not a searching company, we are an information organization company. The last resort is necessary, but neither sufficient nor transformative. I'd pull the smartest people I had off of "search" and put them to work on tools for the other end of the information process, reaching to the humans who are creating it and giving them the power to communicate not just the words of what they know but the structure of it, and to the collective mass of people to help them communicate and recognize and refine their collective knowledge about the schemas of known and knowable things. This is why Google Base holds the future of Google, and why you should sell your Google stock right now if they keep treating it as mainly a way for someone to buy your unused exercise equipment from you using a credit card. It should be the world's de facto public forum for the negotiation of the schema of all human knowledge, and if it isn't, every other decision Google makes will be forced by whatever is.  

But well-structured data, though necessary, isn't sufficient either. The good news for "search" companies is that improving the data is itself just a means to an end. Ideal data only encodes what we already know. The problems of useful inference from known data are hugely harder and super-hugely more valuable than the current forms of searching, especially when you realize that the boundary between private and public data is an obstacle and an opportunity in both directions, not a wall to hide behind or run away from. The real future of "search" is in providing humans with the tools to form questions that haven't already been answered, and assemble the possible pieces of the answer, from threads of reasoning that traverse all kinds of territories of partial knowledge, into some form that synthesizes ideas that have never before even been juxtaposed, and onto which humans can further apply human powers where machine powers really fail -- fail because the machines are machines, not where they fail because we didn't take the time to let them be more thoroughly themselves -- so that they in turn can help us be more completely and wisely human.
I think it is becoming painfully clear that the web suffers from at least two critical design flaws, one of structure and one of usability.  

The usability flaw is the omission of real tracking and monitoring from the original browsing model. The "visited" link-state is no substitute for true unread marks, and HTTP response headers are not adequate building blocks for functional monitoring.  

RSS, born out of various motivations, is turning most coherently into a retrofit tracking/monitoring overlay for the web. My conversation with a feed is qualitatively poorer in context (and potentially in content) than my conversation with a web site, but it's qualitatively more manageable. The feed tells me when there's something new, and what it is, and lets me keep track of my interaction with it.  

This dynamic should have been built into the model from the outset, because without it the whole system does not scale in use. Providing it as an overlay, and a whole parallel information channel, is idiotic in every theoretical sense, but its pragmatic virtue is that a separate system can be built with far fewer technical and social dependencies.  

And while RSS is still nowhere near social critical mass among human information consumers, it is approaching a viable critical mass as a geek technology, and we will thus be increasingly tempted to lapse into thinking of it as a goal in itself, rather than a means. Witness, most glaringly, the fact that so far nobody, not even the people writing RSS-aware web-browsers, has attempted to use RSS to solve the original problem, which is making sense of the contents of a web site, not from it. And witness, almost as gallingly, the fact that we're pretending to think there's a future to the network model in which every reader is individually polling every information source every few minutes.  
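Some back-of-envelope arithmetic makes the scaling point vivid (every number here is invented for illustration; the proportions, not the totals, are the point):

```python
# Illustrative assumptions, not measurements:
readers = 1_000_000          # people reading feeds
feeds_per_reader = 100       # feeds each reader follows
poll_interval_min = 5        # "every few minutes"
changes_per_feed_day = 5     # actual updates per feed per day

polls_per_day = readers * feeds_per_reader * (24 * 60 // poll_interval_min)
useful_messages = readers * feeds_per_reader * changes_per_feed_day

print(f"{polls_per_day:,} polls/day")        # mostly answered "nothing new"
print(f"{useful_messages:,} notifications")  # if sources pushed on change
```

Under these assumptions the polling model generates tens of billions of requests a day to deliver a few hundred million actual changes; shrinking the poll interval only makes the ratio worse.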

In content, the same time-wasting cycle is going to replay in RSS that played in HTML: the new channel is initially lauded by sheltered tech geeks for freeing the "real" content from all the crap that used to surround it, and then quickly retaken by the forces of reality, which understand the commercial point of "all the crap". So ads and context will creep into feeds, and before long the content of RSS items will start to reproduce the whole experience of the originating web site, and there will no longer be anything streamlined or usably universal about the content of a feed.  

In scalability, the whole thing is just going to implode, or become horrendously convoluted as people scramble to patch over the network problems with proxies and collective batching.  

Of course, if we're going to have to rebuild the whole web for structural reasons, rebuilding the tracking/notification overlay on the current web is a throwaway project, but that doesn't mean it isn't worth doing, and it certainly doesn't mean somebody won't try to do it.  

If I were working for Microsoft or Apple right now (or, in theory, on Mozilla, but I'm not sure this can be done without corporate-scale backing), I'd have my R&D people putting serious work into treating RSS not as a content channel but as a source of the necessary metadata to build monitoring and tracking directly into the browsing experience. Forget trying to "teach web users about feeds", they shouldn't have to learn or care. Build the monitoring/tracking stuff around and into the browser where the user already lives and reads.  

If I were working for an infrastructure company, like Google or Akamai or maybe IBM, I'd have my R&D people hard at work on standards proposals and proof-of-concept prototypes for a cogently bidirectional subscription/syndication protocol that sends messages from the consumer to the source only when the consumer's interest changes, and from the source to the consumer only when the information changes. Quantum-leap bonus points for subsuming old-style email/IM messaging and web-browsing itself into the same new protocol. These are all most essentially conversations.  
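The shape of that protocol can be sketched in a few lines (all the class and method names here are invented; this is a sketch of the two message directions, not a standards proposal): traffic flows upstream only when interest changes, and downstream only when information changes.

```python
# Toy sketch of a bidirectional subscription protocol:
# consumer -> source: only when the consumer's interest changes
# source -> consumer: only when the information changes
class Source:
    def __init__(self):
        self.subscribers = set()
        self.state = None

    def interest_changed(self, consumer, subscribed):
        # Upstream message, sent once per change of interest.
        if subscribed:
            self.subscribers.add(consumer)
        else:
            self.subscribers.discard(consumer)

    def publish(self, new_state):
        # Downstream message, sent only if something actually changed.
        if new_state != self.state:
            self.state = new_state
            for consumer in self.subscribers:
                consumer.notify(self, new_state)

class Consumer:
    def __init__(self):
        self.inbox = []

    def notify(self, source, state):
        self.inbox.append(state)

src, me = Source(), Consumer()
src.interest_changed(me, True)  # one message: interest changed
src.publish("new entry")        # one message: information changed
src.publish("new entry")        # no traffic: nothing changed
print(me.inbox)
```

Note that nothing in this shape is specific to feeds: an email thread, an IM session, and a web page you're watching are all the same two arrows, which is the sense in which these are all most essentially conversations.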

And in the meantime, if I were working on any kind of RSS/OPML-related application, I would take a day or two to stop and think about my goals, not in terms of today's syntaxes but in terms of the flow of information between human beings, and between machines as our facilitators. I'd want, and maybe this is just me, to be working on something that not only improves the lives of people using the imperfect tools it has to work with right now, but would improve the lives of people even more efficiently if the world in which it operates were itself improved. Sometimes a broken window demands plywood, but as a tool-maker I dream of making something you won't just throw away after this crisis passes.  
 

(Of course, the deeper and ultimately more costly of the web's two design flaws is the structural mistake, which is the original decision to base HTML on presentation structure, rather than content structure. This is a monumental tragedy, as it has resulted in humanity staging the largest information-organization effort in the history of the species, and ending up with something that is perversely and pathetically oblivious to how much more than screen-rendering engines and address resolvers our machines could have been for us. In building this first unsemantic web we may well have thrown away more human knowledge than we've captured, and now we're going to have to build the whole thing over again, more or less from scratch, against the literally planetary inertia of our short-sighted mistakes.)
Site contents published by glenn mcdonald under a Creative Commons BY/NC/ND License except where otherwise noted.