12 February 2007 to 6 November 2006
Puressence: "Traffic Jam in Memory Lane" (1.5M mp3)
I don't know why this came to mind today, but I had an album-length drive to make, so I grabbed the CD as I went out the door. The trip ended up lasting longer than the album, so I played this song two extra times as I neared home. I liked the world a little better when there were more people who sang like this.
¶ Stat geekery · 8 February 2007
If you find statistical decomposition and recomposition of music-critic polls interesting (and I'm not saying you should), here's what I have to offer you:
Critical alignment ratings:
2006 Jackin' Pop
2006 Pazz & Jop
Merged
Album clusters:
2006 Jackin' Pop
2006 Pazz & Jop
Merged
Leaderboards:
2006 Jackin' Pop
2006 Pazz & Jop
Merged
¶ A Latin Intro · 4 February 2007
I don't know appreciably more about the state or history of Latin pop than I did a couple months ago when I came back from Puerto Rico feeling curious. But I've found some things I like. Here, if you're curious about my curiosity, is a short summary so far:
(follow along in my Latin Intro iMix if you're iTunes inclined...)
1-3. Belinda: "Lo Siento (I'm Sorry)" from Belinda, "Bella Traición" & "¿Quién Es Feliz?" from Utopía
Shameless teen-pop, but either the language disarms me somehow (they can't pander to you half as easily if you don't know what they're saying), or else the production is just far enough behind the domestic state of the art to sound like somebody actually loves what they're invoking, or else it's just catchy and for once I'm not overthinking it. (Or I'm over-overthinking it...)
4-5. David Bisbal: "Silencio" & "Torre de Babel (Reggaeton Mix)" from Premonición
Big and sentimental and soaring, but I like Meat Loaf in English, so I can't blame this on the language. Reggaeton is definitively tedious (a whole genre based on 4 bars from one song!?), but used sparingly as a cross-genre remix-affectation its twitchy insistence hasn't totally alienated me yet.
6-7. Ha-Ash: "Me Entrego a Ti" & "Ya No" from Mundos Opuestos
Mexican pop-country? It's like Shania Twain for the other border. "Ya No" tosses in a little quasi-hip-hop reggaetonality, just in case there weren't enough elements already. The iTunes "partial album" inexplicably includes everything except "Pobre Diabla", which is actually my favorite track on the album.
8-9. La 5a Estación: "El Sol No Regresa" from Flores De Alquiler & "Tu Peor Error" from El Mundo Se Equivoca
A Mexican T'Pau! Or, actually, a Spanish T'Pau transplanted to Mexico City, but whatever. Florid and electrifying. Both albums are terrific, the first a little less self-conscious about it.
10-11. LU: "La Vida Después de Ti" & "María" from Album
Quivery balladry, but then, I like the slow Roxette songs, too.
12-13. Lucybell: "Eternidad" & "A Perderse" from Comiendo Fuego
Alternately menacing and bouncy Chilean semi-goth, like a version of HIM from a hemisphere where they haven't perfected mascara or slow-motion yet.
14-15. Osé: "Serás" & "No Digas" from Serás
Kinda like boy-band refugees trying to prove that they can actually play instruments and write songs themselves. But succeeding, I think.
16-17. Reik: "Invierno" & "De Que Sirve" from Secuencia
Kinda like boy-band non-refugees not trying to prove much beyond the power of the supermodel-macho pout as a vocal timbre. But succeeding, I think.
18-20. Shakira: "Te Dejo Madrid" from Laundry Service, "La Tortura" from Fijación Oral Vol. 1 and "Don't Bother" from Oral Fixation Vol. 2
Giving in to Shakira is what really got this started, for me. I liked "Te Dejo Madrid" already, but thought of it as an anomaly. She was playing a show in San Juan the night we were flipping channels on our hotel TV, though, and one of the Puerto Rican music-video stations (which are so primitive they still show music) was broadcasting every third or fourth song. She performs like it matters. Like her music matters, and language matters, and bridging cultures matters, and even her abdomen matters. Like she heard Madonna, Tori Amos and Alanis Morissette and understood not only where they intersect, but where they negate each other and leave a hole.
¶ A New Ointment for Ignorance · 2 February 2007 · essay/tech
I've come to the unexpected and disconcerting realization that my period of greatest apparent productivity in user-interface production was, pretty clearly, when I was mainly working in Visual Basic, circa 1994-1997. Ten years later, I'm actually spending much more of my design time repeatedly re-implementing basic mechanics of no inherent interest, and my design work at any given point in time is littered with more unsightly temporary hacks of things I haven't had time to re-implement this time yet. It takes me longer to produce first approximations, and longer to add some of the most common levels of noticeable refinement.
This would be deeply tragic in several dimensions if it didn't deserve enough qualifying asterisks to make an ASCII-art smiley the size of Nebraska. The genius of Visual Basic was that it allowed you to very quickly produce 90% of a UI that was as good as 90% of everything else. Getting from 90% to 100% of a UI that was as good as 90% of everything else, however, took a large amount of extremely annoying work, some of which had to be constantly rechecked and redone every time you made the tiniest change to anything thereafter. And worse, of course, the standard established by 90% of everything else was not, in objective human terms, actually very good.
Visual Basic's success, such as it was, relied on two very large simplifications of the underlying human problems. First, technically, VB was both a development and a deployment environment. VB applications were actually configuration data for the VB runtime. There were no platform-matrix issues, because VB itself was the platform. The specific version of VB with which you built the application, even. Unless you started introducing your own external dependencies, you could basically count on the things running the same way for your users as they ran for you.
Much more significantly, though, VB instantiated a fixed vocabulary and grammar of UI interaction. This made it extremely simple to produce static forms and dialog boxes using standard Windows UI widgets, fairly easy to add a few well-defined kinds of dynamism to them, cumbersomely possible to do a little better, and beyond that you ended up fighting it more than it helped, and so usually didn't bother.
This is what all well-conceived software projects and all self-aware computer programmers do, of course: we take environments in which a great many things are complicatedly possible, and transform them into environments in which a small subset of those things are significantly easier, and a larger number of things that used to be hard are now so much harder than the easier things that nobody wastes time trying them anymore. We do this by, mainly, an obsessive attention to abstraction and instantiation and packaging and layering. This is why the details of most programming successes are staggeringly boring to non-programmers in rough proportion to their genuine significance: the best programming work usually is not done in directly solving an individual human problem, but in making some tool, ten steps away from the visible problem, that makes it incrementally easier to solve some whole class of abstruse subproblems that factor in some small way into the one you care about. The web page that lets you buy embarrassing personal ointments without having to hand them to a 4'5" woman named Ruthie, who reads every package label aloud in megaphone tones, solves a human problem for you. The good programming work behind that page has nothing to do with ointment or Ruthie, and lots to do with content-management-system staging strategies, database transaction consistency, protocol standards, application-framework meta-programming, and twenty other topics so obscure and tedious that you'd probably rather talk about why you need the ointment.
The problem with Visual Basic, circa 1994, was that the things it made easy were not important enough. Using it, I was able to produce a lot of UI very quickly, but the Windows UI paradigm in which VB thrived was itself the cause of most of that work. I spent most of my time laboriously refining mildly optimized sets of widgetry in order to get the computer to show me something that I already knew. It'd be my guess that seeing and/or editing simple pieces of known data accounts for about 94% of the code ever written for computers, and that seeing and editing more-complex clumps of known data accounts for another 6%. All the stuff that does something with that data, and for which a computer is more than a very expensive and quickly obsolescent filing cabinet, at this level of precision accounts for approximately 0%.
The web, as the new default environment for data applications, is both progress and regress. The literal display of data is generally a little easier than it was in most pre-HTML environments. Simple data editing is about as easy as it ever was, but orchestration of the editing of related bits of data is harder. Managing large sets of data, and managing the social dynamics of shared data environments, are probably slightly easier than before in relative terms, but since the connected world makes them less obviously intractable, they now represent a larger fraction of an increased overall quantity of active difficulty. And if the problem is taking the things you know, and the things I know, and getting to something new we now know together, a disappointing amount of what we've built turns out to be discussion forums where people we've never met can file unverifiable reports on whether the ointment worked for them.
But the web, as it is now or even as the SPARQL/RDF version of the Semantic Web might make it, is yet another environment that generates a huge amount of the work it requires. This is ultimately the metric by which I think any tool, including software tools, and including meta-tool-sets as big as the web, should be most diligently measured: does it increase or decrease the human-scale meaningfulness of the human work done with it? In the same way that "the economy" should be graded on net human welfare, not gross currency movement, software should be graded on how it allows us to spend our lives, not how industriously it allows us to do new work nobody had to do at all before. It's amazing that it's possible to put a shame-free ointment store online, but absurd how much harder it is to do that than to put a tube of ointment on a metal shelf. And profoundly pathetic that the magnitude of our shame makes the enterprise seem reasonable.
The software-design work I'm doing now won't, directly, even get rid of your rash, let alone reverse global carbon-dioxide trends. Gauging the value of your daily work based on its capacity to increase the welfare of humanity is practically difficult, and theoretically presumptuous if not absurd. But I think it matters whether it matters, and I really do believe that I am working on a part of the solution. If human existence is the stake in a race between shared insight and thermodynamic chaos, chaos has all the big-money sponsors, but shared insight is a power law. I'm trying to build software that does, for networks of highly interconnected shared data, what Excel does for columns of numbers (or, maybe more realistically at first, what Visicalc originally did for columns of numbers). I'm trying to help bring about a new base level of operative abstraction in which data is visible, and its connections traversable, by its very nature, so nobody has to write code to see what we already know. I believe that it should be possible for computers, instead of diligently enforcing dehumanized bureaucracy, to help sustain the transformative illusion that all it takes for you, as a human, to participate in the sharing of human knowledge is one thing nobody else imagined before, or one link between things that nobody else has noticed yet.
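To make that slightly less abstract, here's a toy Ruby sketch (every name in it invented) of what "connections traversable by their very nature" might mean: nodes are just property maps, links are just keys of other nodes, and a single schema-ignorant routine can walk any path through any data.

# Toy graph: every value is either a literal or a link (the key of
# another node), so generic code can follow connections anywhere
# without knowing anything about observatories, islands or capitals.
nodes = {
  :arecibo     => { :name => "Arecibo Observatory", :located_in => :puerto_rico },
  :vieques     => { :name => "Vieques",             :located_in => :puerto_rico },
  :puerto_rico => { :name => "Puerto Rico",         :capital    => :san_juan },
  :san_juan    => { :name => "San Juan" }
}

# Follow a chain of properties from a starting node.
def traverse(nodes, start, path)
  path.inject(start) do |at, property|
    value = nodes[at] && nodes[at][property]
    return nil if value.nil?
    value
  end
end

traverse(nodes, :arecibo, [:located_in, :capital, :name])  # => "San Juan"

Nobody had to write arecibo-specific code to answer that question; the connections carried it.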
And although global warming is not entirely a data problem, even that might arguably be mostly a data problem. Its key vectors all originate in what are fundamentally data blockages: inadequate education, incomplete understanding, incomprehensible macro-effects of unassessable micro-decisions, misapplications of science or technology, misguided optimizations of local variables at the expense of the overall system, failures of empathy, failures of recognition, and perhaps above all else the way in which untreated ignorance makes it possible to feel safely (and/or alienatingly) isolated from things to which you are really deeply and inextricably connected.
I'm trying to help make the internet into the place where we know what we know. I'm trying to help make a new place where we go to do human things of which we are collectively proud, not somewhere where we hide among the widgets while we pray that this is the ointment that will make us feel better.
¶ 2006 music lists · 14 January 2007
David Bisbal: Silencio (1.6M mp3)
I know very little about Spanish-language pop, but enough intriguing things bounced past my awareness while we were in Puerto Rico that I've been doing some investigating, and finding some stuff I really like. Bisbal is actually Spanish from Spain, and I came across him just poking around in iTunes, but his new album Premonición has some production work by Puerto Rican reggaeton duo Wisin & Yandel, so there's the connection for you.
¶ The Disappearing Sky · 26 December 2006
If the conditions are exactly right, at least, you can still disappear into the sky. You float on your back in the Bahía Mosquito on a moonless and cloudless night, trailing your arms beside you in the warm Caribbean water. You lift them up and point from Vieques towards Orion, and suddenly the stars are falling along your arms and past you and you are diving up into the sky and away.
But the simple magics for escape are waning. All our magics for escape are waning. You used to only have to evade the moon and the clouds, but now people are building along the shores of the Bahía, and the oblivious lights of their carports slash between the earnest dinoflagellates and the billion far sad stars. Who builds on the shore of the world's brightest bioluminescent bay? And who, having picked this of all the places in the world to stack yet more buildings, puts lights on the outside of them? Who drags sacks of cement and disregard out to a twenty-mile island off the east coast of Puerto Rico where the army finally stopped dropping bombs, and curses some of the best darkness we have left?
Buried in a karst sinkhole in Arecibo, less than an hour's flight from Vieques, is the reflector dish of the world's largest radio telescope. This is a different darkness, and a different way of trying to reach into the sky, but signs on the viewing platform have to beg visitors to turn off their cell phones, and are probably one bad funding decision away from having to beg them not to perch houses and Subways and ferret stores on the ringing hills. And maybe darkness doesn't sound like it's worth that much, or maybe the kinds of amazement we can see in tiny light-up sea monsters and SETI@Home seem like petulant objections against seeing where you're parking your Durango, but these scattered oases of left-alone darkness are part too of how we see who we are.
We're in Puerto Rico, Belle and I, to float in darknesses and sit on still-near-perfect beaches for a few days before we go back to gray New England winter and the last few months in which we're not yet parents. So we're part of the problem ourselves, in the strictest sense. We don't leave any trash on the beaches, but we drive rented cars to them. We paddle around the biobay on kayaks, with little cyalume sticks, but I can't pretend I really know that this is harmless at the current scale, never mind the ones implied by the flow of people like us. Las Cavernas del Rio Camuy, just a little south of Arecibo, used to be a private magic kingdom for bats and brave Camuy kids with flashlights, and now they're a trolley-fed public park where tourists from Boston and New York line up to point digital cameras at, mostly and necessarily, the new buried lighting system hacked into million-year-dark stone.
We always change what we observe, of course. Tourism and colonialism are differences of degree. Arecibo beeps like the Earth is backing up, and we kick millions of dinos into panicked glow, so even in the last few darknesses we, the just, are not just listening. Curiosity is inherently presumption. Our worst egotism and our most powerfully humane hopes are both entwined in science, and everything we touch we touch. Our greatest revelations and our most inexcusable massacres share the gaudy heraldry of discovery. There were people here when Columbus came, and none of them are left to make forests again out of his statues and plazas.
And no, I don't know how to follow him into promising unknowns without following him into collapse. I don't know how to stop asking questions, or how to stop answering them with more and faster. I don't know how to stop wanting to be the one for whom we make the exceptions, the one person allowed to make the near-perfect darkness one more person less perfect. I don't know how to improve the world in any better way than by trying to make what we discover be worth what we can't keep it from costing. I don't know what better to do with our vanishing darknesses than promise to remember them with our fondest curses. I don't know how to promise that there will always be a sky into which to disappear. For a while, probably, but before too long we're going to need what we used to only want. Selfishness and charity are differences of timing. We will fill in all these holes. We will build highways from everywhere to everywhere else, until all that is left of place is name. We will build casinos on the moon, and come to them to throw away whatever we saved from home below.
But every once in a while, if we run a little quieter and faster, just outside the lights, we get to see a little of what it used to be like. And what we can't help displacing, we will find tiny ways to become. Everything we touch at least has touched us back. I believe in darkness, but not as much as I believe in us. I believe we help more than we hurt, and that we understand more than we know, and that we know enough. I believe we're ready, and that this child and this world will be grateful for each other.
I will miss Orion, when he finally turns and leaves us here, but not a billionth as much as I'd miss these blaring lights and crushing statues and people if we too gave up and disappeared into the sky.
(I'm trying to forget about another score from today...)
I signed up for the NetFlix Prize without, originally, any intention to ever actually compete. I haven't studied any math or statistics since AP Calculus over 20 years ago, and my usual analytical tools aren't remotely adequate for analyzing data on the scale required by the contest. I have a couple laptops at my disposal, but no distributed number-crunching farms. My current programming tool of choice is Ruby, which is optimized for coding efficiency rather than processing speed or memory compactness.
But my current job involves thinking about useful ways to explore and analyze arbitrary bodies of data, and it's not often that anybody gives away a dataset as large or interesting as NetFlix's, so I wanted to at least play with it a little.
And after playing with it a little I realized that submitting a set of bad guesses, at least, wasn't that difficult. So I did, just for the satisfaction of feeling involved. One of the strangest and maybe most addictive draws of the NetFlix Prize is that bad guesses, if there's even the slightest bit of non-badness embedded in them, are for most practical purposes only trivially worse than fairly good guesses.
The way the contest works is that you're given the data for several million rating actions taken by real people on the real NetFlix system. Each rating has a user ID, a movie ID, a date, and a score of 1, 2, 3, 4 or 5. You get no other information about the user IDs, of which there are about half a million, and only titles and years for the movies, of which there are 17,770.
In addition to all this, you also get a file of movie IDs, user IDs and dates, but no scores. These represent other rating actions actually taken by some of those same users on those same movies. NetFlix knows what scores go in these blanks. You don't. To play, you just have to submit a set of guesses for all the blanks. You're scored on how close you get, with a little squaring and square-rooting just to make it sound more complicated.
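The "squaring and square-rooting", for the record, is just root-mean-squared error. Here's a minimal Ruby sketch of it (Ruby being my current tool of choice; the names are mine, not anybody's real contest harness):

# Root-mean-squared error between actual scores and guesses:
# square the differences, average them, take the square root.
def rmse(actual, guessed)
  raise ArgumentError, "mismatched lengths" unless actual.size == guessed.size
  sum_sq = actual.zip(guessed).inject(0.0) { |s, (a, g)| s + (a - g) ** 2 }
  Math.sqrt(sum_sq / actual.size)
end

rmse([4, 3, 5, 1], [3.8, 3.2, 4.1, 2.0])  # => ~0.69

Squaring punishes big misses disproportionately, which is the only reason this is more interesting than plain average error.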
Meanwhile, back at headquarters, NetFlix has run their own actual software for analyzing movie ratings, using this same exact data. It scores 0.9514, which means it can usually guess ratings within about 1. My first simplistic set of guesses, doing no similarity analysis of any kind, scored 1.0451. This means that my dumb method can usually guess ratings within about 1. Obviously 0.9514 is better than 1.0451, and it's even a little better than it looks due to the exact math, but the human-scale truth is still this: when you're talking about person-assigned integer ratings on a scale of 1 to 5, nobody but us number-geeks is ever going to get excited about the difference between one "about 1" and another "about 1" based on hundredths of a point. In one sense this makes the whole contest pretty idiotic, because who the hell cares?
But if you can get your score down to the NetFlix-chosen target of 0.8563, you stand to win a million dollars. And if 0.8563 is also pretty much "about 1", the difference between $1,000,000 and $0 is much more readily apparent. And submitting some guesses that were at least a little smarter than my first ones wasn't really any harder.
So I still have absolutely no illusions of winning. As of now, the leaders are fighting for 0.0001s down in the neighborhood of 0.9042. I got down to 0.9829 without any similarity-analysis, just to see how much variability I could squeeze out of the problem before starting to work on actual logic. My first wild-ass approximation of "similarity" scored worse than that, but after a small amount of de-assing it got me to 0.9508. As of today, you need 0.9402 to get onto the official leaderboard, but 0.9508 is better than NetFlix's own software, which means that in a few hours of my spare time, using no fancy methods or hardware (albeit with a MacBook Pro running overnight a couple times), I have produced better software for this specific definition of movie recommendation than however many people NetFlix pays to work on this problem full-time. If I were one of those people, I'd be embarrassed.
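For the curious, a no-similarity baseline has roughly this shape: guess the global mean, nudged by how far each movie and each user tend to sit from it. This is a toy Ruby sketch with invented data, not my actual contest code:

# Toy data: [user_id, movie_id, score] triples. The real training set
# has on the order of 100 million of these.
ratings = [[1, 10, 4], [1, 20, 3], [2, 10, 5], [2, 30, 2], [3, 20, 4]]

global = ratings.inject(0.0) { |s, (_, _, score)| s + score } / ratings.size

movie_dev = Hash.new { |h, k| h[k] = [0.0, 0] }   # movie => [offset sum, count]
user_dev  = Hash.new { |h, k| h[k] = [0.0, 0] }   # user  => [offset sum, count]

ratings.each do |user, movie, score|
  movie_dev[movie][0] += score - global
  movie_dev[movie][1] += 1
end
ratings.each do |user, movie, score|
  movie_avg = global + movie_dev[movie][0] / movie_dev[movie][1]
  user_dev[user][0] += score - movie_avg
  user_dev[user][1] += 1
end

# Predict a blank: global mean + movie offset + user offset, clamped to 1..5.
def predict(global, movie_dev, user_dev, user, movie)
  p = global
  sum, n = movie_dev[movie]; p += sum / n if n > 0
  sum, n = user_dev[user];   p += sum / n if n > 0
  [[p, 1.0].max, 5.0].min
end

predict(global, movie_dev, user_dev, 3, 10)  # => 5.0 with this toy data

No notion of which users resemble which other users anywhere in there, and it still lands within shouting distance of everything else.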
But actually, if I were one of them, I'd probably also be feverishly working on my own entry in the contest. I'm betting their real engine was built by committee, and thus represents nobody's best ideas. If you held a sufficiently well-constructed contest to see if people could do your team's team-job better than your team was doing it before you knew there was going to be a contest about it, you'd probably lose it about as quickly as NetFlix has lost this one. No matter how complicated you think your work is, once you figure out how to simplify it to the point where you can challenge other people to do it better, somebody will be able to.
Which suggests, of course, a one-step self-improvement program for just about any existing endeavor: treat your contribution as if you are entering a contest to do your job better than you. Assume some stranger could, with hardly any real insight or work, so that's how easy it ought to be.
Or, if that all sounds more tiring than inspiring, there's the other possible moral of the story: The most intense effort in any project usually ends up being spent on the 0.0001s, tiny fractional improvements of no genuine consequence, given apparent significance only by the act of measuring them. It's not clear whether it's actually possible to get to 0.8563 using this set of data. Nor whether doing so would meaningfully improve any NetFlix user's real experience. If they really wanted to qualitatively improve their system, NetFlix would have to hold a design contest, not a data-analysis contest, and that would be a lot harder to score. Maybe you're doing your job, as it's currently defined, more or less as well as it can meaningfully be done, too. Maybe, instead of a contest to dicker over your 0.0001s, you should figure out how to do your current job just as well with a lot less effort, and then spend the rest of your time entering the contest to find a totally different and better way.
The logistical irony of substantive blogging, meaning something closer to essay-writing than link-jockeying, is that it enthusiastically both rewards and consumes time. For quantitatively ideal productivity you need to minimize the amount of time you spend doing anything else in your life, which obviously includes anything orthogonal to blogging like shopping for diaper-containment systems or eating brunch with your friends (unless, obviously, you are running a diaper-containment- or brunch-blog), but also includes whatever it takes to actually generate the material to write about whatever it is you're writing about.
I only ever came up with two good solutions to this, where by "good" I mean effective in producing output, not necessarily in improving my own life any.
One was writing about music during a period when most of my spare time was spent listening to and thinking about music anyway. It's a trivial psychological insight to observe that the magnitude of this preoccupation betrays it as an attempt to distract attention, mostly mine, from the absences of other things in my life.
The other solution was writing about technology, or around technology in the same sense that I wrote around music, during a period when my day-job had devolved into pretty much exclusively sitting around thinking about and around technology. This is a pretty interesting experience to get to have, and as I repeated countless times in the job interviews that followed its inevitable eventual termination, I was never even remotely as well informed about the state of technology when I was actually contributing to it.
I'm contributing to it again. My new job, still settling past its newness after only a few months, is by far the best job I've ever had. There's one formulation of the job ideal where you say "I'd be doing this even if they weren't paying me", and this one is beyond that to "I'd be sitting at home frustrated at being unable to participate without a team around me."
Several things are different and cooler about this job, at least in my own experience, the top few being that I'm explicitly managing it in addition to leading by design, that we're trying to find the shape of the solution we're building (and of the problem we're defining to solve) instead of being handed a VC precis or business plan to fulfill, and that the development work is not primarily segmented by function.
The combination of these things is very challenging and extremely engrossing. I discovered a quantum jump in responsibility when I went from being a member of a large design team at Interchange to being the sole designer at eRoom, and it's a similar one from being the designer to being the designer and manager, and maybe another level again to be the person with the most navigational accountability for an exploration undertaken knowingly without a map. The internal pressures intensify radically in both directions: to dismiss inanely crazy ideas to avoid causing magnified distractions, and to take superficially inexplicable intuitions even more seriously for their transformational potential. If you don't know where you're going, exactly, you have to operate as if patience and diligence are doomed.
Somehow this has resulted, over the past few weeks, in my spending the vast majority of my working hours actually programming. I won't reveal much by saying that we're building a big software system that will, when it's working, take in a large amount of previously scattered and unanalyzed data, and attempt to reduce its disorder. Most of the team has been working, since before my arrival, on the data-acquisition aspects of the project. So without anybody else to assign to the languishing data-analysis portion of the system, I assigned it to myself, and have been maniacally trying to figure out what I mean it to do by writing it.
I am not, in case I haven't been clear about this, really a professional programmer. Over the course of my career I've written a lot of code in a lot of languages, but almost always in carefully circumscribed senses. The UI architecture of eRoom was devised in no small part with my individual capabilities in mind, so that I could write a large amount of the actual UI code without having to participate in non-goal-oriented conversations about lazy pointer deallocation or rogue mutexes. And even the code I'm writing now is, probably, only a prototype, and I've written lots of prototype code.
My previous prototyping, though, has always been about faking a system in order to play with how it looks and acts. The easiest way to fake a large system is usually to actually build parts of it more or less for real, of course, but every project I seem to fake a little less. The content-management and discussion-forum systems underlying this website are constrained and minimal, but not faked at all. And although the prototype I'm writing at work will probably be migrated out of my eccentric Ruby improvisation into somebody's tediously meticulous Java for dull scalability reasons, my Ruby version really does do what it says it does.
This is tremendous fun, but it also pushes me into a part of my brain where my most obsessive nature is most dramatically exacerbated. I sit down at my desk in the morning, and eight or nine hours slide by in an un-self-interrupted continuum. I forget to eat lunch, I forget to check my email, I forget the five-minute phone-call-returning task I'm supposed to do while eating the lunch I forgot to eat, I look up at the clock and realize I was supposed to be home an hour ago, and when you live a block from your office there's no possible excuse. Making information fall magically into order is not the only thing I want to do with my life, but it's one of the ways I think I can make the world better, and in the grip of a particularly involved spell it feels like a calling. Something shadowily like a pattern starts to emerge, and I frown at it, and twitch something I've built to be twitched, and suddenly answers are fountaining out of the screen from their own joy at existing.
And that's not even the interesting news.
The interesting news is that Bethany is pregnant. If the tests and milestones keep going as well as they've been going so far, about six months from now we'll have a child. Software isn't easy, exactly, but it's difficult in easy ways. If you know what to twitch, you can know what kind of magic you'll be able to make.
People are, obviously, far less tractable. I don't know what kind of child we're going to have. I don't know what kind of parents we're going to be. I don't know what's going to be ecstatic and what's going to hurt and what's going to be unbearable. I know nothing will ever be this simple again, and I was already barely coping with simplicity's complexity. Beth and I could probably have spent another ten or twenty years just figuring out how to merge two lives most inspiringly, and now we're going to try to make sense of three. My own explanation of wanting to do this is that raising children is the biggest project available to individual humans, and Bethany is the person with whom I feel most capable of the most humanity. I want us to take on the greatest, hardest, most human challenge for which we are eligible.
As crazy intuitions go, this makes my daftest futurist semantic-web predictions feel like placing wrist-flick roulette bets with free (and mostly unredeemable) chips. I really do believe that the semantic-web project I'm working on has the potential to be the seed of something that changes the dynamics of human interaction with information sufficiently to advance human nature, but I know that that potential is prorated by its likelihood. I have a small team, and there are many teams, and bigger ones, and many factors and convolutions. Most attempts at deliberate innovation fail, and although opportunities for accidental innovation fail at a far greater rate, they arise at an even greater rate than they fail, so the innovations arrived at by luck vastly outnumber and outfascinate the few we make knowingly.
We will have made this child knowingly, and although accidents will probably outnumber clevernesses even more lopsidedly thereafter, in this project we will not have the luxury or salvation of irrelevance. There are no swarming competitors feverishly plotting to steal away our parent-share or undermine B's and my parenting brand. There are no breathless marketers trying to find the most flattering spin on whatever turn out to be our signature family dysfunctions. Nothing will release us from this. There's just us, and a new person nobody on Earth has met yet. Somebody has to care for everybody, and this one is ours.
So I'm obviously not ready, and can't possibly be, but I'm ready. There are a thousand things to do, and stacks of books and friends and doctors and counselors lining up to add more things to our lists, and I can't imagine how we'll ever get them all done, so I assume and stipulate that we must not have to. I will do whatever it takes to be a good parent, and that will usually be almost enough, and hopefully every once in a while will jubilantly suffice. I will try to be better at this than anything else I have ever done. I don't intend for it to become the only thing I do, but I understand that, like marriage, it is a private commitment that takes precedence over my public aspirations, and that I will make different choices now, and dream of different things. I will have far more important things to write about than b-sides, and rarely even enough time to list them. I will do what I can, and allow myself not to catalogue, even mentally, all the things I would have catalogued when I had time instead of purpose.
And if this is the most ordinary, and thus in a sense the most mundane, way of raising the stakes for the ways in which I hope to improve the world, and to avoid being even a passive vector of thoughtlessness or evil, then so be it. These swiveling electrical-outlet covers I need to install so a baby that doesn't even exist yet can't stick (I guess) a fork in them, and whatever diaper-containment system we ultimately settle on, and the semantic web and the way we love each other and ourselves, will now all also be part of the inescapably consequential context for a new life. None of this will be faked, or maybe all of it will be, or maybe this is where it stops being meaningful to talk about the difference between what you fake and what you know. This is where suddenly the craziest answers are fountaining out of nowhere into a flood of what it is up to us to make be joy.
(Discussion on vF...)
I only ever came up with two good solutions to this, where by "good" I mean effective in producing output, not necessarily in improving my own life any.
One was writing about music during a period when most of my spare time was spent listening to and thinking about music anyway. It's a trivial psychological insight to observe that the magnitude of this preoccupation betrays it as an attempt to distract attention, mostly mine, from the absences of other things in my life.
The other solution was writing about technology, or around technology in the same sense that I wrote around music, during a period when my day-job had devolved into pretty much exclusively sitting around thinking about and around technology. This is a pretty interesting experience to get to have, and as I repeated countless times in the job interviews that followed its inevitable eventual termination, I was never even remotely as well informed about the state of technology when I was actually contributing to it.
I'm contributing to it again. My new job, still settling past its newness after only a few months, is by far the best job I've ever had. There's one formulation of job ideal where you say "I'd be doing this even if they weren't paying me", and this one is beyond that to "I'd be sitting at home frustrated at being unable to participate without a team around me."
Several things are different and cooler about this job, at least in my own experience, the top few being that I'm explicitly managing it in addition to leading by design, that we're trying to find the shape of the solution we're building (and of the problem we're defining to solve) instead of being handed a VC precis or business plan to fulfill, and that the development work is not primarily segmented by function.
The combination of these things is very challenging and extremely engrossing. I discovered a quantum jump in responsibility when I went from being a member of a large design team at Interchange to being the sole designer at eRoom, and it's a similar one from being the designer to being the designer and manager, and maybe another level again to be the person with the most navigational accountability for an exploration undertaken knowingly without a map. The internal pressures intensify radically in both directions: to dismiss inanely crazy ideas to avoid causing magnified distractions, and to take superficially inexplicable intuitions even more seriously for their transformational potential. If you don't know where you're going, exactly, you have to operate as if patience and diligence are doomed.
Somehow this has resulted, over the past few weeks, in my spending the vast majority of my working hours actually programming. I won't reveal much by saying that we're building a big software system that will, when it's working, take in a large amount of previously scattered and unanalyzed data, and attempt to reduce its disorder. Most of the team has been working, since before my arrival, on the data-acquisition aspects of the project. So without anybody else to assign to the languishing data-analysis portion of the system, I assigned it to myself, and have been maniacally trying to figure out what I mean it to do by writing it.
I am not, in case I haven't been clear about this, really a professional programmer. Over the course of my career I've written a lot of code in a lot of languages, but almost always in carefully circumscribed senses. The UI architecture of eRoom was devised in no small part with my individual capabilities in mind, so that I could write a large amount of the actual UI code without having to participate in non-goal-oriented conversations about lazy pointer deallocation or rogue mutexes. And even the code I'm writing now is, probably, only a prototype, and I've written lots of prototype code.
My previous prototyping, though, has always been about faking a system in order to play with how it looks and acts. The easiest way to fake a large system is usually to actually build parts of it more or less for real, of course, but with every project I seem to fake a little less. The content-management and discussion-forum systems underlying this website are constrained and minimal, but not faked at all. And although the prototype I'm writing at work will probably be migrated out of my eccentric Ruby improvisation into somebody's tediously meticulous Java for dull scalability reasons, my Ruby version really does do what it says it does.
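(If you're curious what "reducing disorder" can mean in the most minimal possible sense, here is a deliberately toy Ruby sketch, with invented data and nothing of the real system in it: take scattered, inconsistently shaped records, normalize them into one form, and group the ones that turn out to be about the same thing.)

    # A toy disorder-reducer: hypothetical data and fields, nothing from
    # the real system. Same entities arrive with inconsistent keys,
    # capitalization and whitespace.
    records = [
      { name: "Ada Lovelace", city: "London " },
      { "Name" => "ada lovelace", "City" => "london" },
      { name: "Alan Turing", city: "Wilmslow" }
    ]

    # Normalize a record to symbol keys and trimmed, lowercased values.
    def normalize(record)
      record.each_with_object({}) do |(key, value), clean|
        clean[key.to_s.downcase.to_sym] = value.to_s.strip.downcase
      end
    end

    # Group the normalized records by name, merging what we know about each.
    records.map { |r| normalize(r) }
           .group_by { |r| r[:name] }
           .each do |name, versions|
      puts "#{name}: #{versions.reduce(:merge).inspect}"
    end

The real thing is vastly messier, naturally, but the shape is the same: many small normalizations, and then the grouping that makes a pattern visible where there wasn't one.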
This is tremendous fun, but it also pushes me into a part of my brain where my most obsessive nature is most dramatically exacerbated. I sit down at my desk in the morning, and eight or nine hours slide by in an un-self-interrupted continuum. I forget to eat lunch, I forget to check my email, I forget the five-minute phone-call-returning task I'm supposed to do while eating the lunch I forgot to eat, I look up at the clock and realize I was supposed to be home an hour ago, and when you live a block from your office there's no possible excuse. Making information fall magically into order is not the only thing I want to do with my life, but it's one of the ways I think I can make the world better, and in the grip of a particularly involved spell it feels like a calling. Something shadowily like a pattern starts to emerge, and I frown at it, and twitch something I've built to be twitched, and suddenly answers are fountaining out of the screen from their own joy at existing.
And that's not even the interesting news.
The interesting news is that Bethany is pregnant. If the tests and milestones keep going as well as they've been going so far, about six months from now we'll have a child. Software isn't easy, exactly, but it's difficult in easy ways. If you know what to twitch, you can know what kind of magic you'll be able to make.
People are, obviously, far less tractable. I don't know what kind of child we're going to have. I don't know what kind of parents we're going to be. I don't know what's going to be ecstatic and what's going to hurt and what's going to be unbearable. I know nothing will ever be this simple again, and I was already barely coping with simplicity's complexity. Beth and I could probably have spent another ten or twenty years just figuring out how to merge two lives most inspiringly, and now we're going to try to make sense of three. My own explanation of wanting to do this is that raising children is the biggest project available to individual humans, and Bethany is the person with whom I feel most capable of the most humanity. I want us to take on the greatest, hardest, most human challenge for which we are eligible.
As crazy intuitions go, this makes my daftest futurist semantic-web predictions feel like placing wrist-flick roulette bets with free (and mostly unredeemable) chips. I really do believe that the semantic-web project I'm working on has the potential to be the seed of something that changes the dynamics of human interaction with information sufficiently to advance human nature, but I know that that potential is prorated by its likelihood. I have a small team, and there are many other, bigger teams, and many factors and convolutions. Most attempts at deliberate innovation fail, and although opportunities for accidental innovation fail at a far greater rate, they arise at an even greater rate than they fail, so the innovations arrived at by luck vastly outnumber and outfascinate the few we make knowingly.
We will have made this child knowingly, and although accidents will probably outnumber clevernesses even more lopsidedly thereafter, in this project we will not have the luxury or salvation of irrelevance. There are no swarming competitors feverishly plotting to steal away our parent-share or undermine B's and my parenting brand. There are no breathless marketers trying to find the most flattering spin on whatever turn out to be our signature family dysfunctions. Nothing will release us from this. There's just us, and a new person nobody on Earth has met yet. Somebody has to care for everybody, and this one is ours.
So I'm obviously not ready, and can't possibly be, but I'm ready. There are a thousand things to do, and stacks of books and friends and doctors and counselors lining up to add more things to our lists, and I can't imagine how we'll ever get them all done, so I assume and stipulate that we must not have to. I will do whatever it takes to be a good parent, and that will usually be almost enough, and hopefully every once in a while will jubilantly suffice. I will try to be better at this than anything else I have ever done. I don't intend for it to become the only thing I do, but I understand that, like marriage, it is a private commitment that takes precedence over my public aspirations, and that I will make different choices now, and dream of different things. I will have far more important things to write about than b-sides, and rarely even enough time to list them. I will do what I can, and allow myself not to catalogue, even mentally, all the things I would have catalogued when I had time instead of purpose.
And if this is the most ordinary, and thus in a sense the most mundane, way of raising the stakes for the ways in which I hope to improve the world, and to avoid being even a passive vector of thoughtlessness or evil, then so be it. These swiveling electrical-outlet covers I need to install so a baby that doesn't even exist yet can't stick (I guess) a fork in them, and whatever diaper-containment system we ultimately settle on, and the semantic web and the way we love each other and ourselves, will now all also be part of the inescapably consequential context for a new life. None of this will be faked, or maybe all of it will be, or maybe this is where it stops being meaningful to talk about the difference between what you fake and what you know. This is where suddenly the craziest answers are fountaining out of nowhere into a flood of what it is up to us to make be joy.
(Discussion on vF...)