22 July 2006 to 24 May 2006
¶ Rudolph's Cud · 22 July 2006 essay/tech
I spent this week at the annual conference of the American Association for Artificial Intelligence. I have no academic background in artificial intelligence. Of course, I have no academic background in computer science, or interface design, or technical support, or pretty much anything else I've ever been paid to do. The tenuous premise of my professional career is that I think well, in general, and maybe that I think particularly well in the intersection between the things that might improve human lives and the things that machines can do. My track record indicates that I've been involved with projects where this tension has been usefully resolved, and my day-to-day work seems to support my presumptuous contention that I am a personally significant contributor towards such resolutions.
So I'm clearly unqualified to assess, in general, the internal quality of advanced research work in the field of artificial intelligence. But I'm in charge of a software project in which machine-learning algorithms are being put to practical use, so I have a stake in the field. And the general meta-question of the application of technology to the class of what, to humans, are thinking problems, is probably no farther outside of my domain than anything else I ever deal with. And even if I were deeper in the field, there were six parallel tracks of talks taking place during most of the conference, so no one person could really hope to directly evaluate the whole thing anyway.
Take this, then, for whatever it evokes. My scattered impressions cluster around two ideas. The first, which covers about half of my experiences during the week, is that I had wandered into a convocation of alchemists. Time after time I sat through earnest descriptions of patently clever mechanisms for conveying the lead to the transmutation chamber, or of organizing the variations of gold sure to soon be produced, or of venting the toxic fumes that somebody had proved (in last year's paper) would result from the conversion. Not so much actual gold, just yet. This year's conference was celebrating the 50th anniversary of AI as a field, and although it would be wrong to dismiss a hard problem on the grounds that it hasn't been solved quickly, the only way Zeno got away with diminishing returns was by publishing measurable progress so frequently.
One major branch of AI is based on the observations that humans think, and that we express thoughts, and that our expression of each seemingly individual thought involves a complex background of assumptions and definitions and relationships and conceptual leaps, so maybe if computers had all that context to work in, they'd be able to think, too. The logistical and emotional center of the cargo-cult effort to build that foundation of knowledge, and hope it attracts Thinkingness, is the Cyc project, whose director proudly told a room full of people that they have so far encoded 1,000,000 terms, 100,000 relationships between them, and 10,000 meta-assertions about those things and relationships, and that they believe this constitutes about 10% of what is required.
Then he added "(isa Rudolph reindeer)" as 1,000,001 and 10,001, and with a flourish derived "Rudolph is a ruminant", which is so far from 10% correct that I'm still laughing about it a week later. I suspect the Atlantean Association of Alchemical Investigators thought they were 10% of the way to transmutation at some point, too. To me this is not only almost certainly wrong, but also fabulously self-ridiculing. There is no reasonable way to assess the size of something you can't even define, and you can't be 10% to nowhere any more than you can be 10% to a miracle. 10% to a miracle is not a scientific progress-report, it's a numerology of the angel-capacity of certain ornamental pins.
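The flourish itself, for what it's worth, is nothing deeper than transitive lookup over a taxonomy: assert one isa fact, follow the subclass chain upward, and "ruminant" falls out. Here is a minimal sketch of that sort of inference in Python; the chain of classes is my own invented stand-in, not Cyc's actual vocabulary or machinery:

    # Toy taxonomic inference: one asserted isa fact plus a (made-up) chain of
    # subclass links is enough to "derive" that Rudolph is a ruminant.
    genls = {                      # assumed subclass links, purely illustrative
        "Reindeer": "Deer",
        "Deer": "Ruminant",
        "Ruminant": "Mammal",
    }
    isa = {"Rudolph": "Reindeer"}  # the newly added assertion

    def derived_types(individual):
        """Walk the subclass chain upward from the individual's asserted type."""
        t = isa.get(individual)
        while t is not None:
            yield t
            t = genls.get(t)

    print("Ruminant" in derived_types("Rudolph"))  # True -- the whole flourish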
And even if it made any sense to say that we're 10% of the way to encoding the basics of human common sense, to me the premise of the project is fatally flawed in three ways:
1. Our ability to carry on conversations is less a product of the depths of information we plumb with every statement than of our ability to levitate on ignorance. You can always add a new level of notation to explain what the previous level means, but you can always add a new level, so all you've really succeeded in doing is turning the second M in MLM to Modeling and wasting the time of a lot of people who already knew what you meant in the first place. The machines might remember everything you can figure out how to tell them, but remembering isn't half as useful as forgetting. Not only is deduction less valuable than guessing, it's ultimately even less valuable than being productively wrong. And if after 50 years of AI and 20 years of Cyc we're only 10% of the way to encoding the things people know, then you and I will be long dead before we ever so much as get started on the exponentially larger domain of our idiocies and misconceptions and errors.
2. Even if we could encode all of that, I'm in no way convinced that it's necessary for thinking, let alone sufficient. I sat through a lot of talks about what felt to me, ultimately, like attempts to figure out how to build a bicycle by taking apart a horse. Cyc is pretty much the embodiment of Intelligent Design as an engineering methodology, trying to build a grown human adult an atom at a time. I suspect they would defend themselves by saying that they're only really trying to build a 14-year-old, and then they can have it read the rest of the stuff in books. This is never going to work. Even building a baby and letting it learn the whole thing from scratch isn't going to work. Complexity evolves. If we're going to build artificial constructs that think, we're going to do it by sowing the most primitive of artificial creatures into the most accelerated of generative artificial worlds and giving them virtual epochs in which to solve (and state) their problems themselves.
3. Underlying both of these errors, I think, is one fundamental misconception, engendered by the words we use to describe the point of all this speculation and work, and maybe most encouraged to fester by the inescapable apparent simplicity of one unfortunate thought-experiment. It's easy to imagine talking to a computer. The genius of Turing's Test is that it requires less equipment than an E-meter audit and less training than teaching SAT-prep classes. It's even easier to understand in the IM era than it was when he posed it. If a computer can convince us it's a person, then it is thinking. Translated to what I suspect most people would think this means, and then inverted for logic, this basically says that "people" are whatever we can't tell aren't like ourselves. But this is Fake People, not Artificial Intelligence, and shouldn't be what we mean by "thinking", or by artificial intelligence. My cats apply the Turing Test to me several times a day. I always fail. The first machine to think will be the first one that comes up with its own idea of what thinking means. It won't be like us. It won't think like us, it may not have much to say to us, and it almost certainly won't care. It will be good at things it likes to do, not the stuff we want just because we want it.
And therein, I think, is something closer to what we ought to mean by artificial intelligence: not the manipulation of ideas, however facile or reified, but the generation of them. Here's the Furia Test: the first true AI is the first machine to learn to spit out Rudolph's cud and tell the Turing judges to take their incessant idiotic questions and fuck off.
But I said that that was only half of my experience. Humans have come up with a lot of good ideas while working on bad ones, and arguably most human progress takes place in the context of grandly evocative error. I'm not really interested in building a machine I can condescend to, and I bet a lot of the other people at the conference weren't there as part of that dream, either. Taking apart a horse teaches you things. Building bicycles teaches you things. Even the effort to teach Cyc about ThingsThatMakeFartingNoisesWhenYouAccidentallySitOnThemFn teaches you something.
My part of this comes from a much, much simpler dream. I already know people who think. I don't need machines for that. True AI will have its own questions to devise and then answer. What I need is deeper and broader answers to the human questions we already know how to ask, and new questions that matter to us. I see that the information we already have, jammed into computers that don't think and aren't likely to start thinking any time soon, is sufficient to answer several orders of magnitude more questions than our current question-answering tools facilitate. This isn't an existential problem, and it's our problem because it's so clearly our fault. Computers would be perfectly happy to synthesize and connect, but instead we've obtusely put them to work obscuring information from each other so that even the syntheses inherent in the character of the information become intractable. The machine-learning parts of the system I'm working on are tangibly useful, but in a sense mostly regrettable necessities, devised to undo the structural damage done by too-exclusively packaging small amounts of knowledge for unhelpful individual resale.
The good news from AAAI, for me, is that there are incredibly interesting things going on in the subfields of knowledge representation and reasoning. I actually do have some academic background in deductive logic, and the project I'm working on is both deeply and shallowly involved with information structure, so maybe my grounding in this subarea is just a little firmer. More than that, though, I can understand why it matters. Some of the new information-alchemy equipment is really good new information-chemistry gear, opportunistically relabeled. I understand, for example, that we need graph-structures in most of the places where we have heretofore had only trees, and that our tools for comprehending graph structures are way behind our tools for understanding tree structures, and our tools for understanding tree structures without chopping them down first were already none too good. I understand what each order of logic adds, if that's not all you're doing, and what the effort to encode each new level costs and potentially loses. I understand that we want to make closed-world decisions in an open world, and that we have to. I understand why Tim Berners-Lee wants URIs on everything and I understand that a huge part of how we're able to talk to each other about anything is that we are not required to unambiguously identify every element of what we are attempting to say. I understand that Zeno was only ever half right, and that he never had to actually ship anything.
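To make the tree-versus-graph complaint a little more concrete, here is a minimal sketch, with names and predicates picked by me for illustration and borrowed from no real system: a tree gives each thing exactly one parent, which is fine for filesystems, while a store of triples lets a thing sit in as many relationships as it actually has.

    # A strict hierarchy: each node gets exactly one parent, by construction.
    tree = {
        "Cambridge": "Massachusetts",
        "Massachusetts": "USA",
    }

    # The same world as a graph of (subject, predicate, object) triples:
    # multiple parents and non-hierarchical links cost nothing extra.
    triples = [
        ("Cambridge", "partOf", "Massachusetts"),
        ("Cambridge", "partOf", "GreaterBoston"),
        ("GreaterBoston", "partOf", "Massachusetts"),
        ("Cambridge", "hasUniversity", "MIT"),
    ]

    def objects(subject, predicate):
        """Everything related to a subject by a given predicate."""
        return [o for s, p, o in triples if s == subject and p == predicate]

    print(tree["Cambridge"])               # one answer, and only ever one
    print(objects("Cambridge", "partOf"))  # ['Massachusetts', 'GreaterBoston']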
So I go back to work, where we aren't trying to build machines that think, and aren't waiting for anybody else to. Our bicycles aren't going to look or smell much like horses, but we're going to try our best to make them faster than walking. I am bemused to have fallen into the company of alchemists, but even at their most insularly foolish they're a lot more interesting to talk to than Sarbanes-Oxley compliance officers. I'd much rather worry about what OWL leaves out than have to explain to business-unit managers why writing "XML BPM" on a PowerPoint slide doesn't constitute an advanced-technology plan. Ultimately my complaint about AI is probably not so much that its premises are flawed as that the current particular flaws lead to wrong work that is too mundane in its wrongness. I understand that I am paid to be an applied philosopher, and I want to feel, after a week among people freer to linger in theory, that practice is pedaling flat-out just to keep possibility in sight. But if we're just out here with our new fancy bicycle prototypes, and our old blank maps, and no plausible rumors of buried gold or miracles, then fine. There are places to ride, and better things to see by looking up than down, and we'll see more of them if we aren't always stopping to dig new collapsing holes in the same empty sand.
¶ The Good Myths · 10 July 2006
Every once in a while someone asks me, with the sudden leer of a half-starved mid-food-chain predator thinking it's finally about to get a snack, whether I'm an agnostic or an atheist. Usually this comes right after I've said I'm an atheist, which kind of spoils the surprise of the prepared follow-up attack, which attempts to structurally equate theism and atheism as mortal assertions about the immortal, rendering atheism vaguely self-undermining. Thus, in theory, I'd be forced to concede that I'm "only" an agnostic, and presumably if the worst objection one may raise against religion is that "we don't know", then religious belief is a more reasonable response than it might be if it were allowable to just say "no".
But it is allowable to just say no. In fact, theism and atheism are not structurally equivalent. Theism is the set of mortal assertions about the immortal. Atheism is a rejection of making mortal assertions about the immortal. Religion is not a real question, it's metaphor cowering belligerently as axiom, and thus it can neither require nor benefit from answers. I should be no more defined or labeled by my disbelief in gods than by my disbelief in elves. Myth is where we keep our old placeholders for the things we didn't used to know how to know. The good myths are the ones that are worth something as art after we stop inanely insisting that they're still, or were ever, science.
¶ ITA Software Battle of the Bands · 29 June 2006
As an odd and delayed result of having once written about music for a long time, I got to be a judge for a Battle of the Bands on Monday night.
¶ World Cup prediction contest · 6 June 2006
If you're willing to look foolish when your guesses all turn out to be wildly wrong, join me and Michael Zwirn in our vF World Cup prediction thread.
One of the BarCamp principles is that if you aren't learning or contributing, you should get up and go somewhere else. So today, day 2 of BarCampBoston, I have gotten up and not gone anywhere, and am sitting at home in Cambridge with an IRC window open just in case something mind-blowing happens in Maynard and somebody there thinks to type about it. I'd guess there were something like 150 people who showed up for day 1, and IRC reports estimate more like 30 who returned for day 2, so I don't appear to be alone in my decision.
Actually, there's probably nothing really un about this unreport, and for me there wasn't enough un about this nominal unconference. The reason I don't go to a lot of regular technology conferences is that I find them too often to consist of a series of sessions driven by over-specific presentations that are, at best, distantly related to some topic in which I am interested. My best hope is usually that they will inspire some discussion that wanders closer, but the structure usually almost guarantees that the presentations will be too long, and the discussions (if they happen at all) too short.
BarCamp was basically just like this. I went to some sessions I liked, but all of these ended just when they were getting going, if not sooner. I went to some I didn't, and these went on too long, or without viable alternatives. At least on day 1, the meeting spaces were far from the staging center, and even farther from each other, making it hard to do anything but pick a room and hope for the best. Day 1 was over-scheduled, day 2 under-scheduled, and in the absence of any compensating plan, this became self-reinforcing: I'm sure I wasn't the only person who moved their presentation from day 2 to day 1 in the fear that if I gave it on day 2 there'd be no audience, and then once I'd given it there was one less reason to come back on day 2 myself.
I can think of two obvious ways to combat this structural problem. One is to have better presentations. I don't know any shortcut to this. The longcut is to have the presentations solicited, proposed, submitted, judged and endorsed ahead of time. This could probably be done in a democratic, ad hoc, self-organizing, non-authoritarian, grassrooted, BarCampy way, but it would be hard, and I don't think that approach takes any fundamental advantage of the unique nature of BarCamp.
The other approach is to acknowledge explicitly that unless the presentations are exceptional, the real value is much more likely in the discussions. This, conversely, is exactly in keeping with the ad hoc, participatory nature of BarCamp. So my proposal for the next BarCampBoston is that it be mostly, or perhaps entirely, about discussions. Forget 30-minute presentations, which usually produce the kind of information quantity and density you could just as well read on the web anyway, and produce way too much scheduling churn and running around.
Instead, organize everything into 90-minute discussion groups with 30-minute blocks in between for logistics. So 10-11:30, 12-1:30 (over lunch), 2-3:30 and 4-5:30. For each discussion group there should be:
- A framing topic, ideally in the form of a theoretically answerable question, like "What is the killer app for the semantic web?", or "What is the next step towards practical distributed identity-management?"
- 3-6 participants prepared to contribute 5-minute demos of (or talks about) directly relevant work, so that the discussion has both concrete references and personal investments.
- A moderator, who is in charge of nudging the conversation out of ruts and back from digressions when necessary, and helping spot the right places to insert the demos. One of the contributors can also serve as moderator, or the moderator can be somebody else.
- Some idea, explicitly stated if it isn't adequately implied by the topic and the list of demos, of the expected context or background of discussion participants. It's particularly worthwhile to distinguish between introductions ("What is the semantic web and why might I care?: An invitation for web developers discovering data structure.") and working groups ("From DTDs to RDFS to OWL: How do you decide how much ontology is worth modeling?")
- Few enough other people that all of you can actually have a discussion. No more than 12 total people who plan on talking, probably, including the moderator and the contributors. And additional silent observers only if the space can still accommodate everyone comfortably for 90 minutes.
- Comfortable, discussion-suited spaces, ideally with net connections and presentation screens, and double-ideally arranged so that they feel connected, and there's a central common area from which everything else can be staged. Nobody should end up sitting somewhere bored because it's too hard to figure out where else they could be.
This can all be totally self-organizing, but most (if not all) of it should be self-organizing in advance: at least the topics, demos, moderators and contexts; and any amount of schedule- and space-assignment will only help. Among other virtues, laying out the options in advance allows participants to anticipate a good experience by actually planning it, and allows the group to potentially consolidate less popular topics and split (or clone) more popular ones. Clone with impunity, in fact. In a self-organizing conference, the participants are generally going to be self-motivated and self-filtered, so the chance of a large group having too much to say is far higher than the chance of splitting it in two and finding that either half runs out of ideas.
And if you can manage to have the unconference be self-reorganizing on the fly, then fantastic. Whether you plan for this or not is a question of your optimism. It sounds cool to say that you'll leave the 4:00 time-slot open until 3:30, for example, so that sessions inspired by earlier developments can spontaneously materialize. I bet it generally won't work, though. At most I might leave some of the spaces in a time-slot unbooked so that a planned session that overflows can be split or cloned on the fly. But ideally I'd state even this possibility ahead of time, so that the potential second session can be provisionally self-organized with a moderator and a share of the contributors. Remember that it's always easier to not do something you were prepared to do than it is to do something you weren't prepared to do.
The one other hope I have for future BarCamps and related gatherings (Geek Dinners, WebInno, RSS Alley experiments...) is that we find a way to get a little more participation from non-startups. I don't want to crowd out the self-employed, the unemployed and the aspiringly employed, but there are times when I feel like this is the Boston Technology Underdogs Club, rather than any kind of representative sample of the real community of people interested in new technology. Credit to Monster for hosting, but the people in the orange shirts weren't so much participating as ushering, and even if they had been, when the next largest concern represented is Tabblo, you know we're missing some people.
¶ the islands and bays are for paddlers · 29 May 2006
¶ public and private · 24 May 2006
Two notes of rather different kinds:
- I'll be at BarCampBoston in Maynard on June 3/4. I haven't been to any unconferences before, so I don't have any guess at how successful this one might be, nor even exactly what the criteria for success are, but it's free and it's experimental, and if it turns out to be interesting I'll be sure to brag about having been at the first one.
- Bethany has finally given in and started a blog, called rantum scoot, and her desire for feedback has overcome her public reticence, so she said it's OK for me to mention it now.