¶ alpha beta go · 13 April 2006 essay/tech
Say you're making software to do X. The audience for this is people who want to do X, and the evaluation criterion for the software to do X is that it actually can be used to do X in some respectable and hopefully distinguished manner.
Before you get to the released version of your software to do X, there are two useful stages you will probably want to pass through: beta and alpha. These are not percentages of completion or levels of toleration of error; they are themselves tools for different purposes than the released version. X beta exists to provide an evaluable preview of X. X alpha exists to provide an evaluable preview of X beta.
It is not only possible, but critical, to substitute these purposes of beta and alpha back into the original statement. Like this, for beta:
Say you're making software to {provide an evaluable preview of X}. The audience for this is people who want to {evaluate a preview of X}, and the evaluation criterion for the software to {provide an evaluable preview of X} is that it actually can be used to {evaluate a preview of X} in some respectable and hopefully distinguished manner.
Note that this is different. The people interested in previewing a tool to do X are not necessarily the same people as the ones who want to do X, and their goal is not doing X but seeing how your software is going to do X.
Likewise with alpha:
Say you're making software to {provide an evaluable preview of an evaluable preview of X}. The audience for this is people who want to {evaluate a preview of an evaluable preview of X}, and the evaluation criterion for the software to {provide an evaluable preview of an evaluable preview of X} is that it actually can be used to {evaluate a preview of an evaluable preview of X} in some respectable and hopefully distinguished manner.
That is, the beta is a tool for seeing how the real thing is eventually going to work, and the alpha is a tool for seeing how the beta is eventually going to work. The audience for alpha is thus double-meta, and is likely to be very small, sometimes even smaller than the team working on it.
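Just to make that recursion mechanical: the statements above are one template applied to a purpose wrapped in successive layers of "an evaluable preview of." A throwaway sketch (the function and variable names are mine, purely illustrative):

```python
# A throwaway illustration of the substitution device: the beta statement
# is the template applied once, and the alpha statement is the template
# applied to a purpose already wrapped one meta-level up.

def statement(thing: str) -> str:
    """The essay's template with the product and its evaluation filled in."""
    return (f"Say you're making software to provide an evaluable preview of "
            f"{thing}. The audience for this is people who want to evaluate "
            f"a preview of {thing}.")

product = "X"
print(statement(product))                               # the beta: one meta-level
print(statement(f"an evaluable preview of {product}"))  # the alpha: double-meta
```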
Even more crucially, the substitution works in the other direction, too: the beta is a released version of software for evaluating a preview of X, and the alpha is a released version of software for evaluating a preview of the beta. So while the X beta may have omissions and errors in its performance of X without failing, if it has omissions and errors in its performance of the evaluation and preview of X, it fails.
Thus the released version of an online calendar for shared scheduling, for example, fails if it can't be used for shared scheduling. But the beta fails if it can't be used for experimenting with shared scheduling. So the beta doesn't require that all the shared scheduling features work, but it does require that all the experimentation features are ready.
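Read operationally, that gives the beta a different ship gate than the release. A hypothetical sketch (the feature names and the product/evaluation split are my own labels, not a prescription):

```python
# Hypothetical beta gate: each feature either serves the product ("do X")
# or serves evaluation ("experiment with X"). The beta can ship with
# product gaps, but never with evaluation gaps.

features = {
    # name: (kind, works)
    "create shared event":    ("product",    True),
    "recurring events":       ("product",    False),  # unfinished: tolerable in beta
    "import sample calendar": ("evaluation", True),
    "purge and restart":      ("evaluation", True),
}

def ready_for_beta(features: dict) -> bool:
    """True when every evaluation feature works, whatever the product's state."""
    return all(works for kind, works in features.values() if kind == "evaluation")

print(ready_for_beta(features))  # True: the experimentation features are ready
```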
If your software involves manipulating user data of any kind, then here are some things that probably have to be already working, without appreciable error, in your beta (a few of them are sketched in code below):
- import
- reimport
- importing useful sample data
- distinguishing data by import batch
- undoing import batches
- deleting data
- deleting data en masse
- full data purge and restart of beta experience
- comparing your treatment of this data to the treatment of it wherever it came from
- comparing different treatments of this data within your own software
- export/sync/feed of this data to any evaluation-critical downstream integration points
- sync/feed of source data from any evaluation-critical upstream integration points
- full operation of a representative subset of anything critical that will exist in multiples (integration points, environments, styles, templates, other users, etc.)
- any configuration settings integral to personal evaluation
- clear explanation of beta limitations
If any of these aren't done, don't ship your beta yet. If some of them aren't even in your design, you've got more designing to do.
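To make a few of those list items concrete (distinguishing data by import batch, undoing import batches, full purge), here is a minimal sketch; the class and method names are illustrative assumptions, not anyone's actual API:

```python
# Minimal sketch of batch-aware import: every record remembers its import
# batch, so one batch can be undone cleanly and the whole store can be
# purged for a fresh start on the beta.
import itertools

class Store:
    def __init__(self):
        self.records = {}                    # record id -> (batch id, data)
        self._record_ids = itertools.count()
        self._batch_ids = itertools.count()

    def import_batch(self, rows):
        """Import rows as one batch; return the batch id for later undo."""
        batch = next(self._batch_ids)
        for row in rows:
            self.records[next(self._record_ids)] = (batch, row)
        return batch

    def undo_batch(self, batch):
        """Delete every record that arrived in the given batch."""
        self.records = {rid: (b, row) for rid, (b, row) in self.records.items()
                        if b != batch}

    def purge(self):
        """Full data purge: wipe everything so the evaluator can restart."""
        self.records.clear()

store = Store()
first = store.import_batch(["alice", "bob"])
store.import_batch(["carol"])
store.undo_batch(first)                      # reimport without residue
assert [row for _, row in store.records.values()] == ["carol"]
store.purge()                                # restart the beta experience
assert not store.records
```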