One of the tricky things about using metrics to make decisions is that many metrics can be manipulated, more or less easily, to affect the decisions themselves. It's a sort of social version of the uncertainty principle, where the uncertainty in this case is the integrity of the metrics.
That's not to say that metrics should not influence policy or other decisions; it simply means that good metrics are typically measurements of indirect and aggregate data.
-Wil
On Sat, May 10, 2014 at 8:59 PM, James Salsman jsalsman@gmail.com wrote:
Philippe Beaudette wrote:
... asking which of those things people support *in a vacuum* [is] not the question at hand....
That is true. A community survey leading to revision of strategic goals should be asked of actual contributors, e.g., by selecting editors from the wikis' recent changes around the clock, in proportion to the volume of edits each hour, and asking them, "Which do you think would help editors the most: _A_ or _B_?" Ideally, each respondent's total number of contributions would be recorded along with their pairwise preference, so that it is possible to produce both an unweighted ranking and rankings in which the number of contributions is given greater weight.
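The tallying step described above can be sketched briefly. This is an illustrative example only, with made-up option names and edit counts, not anything from the original post: each sampled editor states a pairwise preference, and we produce both a one-editor-one-vote ranking and one weighted by contribution count.

```python
# Hypothetical sketch: tally pairwise "A vs. B" survey responses two ways,
# unweighted (one editor, one vote) and weighted by each respondent's
# total contribution count. All data here is illustrative.
from collections import defaultdict

# Each response: (preferred option, respondent's edit count)
responses = [
    ("A", 12),
    ("B", 340),
    ("A", 5),
    ("B", 2800),
    ("A", 50),
]

unweighted = defaultdict(int)
weighted = defaultdict(int)
for choice, contributions in responses:
    unweighted[choice] += 1            # one editor, one vote
    weighted[choice] += contributions  # vote weighted by contribution count

def rank(tally):
    """Options ordered from most to least supported."""
    return sorted(tally, key=tally.get, reverse=True)

print(rank(unweighted))  # ['A', 'B'] -- more editors picked A
print(rank(weighted))    # ['B', 'A'] -- high-volume editors favored B
```

Note how the two rankings can disagree, which is exactly why the post suggests publishing both: the weighting choice is itself a policy decision.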
Then the resulting rankings should be submitted to the ED and Board to consider questions such as what would spread work too thin, and where the opportunities and roadblocks lie for the top-ranked preferences.
Step one: collect the data. Because of the nature of such a survey, most people can do that themselves, though ideally the Foundation would want to be at the forefront of collecting and publishing the underlying information.
Best regards, James Salsman
Wikimedia-l mailing list, guidelines at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe