[teampractices] FYI: Measuring the effectiveness of VE process interventions

David Strine dstrine at wikimedia.org
Tue Aug 11 23:55:11 UTC 2015


RE anecdotal evidence: in my experience, if you start asking people to
bring evidence to support their complaints, they will do so more often
over time. People appreciate it when you want to find a success metric
for their issue.

RE the low response rate:
Try keeping the survey as light as possible.

You can also gamify it. Plant an easter egg in it. The first person to
finish the survey and say a secret phrase to you gets something.



On Tue, Aug 11, 2015 at 4:38 PM, Joel Aufrecht <jaufrecht at wikimedia.org>
wrote:

> *tl;dr:* Metrics targeted exactly to the specific things we want to
> change may be more helpful than general opinion surveys.
>
> David was extremely helpful in clarifying my thinking on how I should
> measure the effectiveness of the VE process work I've been doing.  So I
> want to brain-dump before it evaporates.  Context: Neil and I did a general
> process survey of the VE team and their stakeholders in April/May,
> identified a handful of challenges, and proposed five specific
> interventions, all of which are currently underway.  The most
> people-intensive intervention is negotiating Service Level Understandings
> between the VE team and each stakeholder group, and using these discussions
> to uncover contradictions in goals and/or resource levels.  Arthur
> suggested I use surveys to measure if these SLU discussions have any effect.
>
> David proposes getting much more specific.  For each SLU discussion,
> identify several specific, measurable actions to be taken, and then
> measure them.  Try to capture both input and output/outcome.  For
> example, if QA complained (I'm inventing this as a fake example) that VE
> keeps releasing patches with IE bugs, then the VE team might respond:
> well, you don't test our patches in time and we have to release untested
> code.  So they might agree on a lead time and a level of QA availability
> for testing.  Then, we could measure an input (the time it takes for
> each patch to be tested) and an outcome (the number of critical IE bugs
> found after release).
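>
> To make that concrete, here is a minimal sketch in Python of how those
> two metrics could be computed.  The patch records and field names are
> invented for illustration; in practice the data would come from Gerrit,
> Phabricator, or a QA tracking sheet.
>
>     from datetime import datetime
>
>     # Hypothetical patch records (invented data for illustration).
>     patches = [
>         {"id": "p1",
>          "submitted": datetime(2015, 8, 1),
>          "tested": datetime(2015, 8, 3),
>          "critical_ie_bugs_after_release": 0},
>         {"id": "p2",
>          "submitted": datetime(2015, 8, 2),
>          "tested": datetime(2015, 8, 7),
>          "critical_ie_bugs_after_release": 2},
>     ]
>
>     # Input metric: average days from patch submission to QA testing.
>     lead_times = [(p["tested"] - p["submitted"]).days for p in patches]
>     avg_lead_time = sum(lead_times) / float(len(lead_times))
>
>     # Outcome metric: total critical IE bugs found after release.
>     ie_bugs = sum(p["critical_ie_bugs_after_release"] for p in patches)
>
>     print("Average QA lead time: %.1f days" % avg_lead_time)
>     print("Critical IE bugs after release: %d" % ie_bugs)
>
> Tracking both numbers per patch would let us see whether an agreed-on
> lead time actually moves the outcome.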
>
> Because we don't know what to focus on until after the SLU meeting, we
> probably don't have any Before metrics.  Where possible we can
> reconstruct Before metrics, but we would probably have to accept mostly
> qualitative/anecdotal Before data.
>
> I will also proceed with a survey for all stakeholders included in the
> process review (~30 people), but this doesn't have to be specific to the
> SLU intervention.  Instead, I can survey for all of the ~10 challenges
> identified:
> 1) Do you agree that X is a problem?
> 2) How does this affect you?
>
> And run the survey once now, as a retroactive baseline ("As of April/May,
> did you think ...") and then in a few more months, when most of the
> recommendations are implemented and mature.
>
> However, it occurs to me that I did run a similar survey in May/June, and
> got 13 responses.  So maybe I won't do a retro baseline, but instead just
> run a future survey later this year.
>
>
>
> *--Joel Aufrecht*
> Team Practices Group
> Wikimedia Foundation
>

