Greetings Wiki Loves Monuments list members,
We really appreciate all your interest in this report. We regularly seek
input from program leaders in interpreting the data and developing next
steps for learning from case examples. This applies to volunteer program
leaders and grantees alike, and we encourage continued participation in
this shared learning effort. All our channels are open, so you may reach
us however you choose. The goal of our team and our reports is to serve
movement partners, so we want to make sure we’re hearing and responding to
your main concerns about this year’s iteration of the Evaluation Reports.
That said, a lot of this thread seemed to be based on misunderstandings. We
wanted to clear some of them up, particularly around the report’s
background and inputs:
(1) Data collection efforts
Several people in this thread have asked how we gathered the data used in
these reports. We drew on the voluntary data collection survey; grant
reports and their linked event pages, blogs, or supplemental reporting;
and online tools that are also available to the community. This year’s
project was first announced in September with a clear outline of the
metrics sought. Data collection and input of metrics were open from
September through December initially and then extended through February.
The extension gave grantees, whose program data were first mined from
their grant reports, two additional months to connect us with specific
program leaders to fill in the gaps, and it also served as a last call to
the community alongside the list of identified programs we published in
January.
(2) Data limitations
Some people in the thread have been concerned about the limitations of the
data. We agree that we should be transparent about this, so each report has
a dedicated page that reviews the limitations of the data captured.
Importantly, the Wiki Loves Monuments evaluation report is part of an
expanded portfolio of the beta reports modeled and discussed last year. As
a set of reports, we present the overall limitations of the reporting, and
the issues with data access across each program, in the reporting overview.
In those sections, we explicitly present the response rates of program
leaders who reported directly; for Wiki Loves Monuments, that portion is
39%.
(3) Diversity of goals
The issue of diverse goals for programs is also included among the overall
limitations and highlighted on the Wiki Loves Monuments limitations page,
where we point out that, yes, eight different goals were selected by at
least 50% of those reporting directly. These reports are part of a
discovery process through which we have engaged in ongoing dialogue about
challenges with metrics for quality, tool accessibility, tracking and
privacy issues, issues with valuation across different socio-economic
contexts, varied interests and foci, and other complexities of measuring
impact across the movement. We will continue to have those conversations as
we look to improve measurement strategies for understanding movement-wide
efforts and impact.
Some of you were also concerned about over-simplification, and that nuance
is lost in simple summary statements such as “The average Wiki Loves
Monuments contest …”, which “...hurt ... to see.” We wrote these TL;DRs
explicitly in response to feedback on the beta version of these reports
last year. When we proposed the summaries then, we were told they would be
appreciated. Truthfully, these can be really painful statements to have to
write, because we know they are, by definition, over-simplifications.
However, we made that compromise in order to make the information
accessible to many different audiences of readers.
Importantly, rolling up metrics across several different points of program
implementation is a difficult task. By definition it sacrifices complexity,
as does developing the easy-to-digest snippets of information requested by
so many who are inundated by information in their inboxes. So, yes, if you
want the details, please skip the TL;DRs and read the more detailed
narrative, or use them to guide your interest to where you wish to read
more deeply. There is a lot of data to wade through, and we have worked to
make it as accessible as possible. We have tried to format the reports in a
linguistically and visually consistent fashion to make these different
reading routes available, but differentiated, for different reader
preferences. Please give us feedback on how this is working and continue to
share potential solutions, as we are always open to improvements.
This email does not answer every question raised on the thread; since we
expect some of these questions will be asked again in the future, we have
outlined the most important questions asked, and answered them, on the
report talk page. Please let us know if we have overlooked any, and join us
there so that we can continue the discussion and have the information
documented in a central location for use in future strategy.
On behalf of the Learning and Evaluation team, thank you for your time and
participation in learning together.
 Data Collection Announcement and Blog Announcement
 Tools (Wikimetrics, GLAMorous, CatScan, Quarry)
 Evaluation Reports (beta)
 Blog on “Filling in the Gaps”
 Overall reporting limitations and data access
 Reporting Overview (if you are new to the reports and evaluation
initiative, we suggest starting at the Important Definitions page and
working your way through the other tabs)
 Wiki Loves Monuments report limitations
 Wiki Loves Monuments evaluation report talk page
Jaime Anstee, Ph.D.
Program Evaluation Specialist
Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us make it a reality!