We are looking for Wikipedians to participate in a survey. The survey is
designed to help us understand group decision-making and Wikipedia’s
Articles for Deletion (AfD) process. The research is being carried out
under the terms of the University of Western Ontario's Code of Conduct; it
will not lead to any sales follow-up, and no individual (or organization)
will be identified in our reporting.
If you are an adult Wikipedian, we would be grateful if you could spare
approximately 10-15 minutes to complete this survey.
As a token of our gratitude, for each completed survey we will make a
charitable donation of CAD$2 to the Wikimedia Foundation. If you have any
questions, please contact Lu Xiao at lxiao24 (at) uwo.ca.
To start the survey, please click ONCE on the link below:
http://fluidsurveys.com/s/WikipediaSurvey/
Please try to complete the survey by August 1, 2014.
Thank you very much for your time, we really value your input.
Sincerely,
UWO Wikipedia Research Team
Hi there wikiresearch buddies,
I'm going to run the first in a series of three wiki-research-related
technical talks tomorrow, starting at 1500 UTC (8am PDT). *I would love
your help fielding questions from the audience during and after the talk!*
The talks are *focused on helping Wikimedians without tons of technical
skills learn how to gather and manipulate Wikipedia data* using the
MediaWiki API, the slave DBs on WMFLabs, and several other useful
community-created tools. More information is here
<http://blog.wikimedia.org/2014/07/11/how-to-research-beyond-wikimetrics/>. You
will be able to view the presentation here
<https://plus.google.com/events/ccmjivinemvg2brae5db9b3jsqk>.
There are about 30 people signed up to watch the presentation as of now.
Because of the format (streaming video + a buggy and often delayed Q&A
feed), there won't be much opportunity for people to get help or get their
questions answered by me during the talk.
So *I plan to point people who have questions or need other help during and
after my talk to wiki-research-l.* I will also point them to the
#wikimedia-research channel on Freenode IRC
<http://webchat.freenode.net/?channels=wikimedia-research>.
*Implications for wiki-research-l:*
Questions may hit this list any time during or after my talk. There is no
obligation to answer, but if someone asks a question and you think you
might be able to help, it would be awesome if you could do so :)
*Implications for #wikimedia-research:*
Some people may start asking questions on IRC during the talk. Others may
show up later (if they are having trouble creating a Labs account, are
stuck on a query, etc.). If you're a regular IRC user and are free at that
time, it would be great if you could hang out in the chan during my talk.
There may be no questions at all, but it never hurts to have someone there!
If you are generally interested in helping newbies with research, you might
want to make #wikimedia-research one of the IRC chans that you auto-join
when you log in.
Cheers,
Jonathan
--
Jonathan T. Morgan
Learning Strategist
Wikimedia Foundation
User:Jmorgan (WMF) <https://meta.wikimedia.org/wiki/User:Jmorgan_(WMF)>
jmorgan(a)wikimedia.org
Dear all,
Your suggestions are needed on ways to construct sensible baselines,
most likely based on data sets *external* to Wikipedia projects, for the
*expected* development of Wikipedia language versions.
Such baselines should ideally indicate, given the availability of
language users and content (numbers based on external data sets), how many
articles and active users a certain language version should be expected
to have.
As previous research has suggested, Wikipedia activities need
mutually-reinforcing cycles of participation, content, and readership, so
it is expected that the development of a Wikipedia language version is
conditioned by the availability of (digitally) literate users and (possibly
digitized) content/sources.
So the assumption is:
Wikipedia Activities = Some function of (available users and content)
For example, the major non-English writing languages in the world
such as Arabic, Chinese, Spanish, etc., may have different numbers of
Internet users and digital content. These numbers indicate the basis on
which a Wikipedia language version can develop.
One practical use of this baseline measurement is to better
categorize/curate activities across Wikipedia language versions. We can
then better come up with expected values of Wikipedia development, and thus
categorize language versions accordingly based on the *external conditions*
of available/potential users and content.
Another use of this baseline measurement is to better compare the
development of different language versions. It should help answer questions
such as whether the Korean language version is *underdeveloped* on
Wikipedia platforms when compared with a language version that enjoys a
similar number of available/potential users and a similar amount of content.
The closest existing external baseline is probably the number of
speakers of each language. My hunch is that it does not adequately capture
the available/potential users and content, especially the
digitally-ready ones.
So I welcome you to add to the following list any external
indicators (and possibly data sources) that may help to construct such a
baseline.
==Indicators==
* Internet users for each language (probably an approximate measurement
based on CLDR Territory-Language information and ITU internet penetration
rates).
* Number of books published annually in different languages (suggested data
sources? Does ISBN have a database or stat report on published languages?)
* Number of web pages returned by major search engines on the queries of
"Wikipedia" in different languages, excluding results from Wikimedia
projects.
* Number of scholarly publications across languages (suggested data
sources?)
* Number of major newspaper publications across languages (suggested data
sources?)
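To make the first indicator concrete, here is a rough Python sketch of the approximation it suggests: combine per-territory language shares (as in CLDR Territory-Language data) with ITU-style internet-penetration rates. All figures below are invented placeholders, not real CLDR or ITU numbers:

```python
# territory: (population, internet_penetration, {language: speaker share})
# All values are invented placeholders, not real CLDR/ITU data.
territories = {
    "T1": (50_000_000, 0.85, {"ko": 0.98}),
    "T2": (85_000_000, 0.30, {"ar": 0.95}),
    "T3": (33_000_000, 0.56, {"ar": 0.74, "fr": 0.33}),
}

def internet_users_by_language(territories):
    """Approximate Internet users per language by summing, over all
    territories, population x penetration x language share."""
    users = {}
    for population, penetration, languages in territories.values():
        for lang, share in languages.items():
            # Simplifying assumption: a language's speaker share applies
            # uniformly to that territory's internet users.
            users[lang] = users.get(lang, 0.0) + population * penetration * share
    return users

print(internet_users_by_language(territories))
```

The real calculation would, of course, read the CLDR and ITU data sets directly; the uniform-share assumption is the main source of error.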
Please share your thoughts!
--
han-teng liao
"[O]nce the Imperial Institute of France and the Royal Society of London
begin to work together on a new encyclopaedia, it will take less than a
year to achieve a lasting peace between France and England." - Henri
Saint-Simon (1810)
"A common ideology based on this Permanent World Encyclopaedia is a
possible means, to some it seems the only means, of dissolving human
conflict into unity." - H.G. Wells (1937)
(Apologies for cross-posting)
We would like to invite you to attend the *2014 SOCIAL MEDIA & SOCIETY
CONFERENCE* to be held on September 27-28, 2014 in Toronto, Canada.
We have a great line-up of presentations. The full list of accepted
submissions (4 panels, 57 papers, and 58 posters) is now available at
https://SocialMediaAndSociety.com/?page_id=1048
This year's keynotes are:
* Dr. Keith N. Hampton, Rutgers University, USA -
https://SocialMediaAndSociety.com/?p=922
* John Weigelt, National Technology Officer, Microsoft Canada -
https://SocialMediaAndSociety.com/?p=932
The general registration is now open, and the "early-bird" deadline is
July 15, 2014:
https://SocialMediaAndSociety.com/?page_id=541
If you have any questions about the conference, please email
smsociety14(a)easychair.org
Hope you can join us for this exciting event and contribute to this
emerging research area!
--
#SMSociety14 Organizing Committee
http://SocialMediaAndSociety.com
Twitter hashtag: #SMSociety14
Anatoliy Gruzd, Dalhousie University
Barry Wellman, University of Toronto
Philip H. Mai, Dalhousie University
Jenna Jacobson, University of Toronto
FYI
---------- Forwarded message ----------
From: Bogdan State <bogdanstate(a)gmail.com>
Date: 2014-07-09 5:24 GMT-04:00
Subject: DYAD 2014 - help us spread the word!
To: gciampag(a)indiana.edu
Dear Giovanni,
We would kindly ask you for your help in disseminating our Call for Papers
(enclosed below) to all interested colleagues and students. Please do
forward it widely on social media channels. Thank you in advance for
helping us make DYAD a great workshop!
Sincerely,
The DYAD organizers
Rossano Schifanella,
Bogdan State,
Yelena Mejova
===========================================================================
What's in a dyad? Interaction and Exchange in Social Media
Workshop website: http://dyad.di.unito.it
Conference website: http://socinfo2014.org/
Conference: SocInfo 2014 @ Barcelona, Spain in November 10-13, 2014
Submission deadline: September 8, 2014
A great deal of work has used computational methods to investigate the
intensity, structure, topic and sentiment of social interactions. But
neither information alone nor structure in isolation can be considered
fully responsible for the complexity of social life, whether on- or
offline. Online interactions can be conceptualized as a social exchange,
and also as a process through which meaning emerges through dialogue
between the two partners, i.e. a dyad. We aim for the DYAD workshop to
examine online interactions from a number of rich perspectives. Our invited
speakers include Cristian Danescu-Niculescu-Mizil (Max Planck Institute SWS),
Bruno Gonçalves (Aix-Marseille Université), and Carlos Diuk-Wasser (Facebook,
Inc.)
The workshop welcomes submissions on topics related to computational
approaches to the study of social interaction, relevant to fields as
diverse as social psychology, behavioral economics, sociolinguistics as
well as computational linguistics, web science, and network science.
Examples of relevant submissions include, but are not limited to, the
following topics:
- detection of social expectations and norms
- status relations and power imbalances
- detection and measurement of social support, such as in critical
situations related to illness, bullying or grieving
- self-disclosure, turn-taking, deference in interpersonal communication
- identity
- topic development and change in online conversations
- emotion dynamics in conversation threads
- social dynamics in comment threads (politics, news, interest-based
communities)
- development of language and the self through social interaction
- language variation across communities and social relationships (e.g.
distinguishing friends from colleagues, etc.)
- persuasive language
- pragmatics of language
Formatting
Submitted works can come as either full 10-page or short 4-page papers,
formatted according to the Springer LNCS paper formatting guidelines (
http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Full papers
will be given 30 minutes and short papers 20 minutes for presentation. The
authors can choose to publish their papers along with the main conference
proceedings, or withhold publication for future work considerations.
Important Dates
Submission deadline: September 8, 2014
Notification of acceptance: September 26, 2014
Camera-ready due: October 5, 2014
Workshop date: November 10, 2014
Conference dates: November 10-13, 2014
Find the latest news and PC lists on our website at http://dyad.di.unito.it
DYAD Workshop organizers,
Rossano Schifanella, University of Torino <schifane(a)di.unito.it>
Bogdan State, Stanford University & Facebook <bogdanstate(a)fb.com>
Yelena Mejova, Qatar Computing Research Institute <ymejova(a)qf.org.qa>
--
Giovanni Luca Ciampaglia
✎ 919 E 10th ∙ Bloomington 47408 IN ∙ USA
☞ http://www.inf.usi.ch/phd/ciampaglia/
✆ +1 812 287-3471
✉ gciampag(a)indiana.edu
I apologize, this is really more of a social experiment than research. I created a very simple bot (50 lines of CoffeeScript) that tweets when there has been an anonymous edit from an IP in the US Congress:
http://twitter.com/congressedits
You can find the code in the anon Github project:
https://github.com/edsu/anon
I ended up generalizing the code so you can listen to custom IP ranges, and post using a Twitter account of your choosing. So anon is also being used by Government of Canada Edits:
https://twitter.com/gccaedits
This work was actually inspired by a similar bot built with IFTTT to monitor edits from the UK Parliament:
https://twitter.com/parliamentedits
The IFTTT approach worked there because there are only two IP addresses to watch (Web proxies), which are easy to add recipes for. I’d be interested in any feedback/suggestions you might have.
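For readers curious how such a bot decides when to tweet, the core check (is this anonymous editor's IP inside a watched range?) can be sketched in a few lines of Python. The real anon bot is written in CoffeeScript and loads its ranges from a config file; the CIDR blocks below are placeholders, not the actual congressional addresses:

```python
import ipaddress

# Placeholder CIDR ranges -- NOT the real congressional blocks,
# which the actual bot reads from its configuration.
WATCHED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_watched(ip_string):
    """Return True if an anonymous editor's IP falls in a watched range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in WATCHED_RANGES)

# Anonymous edits report the editor's IP as the username, so a bot can
# run this check on every anonymous edit in the recent-changes stream
# and tweet on a match.
print(is_watched("192.0.2.42"))   # inside the first range
print(is_watched("203.0.113.5"))  # outside both ranges
```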
//Ed
We need overview quality-minded metrics for the different language versions
of Wikipedia. Otherwise, the current "number games" played by bots across
certain language versions will continue to distort the direction and focus
of editorial development. I therefore propose an altmetric of
"do-not-spread-oneself-too-thin" to counterbalance the situation.
(Sorry I was late in engaging the conversation of "[Wiki-research-l] Quality
on different language version
<http://www.mail-archive.com/wiki-research-l@lists.wikimedia.org/msg03168.ht…>".
It is a follow-up reply and a suggestion to this discussion thread.)
For example, in the Chinese Wikipedia community, there are currently
discussions about the ranking of Chinese Wikipedia in terms
of number of articles, and how the *neighboring* versions (those that have
similar numbers of articles) use bots to generate new articles.
# The stats report generated and used by the Chinese community to compare
itself against neighboring language versions:
#* Link
<http://zh.wikipedia.org/wiki/Wikipedia:%E7%BB%9F%E8%AE%A1/%E4%B8%8E%E9%82%B…>
#* Google translated
<https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=http%3A%2F%2F…>
# One current discussion:
#* Link
<http://zh.wikipedia.org/wiki/Wikipedia:%E4%BA%92%E5%8A%A9%E5%AE%A2%E6%A0%88…>
#* Google translated
<https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=…>
# One recently archived discussion:
#* Link
<http://zh.wikipedia.org/wiki/Wikipedia:%E4%BA%92%E5%8A%A9%E5%AE%A2%E6%A0%88…>
#* Google translated
<https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=http%3A%2F%2F…>
To counterbalance the situation of such nonsensical comparison and
competition, I personally think it is better to have an altmetric in place
of the crude (and often distorting) measure of the number of articles.
One would expect a better encyclopedia to contain a set of core articles of
human knowledge.
Indeed, Meta has a list of 1000 articles that "every Wikipedia should
have":
http://meta.wikimedia.org/wiki/List_of_articles_every_Wikipedia_should_have
We can use this to generate a quantifiable metric of the development of the
core articles in each language version, perhaps using the following numbers:
* number of references (total and per article)
* number of footnotes (total and per article)
* number of citations (total and per article)
* number of distinct wiki internal links to other articles
* number of good and featured articles (judged by each language version's
community)
Based on the above numbers, it is conceivable to come up with a metric that
measures both the depth and breadth of the quality of the core articles. I
admit that other measurements can and should be applied, but still the
above numbers have the following merits:
* they reflect the nature of Wikipedia as dependent on other reliable
secondary and primary information sources.
* they can be applied across languages automatically without the need to
analyze texts, which would require more tools and raise issues of
comparability.
For the sake of simplicity, let us say that one language version (possibly
English or German) has the highest score; that language version can then
serve as the baseline for comparison. Say this benchmark
language version has:
# a quality-metric score QUAL (from the vital 1000)
# a total article count QUAN (from the existing metric)
Then the "do-not-spread-oneself-too-thin" quality metric can be calculated
as:
QUAL/QUAN
(It can be further discussed whether logarithmic scales should be applied
here.)
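As a toy illustration (all numbers below are invented, and "qual" stands in for whatever aggregate of the per-article counts listed above is eventually chosen), the metric could be computed like this:

```python
# Invented per-language-version numbers: "qual" aggregates references,
# footnotes, citations, internal links and good/featured counts over the
# 1000 vital articles; "quan" is the total article count. Codes are
# placeholders, not real language versions.
stats = {
    "aa": {"qual": 95_000, "quan": 4_500_000},
    "bb": {"qual": 12_000, "quan": 1_900_000},  # heavy bot creation
    "cc": {"qual": 15_000, "quan": 280_000},
}

def thinness(qual, quan):
    """The "do-not-spread-oneself-too-thin" metric: core-article quality
    divided by total article count. Higher means less "watery"."""
    return qual / quan

# Rank versions from least to most spread-thin.
ranked = sorted(stats, reverse=True,
                key=lambda v: thinness(stats[v]["qual"], stats[v]["quan"]))
for version in ranked:
    s = stats[version]
    print(version, round(thinness(s["qual"], s["quan"]), 4))
```

Note how a version with few total articles but well-developed core articles ("cc") outranks a bot-inflated one ("bb") under this metric, which is exactly the intended reversal.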
The gist of this "quality metric" is to redirect the obsession with the
number of articles towards the important core articles, hoping to get more
references, footnotes, citations, internal links and good/featured articles
for the core 1000. It will hopefully indicate which language versions are
too "watery", simply spreading themselves too thin with inconsequential
short articles.
Let us have a discussion here [Wiki-research-l], before we extend the
conversation to [Wikimedia-i].
Best,
han-teng liao
(In response to the wording of "Wiki research impact task force" that Aaron
Halfaker proposed and I seconded.)
The word choice between "public engagement" and "research impact" carries
several differences, as detailed below. I was made aware of these
differences during a digital humanities summer school last year.
"Public engagement" sounds more likely to be two-way interactions where the
feedback from the public is integrated into the interactions. On the other
hand, "research impact" often entails the one-way influence from the
research itself to other social actors or records.
A similar analogy is the difference between "knowledge exchange" and
"knowledge transfer".
The preferred word choice (public engagement vs. research impact, or
knowledge exchange vs. knowledge transfer) may differ across disciplines.
My speculation is that it depends on where the research falls on the
spectrum from naturalistic/neutral detachment to humanist immersion.
In this specific discussion on assessing the impact of Wiki research, I
would expect that "knowledge exchange"/"public engagement" may serve the
open knowledge community of practice better. On the other hand, evaluating
the strength of the "knowledge transfer"/"research impact" of Wiki research
outcomes has its own merit. However, that appears to me more like an
altmetrics exercise of comparing Wiki research projects among themselves.
In the more general discussion of public engagement and research impact, it
is worthwhile to note that several universities in the UK
<https://www.publicengagement.ac.uk/support-it/manifesto-public-engagement/s…>
have signed the "Manifesto for public engagement
<https://www.publicengagement.ac.uk/support-it/manifesto-public-engagement>"
(sadly my university has not signed it).
Also, the 2014 UK Research Excellence Framework (REF) recognised
"public engagement" as a route to impact. Thus, the need for UK researchers
to collect, measure and analyze evidence of impact is expected to rise.
There are already university institutions (e.g. Centre for Public
Engagement in Bristol) and toolkits (e.g. TIDSR: Toolkit for the Impact of
Digitised Scholarly Resources from my current institution) that aim to help
academic researchers to measure the usage and impacts of, not just their
research publications, but also their digitised scholarly resources (e.g.
datasets, software codes, etc.)
Thus, based on the above observations, I think it is more specific to
use the terms "public engagement" and "community engagement", because the
focus is on the collective judgement of the Wikipedia community and the
wider public to which the Wikimedia Foundation must answer. I personally
prefer these terms because they keep the concept of "publicness".
I still remember Sue Gardner describing the role of Wikimedia as a host of
important "public media" when I visited the Foundation years ago.
Thus, I admit that my previous suggestion to assess the two aspects of
impact, i.e. "public engagement" (potentially new users) and "community
engagement" (existing users), may be a bit "narrowed", as Aaron Halfaker
correctly pointed out. However, I do not see it as a point of weakness,
because the terms narrow down to tangible aspects of research
impact surrounding the existing users of Wikimedia projects and the
potential users (i.e. the general public). Of course the research impact on
various academic fields and/or disciplines is overlooked this way, but I
personally do not think that is the first priority for Wikimedia as an
open-knowledge non-profit institution. In this regard, the narrowness means
specificity to Wikimedia's open knowledge agenda.
This does not mean that the term "research impact" is not important or
useful; indeed many researchers try to score "research impact" points.
In this regard, I would highly recommend that the Wikimedia Foundation and
community help to document/measure/publicize "public engagement" as a
*route to impact*. It would be nice if the foundation and community gave
out awards, twitter/weibo/newsletter mentions, and even web click reports
(on the mentioned/linked wiki research) as third-party evidence of impact.
"Public engagement" in this regard refers to the public impacts that are
not normally recorded and calculated by academic communities. The Wikimedia
platforms can thus fill in the gap by providing a place to
document/measure/publicize
the latest scholarly researches regarding Wiki research, and thereby
establishing a "public route" to positive impact on the global knowledge
movement.
I hope the long reply above makes some sense (if not, send me a private
email for clarification).
Best,
han-teng liao
2014-07-02 16:41 GMT+01:00 Aaron Halfaker <ahalfaker(a)wikimedia.org>:
> Han-teng,
>
> Could you expand on what you are imagining with these two aspects of
> impact? Also, I'd like to think that impact wouldn't be so narrow as to be
> based on engagement only. Surely, researchers can produce things that are
> highly impactful without explicitly "engaging" with the volunteer
> community.
>
> -Aaron
>
>
> On Wed, Jul 2, 2014 at 10:32 AM, h <hanteng(a)gmail.com> wrote:
>
>> I second Aaron's two suggestions, with a slight change of wording in the
>> first:
>> (1) change "impact" to "public engagement" (potentially new users) or
>> "community engagement" (existing users)
>>
>> han-teng liao
>>
>>
>> 2014-07-02 21:15 GMT+07:00 Aaron Halfaker <aaron.halfaker(a)gmail.com>:
>>
>>> Given that it seems we agree with Piotr's desire for research about
>>> Wikipedia to lead to useful tools and insights that can be directly applied
>>> to making Wikipedia and other wikis better, what might be a more effective
>>> strategy for encouraging researchers to engage with us or at least release
>>> their work in forms that we can more easily work with?
>>>
>>> Here's a couple of half-baked ideas:
>>>
>>> - *Wiki research impact task force* -- contacts authors to encourage
>>> them to release code/datasets/etc. and praise them publicly when they do --
>>> could be part of the work of newsletter reviewers. There are many
>>> researchers on this list who work directly with Wikimedians to make sure
>>> that their research has direct impact and their awesomeness is worth our
>>> appreciation and public recognition.
>>> - *Yearly research award* -- for the most directly impactful
>>> research projects/researchers similar to
>>> https://meta.wikimedia.org/wiki/Research:Wikimedia_France_Research_Award.
>>> One of the focuses of the judging could be the direct impact that the work
>>> has had.
>>>
>>> -Aaron
>>>
>>>
>>> On Wed, Jul 2, 2014 at 7:05 AM, Heather Ford <hfordsa(a)gmail.com> wrote:
>>>
>>>> Apologies. You're right, Han-Teng. The reviewer looks to be Piotr
>>>> Konieczny who I think is on this mailing list?
>>>>
>>>> Heather Ford
>>>> Oxford Internet Institute <http://www.oii.ox.ac.uk> Doctoral Programme
>>>> EthnographyMatters <http://ethnographymatters.net> | Oxford Digital
>>>> Ethnography Group <http://www.oii.ox.ac.uk/research/projects/?id=115>
>>>> http://hblog.org | @hfordsa <http://www.twitter.com/hfordsa>
>>>>
>>>>
>>>>
>>>>
>>>> On 2 July 2014 12:58, h <hanteng(a)gmail.com> wrote:
>>>>
>>>>> Heather, I am not sure who contributed that. Probably not Nemo. If this
>>>>> issue of newsletter is correctly attributed, the contributors include: Taha
>>>>> Yasseri, Maximilian Klein, Piotr Konieczny, Kim Osman, and Tilman Bayer. My
>>>>> suggestion is only a personal one, and I am not sure if it is against
>>>>> policies to make a few edits once the newsletter is out.
>>>>>
>>>>> Thanks again to the contributors of the newsletter, my life is a bit
>>>>> easier and more interesting because of your work.
>>>>>
>>>>>
>>>>>
>>>>> 2014-07-02 15:35 GMT+07:00 Heather Ford <hfordsa(a)gmail.com>:
>>>>>
>>>>> +1 Thanks for your really thoughtful comments, Joe, Han-Teng.
>>>>>>
>>>>>> Nemo, would you be willing to add a note to the review and/or
>>>>>> contact the researcher?
>>>>>>
>>>>>> Best,
>>>>>> Heather.
>>>>>>
>>>>>> Heather Ford
>>>>>> Oxford Internet Institute <http://www.oii.ox.ac.uk> Doctoral
>>>>>> Programme
>>>>>> EthnographyMatters <http://ethnographymatters.net> | Oxford Digital
>>>>>> Ethnography Group <http://www.oii.ox.ac.uk/research/projects/?id=115>
>>>>>>
>>>>>> http://hblog.org | @hfordsa <http://www.twitter.com/hfordsa>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 2 July 2014 05:17, h <hanteng(a)gmail.com> wrote:
>>>>>>
>>>>>>> The tone of the sentence in question
>>>>>>>
>>>>>>> 'it is disappointing that the main purpose appears to be
>>>>>>> completing a thesis, with little thought to actually improving
>>>>>>> Wikipedia'
>>>>>>>
>>>>>>> could have been written as
>>>>>>>
>>>>>>> 'It would be more useful for the Wikipedia community of practice
>>>>>>> if the author discussed or even spelled out the implications of the
>>>>>>> research for improving Wikipedia.'
>>>>>>>
>>>>>>> This suggestion is based on my own impression that [Wiki-research-l]
>>>>>>> has mainly two groups of readers: community of practice and community of
>>>>>>> knowledge. It is okay to have some group tensions for creative/critical
>>>>>>> inputs. Still, a neutral tone is better for assessment, and an encouraging
>>>>>>> tone might work a bit better to encourage others to fill the *gaps* (both
>>>>>>> practice and knowledge ones).
>>>>>>>
>>>>>>> Also, factors such as the originally intended audience and word
>>>>>>> limits may determine how much a writer can do for *due weight* (similar to
>>>>>>> [[WP:due]]). If the original (academic) author failed to address the
>>>>>>> implications for practice satisfactorily, a research newsletter contributor
>>>>>>> can point out what s/he thinks the potential/actual implications are. (My
>>>>>>> thanks to the research newsletter's voluntary contributors for
>>>>>>> their unpaid work!)
>>>>>>>
>>>>>>> While I understand that the monthly research newsletter has its
>>>>>>> own perspective and interests, different from academic newsletters, it does
>>>>>>> not sacrifice the integrity of the newsletter to be gentle and specific. I
>>>>>>> would recommend a minor edit to the sentence, as the newsletter could be
>>>>>>> read by anyone in the world, not just Wikipedians. It is
>>>>>>> public/published for all readers, so please do not assume the readers
>>>>>>> know the context of Wikipedia research.
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>> han-teng liao
>>>>>>>
>>>>>>>
>>>>>>> 2014-07-01 19:37 GMT+07:00 Heather Ford <hfordsa(a)gmail.com>:
>>>>>>>
>>>>>>>> Thanks so much for the newsletter [1]! Always a great read...
>>>>>>>>
>>>>>>>> But have to just say that comments like this: 'it is disappointing
>>>>>>>> that the main purpose appears to be completing a thesis, with
>>>>>>>> little thought to actually improving Wikipedia' [2] are really harsh and a
>>>>>>>> little unfair. The student is studying Wikipedia - they can hardly only be
>>>>>>>> interested in completing their thesis. We need to remember that researchers
>>>>>>>> are at very different stages of their careers, they have very different
>>>>>>>> motivations, and different levels of engagement with the Wikipedia
>>>>>>>> community, but that *all* research on Wikipedia contributes to our
>>>>>>>> understanding (even if as a catalyst for improvements). We want to
>>>>>>>> encourage more research on Wikipedia, not attack the motivations of people
>>>>>>>> we know little about - particularly when they're just students and
>>>>>>>> particularly when this newsletter is housed on the Wikimedia Foundation's
>>>>>>>> domain.
>>>>>>>>
>>>>>>>> Best,
>>>>>>>> Heather.
>>>>>>>>
>>>>>>>> [1] https://meta.wikimedia.org/wiki/Research:Newsletter/2014/June
>>>>>>>> [2]
>>>>>>>> https://meta.wikimedia.org/wiki/Research:Newsletter/2014/June#.22Recommendi…
>>>>>>>>
>>>>>>>> Heather Ford
>>>>>>>> Oxford Internet Institute <http://www.oii.ox.ac.uk/> Doctoral
>>>>>>>> Programme
>>>>>>>> EthnographyMatters <http://ethnographymatters.net/> | Oxford
>>>>>>>> Digital Ethnography Group
>>>>>>>> <http://www.oii.ox.ac.uk/research/projects/?id=115>
>>>>>>>> http://hblog.org | @hfordsa <http://www.twitter.com/hfordsa>
>>>>>>>>
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Wiki-research-l mailing list
>>>>>>>> Wiki-research-l(a)lists.wikimedia.org
>>>>>>>> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>