Pursuant to prior discussions about the need for a research
policy on Wikipedia, WikiProject Research is drafting a
policy regarding the recruitment of Wikipedia users to
participate in studies.
At this time, we have a proposed policy, and an accompanying
group that would facilitate recruitment of subjects in much
the same way that the Bot Approvals Group approves bots.
The policy proposal can be found at:
http://en.wikipedia.org/wiki/Wikipedia:Research
The Subject Recruitment Approvals Group mentioned in the proposal
is being described at:
http://en.wikipedia.org/wiki/Wikipedia:Subject_Recruitment_Approvals_Group
Before we move forward with seeking approval from the Wikipedia
community, we would like additional input on the proposal and would
welcome help improving it.
Also, please consider participating in WikiProject Research at:
http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Research
--
Bryan Song
GroupLens Research
University of Minnesota
I've been looking to experiment with node.js lately and created a
little toy webapp that displays updates from the major language
wikipedias in real time:
http://wikistream.inkdroid.org
Perhaps like you, I've often tried to convey to folks in the GLAM
sector (Galleries, Libraries, Archives and Museums) just how actively
Wikipedia is edited. GLAM institutions are increasingly interested in
"digital curation", and I've sometimes displayed the IRC activity at
workshops to demonstrate the sheer number of people (and bots) that
are actively engaged in improving the content there, in the hope of
making the Wikipedia platform part of their curation strategy.
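If you want to peek at that firehose yourself, here is a rough sketch
in PHP of tailing the recent changes IRC feed; it assumes
irc.wikimedia.org still serves the feed on port 6667 and that
#en.wikipedia carries English Wikipedia edits, and the nick is just a
placeholder:

<?php
// Rough sketch: tail the Wikimedia recent-changes IRC feed and echo each edit.
$sock = fsockopen( 'irc.wikimedia.org', 6667, $errno, $errstr, 30 );
if ( !$sock ) {
    die( "Could not connect: $errstr ($errno)\n" );
}
$nick = 'rc-demo-' . rand( 1000, 9999 );  // placeholder nick, pick your own
fwrite( $sock, "NICK $nick\r\nUSER $nick 0 * :recent changes demo\r\n" );

while ( !feof( $sock ) ) {
    $line = fgets( $sock, 4096 );
    if ( $line === false ) {
        continue;
    }
    // Answer keep-alive pings or the server drops the connection.
    if ( strncmp( $line, 'PING', 4 ) === 0 ) {
        fwrite( $sock, 'PONG' . substr( $line, 4 ) );
        continue;
    }
    // Numeric 001 means registration finished; now we can join the channel.
    if ( strpos( $line, ' 001 ' ) !== false ) {
        fwrite( $sock, "JOIN #en.wikipedia\r\n" );
        continue;
    }
    // Each PRIVMSG to the channel describes one edit; print it as-is.
    if ( strpos( $line, 'PRIVMSG #en.wikipedia' ) !== false ) {
        echo $line;
    }
}
fclose( $sock );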
Anyhow, I'd be interested in any feedback you might have about wikistream.
//Ed
We are glad to announce the inaugural issue of the Wikimedia Research Newsletter [1], a new monthly survey of recent scholarly research about Wikimedia projects.
This is a joint project of the Signpost [2] and the Wikimedia Research Committee [3] and follows the publication of two research updates in the Signpost; see also last month's announcement on this list [4].
The first issue (which is simultaneously posted as a section of the Signpost and as a stand-alone article in the Wikimedia Research Index) includes five in-depth reviews of papers published over the last few
months and a number of shorter notes, for a total of 15 publications covering both peer-reviewed research and results published in research blogs. It also includes a report from the Wikipedia research workshop
at OKCon 2011 and highlights from the Wikimedia Summer of Research program.
The following is the TOC of issue #1:
• 1 Edit wars and conflict metrics
• 2 The anatomy of a Wikipedia talk page
• 3 Wikipedians as "Janitors of Knowledge"
• 4 Use of Wikipedia among law students: a survey
• 5 Miscellaneous
• 6 Wikipedia research at OKCon 2011
• 7 Wikimedia Summer of Research
• 7.1 How New English Wikipedians Ask for Help
• 7.2 Who Edits Trending Articles on the English Wikipedia
• 7.3 The Workload of New Page Patrollers & Vandalfighters
• 8 References
We are planning to make the newsletter easy to syndicate and subscribe to. If you would like your research to be featured, a CFP or an event you organized to be highlighted, or to join the team of contributors, head over to this page to find out how: [5] We hope to make this newsletter a favorite read for our research community, and we look forward to your feedback and contributions.
Dario Taraborelli, Tilman Bayer (HaeB)
on behalf of the WRN contributors
[1] http://meta.wikimedia.org/wiki/Research:Newsletter/2011-07-25
[2] http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost
[3] http://meta.wikimedia.org/wiki/Research:Committee
[4] http://lists.wikimedia.org/pipermail/wiki-research-l/2011-June/001552.html
[5] http://meta.wikimedia.org/wiki/Research:Newsletter
--
Dario Taraborelli, PhD
Senior Research Analyst
Wikimedia Foundation
http://wikimediafoundation.org
http://nitens.org/taraborelli
Hi all!
As part of our investigation of the social side of Wikipedia in SoNet
group at Fondazione Bruno Kessler (Trento - Italy), Paolo Massa and I
created Manypedia.
http://manypedia.com/
On Manypedia, you can compare the Linguistic Points Of View (LPOV) of
different language Wikipedias. For example (but this is just one of the
many possible comparisons), are you wondering whether the communities of
editors on the English, Arabic and Hebrew Wikipedias are crystallizing
different histories of the Gaza War? Now you can check the "Gaza War"
page from the English and Arabic Wikipedias (both translated into
English) or from the Hebrew Wikipedia (translated into English).
Manypedia uses the Google Translate API to automatically translate the
compared page from a language you don't know into a language you do.
And this is not limited to English as the first language: for example,
you can look up a page on the Italian Wikipedia (or on any of 56
language Wikipedias) and compare it with the same page from the French
Wikipedia, translated into Italian. This way you can see how the page
differs on another language's Wikipedia even if you don't know that
language.
We hope that this project sounds interesting to you, and maybe you
could help us make it better. We're really interested in any kind of
feedback, so please write to us!
P.S.: If you're a Facebook addict, please like Manypedia ;)
http://www.facebook.com/pages/Manypedia/202808583098332
Federico Scrinzi
--
f.
"I didn't try, I succeeded"
(Dr. Sheldon Cooper, PhD)
() ascii ribbon campaign - against html e-mail
/\ www.asciiribbon.org - against proprietary attachments
Hi all,
as some of you know, Wikimedia Deutschland is a Use Case Partner in the
EU-funded research project RENDER, which is concerned with the diversity
of knowledge and information.
To collect and discuss ideas and thoughts regarding knowledge diversity
and bias in Wikipedia, we have set up a project page at
http://meta.wikimedia.org/wiki/RENDER.
There you can find more details about the project, its main goals and
our tasks, and a bibliography of the relevant literature we have
gathered so far.
Please let us know on the discussion page if any information is missing
or any descriptions are unclear. Perhaps you also know of additional
literature on diversity, quality and user behaviour in Wikipedia that we
have overlooked.
Also, feel free to contact us if you would like to work with us on the
topics of the research project. We are open to external partners and
welcome their input. We look forward to a fruitful discussion and
cooperation.
Regards,
Angelika
--
Angelika Adam
Project Manager
Wikimedia Deutschland e.V.
Eisenacher Straße 2
10777 Berlin
Tel.: +49 30 219158260
http://wikimedia.de
*Help WIKIPEDIA become the first digital World Heritage site recognized
by UNESCO.
Sign the online petition <https://wke.wikimedia.de/wke/Main_Page>!*
****Support free knowledge with an SMS. Simply send WIKI to 81190. With
5 euros you help secure the availability and further development of
Wikipedia.****
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/681/51985.
Hi folks,
Any special:export experts out there?
I'm trying to download the complete revision history for just a few
pages. The options, as I see it, are using the API or special:export.
The API returns XML that is formatted differently than special:export
and I already have a set of parsers that work with special:export data
so I'm inclined to go with that.
I am running into a problem: when I try to use POST so that I can
iteratively grab revisions in increments of 1000, I am denied (I get a
"WMF servers down" error). If I use GET, it works, but then I can't use
the parameters that would let me iterate through all the revisions.
Code pasted below. Any suggestions as to why the server won't accept POST?
Better yet, does anyone already have a working script/tool handy that
grabs all the revisions of a page? :)
Thanks, all! (Excuse the cross-posting; I usually hang out on the
research list, but thought folks on the developers list might have
insight.)
Andrea
class Wikipedia {

    public function __construct() { }

    // Fetch up to 1000 revisions of $pageTitle from Special:Export.
    // ($initialRevision is not yet wired into the request; the offset is
    // currently hard-coded to 1.)
    public function searchResults( $pageTitle = null, $initialRevision = null ) {
        $url = "http://en.wikipedia.org/w/index.php?title=Special:Export&pages="
             . $pageTitle . "&offset=1&limit=1000&action=submit";

        $curl = curl_init();
        curl_setopt( $curl, CURLOPT_URL, $url );
        curl_setopt( $curl, CURLOPT_RETURNTRANSFER, 1 );
        curl_setopt( $curl, CURLOPT_POST, true );
        curl_setopt( $curl, CURLOPT_USERAGENT,
            "Page Revisions Retrieval Script - Andrea Forte - aforte(a)drexel.edu" );
        $result = curl_exec( $curl );
        curl_close( $curl );
        return $result;
    }
}
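For context, a hypothetical driver loop might look like the sketch
below; it assumes searchResults() is modified to pass $initialRevision
through as the offset parameter instead of the hard-coded offset=1
(Special:Export appears to treat the offset as a timestamp and caps the
limit at 1000), and it would be appended to the script above:

// Hypothetical driver loop (sketch only): assumes searchResults() substitutes
// $initialRevision into the offset parameter rather than the hard-coded 1.
$wiki   = new Wikipedia();
$page   = 'Some_page_title';   // placeholder title
$offset = '1';                 // "1" starts from the earliest revision

do {
    $xml = $wiki->searchResults( $page, $offset );
    // ... hand $xml to the existing Special:Export parsers here ...

    // Collect the revision timestamps in this batch and resume from the last one.
    preg_match_all( '#<timestamp>([^<]+)</timestamp>#', $xml, $matches );
    $count = count( $matches[1] );
    if ( $count > 0 ) {
        $offset = end( $matches[1] );  // may repeat one revision; deduplicate downstream
    }
} while ( $count >= 1000 );  // a short batch means we have reached the end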
Yep, that did it, thanks! (I had never tried including both export and
exportnowrap, just one or the other, and the output defined things like
revision IDs as element attributes instead of child elements.)
Andrea
On Thu, Jul 14, 2011 at 2:35 PM, Roan Kattouw <roan.kattouw(a)gmail.com> wrote:
> On Thu, Jul 14, 2011 at 6:58 PM, Andrea Forte <andrea.forte(a)gmail.com> wrote:
>> I'm trying to download the complete revision history for just a few
>> pages. The options, as I see it, are using the API or special:export.
>> The API returns XML that is formatted differently than special:export
>> and I already have a set of parsers that work with special:export data
>> so I'm inclined to go with that.
>>
> You can use api.php?action=query&export&exportnowrap&titles=Foo|Bar|Baz
> , that should give you the same format.
>
> Roan Kattouw (Catrope)
>
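For the archives, here is a minimal sketch of the API-based fetch
described above; the page titles are placeholders, and this is not
necessarily the exact code used:

<?php
// Minimal sketch: fetch pages in Special:Export format via the API, as
// suggested above. The titles below are placeholders.
$titles = array( 'Foo', 'Bar', 'Baz' );
$url = 'http://en.wikipedia.org/w/api.php?action=query&export&exportnowrap'
     . '&titles=' . urlencode( implode( '|', $titles ) );

$curl = curl_init();
curl_setopt( $curl, CURLOPT_URL, $url );
curl_setopt( $curl, CURLOPT_RETURNTRANSFER, 1 );
curl_setopt( $curl, CURLOPT_USERAGENT, 'Page revisions retrieval sketch' );
$xml = curl_exec( $curl );
curl_close( $curl );

// $xml should now be in the same <mediawiki>/<page>/<revision> layout as
// Special:Export output, so existing Special:Export parsers can consume it.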
--
:: Andrea Forte
:: Assistant Professor
:: College of Information Science and Technology, Drexel University
:: http://www.andreaforte.net