Hello all,
I'm pleased to announce the launch of the Wikimedia Foundation
research committee, with 11 initial members. You can find more
information and bios here:
http://meta.wikimedia.org/wiki/Research_Committee
The purpose of the committee is to help organize policies, practices
and priorities around Wikimedia-related research. Thanks to everyone
who applied to join the committee.
We're using a publicly archived mailing list as our primary means of
communication, if you're interested in following what's going on:
https://lists.wikimedia.org/mailman/listinfo/rcom-l
We'll also begin working on the pages on meta.wikimedia.org to make
them more useful, and we'll have a first IRC meeting soon.
All best,
Erik
--
Erik Möller
Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Robert Harris here again, the consultant looking at the
issues surrounding controversial content on Wikimedia projects. I wanted first
of all to thank all of you who have taken the trouble to once again weigh in on
a subject I know has been debated many times within the Wikimedia community. It
has been very valuable for me, a newcomer to these questions, to witness the
debate first-hand for several reasons. The first is to remind me of the
thinking behind various positions, rather than simply to be presented with the
results of those positions. And the second is as a reminder to myself to
remember my self-imposed rule of "do no harm" and to reflect on how easy
it is to break that rule, even if unintentionally.
So far, the immediate result for me of the dialogue has been to recognize that
the question of whether there is any problem to solve at all is a real question
that will need a detailed and serious response, as well as a recognition that
the possibility of unintended consequences in these matters is high, so caution
and modesty are virtues.
Having said that, I will note that I'm convinced that if there are problems to
be solved around questions of controversial content, the solutions can probably
best be found at the level of practical application. (And I'll note that
several of you have expressed qualified confidence that a solution at that
level may be findable.) That's not to say that the intellectual and
philosophical debate around these issues is not valuable -- it is essential, in
my opinion. It's just to note that not only is the "devil" in the
details, as a few of you have noted, but the "angel" may
be in the details as well -- that is, questions insoluble on
the theoretical level may perhaps find more areas of agreement on a practical level.
I'm not sure of that, but I'm presenting it as a working hypothesis at this
point.
My intended course of action over the next month or so is the following. I'm
planning to actually write the study on a wiki, where my thinking as it
develops, plus comments, suggestions, and re-workings will be available
for all to see. I was planning to begin that perhaps early in September. (A
presentation to the Foundation Board is tentatively scheduled for early
October). Between now and then, I would like to continue the kind of feedback
I've been getting, all of it so valuable for me. I have posted another set of
questions about controversy in text articles on the Meta page devoted to the
study (http://meta.wikimedia.org/wiki/Talk:2010_Wikimedia_Study_of_Controversial_C…), because my ambit does not just
include images, and text and images, in my opinion, are quite different forms of
content. As well, I will start to post research I've been collecting for
information and comment. I have some interesting notes about the
experience of public libraries in these matters (which have been struggling with
many of these same questions since the time television, not the Internet, was
the world’s new communications medium), as well as information on the policies
of other big-tent sites (Google Images, Flickr, YouTube, eBay, etc.) on these
same issues. I haven't finished collecting all the info I need on the latter,
but will say that the policies on these sites are extremely complex (although
not always presented as such) and subject within their communities to many of
the same controversies that have arisen in ours. We are not them, by any
means, but it is interesting to observe how they have struggled with many of
the same issues with which we are struggling.
The time is soon coming when I will lose the luxury of mere
observation and research, and will have to face the moment where I will enter
the arena myself as a participant in these questions. I’m looking forward to
that moment, with the understanding that you will be watching what I do with
care, concern, and attention.
Robert Harris
emijrp says:
"I want to make a proposal about external links preservation. Many times,
when you check an external link or a link reference, the website is dead or
offline. These websites are important, because they are the sources for the
facts shown in the articles. The Internet Archive searches for interesting
websites to save on its hard disks, so we can send them our external
links SQL tables (for all projects and languages, of course). They improve their
database, and we always have a copy of the source text to check when needed."
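The pipeline emijrp describes -- dump the external-link tables, hand the URLs to the Internet Archive -- can be sketched in a few lines. This is only an illustration: the regex assumes MediaWiki's `externallinks` table layout (an integer `el_from` page id followed by the `el_to` URL), and the endpoint shown is the Archive's public "Save Page Now" URL form.

```python
import re

def external_links(sql_insert):
    """Pull the el_to URL column out of an externallinks INSERT statement.
    Naive: assumes rows look like (el_from,'url','index') and that URLs
    contain no escaped quotes."""
    return re.findall(r"\(\d+,'(https?://[^']+)'", sql_insert)

def save_request(url):
    # "Save Page Now" accepts a target URL appended after /save/;
    # a real bot would rate-limit and batch these requests.
    return "https://web.archive.org/save/" + url

# A tiny hypothetical fragment of an externallinks dump:
dump = ("INSERT INTO externallinks VALUES "
        "(123,'http://example.org/page','org.example./page'),"
        "(456,'https://news.example.net/a','net.example.news./a');")
```

Something like `[save_request(u) for u in external_links(dump)]` would then yield one archive request per cited source.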
I would want to see the Internet Archive behave in a more ethically
accountable manner before any strong alliance is built with them on any
Wikimedia function. Namely, for the past 3 months, I have been working with
an attorney to appeal to the Internet Archive to remove a page from their
database that contains libelous information that has been expunged on the
"current" page on the original domain. The Internet Archive has been
entirely unresponsive to these mailed letters of request. I think the
Wikimedia Foundation shouldn't be lining up to cooperate with organizations
that don't even respond to important matters of defamation and libel.
Gregory Kohs
Thanks Gerard :)
We are waiting for our Wikinews.
Mardetanha
On Wed, Aug 25, 2010 at 4:30 PM,
<foundation-l-request(a)lists.wikimedia.org>wrote:
> ---------- Forwarded message ----------
> From: Gerard Meijssen <gerard.meijssen(a)gmail.com>
> To: Wikimedia Foundation Mailing List <foundation-l(a)lists.wikimedia.org>
> Date: Wed, 25 Aug 2010 10:46:37 +0200
> Subject: Re: [Foundation-l] Farsi wikipedia has reached 100 K article
> Hoi,
> Congratulations!
>
> It is also an auspicious moment as the localisation for the Persian
> language
> is now complete. This completes the localisation requirement for the
> requested Wikinews. At the language committee we are considering if it is
> ready to move on... I hope and expect a Wikinews in the not too distant
> future :)
> Thanks,
> GerardM
>
> On 25 August 2010 09:31, Mardetanha <mardetanha.wiki(a)gmail.com> wrote:
>
> > Farsi Wikipedia has reached 100 K articles.
> >
> > Mardetanha
> > _______________________________________________
> > foundation-l mailing list
> > foundation-l(a)lists.wikimedia.org
> > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
> >
>
>
Several wikis have used bots to increase their article count in the
past. Examples are the Volapük Wikipedia (vo), with 118,000 articles of
which about 117,000 are bot-created stubs, and the Aromanian Wikipedia
(roa-rup), with 61,000 articles at the moment and fewer than 10,000 before
the bot run.
Why do they use bots? Because they have a small userbase and want to
cover as many topics as possible with little effort. Most of the
languages that use bots are small languages without much written
literature, especially when it comes to non-fiction reference works.
There are no Aromanian encyclopedias, no or few reference books, no
databases, etc. Aromanians either have to learn and use foreign
languages or they will never be able to get information about places in
China or in America. The bot operator tried to change this by creating
stubs about places in China, America and elsewhere. (Geographic objects
are the easiest method to cover large numbers of topics without much
effort.) But he did a horrible job with really bad and uninformative
articles. I assume the reason for the bad articles is not any bad intent
but just lack of technical skills to program a more useful bot.
The easiest reaction to this is to just let them do their thing and
not care about it. The second easiest reaction is running a delete bot
and removing the bad articles because of their negative effects. But
both methods do not address the original motivation of the bot operator:
the wish to have information about a large range of entities available
in the wiki's language.
How can this be addressed?
We need a datawiki. That's not a new proposal; proposals for datawikis
have a long history. There was never a specific reason not to
implement one; it is just that, until now, nobody cared about it enough
for it to get implemented.
Here's my idea about it:
When a search does not yield any matching articles on the local wiki,
the software will look up the name in the central datawiki. If the
central datawiki contains a matching entry, this entry will be loaded.
It will contain an instance of a template filled with information about
the entity. E.g.:
{{Town
|name=Fab City
|country=Awesomia
|pop=89042
|lat=42.0
|lon=42.0
|elevation=12
|mayor=Adam Sweet
}}
The software will now look for a template called "Town" on the local
wiki. The local template [[Template:Town]] will for example look like this:
{| class="infobox"
|-
! Name
|
{{{name|}}}
|-
! Country
|
{{{country|}}}
|-
! Population
|
{{{pop|}}}
|-
! Mayor
|
{{{mayor|}}}
|-
! Elevation
|
{{{elevation|}}} above sea level
|-
! Geographic position
|
{{latlon| {{{lat|}}} | {{{lon|}}} }}
|}
'''{{{name|}}}''' is a place in [[{{countryname| {{{country|}}} }}]] with a population of {{{pop|}}}.
[[Category:{{countryname| {{{country|}}} }}]]
[[Category:Towns]]
Of course this template will be localized in the language of the local
wiki. This information will now be shown to the user who entered the
name in the search. (The above examples are just, well, examples. Real
entries would most likely contain much more data.)
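The lookup-and-render flow described above can be sketched in Python. Everything here is invented for illustration (the in-memory dicts stand in for the local wiki, the central datawiki, and the localized templates); a real implementation would hook into MediaWiki's search path:

```python
import re

# Hypothetical stand-ins for the local wiki's articles, the central
# datawiki's entries, and the local wiki's localized templates.
LOCAL_ARTICLES = {"Existingtown": "Existingtown is a place ..."}
DATAWIKI = {
    "Fab City": {"template": "Town",
                 "params": {"name": "Fab City", "country": "Awesomia",
                            "pop": "89042", "mayor": "Adam Sweet"}},
}
LOCAL_TEMPLATES = {
    "Town": "'''{{{name|}}}''' is a place in {{{country|}}} "
            "with a population of {{{pop|}}}.",
}

def render(template_text, params):
    # Replace each {{{param|default}}} placeholder with the supplied
    # value, falling back to the default after the pipe.
    def sub(match):
        return params.get(match.group(1), match.group(2) or "")
    return re.sub(r"\{\{\{(\w+)\|([^}]*)\}\}\}", sub, template_text)

def search(title):
    # 1. Try the local wiki first.
    if title in LOCAL_ARTICLES:
        return LOCAL_ARTICLES[title]
    # 2. On a miss, fall back to the central datawiki and render the
    #    entry through the local template of the matching name.
    entry = DATAWIKI.get(title)
    if entry and entry["template"] in LOCAL_TEMPLATES:
        return render(LOCAL_TEMPLATES[entry["template"]], entry["params"])
    return None  # dead end, as today
```

Here `search("Fab City")` finds no local article but still returns a localized infobox sentence built from the central data, while `search("Nowhere")` stays a dead end.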
The datawiki can be filled with information about any entity that has a
certain set of recurring features (almost anything that has an infobox on
Wikipedia), especially geographic objects. These objects also have the
advantage that their names are usually international (at least among
Latin-script languages).
The advantages are:
- when the central datawiki is filled with info (most of which can be
bot-extracted from existing Wikipedia infoboxes), every Wikipedia --
however small its userbase may be -- has instant access to information about
hundreds of thousands or millions of objects; it just needs to
implement some infobox templates
- this solution also eliminates the problem of outdated information in
infoboxes (a problem even en.wp suffers from). The data only needs
to be updated in one single place instead of in every single Wikipedia
separately
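Bootstrapping the datawiki from existing infoboxes, as the first point suggests, is mostly mechanical. A minimal sketch of the extraction step (deliberately naive -- it ignores nested templates, comments, and values that themselves contain pipes):

```python
import re

def parse_infobox(wikitext, template_name):
    """Pull |key=value pairs out of the first {{template_name ...}}
    transclusion. Naive: assumes no nested templates inside values."""
    m = re.search(r"\{\{" + re.escape(template_name) + r"(.*?)\}\}",
                  wikitext, re.DOTALL)
    if not m:
        return None
    params = {}
    for part in m.group(1).split("|")[1:]:
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip()] = value.strip()
    return params

# A made-up article using the Town infobox from the example above:
article = """{{Town
|name=Fab City
|country=Awesomia
|pop=89042
}}
'''Fab City''' is a town ..."""
```

A bot run over an existing Wikipedia dump could feed the resulting dicts straight into datawiki entries.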
With the work done by Nikola Smolenski on the Interlanguage extension
(<http://www.mediawiki.org/wiki/Extension:Interlanguage>) it shouldn't
be too hard to implement.
In view of the potential usefulness I cannot think of any argument that
speaks against this in general. The prospect of providing at least basic
information about millions of objects in all the different languages
seems really great to me.
Many native speakers of smaller languages use foreign-language wikis as
their default wiki because the chance that their native wiki has an
article on the topic is small. If the number of topics for which a search on
the native wiki yields results rises from "some thousands" to
"millions", there is a chance that users will finally accept their
native wiki as their default wiki. The entries will be basic, but if
interwikis (of existing articles, not generated from the datawiki) are
included in the info obtained from the datawiki, the more extensive
data is just one click away, while an unsuccessful search on the local
wiki (as you get it now) is a dead end.
It certainly is worth putting some resources into it.
What do you think?
Marcus Buck
User:Slomox
I am pleased to share the position description for the new Chapter
Development Director position [1] at the Wikimedia Foundation. We are now
open for applications and aim to fill this position with an outstanding
candidate by November. Thank you to those who shared perspectives on the
Chapter Development role on the Meta page [2] that was set up a few weeks
back. The process of defining ways for chapters and WMF to work together
effectively is an ongoing one that I look forward to continuing with all of
you and with our new staff member. Please do forward on the posting and
suggest candidates. It would be great to find candidates from within the
Wikimedia movement.
The position description draws from the input received on Meta as well as
informal discussions and input received from various sources. [3] The
description provides a window into how I am thinking about the relationship
between chapters and WMF. Let me share some of my thoughts in the interest
of openness and transparency. These are just preliminary thoughts and I
welcome ongoing feedback about where I'm on point and where I might be
off-base.
Let me start by saying that I am very committed to the chapter mission to
"empower and engage people around the world to collect and develop
educational content under a free license or in the public domain, and to
disseminate it effectively and globally."[4] In my analysis (and I think most
share this conclusion), we are still a long way from realizing this mission,
even in our largest chapters. We, in the Global Development team, are
investing in Chapter Development roles (we plan to add to the team in the
future) to support the realization of the mission.
For me, success in five years would be to be in a position where 75% of
chapters are rated “effective” and over 50% are rated “very
effective”, based on an objective set of mission-focused metrics I would like
to develop collaboratively with chapters over the next year. In addition, I
would like there to be at least 60 chapters that are reasonably [5]
representative of the global population. It would be great if we generate
some shared goals in the coming months.
I have designed the WMF role as articulated in the position description
with success metrics in mind and with an orientation toward mutual
cooperation and a strong respect for the fact that virtually all of the
chapter participants today are volunteers and that our future strength will
remain in the initiative, commitment and creativity of volunteers, even as
we add staff and spend money in chapters and at WMF. I expect that the WMF
staff (as well as capacity building support on organizational development)
will help make voluntary action easier, more scalable and more sustainable,
particularly for the leaders who have expressed a certain level of fatigue
in carrying the full weight of their chapters.
It is important to be clear that I see the Foundation and chapters as
independent organizations that are both ultimately accountable to the
Wikimedia movement. In my team's work, I would like to earn our role by
demonstrating our value. We do and should have clear agreements about the
work we do together along with our financial relationships. These may
require some trade-offs; however, these should lay out mutual expectations
that encourage positive, productive work that supports the Wikimedia
mission. As such, I would expect that we are able to negotiate and abide by
contractual agreements that help us all do our work.
A final word on openness. This note is an attempt to be open about how I'm
thinking now. I want my team's (and our) work to be open and transparent to
the movement. We should share publicly our thinking, reports on our
activities, evaluations of our successes and failures and a standard report
card of our effectiveness. I don't see this as a mechanism for reward or
punishment, but as a way to open us up to the movement at large and most
importantly to enable us to be a learning community where we can critically
assess our work and adapt. I hope this note is a reasonable start.
I look forward to working with you all, to having your feedback and to
bringing a strong Chapter Development Director on board soon.
Barry
[1] See
http://wikimediafoundation.org/wiki/Job_openings/Chapter_Development_Direct…
[2] See http://meta.wikimedia.org/wiki/Chapter_development
[3] Berlin Chapters meeting discussions, strategy project, Wikimania
discussions, generous proactive input from Delphine Menard reflecting on her
prior experience, thoughts provided by Sebastian Moleski and an email
exchange with Bence Domokos
[4] See http://meta.wikimedia.org/wiki/Wikimedia_chapters
[5] Added “Reasonably” caveat as we might not be in a position to have
chapters in places such as the People's Republic of China for policy reasons
Thought that some of you may find this interesting:
In the Weizmann Institute of Science in Rehovot, Israel, there's a
MediaWiki-based project called Proteopedia: http://proteopedia.org/ .
It is a Wikipedia-inspired database of proteins and other molecules.
The project is supervised by professors from the institute, so the
data is supposed to be reliable. Most of the data is CC-BY-SA / GFDL,
so it can be reused in Wikipedia (there are some exceptions; see Terms
of Service at the bottom of the page).
Note that the images of the molecules are animated; you can click them
and then move them to see them at different angles and zoom in and out
using the mouse wheel. Also, try clicking the green links under the
images - it shows you different views of the molecule (I'm a linguist,
so I don't know what it actually means, but if you're into biology or
chemistry, you'll probably understand). This is done using Jmol, an
LGPL-licensed Java applet, so maybe it can be used in Wikipedia in the
future.
--
אָמִיר אֱלִישָׁע אַהֲרוֹנִי
Amir Elisha Aharoni
http://aharoni.wordpress.com
"We're living in pieces,
I want to live in peace." - T. Moore
Today, I have been blocked for half a year on ru.wikiversity by an admin,
SergeyJ
http://ru.wikiversity.org/w/index.php?title=%D0%9E%D0%B1%D1%81%D1%83%D0%B6%…
despite the fact that I have never edited Russian Wikiversity.
Is this normal practice? I remember the story of Russian Wikisource,
when an admin, Ramir, was speedily desysopped by stewards for blocking
users arbitrarily. But at least those users had made some edits. I am not sure
what I should do now, since my SUL shows a block on one of the projects.
Note that SergeyJ has been placed by the Arbitration Committee of Russian
Wikipedia under editing limitations for gross violations. I am currently
serving as a member of Arbitration Committee (I was not involved in the
decision on his editing limitations).
Cheers
Yaroslav