Mark wrote:
>The problem with this view is that Wikipedia by nature cannot "decide"
>what to publish---we "publish" anything that anyone posts,
>automatically and without review, because that is how wikis work. What
>we *continue* to publish is the result of the consensus of editors.
Well, yes and no. True, the WP community doesn't control the posting of
content, so "it" (whatever it is) can't decide, initially. But (and let
me stress, I'm no lawyer), there is something and someone behind WP: the
Wikimedia Foundation and Jimmy Wales. I'm not sure what the *exact*
relationship between the Foundation, Wales and WP is, but I would guess
that in the case of a suit the Foundation and Wales would be named as
defendants, for they in effect publish WP.
>I don't, in general, see a problem with this. If something is
>incorrect in any way, it should be corrected or removed (whether it is
>libelous or not is irrelevant---non-libelous misinformation has no
>place either). The "what if [x]" scenarios seem pretty far-fetched.
"Should" is the operative word here. Yup. The problem is that such
misinformation, often, is not changed. WP has one of the same problems
as communism: everything is everyone's responsibility, so everything is
no one's responsibility. So what "should" be done often isn't. Far
fetched? I imagine Mr. Seigenthaler has already contacted his lawyer
(but I hope not).
Best,
Marshall Poe
The Atlantic Monthly
www.memorywiki.org
Tony Sidaway wrote:
>Meanwhile if you can compile a list of important articles that have
>not fared well under the wiki (I submit that this could be a fairly
>brief list, perhaps 1,000 or so) perhaps a team of interested
>individuals can take those articles off the main wiki and hothouse
>them for a while--or even, as you suggest, rewrite from scratch. I'd
>be interested to see if this approach produces significantly better
>articles.
http://en.wikipedia.org/wiki/Wikipedia:List_of_articles_all_languages_shoul…
We don't even need to disrupt normal operation of the wiki - we need
people to get all those articles up to Featured quality and hopefully
actual Featured Article status.
(And the same for other languages.)
All those who talk about how we need to stop writing stuff and just
polish up what we have: there's a starter list, a bit over 0.1% of the
articles on en:. Start a WikiProject for it. It's thoroughly
worthwhile work and needs people to slog through it.
- d.
Marshall wrote:
> Re the first amendment, and the author's failure to edit his own
> article, etc.
> The Wikipedia project itself bears some responsibility here. If you
> are going to provide a soapbox for folks to stand on and exercise
> their first amendment rights, you are in part responsible for what
> they say.
Pawel responded:
>>Many Wikipedians live in countries with no "first amendment".
And that, we all agree, is a shame. Free speech should be protected
everywhere, period. But it also needs to be used responsibly
everywhere, period.
So the point is the same: if you are going to broadcast someone's speech
(free or not), you are in part responsible for what they say. I'm not a
lawyer, but Wikipedia *seems* to be acting as a publisher and perhaps
editor. What will Wikipedia (or the Wikimedia Foundation) do if it is
sued for libel? Could this happen?
All the Best,
Marshall Poe
The Atlantic Monthly
www.memorywiki.org
Hi,
I really really need some big whitepapers on Wikipedia and other
wikimedia project usage in schools (pros, cons, aims, analysis,
possibilities).
Our aim is to build a big document and use it in governmental affairs.
If anyone has such papers prepared in major languages, let me know,
thanks!
Domas
In my last email, I detailed some of the broad principles that we follow
(or should follow) in the creation of new language editions of
Wikipedia, and in this email I will sketch a basic proposal for a
solution to our problems in this area.
To recap: I think there is broad community consensus for the following
principles:
1. To encourage the creation of new Wikipedia editions in all
legitimate living natural languages, in an orderly way which involves
fostering a real community to care for wikis. We want to make it easy
for newcomers to get excited and build, while at the same time trying to
keep from having too many dead wikis around.
For real languages, then, we only want some small indication of interest
from some native (or at least fluent) speakers. We want to make this
part relatively easy.
2. To discourage the creation of new Wikipedia editions in dialects
which do not significantly differ from existing Wikipedias. We want to
keep from being hoaxed, and from falling into political traps.
For dialects, then, we want to require a much higher threshold before
allowing the Wikipedia -- we need a good reason to start it. A
"Bavarian" Wikipedia proposal would need a much, much stronger rationale
before we start it than a "German" one would. Obviously.
3. To discourage the creation of new Wikipedia editions in constructed
languages. I do not say "forbid" here, merely "discourage". Similar to
dialects, constructed languages pose many risks for us of politics,
hoax, etc.
-----
Therefore, I propose that we have a two-tier system and a formal
committee formed of experts (or some outside language standards body if
we can find one which is suitable) which will declare if a proposed
language is an actual language or merely a dialect. This committee will
issue for us an advisory opinion. We are not required, as a community,
to follow the advisory opinion, but the advisory opinion will set a
differential threshold for creation.
For those which are declared to be languages, we could continue with
something like the existing process.
For those which are declared to be dialects, we would have a presumption
against creation, but that presumption could be overcome by a broad
community vote.
--Jimbo
Dear Wiki*edians & especially polyglots :
In the spirit of Thanksgiving, being thankful as ever for these
polylingual projects (I just won a bet with a Chilean about his home
town, referencing Spanish wp articles :), I would like to propose a
new one (gasp!) : an all-language wiki devoted to language overviews,
language proposals, interface localization (text, images, &c), and
translations of core Wikimedia and MediaWiki messages. Similar
proposals have been floated before, with little discussion; here is
another attempt on the theme.
* Name : babel.wikimedia.org ?
* Main Page : a list of languages by, say, # of native
speakers/readers [1], with prominent links to other views (by
language-cluster, by geographic region, by article-count, by reader
popularity...), information on translators & translation, and language
statistics [2].
* Content : All localizable MediaWiki strings, in 200+ languages [3].
Portals for each language, describing work being done to develop that
language, with portal-content in a few core languages + the lang in
question. All localizable custom strings for Wikimedia projects, in
those languages. Key strings and messages (such as site-wide notice
templates) which are used regularly and needed in every language.
Policy and discussion pages about new language creation.
** Optional content : Other translation efforts, such as global press
releases, which work through a high volume of content (thousands of
edits in dozens of languages) in a short period of time.
** Related project : a specific "interface-translation wiki" for the
latest MediaWiki installation, which auto-updates the localizable
strings in the latest MW version.[4]
Please comment or indicate support for the idea on meta:
http://meta.wikimedia.org/wiki/Proposals_for_new_projects#Babel_Wiki
[1] Not just readers; if there's no written language, audio output and
input are excellent ways to transfer information... we've been doing
that for eons longer than we've been passing around printed bytes
[2] % completion of interface translation; # of self-identified
'translators' in and out of the lang; full list of wikimedia projects
in that lang w/origin-dates, article and active-editor counts; links
to key pages on target wikis
[3] Eventually significantly more than 200.
[3'] It may be useful to have one template/page per string per
language, to facilitate automatic conversion from wiki to other
formats; multiple views of l10n strings (e.g., 2 langs side-by-side on
one page as in Special:Allmessages). ~200 languages x (200 custom
strings x 5 Wikimedia Projects + 1000 standard strings) = O(500K)
pages, worth its own project (the arithmetic is spelled out after these notes).
[4] These ideas could be merged; or that project (say, at
language.mediawiki.org) could offer ways to pull content from the
appropriate section of the Babel wiki (and more security safeguards).
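For anyone who wants to check the O(500K) estimate in note [3'], the
arithmetic is trivial; here it is spelled out as a throwaway calculation,
with every number being the rough figure from the note, nothing measured:

```python
# Back-of-envelope check of the page-count estimate in note [3'].
languages = 200
custom_strings = 200        # custom strings per Wikimedia project
projects = 5                # Wikimedia projects with custom strings
standard_strings = 1000     # standard MediaWiki interface strings

per_language = custom_strings * projects + standard_strings
total_pages = languages * per_language

print(per_language)   # 2000 strings per language
print(total_pages)    # 400000 pages -- on the order of 500K
```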
++SJ
As mentioned in the minutes of the last board meeting,
the Wikimedia Foundation is considering hiring a part-time
conference organizer to arrange Wikimania 2006,
the second international Wikimedia Conference.
The conference will be held in Boston in late July or
early August 2006. The organizer will not be an
employee, but a contractor. The applicant should be
experienced in organizing international conferences,
be able to work online for most of the year, and have
an understanding of the goals of Wikimedia and the aim
of this conference. Fluency in English is required,
and a good level of fluency in at least one other
language is beneficial. The position implies full-time
availability immediately before and during the
conference.
Please send applications, with quotations and
estimates of costs, to board-private(a)wikimedia.org
before Sunday the 4th of December, midnight UTC.
Signature : after much hesitation... us.
Hoi,
We have discussed the subject of single login many times. There are many
routes that we can take to get to a solution. There is also the
potential to do some "future proofing". At this moment in time all our
security for users is pretty minimal; it relies on knowing a password or
having a cookie on your system. For read-only access we do not
require any authentication. There are several scenarios where
(technically available) additional authentication possibilities would
help us.
* When a range of IP numbers is blocked because of frequent vandalism,
we want to allow access for authenticated editors. These ranges can be
schools or proxies. (A tiny sketch of this idea follows after the list.)
* When we host educational content, we want to ensure that only the
student accesses his or her material.
* When we host educational content, we want to give a student's teacher
access to a subset of the data.
* When we collaborate with another web service such as Kennisnet, we
allow users authenticated by that organisation to use our resources as
authenticated editors.
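To make the first scenario concrete, here is a minimal sketch; the
ranges and provider names are invented for illustration, and this is
not MediaWiki code. The idea is simply that an edit from a blocked IP
range is refused unless the editor has authenticated through a provider
we trust.

```python
from ipaddress import ip_address, ip_network

# Illustrative data only: a blocked school/proxy range and the external
# authentication providers we would trust.
BLOCKED_RANGES = [ip_network("192.0.2.0/24")]
TRUSTED_PROVIDERS = {"a-select", "kennisnet"}

def may_edit(editor_ip, auth_provider=None):
    """Allow edits from a blocked range only for externally authenticated editors."""
    ip = ip_address(editor_ip)
    if any(ip in blocked for blocked in BLOCKED_RANGES):
        return auth_provider in TRUSTED_PROVIDERS
    return True

print(may_edit("192.0.2.17"))              # False: anonymous edit from a blocked range
print(may_edit("192.0.2.17", "a-select"))  # True: authenticated editor on the same range
print(may_edit("203.0.113.5"))             # True: the range is not blocked at all
```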
The point that I am trying to make is that future proofing makes sense.
When we have the chance to do this with proven open source technology,
we should consider that option instead of "rolling our own". A-Select
http://a-select.surfnet.nl/ is a project run by "Surfnet", and it is
available under a BSD license. Scalability has been very much part of
their existing projects. It is used as the engine for many big
projects; DigiD http://www.digid.nl/ is a project to give people living
in the Netherlands access to their personal information. Strong
authentication, like that used by banks for on-line transactions, is
provided for. The Dutch library system and Dutch education use it too.
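I will not pretend to know A-Select's actual protocol, so the sketch
below only shows the general shape of this kind of delegated,
ticket-based authentication: redirect the user to the authentication
server, receive a ticket when they come back, and verify that ticket
server-to-server before trusting the session. Every endpoint and
parameter name here is invented.

```python
import json
import urllib.parse
import urllib.request

# Invented endpoint -- stands in for whatever authentication server is used.
AUTH_SERVER = "https://auth.example.org"
APP_ID = "wikimedia"

def login_redirect_url(return_to):
    """Step 1: send the browser off to the external authentication server."""
    query = urllib.parse.urlencode({"app_id": APP_ID, "return_to": return_to})
    return AUTH_SERVER + "/login?" + query

def verify_ticket(ticket):
    """Step 2: the browser returns with a ticket; confirm it server-to-server
    before treating the session as authenticated."""
    query = urllib.parse.urlencode({"app_id": APP_ID, "ticket": ticket})
    with urllib.request.urlopen(AUTH_SERVER + "/verify?" + query) as response:
        result = json.loads(response.read())
    # Only accept the identity that the authentication server itself confirms.
    return result["user"] if result.get("valid") else None
```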
I will make sure that material about all this becomes available on
Meta. I am starting by posting here because there is a need to discuss
the issues that come up when you introduce the potential for more
authentication into our growing list of services.
Thanks,
GerardM
Hello, just a shameless copy-paste from meta (http://
meta.wikimedia.org/wiki/Cluster_report%2C_September-November%2C_2005)
These months were yet again amazing ones in Wikimedia growth history.
Since September request rates have doubled, lots of information has
been added, modified and expanded, and more users have come.
To deal with that, the site had to improve both its software and
hardware platforms again.
Of course, more hardware was thrown at the problem.
In mid-September three new database servers (thistle, ixia, lomaria)
were added to the pool, removing the most ancient hardware from
service.
At current data growth rates the 'old' 4GB-RAM boxes could not keep up
with anything but quite limited operation.
40 dual-Opteron application servers have been deployed, conserving
our limited colocation space as well as providing lots of
performance for the buck.
One batch of them (20) was deployed just this week.
They're equipped with larger drives and more memory, which allows us to
place various unplanned services on them (9 Apache servers are
storing old revisions as well); some servers participate in the shared
memory pool, running memcached.
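For readers who have not met memcached: it is just a networked
in-memory key/value cache that the application servers consult before
doing expensive work themselves. A minimal sketch using the
python-memcached client follows; the key scheme and expiry are
illustrative, not our configuration, and the site's own code is PHP.

```python
import memcache  # python-memcached client; any memcached client works the same way

mc = memcache.Client(["127.0.0.1:11211"])  # address of one memcached daemon

def rendered_page(title):
    """Return rendered HTML for a page, skipping the parse when it is cached."""
    key = "rendered:" + title.replace(" ", "_")  # illustrative key scheme
    html = mc.get(key)
    if html is None:
        html = expensive_parse(title)       # stand-in for the real parser
        mc.set(key, html, time=3600)        # keep the result for an hour
    return html

def expensive_parse(title):
    # Placeholder for fetching wikitext from the database and rendering it.
    return "<html>rendered " + title + "</html>"
```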
One really cost-efficient purchase was the $12k image server
'amane', providing us with storage space and even the ability to take
backups at current loads.
It is now running a highly efficient and lightweight HTTP server -
lighttpd.
So far it only serves images, but the growth of Wikimedia Commons will
force us to find a really scalable and reliable way to handle lots of
media.
Additionally, 10 more application servers have been ordered together
with a new batch of Squid cache servers.
These 10 single-Opteron boxes will have 4 small, fast disks and
should enable efficient caching of content.
As all this gear was bought with donated money, we really appreciate
the community's help here, thank you!
The Yahoo!-supplied cluster in Seoul, Korea has finally gone into
action, bringing cached content closer to Asian locations, as well as
hosting the master databases and application cluster for the Japanese,
Thai, Korean and Malaysian Wikipedias.
For internal load balancing Perlbal was replaced by LVS, and we've
also got a nice flashy donated load-balancing device that may be
deployed into operation soon.
LVS has to be handled with care, and several tiny misconfiguration
incidents have seriously affected site performance.
Lately the cluster has become quite big and complex, and now we need
more sophisticated and extensive sanity checks and test cases.
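As one trivial example of what such a sanity check could look like
(hostnames and threshold are made up, purely to illustrate the idea):
fetch a known page from each application server and flag any box that
is slow or returns something that does not look like a full page.

```python
import time
import urllib.request

# Made-up hostnames and threshold.
APP_SERVERS = ["srv1.example.org", "srv2.example.org"]
MAX_SECONDS = 2.0

def check_server(host):
    """Fetch the main page from one backend and report whether it looks healthy."""
    url = "http://" + host + "/wiki/Main_Page"
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=MAX_SECONDS) as response:
            body = response.read()
    except Exception as error:
        return host, False, str(error)
    elapsed = time.time() - start
    healthy = b"</html>" in body and elapsed < MAX_SECONDS
    return host, healthy, "%.2fs" % elapsed

for server in APP_SERVERS:
    print(check_server(server))
```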
There is lots of work to do in establishing more failover capabilities -
we will be having two active links to our main ISP in Florida.
The static HTML dump is (becoming) nice and usable and may help us in
case of serious crashes. It can be served from the Amsterdam cluster as
well!
Over the last several days we managed to bring the cluster into quite
proper working shape; now it's important to fix everything and
prepare for more load, more growth and yet another expansion.
We hope that, with the help of the community, we will be able to solve
all our performance and stability issues and avoid being Lohipedia :)
Lots of various problems have been solved so far in order to achieve
what we have now, and lots of low-hanging fruit has been picked.
What we are dealing with now is complex and needs manpower and fresh
ideas as well.
Discussions are always welcome in #wikimedia-tech on Freenode (except
during serious downtimes :).
And, of course, Thanks Team (or rather, Family)! It is amazing to
work together!
Cheers,
Domas