Hoi,
There is a request for a Wikipedia in Ancient Greek. This request has so far
been denied. Many words have been spent on it; many people maintain their
positions and, for whatever reason, do not consider the arguments of
others.
In my opinion there are a few roadblocks:
- Ancient Greek is an ancient language - the policy does not allow for
it
- Text in Ancient Greek written today about contemporary subjects
requires the reconstruction of Ancient Greek:
   - it requires the use of existing words for concepts that did
   not exist at the time when the language was alive
   - neologisms will be needed to describe things that did not
   exist at the time when the language was alive
   - modern texts will not represent the language as it used to be
- Constructed and, by inference, reconstructed languages are effectively
not permitted
We can change the policy if there are sufficient arguments and when we agree
on a need.
When a text is written in reconstructed Ancient Greek, and when it is
clearly stated that it is NOT the Ancient Greek of bygone days, it can be a
great tool for learning to read and write Ancient Greek, while not being
Ancient Greek in itself. Ancient Greek as a language is ancient. I have
spoken with people involved in the working group that deals with the
ISO-639, and with someone from SIL, and it is clear that a proposal for a
code for "Ancient Greek reconstructed" will be considered for the ISO-639-3.
For the ISO-639-6 a code is likely to be given because a clear use for this
code can be demonstrated. We can apply for a code, and as it has a use
bigger than Wikipedia alone, it clearly has merit.
With modern texts clearly labelled as distinct from the original language,
it will be obvious that the innovations a writer needs for his writing are
legitimate.
This leaves the fact that constructed and reconstructed languages are not
permitted because of the notion that mother tongue users are required. In my
opinion, this has always been only a gesture to those people who are dead
set against any and all constructed languages. In the policies there is
something vague "*it must have a reasonable degree of recognition as
determined by discussion (this requirement is being discussed by the language
subcommittee <http://meta.wikimedia.org/wiki/Language_subcommittee>)."* It
is vague because even though the policy talks about a discussion, it is
killed off immediately by stating "The proposal has a sufficient number of
living native speakers to form a viable community and audience." In my
opinion, this discussion of criteria for the acceptance of constructed or
reconstructed languages has not happened. Proposals for objective criteria
have been ignored.
In essence, to be clear about it:
- We can get a code for reconstructed languages.
- We need to change the policy to allow for reconstructed and
constructed languages
We need to do both in order to move forward.
The proposal for objective criteria for constructed and reconstructed
languages is in a nutshell:
- The language must have an ISO-639-3 code
- We need full WMF localisation from the start
- The language must be sufficiently expressive for writing a modern
encyclopaedia
- The Incubator project must have sufficiently large articles that
demonstrate both the language and its ability to write about a wide range of
topics
- A sufficiently large group of editors must be part of the Incubator
project
Thanks,
GerardM
it seems that people enter articles into quality assurance more often
than before having the flags - which in the end leads to higher
quality for these articles. but i am unsure whether this impression can
be proved somehow.
one thing seems to be a bug: with ff3 on linux i always get the
flagged revision and not the most current one, even if i unchecked
"show flagged revision" in the preferences.
rupert.
On Sun, Jun 8, 2008 at 2:02 PM, THURNER rupert
<thurner.rupert(a)redleo.org> wrote:
>
> On Thu, May 8, 2008 at 11:14 PM, Andre Engels <andreengels(a)gmail.com> wrote:
>> 2008/5/7 Lars Aronsson <lars(a)aronsson.se>:
>>> Erik Moeller wrote:
>>>
>>>> In a nutshell, FlaggedRevs makes it possible to assign
>>>> quality tags to individual article revisions, and to alter default
>>>> views based on the available tags.
>>>
>>>> Aka hacked up a nice script that shows how many pages have been
>>>> "sighted" (basic vandalism check) on the German Wikipedia:
>>>> http://tools.wikimedia.de/~aka/cgi-bin/reviewcnt.cgi?lang=english
>>>>
>>>> Given that FlaggedRevs has just been live for a day or so, a review
>>>> rate of 4.41% is quite impressive!
>>>
>>> Wait now. When FlaggedRevs was first mentioned, the press started
>>> to announce that censorship was being planned for Wikipedia.
>>> This was countered with the explanation that flagging was a more
>>> open regime than page locking. We no longer have to lock pages on
>>> controversial topics, because we can allow free editing as long as
>>> the non-logged-in majority gets to see the flagged/approved
>>> version.
>>>
>>> Is it really "impressive" to have this new "soft locking"
>>> mechanism applied to a large number of pages? Wouldn't it be
>>> better to show how few pages were in need of this protection?
>>> And at the same time, to mention how many previously locked pages
>>> have now been unlocked in the name of increased openness?
>>
>> No, I don't think so. Having a flag on a page is just a way of saying
>> "this version is ok". Would it not be much better to have a version
>> that is 'ok' for ALL pages rather than just the controversial ones?
>> Would it really be a good thing to say "Only these few pages have
>> versions that are okay, we have no idea about the others, but we see
>> no reason to think they're not okay?"
>>
>>
>> --
>> Andre Engels, andreengels(a)gmail.com
>> ICQ: 6260644 -- Skype: a_engels
>>
>> _______________________________________________
>> foundation-l mailing list
>> foundation-l(a)lists.wikimedia.org
>> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
>>
>
Hey folks,
As you know, the board recently created a Nominating Committee to help
it identify, research and recommend candidates for the appointed Board
of Trustee positions involving "specific expertise." The members of the
committee are me, Michael Snow, BirgitteSB, Milos Rancic, Melissa
Hagemann and Ting Chen.
We've brainstormed a list of selection criteria here
http://meta.wikimedia.org/wiki/Nominating_Committee/Selection_criteria -
and now need to cut it back from about two dozen to eight.
If you're interested, we'd like your help. Please comment on the talk
page re which criteria you think are most important, and also let us
know if you feel anything is missing.
Thanks,
Sue
Our rough timeline, in case you're interested:
1. Michael Snow, on behalf of the Board, will brief the Nominating
Committee regarding its role, the restructuring, and the board's
assessment of its own strengths and skills gaps. By August 30 DONE
2. Based on that briefing, the Nominating Committee will generate a set
of criteria for potential “specific expertise” board members. By
September 15
3. The staff of the Foundation will deliver to the Nominating Committee
the list of potential candidates that has been developed by the staff,
current Board members and supporters and friends of Wikimedia. By
September 15 DONE (by Michael)
4. The Nominating Committee will brainstorm and solicit additional names,
and add them to the total list. By September 30 IN PROGRESS
5. The Nominating Committee will research the names which have been put
forward, and assess their fit against the selection criteria developed
earlier. This will result in a midlist of candidates. By October 30
6. The Nominating Committee will initiate discussions with midlist
candidates to gauge their interest, provide them with information, and
respond to questions or concerns. By November 14
7. The Nominating Committee will cull the midlist and deliver to the
board a final list of interested candidates who fit the criteria for the
"specific expertise" roles. The goal will be to give the board a full
briefing on the top eight candidates for the four "expertise" seats,
along with a recommendation for the four whom the Nominating Committee
thinks would be the best fit. By November 14
8. The community board members (Michael, Kat, Frieda, Domas, Ting, and
Jimmy) will vote to determine who will fill the four seats. By December 15
9. Nominating Committee orients new board members. January and February
10. Nominating Committee supports the board with other board development
tasks as requested. March, April, May, June
--
Sue Gardner
Executive Director
Wikimedia Foundation
Imagine a world in which every single human being can freely share in
the sum of all knowledge. That's our commitment: help us make it a
reality! http://wikimediafoundation.org/wiki/Donate
Is there any strategy document that describes what kind of functionality
we want in MediaWiki, and why we want it? For the moment it seems like
development is drifting in some general direction, but without any
real specific goal.
It seems that there is no such document at mediawiki.org or on Meta,
yet there are some rather old docs about specific hardware issues. The same
goes for wikimediafoundation.org: there are some references on pages about
job openings, but that's all.
The closest I could get to such a document is "Update of Foundation
organization (March 07)"
(http://wikimediafoundation.org/wiki/Update_of_Foundation_organization_(Marc…)
and the document "10 wishes for 2008"
(http://wikimediafoundation.org/wiki/10_wishes_for_2008)
I believe that some kind of document describing what's important to add
to MediaWiki, and why, is very important for the overall community. It
would give us an opportunity to clarify why we want to do something and
how we would like to do it. It would also make it possible to
approach specific benefactors, patrons and donors, especially those that
share a common goal with us.
Where should such a document be made, and who should write it? I guess
it should clearly state why we want a specific functionality. For
example, the 2007 document talks about WAP functionality: why do we want
it, and where is it documented in full detail? The page about the
functionality should point to all relevant bugs, code and discussions,
while the overall document should clearly describe why we want it.
Note that this is not a document to block all those that want to write some
kind of funny extension; it is a document to describe what we think is
important to do.
John
I mentioned earlier that I wanted to discuss open standards and file
formats in advance of the next board meeting. I'd especially like to
look at how these issues relate to our mission. There are a variety of
questions involved, which I'll summarize in terms of freedom - the
freedom that providing access to knowledge can give the recipient, and
the freedom that avoiding intellectual property restrictions can give
our culture generally. I trust we'd all agree both of these are positive
things in line with the Wikimedia Foundation's mission, which is what
makes it difficult if we have to choose between them.
The more we move beyond simple text, the more intellectual property
restrictions expand beyond simple copyright to increasing complexity
(multiple rightsholders, patents, DRM, trademarks, database rights).
Sometimes these things can be fairly benign, to the extent of being at
least gratis-free, especially at the "consumer" level. Perhaps in terms
of our effort to provide access to knowledge, they might not impose any
real restrictions, except in extreme edge cases. But so far, we have a
pretty strong commitment to absolute freedom, even with respect to areas
that don't directly impact our work.
To illustrate this with an example, maybe not the best but one that
comes up often enough, consider video file formats. (Some of this is
beyond my technical expertise, so please forgive any misstatements.)
Adobe Flash has widespread adoption to the point of being
near-universal. The company has also been moving to make it more open
for people watching, distributing, and working on content in this
environment. It's close to free, but I understand there are still some
issues like patent "encumbrances" around Flash. Meanwhile, there are
pure free software formats that do similar things but have pretty
limited adoption.
This brings up a number of questions. First of all, how important is
multimedia content to us in general? Considering both the investment to
create it and the environment in which it's produced, historically it's
a lot less amenable to free licensing. It's still useful, no doubt, but
what measures should we take to promote it?
Back to the two manifestations of freedom I mentioned, how should we
balance those? One possibility that's been raised is to allow Flash
content so long as we require that it be encoded and distributed in a
truly free format as well. Is that sort of approach an acceptable
compromise? It would make it much easier to achieve wide distribution of
free content, while still making sure that it's also available
completely without restrictions, for those who find that important. Are
there situations in which this compromise doesn't work out for some
reason? Why? (And none of this has to be limited to the Flash video
example, discussion of other formats and standards is welcome.)
In dealing with the limited adoption of certain free formats, some
people have advocated a more evangelistic approach, if you will. Given
the reach of Wikipedia in particular, it's suggested that our policy
could push wider adoption of these formats. That may be, but the
question is, how much is that push worth? What are the prospects for
making those formats readable in the average reader's environment, and
encouraging wider use as a standard? Does an uncompromising approach
result in significant progress, or would we simply be marginalizing the
impact of our work? And is it worth the "sacrifice" of the many people
who would miss out on some of the knowledge we're sharing, because the
free format isn't accessible to them? (That's also partly a problem of
disseminating knowledge, of course.) If we adopt a compromise position
as described earlier, how much do we lose in terms of promoting the
freer formats?
Before I joined the board, I understand the board considered a
resolution to create a file format policy. These are the kinds of
questions we need to consider before we can set such a policy. We're not
going to be passing anything at next week's meeting, though; the
discussion isn't far enough along, and it wouldn't be right to push it
through with so little consultation. But we need to have the
conversation, so I would like the community's feedback on this list,
both now and continuing during and after our meeting.
--Michael Snow
In Norway, a university has a large collection of newspapers; the
collection is said to cover around 3000 running metres in the storehouse.
Excluding the Norwegian and Nordic newspapers, what's left is
international newspapers from the last 150 years. If no one comes up
with a solution, the collection is going to be destroyed (actually burned).
I think the best thing to do is to scan them and make them publicly
available. Of course neither I nor WM Norway can undertake such a
task, but if there should be some wealthy person out there who might be
able to involve himself in such a task, I think it would be a very
worthy gift to mankind (where are the women!) to do such a thing.
When I heard of this I was shocked. Most of us were. I have in fact studied
at the university that attempted to burn the newspapers. The plans
have been stalled for now, but some permanent solution has to be found.
John
I fully disagree with Schlottmann.
1. Nicholson Baker has shown in his book "Double Fold"
http://delicious.com/Klausgraf/doublefold that microfilms are not a
substitute for the original newspapers, and neither is digitization.
2. National libraries might have the duty to digitize newspapers, but
what if they don't do it, or if they cooperate with toll-access companies
like the British Library http://newspapers.bl.uk/? The Public Domain
belongs to us all!
Klaus Graf
http://archiv.twoday.net
http://consumerreportingusa.org/wikipedia.htm
Who knew that "Wikipedia was hiring website processing specialists"?
Not me, that's for sure! Easy money, here we come!
Any chance someone can check this out? Seems like a complicated spam
farm of sorts.
Phoebe
--
* I use this address for lists; send personal messages to phoebe.ayers
<at> gmail.com *
Hmm. Is there any practical help the WMF could provide in this
endeavour? Aside from buckets of money, which appears to be the thing
the endeavour is most in need of.
Are there other countries where the law is not easily available and a
word from us would help?
- d.
http://www.nytimes.com/2008/09/29/business/media/29link.html
Link By Link
Who Owns the Law? Arguments May Ensue
By NOAM COHEN
IN a time when scientists are trying to patent the very genetic code
that creates life, it may not be too surprising to learn that a
variety of organizations — from trade groups and legal publishers to
the government itself — claim copyright to the basic code that governs
our society.
Carl Malamud runs PublicResource.org, which provides the text of
statutes, court decisions and construction codes at no charge.