Yes, I know, but editing the language file is a hard job. I'm currently
translating Wikipedia's GUI through MediaWiki:Allmessages; there you can't
change the namespace, so I wonder if there is a way to change it through the web.
Thanks
When you say levels, do you mean 30 would have the privileges of 0 and 10 as
well as the privilege to protect pages? Wouldn't it be better to be able to
define groups with different sets of privileges? I know someone was working
on a more fine-grained permission system; is that now dead in the water?
Ryan Lane
> -----Original Message-----
> From: wikitech-l-bounces(a)wikimedia.org
> [SMTP:wikitech-l-bounces@wikimedia.org] On Behalf Of Dori
> Sent: Thursday, June 09, 2005 7:57 AM
> To: Wikimedia developers
> Subject: Re: [Wikitech-l] User groups changed again
>
> On 6/9/05, Brion Vibber <brion(a)pobox.com> wrote:
> > I've reworked the user_groups system, again, into something that seems
> > to actually more or less work for now.
> >
> > * user_groups ur_group is now a short string key ('sysop' etc)
>
> I think it'd probably be better to get rid of the strings and use
> levels instead: level 0 (reader), level 10 (editor), level 20 (page
> mover), level 30 (can protect pages), level 40 (can block), etc. It's
> going to be a lot more work to get this working, but it's worth
> considering while the system is being reworked anyway. Optionally,
> another table could hold mappings that are easier to understand,
> i.e. sysop <= 50, bureaucrat <= 100, etc., for backwards compatibility.
>
> Maybe it would be too much for getting this release ready though...
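For concreteness, the level scheme quoted above could be sketched roughly like
this. This is a hypothetical illustration only, not MediaWiki's actual
permission API; all names here are invented:

```python
# Sketch of a numeric permission-level scheme: each action requires a
# minimum level, and a user at a given level holds every privilege at
# or below that level. Names and values are illustrative assumptions.

# Each action requires a minimum level.
ACTION_LEVELS = {
    "read": 0,
    "edit": 10,
    "move": 20,
    "protect": 30,
    "block": 40,
}

# Legacy string groups mapped onto levels for backwards compatibility
# (sysop <= 50, bureaucrat <= 100, as in the quoted proposal).
LEGACY_GROUPS = {
    "sysop": 50,
    "bureaucrat": 100,
}

def user_level(groups):
    """Highest level implied by a user's legacy group memberships."""
    return max((LEGACY_GROUPS.get(g, 0) for g in groups), default=0)

def can(level, action):
    """A user at `level` holds every privilege at or below that level."""
    return level >= ACTION_LEVELS[action]

# A sysop (level 50) can protect pages; a plain editor (level 10) cannot block.
assert can(user_level(["sysop"]), "protect")
assert not can(10, "block")
```

The appeal of this design is that "has the privileges of 0 and 10 as well"
falls out of a single comparison; the cost, as the follow-up points out, is
that you cannot express groups with disjoint privilege sets.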
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
A few notes:
* We had a pretty, multilingual "down for maintenance" page all set up
to be served for requests to the site during the downtime, but this was
foiled for three or four hours because our offsite DNS in .nl hadn't
been actually set up quite the way we thought it was, and our onsite DNS
server in Florida was taken down earlier than planned by mistake.
We did get it mostly working for *.wikimedia.org for some people by
partway through the downtime, but were not able to get the other domains
(such as wikipedia.org) updated at the time. Once Zwinger was back
online in the new rackspace, we had DNS again and the downtime page was
visible for the remainder of the time, unless you were unlucky and it
didn't work anyway.
* The downtime message was experimentally running Lighttpd+FastCGI
instead of Apache. For no apparent reason it stopped understanding its
404 error handler page directive some time in the middle of things, so I
switched it to Apache.
* The Paris squids were, I think, still sending requests to the offline
Florida machines instead of the downtime page in .nl. Not totally sure
what the issue was here.
* When bringing lots of web server machines online we have an issue with
synchronization of time and configuration: the machines are set to
start the web server automatically on boot, and the load balancers are
set to send them work as soon as they come up. But some machines have
clock trouble and come up with the wrong time, and if the configuration
has changed they'll have out-of-sync settings until corrected. We need
to resolve this, either by requiring a manual start or by some sort of
sanity check against the master clocks and config.
For massively wrong clocks (e.g., a BIOS reset to 2003) we can easily
sanity-check by comparing the current time against $wgCacheEpoch to make
sure it's later. :)
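A minimal sketch of that sanity check, assuming a MediaWiki-style
YYYYMMDDHHMMSS timestamp for $wgCacheEpoch (the epoch value below is
illustrative, not the real site configuration):

```python
# Refuse to start the web server if the local clock predates the cache
# epoch: a clock reset to 2003 by a dead BIOS battery fails this check.
import time

WG_CACHE_EPOCH = "20050610000000"  # illustrative YYYYMMDDHHMMSS value

def clock_is_sane(now=None):
    """True if the current (or given) time is at or after the epoch."""
    now = now if now is not None else time.time()
    epoch = time.mktime(time.strptime(WG_CACHE_EPOCH, "%Y%m%d%H%M%S"))
    return now >= epoch

# A machine booted with its clock reset to 2003 should refuse to serve.
assert not clock_is_sane(time.mktime((2003, 1, 1, 0, 0, 0, 0, 0, -1)))
```

A startup script could run this before launching the web server, so a
machine with a wildly wrong clock never registers with the load balancers.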
* Things are very unhappy booting without DNS or the LDAP server up.
We should make bloody sure this doesn't remain such a big problem: LDAP
needs to be well-replicated, and important internal addresses should be
resolvable without DNS.
-- brion vibber (brion @ pobox.com)
Hi,
Is it possible to use a TOC generated on one page in
another page? (I cannot find any reference to the
generated TOC.)
Thanks
Brion Vibber wrote:
> The main Wikimedia servers in Florida will be moving to larger rackspace
> today. Among other things, this will take the mailing lists and Bugzilla
> offline for some or all of the day; sorry for any inconvenience!
Mailing lists are back online; the websites and bugzilla are still down
until the rest of the servers are back up.
-- brion vibber (brion @ pobox.com)
The first meeting of the Research Team on Sunday, June 6 focused on
defining its mission and planning some of the tasks to come. 25 people
attended and participated. Essentially, we agreed that the Team would be
a network of special interest groups focused on particular issues such as:
* Wikimedia sociology
* MediaWiki development tasks
* Content analysis
Members of the Team are also encouraged to keep abreast of activities
outside their own work, and to participate in high priority projects.
Some general work that has been started:
* http://meta.wikimedia.org/wiki/Research_projects has been created to
collect ideas for worthwhile projects. Two specific ideas, a user survey
and a distributed quality comparison of Wikipedia with other
encyclopedias, have been proposed.
* http://meta.wikimedia.org/wiki/Wikimedia_Research_Team/Interests lists
members of the team by interests; if you are a member and you haven't
checked your interests here yet, please do so.
The following have been suggested as high priority tasks:
* Reorganize and update http://meta.wikimedia.org/wiki/Development_tasks
(I will personally begin to work on this soon). It has not yet been
finalized to what extent we will use Bugzilla, but we will try not to
add information to it that would increase the workload of the developers.
* Organize community meetings (Wikibooks, Wikinews, Wikisource etc.) to
better determine what every community's specific needs are.
* Improve communications within the Team (possibly make use of
wikiresearch-l, or a new mailing list specifically for logistics).
* Specifically, work on the GUI and workflows for single login migration
to assist Brion with the implementation.
Please contact me or comment on the [[m:Wikimedia Research Team]] talk
page if you want to help with any of these tasks but don't know how yet.
Besides these issues, many specific features and ideas were discussed
during and after the meeting.
The next meeting will likely happen on June 18 or June 19; the specific
date is being decided on:
http://meta.wikimedia.org/wiki/Wikimedia_Research_Team#Next_meeting
I will try to organize the next meeting specifically so that smaller
groups can work on separate issues. I also want to bring peer review and
article validation into the debate at that point.
Read the full log at:
http://scireview.de/wiki/research/channel.log
Join the team at:
http://meta.wikimedia.org/wiki/Wikimedia_Research_Team
On a related note, I've proposed to change the name from "Team" to
"Network". Please comment on
http://meta.wikimedia.org/wiki/Talk:Wikimedia_Research_Team about this
suggestion.
Best regards,
Erik Möller
Chief Research Officer
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
> what should I do if someone has different usernames on different wikis? for
> example username "Monk" on en: is taken by someone (was inactive for a
> long time). I will be unable to create global user name "Monk" at all,
> if your proposal is implemented.
>
> That doesn't seem too fair to me.
This sort of dispute would be resolved via arbitration. The preferred
solution (at least in my view) implements the technical side first and
handles the messy username conflicts afterwards.
Hmm... given the increased scope of usernames this will entail, perhaps
we should set up a system for ruling on user name disputes? How do the
large companies do it?
- --
Edward Z. Yang Personal: edwardzyang(a)thewritingpot.com
SN:Ambush Commander Website: http://www.thewritingpot.com/
GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc
3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (MingW32)
iD8DBQFCpNSYqTO+fYacSNoRAn4cAJ95nytzFbAN7ZZab2CsdvZc/vX5egCff4fr
Pgx3NRVInISekstai3+W9Yg=
=hMXz
-----END PGP SIGNATURE-----
Hello,
I'm a student, and for the next few months I'll have some free time to spend
on free software development. If the announcement in this article is still
relevant, could you describe exactly what you want?
I'm skilled in MySQL and PHP, and I know a little about LAMP, but not much
else that would be useful; I'm ready to learn Perl. :)
First, I could try to do this from the _Weekly database dumps_. Do you think
that's feasible and useful? Please tell me what you think.
Sorry for my bad English.
Leuch.
Not only history blobs can benefit from splitting revision
texts into sections and sorting them. The sizes of XML exported
pages (with complete page histories) can also be reduced.
This is the current structure (only relevant tags):
<page>
<revision><text>text0</text></revision>
<revision><text>text1</text></revision>
</page>
This would be the new structure:
<page>
<section>sectiontext0</section>
<section>sectiontext1</section>
<section>sectiontext2</section>
<revision><text type="sectionlist">0 1</text></revision>
<revision><text type="sectionlist">0 2</text></revision>
</page>
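To make the deduplication concrete, here is an illustrative sketch (not the
proposed implementation) of how a reader of the new format would reconstruct
full revision texts from the shared sections; the section contents are
invented examples:

```python
# Sections are stored once per <page>; each revision stores only a list
# of section indices ("0 1", "0 2"). Unchanged sections are thus shared
# between revisions instead of being repeated in full.

sections = [
    "== Intro ==\nOriginal intro.",   # section 0, unchanged across revisions
    "== Body ==\nOriginal body.",     # section 1, used by revision 0
    "== Body ==\nEdited body.",       # section 2, used by revision 1
]

revisions = ["0 1", "0 2"]  # the type="sectionlist" payloads

def expand(sectionlist):
    """Join the referenced sections back into the full revision text."""
    return "\n".join(sections[int(i)] for i in sectionlist.split())

texts = [expand(r) for r in revisions]
# Both revisions share section 0, which the dump stores only once.
assert texts[0].startswith("== Intro ==")
assert texts[1].endswith("Edited body.")
```

The compression win comes from the shared sections: a long page whose
revisions each touch one section stores every unchanged section once rather
than once per revision.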
Maybe it's not a problem for the Wikimedia servers to store
50 or 100GB of badly compressed history blobs, but I guess
most people who download the dumps wish for smaller file
sizes. I'll write the code if there is consensus on such a
format.