Hi,
I'd like to ask for a broader spectrum of user rights. Not only sysop, but also:
* SQL-User
* delete articles
* edit protected articles
* ban user
We have a request on de: from an experienced user who didn't like the
sysop job but now, after being de-sysopped, misses the SQL functions very
much.
ciao, tom
--
http://www.tomk32.de - just a geek trying to change the world
-.- http://de.wikipedia.org/wiki/Benutzer:TomK32
/|> http://tomk32.bookcrossing.com
/ \ http://tinyurl.com/u6de
I don't suppose that there is a history graph of the donations to Wikimedia.
I was curious how much of the US$31k was raised as a result of Jimmy Wales'
December 28 letter and the associated Slashdot coverage. I was also wondering
if Wikimedia's accounting ledgers are open to the public. If not, why not
(this isn't a flame; I'm sure there is a good enough reason)?
Thanks for the great site. In my opinion Wikimedia deserves at least
another US$30k in 2004.
--adam
--
"Any society that would give up a little liberty to
gain a little security deserved neither and
will lose both." --Benjamin Franklin
"Necessity is the plea for every infringement of
human freedom. It is the argument of tyrants; it
is the creed of slaves." - William Pitt
Jason just got off the phone with Penguin, and they are sending a new
motherboard for Geoffrin. Jason's testing revealed that the RAM was
OK, but that one of the RAM *slots* on the motherboard was bad.
I'm doing a lot of shopping today for commodity webservers for our new
architecture, and exploring all the options that are open to us.
--Jimbo
On the dev site for Internet-Encyclopedia, which has MediaWiki 1.1.0
installed, articles imported from Wikipedia which have foreign-language links
do not display them at the top of the page as links, but just as inert
material within the article. See:
http://dev.internet-encyclopedia.org/wiki/wiki.phtml?title=2003
On the live site it displays normally at the top of the page, see:
http://www.internet-encyclopedia.org/wiki.phtml?title=2003
This is not absolutely necessary to the operation of Internet-Encyclopedia,
but it is nice to have, as it aids both users and Wikipedia.
I'm thinking that the correct line in LocalSettings.php would fix this.
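If it helps, one guess at that line (unverified against 1.1.0 — both the
variable and the behavior described here are assumptions) is the
interlanguage-link switch:

```php
# Guess at the relevant LocalSettings.php line (unverified for 1.1.0):
# when enabled, [[de:Foo]]-style links are pulled out of the article
# body and treated as interlanguage links instead of inert text.
$wgInterwikiMagic = true;
```

If that variable is already set, the problem may instead be that the target
language prefixes aren't known to the dev installation.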
Fred
I've got an hourly batch rsync job running to make a local (to me)
backup of the uploaded images and all the UseMod-based wikis so in case
pliny explodes or something we only lose an hour of stuff.
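For the curious, such a setup can be as simple as a pair of crontab
entries (the hostnames and paths below are made up, not pliny's actual
layout):

```shell
# Hypothetical crontab entries -- hostnames and paths are illustrative.
# Every hour, mirror the uploads and the UseMod wiki data to a local disk.
0 * * * * rsync -az --delete pliny:/var/www/uploads/ /backup/uploads/
5 * * * * rsync -az --delete pliny:/var/lib/usemod/  /backup/usemod/
```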
Uploads are enabled again for all wikis.
Pliny has logged a couple of SCSI oddities lately; 0/0/0 is the primary
hard drive:
scsi0: ERROR on channel 0, id 0, lun 0, CDB: 0x28 00 00 3e e3 d8 00 00
08 00
Info fld=0x3ee3d8, Current sd08:02: sns = f0 3
ASC=11 ASCQ= 0
Raw sense data:0xf0 0x00 0x03 0x00 0x3e 0xe3 0xd8 0x18 0x00 0x00 0x00
0x00 0x11 0x00 0x00 0x80 0x00 0x35 0x00 0x00 0x10 0x66 0x00 0x00 0x02
0xbe 0x04 0xff 0x01 0x23 0x00 0x00
I/O error: dev 08:02, sector 4019160
scsi0: ERROR on channel 0, id 0, lun 0, CDB: 0x28 00 00 3e e3 d8 00 00
08 00
Info fld=0x3ee3d8, Current sd08:02: sns = f0 3
ASC=11 ASCQ= 0
Raw sense data:0xf0 0x00 0x03 0x00 0x3e 0xe3 0xd8 0x18 0x00 0x00 0x00
0x00 0x11 0x00 0x00 0x80 0x00 0x35 0x00 0x00 0x10 0x66 0x00 0x00 0x02
0xbe 0x04 0xff 0x01 0x23 0x00 0x00
I/O error: dev 08:02, sector 4019160
Not the quantity of errors we got before, but it's worrying.
-- brion vibber (brion @ pobox.com)
For the past few weeks, I and some others have been in the process of
setting up a new MediaWiki-based wiki site devoted to the "Star Trek"
television series, called Memory Alpha. We have had few problems after
the initial pains of actual installation; however, in trying to adapt
the MW framework for our site, we've been running into a few snags.
The main problem from my perspective is documentation for users of the
wiki software in sites which are not straight-up mirrors of Wikipedia.
We've been doing a rather haphazard job of copying over the various
bits of documentation for using the site -- stuff like how to write an
article, how to edit a page, the policies and guidelines, and other
material.
Our problem is that so many of these pages will definitely be useful,
and although they'll most likely be adapted and changed as time goes on
and we set up our own policies, I think it would still be useful to have
something to start with.
Currently, MediaWiki uses the website's name to create a separate
namespace for documentation and other "meta" type pages. (I'm not
referring to the Meta-Wikipedia, but rather the "Wikipedia:" pages on
Wikipedia itself.) And I realize that this is probably a workable
system...
But, considering that the MediaWiki software is made publicly available
for download and for establishing other websites, I wonder if it might
be useful to have some kind of help "module" -- that is, a collected
copy of the documentation pages that can be easily copied and set up on
other sites. Possibly, this could also add a new "Help:" namespace
which would help distinguish those pages for general users. That last
point isn't strictly necessary as far as the content goes, although
linking between the various help pages would be simpler if the namespace
didn't change from site to site.
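For what it's worth, a custom namespace can apparently be declared in
LocalSettings.php; a sketch (the variable name and the index are my
assumptions about the then-current MediaWiki, unverified):

```php
# Sketch only -- variable name and index are assumptions, unverified:
# declare an extra "Help:" namespace for documentation pages.
$wgExtraNamespaces = array( 100 => "Help" );
```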
I'm not a programmer myself, although I've got a basic grasp of coding
and I'm very slowly learning PHP. I could probably try to help if
someone else wanted to implement this or something similar, but
I thought I would propose the idea to the group first, seeing as
how many of you are a lot more familiar with MediaWiki's inner workings.
Thanks,
Dan Carlson
Administrator, Memory Alpha
http://memoryalpha.st-minutiae.com/
My test box at home is running Apache 2.0.48 on FreeBSD 5.2-RC2, with
PHP 4.3.4 installed as an Apache filter. For the most part it works
fine, but I've noticed an oddity on this system that I haven't seen on
the production boxes (always running Apache 1.3.x): with file cache and
gzip compression on, the Content-Encoding header is missing on the
first send of a newly cached page, so you see raw binary gibberish.
As a workaround I had dropped an extra header() call into
Article::tryFileCache() but I've disabled this on pliny (which is not
affected by the original bug) due to a secondary bug it causes with
very long page titles (#870290). This could probably be worked around
again, but I'd rather solve the initial issue. There may be other
related problems with missing headers.
Is anyone else testing with Apache 2 who can confirm this problem
(particularly on Linux)? Enable $wgUseGzip and $wgUseFileCache, disable
$wgShowIPinHeader, set $wgFileCacheDirectory to an Apache-writable
directory, and comment out the header() call in
Article::tryFileCache().
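In LocalSettings.php that amounts to something like this (the cache
directory path is only an example):

```php
# Reproduction settings -- the directory path is just an example:
$wgUseGzip            = true;   # serve gzip-compressed cached pages
$wgUseFileCache       = true;   # write rendered pages to the file cache
$wgShowIPinHeader     = false;  # must be off for the file cache
$wgFileCacheDirectory = "/var/tmp/mw-filecache";  # apache-writable
```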
We ought to make sure it works, and also see if the dreaded
Ampersand-in-Path-Rewritten-to-Query-String issue can be dealt with
reasonably (i.e., without patching Apache as we do for 1.3; over time
I've received several requests for the patch from people who had the
same problem and stumbled on my newsgroup posting about it).
-- brion vibber (brion @ pobox.com)
>> It's not our fault if Norton PF is that stupid.
>
> It's a pretty stupid filter. Those sites that use a path /ad/ will
> figure it out eventually and start using something else.
Although both these observations are correct, it might still be wise
to make sure Wikipedia doesn't look to ad blockers like a naïve user
of adverts.
--
Allan Crossman - http://dogma.pwp.blueyonder.co.uk
PGP keys - 0x06C4BCCA (new) || 0xCEC9FAE1 (compatible)
With all of the server problems lately I couldn't be sure whether
Wiktionary was overlooked in the confusion. I still can't access it, but
I don't seem to have that problem with any of the other projects.
Ec
It's past midnight here, so if the following is rubbish, apologies :-)
Suppose we had a few separate MySQL databases for a single language
running on different servers around the world. As far as I can tell, the
only real problem would be edit conflicts; nothing else would have to be
in "real-time". So why not have something like this:
* Someone hits the "Save" button below his new edit
* The server which handles the request/the database informs all other
servers that it is about to change that article
* The other servers return "OK", or "nope" if that article was changed
* If any of the other servers answer "nope", the others are informed
that the article won't be changed, and the local user gets an edit conflict
* If all others answer "OK", the change is committed to the database. The
other servers know that they have an old version of that article and
sync it sooner or later
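The steps above, in a rough Python sketch (the per-article version
numbers and all the names here are mine, purely illustrative):

```python
# Sketch of the propose/acknowledge scheme described above (purely
# illustrative -- not part of any existing MediaWiki code).

class Server:
    def __init__(self, name):
        self.name = name
        self.versions = {}   # article title -> version number
        self.peers = []      # the other servers

    def version(self, title):
        return self.versions.get(title, 0)

    def vote(self, title, base_version):
        # "OK" only if the article hasn't changed since the editor
        # loaded it; otherwise "nope".
        return self.version(title) == base_version

    def save(self, title, base_version):
        # Ask every peer before committing; any "nope" means the
        # local user gets an edit conflict.
        if not all(peer.vote(title, base_version) for peer in self.peers):
            return "edit conflict"
        new_version = base_version + 1
        self.versions[title] = new_version
        # Peers record the new version number now and fetch the
        # actual text "sooner or later", as described above.
        for peer in self.peers:
            peer.versions[title] = new_version
        return "saved"
```

Only the small yes/no votes travel synchronously; the article text itself
replicates lazily, which is the whole point. Deletions, renames, and
servers unreachable at vote time are exactly the tricky details glossed
over here.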
Summary: By *not* replicating each and every byte in real-time, but
rather exchanging yes/no information, this might be fast enough to keep
the wait upon saving to a minimum. Yes, the details would prove to be
tricky (think page deletions), but think a server on each continent -
the ultimate redundancy! :-)
Magnus