Dear developers,
I've been doing RC patrolling on de.wikipedia this morning using the
great CryptoDerk's Vandal Fighter 2.1, and I find myself in a difficult
situation.
For 1 or 2 out of every 10 clicks on diffs of recent changes, I get an
error stating that the version I just clicked on does not exist. The
reason seems obvious: replication lag on some of the servers.
While I assume the technical issues are fully present in your minds, I
would like to point to a different side effect of this software
behaviour: I am losing all motivation for RC patrolling.
This morning, I made roughly 50 reverts of vandalism on de.wikipedia. I
am sure there were more vandal edits, but I was unable to keep my
concentration on the task.
It's not about losing 5 seconds waiting for replication to catch up; it
is simply a frustrating experience. When I do RC patrolling, I like the
idea of "doing good work".
Whatever you do, please keep in mind to avoid situations where a click on
http://de.wikipedia.org/w/index.php?title=Lucas_Cranach_der_%C3%84ltere&dif…
reveals that:
Error
From Wikipedia, the free encyclopedia
The text for the article "Lucas Cranach der Ältere (Diff: 6123802, 0)"
was not found in the database. If this is an old version of an article,
it may have been deleted because of a copyright violation. It is also
possible that there is a problem accessing the database; in that case,
please try again later.
I am unable to provide a technical solution for that (why don't you
delay RC messages until replication has caught up?), but I am willing to
give any feedback necessary to help abolish this situation, in which
vandalizing something is easier than fixing it.
Thanks,
Mathias
--- Delirium <delirium(a)hackish.org> wrote:
> I've talked to a few other people who feel similarly, though I don't know
> what the most common preference would be. If there's queueing, at least you
> know your request will get served eventually, even if you have to go get a
> cup of coffee first; with the error-message responses, you just have to keep
> hitting reload, which is more irritating.
Also irritating is the fact that nobody seems to care about locking the
database against edits when things slow down like this. So those who are on RC
patrol have the choice of letting vandalism through, or sitting there
frustrated, watching their rollbacks fail 3 or 4 times before sticking.
Everybody else gets frustrated as well, seeing their edits fail.
When service gets this bad, PLEASE lock the database against edits until
service improves.
-- mav
Hi folks,
I'm starting to take a more active interest in MediaWiki access control
features, primarily in the area of integration with other applications
and authentication systems. My initial interest in this was piqued by
administrating a GForge site (helixcommunity.org) where there have been
frequent requests for some form of wiki integration. However, my
interest is probably more abstract than that now (i.e. I want to solve
the general problem, rather than writing a one-off MediaWiki/GForge glue
layer).
As I've started my investigation, I've been looking at what
conversations have occurred in the past. I've tried to document what I
know so far here:
http://meta.wikimedia.org/wiki/Access_control
I wanted to get a good idea of what's already been discussed and how
the code currently works before coming in swinging with my own ideas for
improvements.
A couple of questions:
1. I've included links to several bugs. I'd like to file a tracking
bug in BZ which is blocked by those bugs so that I get the benefit of
two-way links to all of the bugs. However, it doesn't appear that
metabugs are standard practice here. Is that by policy? More to the
point, would anyone mind if I filed an "Access control" tracking bug?
2. Is there anything major I'm missing?
I'll be on #mediawiki IRC semi-regularly (as "robla"), but you may have
to /msg me to get my attention.
Thanks
Rob
Hello!
> I am unable to provide a technical solution for that (why don't you
> delay RC messages until replication has caught up?), but I am willing to
> give any feedback necessary to help abolish this situation, in which
> vandalizing something is easier than fixing it.
Our special.ops site operations team is currently
dealing with lots of issues, the main one being relieving
stakeholder and user frustration levels.
Replication lag means that the servers have to deal
with too much data under too much load.
First of all, new DB servers are coming soon.
They were planned quite a long time ago and ordered at
the beginning of this month. We are running on the same DB server
setup as we were in December. The new hardware will considerably
increase our database capacity, and we will be able to work on other
bottlenecks.
On the other hand, since the December deployment of MediaWiki 1.4,
there has been a great deal of work in software development.
MediaWiki 1.5 will have lots of cosmetic changes and some nice
features, but most important of all, it will be a greater pleasure for
a DBA to maintain: it will have less data to shuffle,
less data to scan, less data to work with, which means better
scalability and performance. It rearranges the data layout. But still,
before going live with 1.5, we'll have to take the site offline (or
read-only) and restructure it.
Moreover, we're moving archive data off our expensive DB servers
onto small and neat Apaches. This way we won't have to deal with large
arrays of data sitting in forgotten history, and we will make much more
effective use of DB space, cache, and performance. In the future it
would be even better if we could solve our current file storage
bottleneck this way. Extracts from p0rn movies renamed to look like
classic symphonies take up quite a lot of space on our single image
server. There are efforts here as well - we can already sync our image
servers and do other nice things.
With less data and more requests we will surely face other bottlenecks,
but... haha, that's what the Wikimedia Operations Team is for! :)
No frustration, no vacation, total world domination! To WebServe and Protect!
Cheers,
Domas
> * Give them the PROCESS privilege
Would work.
> * Upgrade to MySQL 4.1. Not easy due to the need to dump and reimport,
> and we don't need to do it for any other reason.
Would we _really_ need that? Sure, downtime will happen when altering column collations and such...
> * Use "SELECT max(rc_timestamp)" like servmon.
> It wouldn't work well for a site like wikicities
> but it should work for us most of the time.
We could run a 'pinger' process on the master that would
simply update a HEAP table with the current timestamp
every second (or every half second).
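The heartbeat idea above can be sketched like this (a minimal Python simulation, not the real setup; the class and function names are invented for illustration, and a production version would run the UPDATE on the master and read the replicated row on the slave):

```python
import time

# Sketch of the "pinger"/heartbeat idea: the master updates a one-row
# table with the current timestamp every second; a reader on a replica
# computes lag as (now - last replicated heartbeat).

class HeartbeatTable:
    """Stand-in for a one-row HEAP table on the master."""
    def __init__(self):
        self.ts = time.time()

    def ping(self):
        # What the pinger process would run every second, roughly:
        #   UPDATE heartbeat SET ts = NOW()
        self.ts = time.time()

def replication_lag(replica_heartbeat_ts, now=None):
    """Seconds behind master, as seen from a replica."""
    if now is None:
        now = time.time()
    return max(0.0, now - replica_heartbeat_ts)

hb = HeartbeatTable()
hb.ping()
# Pretend the replica last applied the heartbeat 3 seconds ago:
print(replication_lag(hb.ts, now=hb.ts + 3.0))  # 3.0
```

The UPDATE is cheap enough to run every second, and any web server can then decide to stop showing diffs (or depool a replica) once the computed lag crosses a threshold.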
> * Use a daemon running as wikiadmin to pass information
> to the apache threads via memcached. Maintenance and
> administration is an issue. A variation on this is for
> the daemon to set load ratios based on lag information.
Well, if we implement load management, it could work.
Without that feature I guess it would not be optimal.
Domas
Hi there,
I just tried to import the old table for the German wiki. It failed due
to a lack of storage.
There were 22 GB available, while the SQL file is about 16 GB. Can
anybody tell me how much disk space the table needs on a MySQL 4.x
installation?
Thanx, Merlin
Given an image with a width of, say, 100 pixels, the code
"[[Image:Name|100px]]" causes MediaWiki to generate a 100px thumbnail.
Besides wasting disk space, this bug is annoying since the thumbnail
file is often larger than the original image.
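The guard such a fix implies can be sketched as follows (illustration only; the actual patch is in MediaWiki's PHP, and the function name here is invented):

```python
# Sketch: never generate a "thumbnail" at or above the original width;
# serve the original file instead.
def thumbnail_width(original_width, requested_width):
    """Return the width to render, never scaling past the original."""
    if requested_width >= original_width:
        return original_width  # no separate thumbnail file needed
    return requested_width

print(thumbnail_width(100, 100))  # 100 -> serve the original
print(thumbnail_width(800, 100))  # 100 -> genuine downscale
```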
Attached is a 1-character patch against CVS head that should fix the problem.
(This is my first time submitting a patch for MediaWiki. Is this the
right place? Or should I file a bug report on bugzilla instead?)
-- [[en:User:Dbenbenn]]
We're going to be moving some images around tonight to relieve disk load
on the central image file server.
Uploads may be briefly disabled on en.wikipedia.org during parts of this
process to ensure that files aren't lost. Sorry for any inconvenience
this may cause.
-- brion vibber (brion @ pobox.com)
Hi, there...
If you look at http://en.wikipedia.org/w/index.php?title=Algeria&action=edit
you'll see a surplus </font> tag which doesn't have a start tag
(<font>). Wikipedia.org hides it, but on my site it showed up and
damaged the page design.
Does anybody know how to solve this problem?
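One possible workaround (an assumption on my part, not the official MediaWiki behaviour) is to drop any closing </font> tag that has no matching opening tag before sending the page out:

```python
import re

def strip_unbalanced_font_close(html):
    """Remove </font> tags that have no matching <font ...> open tag."""
    out = []
    depth = 0
    # Split on font tags, keeping the tags themselves as tokens.
    for token in re.split(r'(</?font[^>]*>)', html, flags=re.IGNORECASE):
        if re.fullmatch(r'<font[^>]*>', token, flags=re.IGNORECASE):
            depth += 1
        elif re.fullmatch(r'</font>', token, flags=re.IGNORECASE):
            if depth == 0:
                continue  # surplus close tag: skip it
            depth -= 1
        out.append(token)
    return ''.join(out)

print(strip_unbalanced_font_close('text</font> more'))  # text more
print(strip_unbalanced_font_close('<font>x</font>'))    # <font>x</font>
```

A stack/counter like this only handles one tag name; a proper HTML tidy pass is the more general answer.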
Thanks
Sergey
Hi there,
I am wondering how long it would take to insert the 15 GB old dump into
a MySQL db on an Athlon 2000 system running SuSE 8.1, if the system does
nothing but this job.
As far as I remember, the rate was something like 200 inserts per
minute, so this could take weeks, right?
Did somebody already do that and can provide some figures?
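A back-of-the-envelope check of the "weeks" estimate (the average row size is purely an assumption on my part; old-table rows hold full article text, so a few KB each is plausible, while the dump size and insert rate come from the mail above):

```python
# Rough estimate of import time for the 15 GB old dump.
dump_bytes = 15 * 1024**3        # 15 GB dump (from the mail)
avg_row_bytes = 2 * 1024         # ASSUMED ~2 KB per row
inserts_per_minute = 200         # rate quoted in the mail

rows = dump_bytes / avg_row_bytes
days = rows / inserts_per_minute / 60 / 24
print(round(days))  # ~27 days under these assumptions
```

So yes, on the order of weeks; batching many rows per INSERT or disabling index updates during the load would change the picture considerably.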
Thanx in advance,
Merlin