In the new design, [[Special:Statistics]] has a field for the
number of "active users", supposedly the number of unique
usernames of logged-in users in the last 30 days of Recent
Changes. However, this number is frozen at 10,263 for the English
Wikipedia, 2,551 for the German Wikipedia, and 233 for the Swedish.
On the IRC channel #wikimedia-tech, Siebrand believed that a shell
user needs to run updateSpecialPages.php. Ialex added that this
is probably done every 3 days.
Is there a way this could be done at more regular intervals, e.g.
daily or hourly? It would be very interesting to see the number
change over time as we try to recruit new contributors, and
persuade existing ones to remain active.
If the window is 30 days ($wgRCMaxAge = 30*86400), it means that
when a Saturday's (low) activity is added, the (high) activity of
the Thursday 30 days earlier is removed. That will cause an
oscillation over the week, since not all weekdays see equal
activity. From a statistical point of view, 28 or 35 days, being
whole multiples of a week, would be a better measurement window.
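As a sketch (assuming the figure really is driven by $wgRCMaxAge and
recomputed by updateSpecialPages.php), the window could be set to a whole
number of weeks in LocalSettings.php:

  # LocalSettings.php -- use a 35-day window instead of 30 days, so that
  # every weekday is counted exactly five times and the weekly activity
  # cycle cannot cause an oscillation in the figure.
  # (Note: this also keeps recentchanges entries for the extra five days.)
  $wgRCMaxAge = 35 * 86400; # in seconds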
--
Lars Aronsson (lars@aronsson.se)
Aronsson Datateknik - http://aronsson.se
Quick report on today's downtimes...
http://leuksman.com/log/2008/09/22/wikipedia-downtime-2x-today/
Well, today was exciting! Wikimedia’s sites experienced two separate
downtime events.
The first, which lasted about 30 minutes, was due to a power problem.
While Rob was doing maintenance on the power feeds in rack B2, power
was inadvertently shut off to an access switch serving another rack of
servers, which took a chunk of our core text storage offline.
The second, which also lasted about 30 minutes, was caused by a file
server failure. The file server that holds our NFS home directories and
misc files and logs experienced a kernel crash, then turned up some disk
errors on reboot. (Possibly two failed drives, which may hose the array.)
Ideally this wouldn’t disturb production web serving, but various
debugging logs were being saved onto this server, and this caused the
web servers to hang waiting for NFS to come back up.
We’ve disabled the internal debug logging for now, and the site’s back
up and running while we poke at recovering or replacing the file server.
Both of these problems can be ameliorated in the future with some more
failure-proof design:
* Spreading text storage clusters across multiple racks will protect
against localized power or network failures
* Moving debug logs to a UDP system will have a more graceful
failure mode for centralized logging than hanging NFS shares (see
the sketch below)
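As a rough illustration of that second point (not the actual configuration;
the helper name is made up), a fire-and-forget UDP writer simply drops
messages when the collector is unreachable instead of blocking the request:

  # Hypothetical fire-and-forget UDP logger (PHP sketch).
  function udpDebugLog( $text, $host = '127.0.0.1', $port = 51414 ) {
      // Short timeout; if the log collector is down we just give up
      // rather than hanging the request the way a stuck NFS mount does.
      $sock = @fsockopen( "udp://$host", $port, $errno, $errstr, 1 );
      if ( !$sock ) {
          return;
      }
      @fwrite( $sock, $text . "\n" );
      fclose( $sock );
  }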
-- brion
Hi,
Is there any approved (i.e. non-future-breaking) way of finding out the
stack of pages through which the current page was transcluded? E.g.
"Top page":
{{:Cool template in state 1}}
"Cool template in state 1":
{{SubDisplay|Hello}}
"SubDisplay":
Do a function {{#myparsefn:{{{1}}}}}
I want to find out the name of "Cool template in state 1" from myparsefn
(and preferably also the "Top page", though that is easy to get because it's
$wgTitle). Does anyone know of an extension that does this correctly, or have
any pointers on how I might achieve this? Basically I'd like to inspect the
parser stack in a safe way...
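For concreteness, here is roughly how the function is wired up at the moment
(a sketch with illustrative names); $parser->getTitle() only gives me the
page at the top of the parse, not the template the call actually lives in:

  # Sketch of the parser function setup (illustrative names).
  $wgHooks['LanguageGetMagic'][] = 'efMyParseFnMagic';
  $wgHooks['ParserFirstCallInit'][] = 'efMyParseFnSetup';

  function efMyParseFnMagic( &$magicWords, $langCode ) {
      $magicWords['myparsefn'] = array( 0, 'myparsefn' );
      return true;
  }

  function efMyParseFnSetup( $parser ) {
      $parser->setFunctionHook( 'myparsefn', 'efMyParseFnRender' );
      return true;
  }

  function efMyParseFnRender( $parser, $arg = '' ) {
      // getTitle() is the "Top page" being parsed; the name of the
      // template that contains this call is what I can't get at.
      $top = $parser->getTitle()->getPrefixedText();
      return "myparsefn: '$arg' (top page: $top)";
  }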
Kind regards,
Alex
--
Alex Powell
Exscien Training Ltd
Tel: +44 (0) 1865 876562
Mob: +44 (0) 759 5048178
skype: alexp700
mailto:alexp@exscien.com
http://www.exscien.com
Registered in England and Wales 05927635, Unit 10 Wheatley Business Centre,
Old London Road, Wheatley, OX33 1XW, England
simetrical@svn.wikimedia.org wrote:
> Revision: 41085
> Author: simetrical
> Date: 2008-09-21 02:53:24 +0000 (Sun, 21 Sep 2008)
>
> Log Message:
> -----------
> Prohibit empty page titles at a low level
>
> This adds a sanity check to EditPage::doEdit() that throws an exception if the Title's name (sans namespace) is empty. Apparently the API edit module doesn't handle this error correctly at a high level, as evidenced by page 19405691 on enwiki. I didn't try to test whether this extra check stops the particular error, but it doesn't hurt in any case.
>
> <snip>
> + if( $this->mTitle->getText() == '' ) {
>
This makes me wonder: I thought the Title constructor returned null when
asked to create such titles? Has this behavior changed? If not, I'm very
curious how on earth an invalid Title object manages to 1) exist and 2)
find its way into EditPage::doEdit().
Roan Kattouw (Catrope)
aaron@svn.wikimedia.org wrote:
> Revision: 40752
> Author: aaron
> Date: 2008-09-12 15:03:46 +0000 (Fri, 12 Sep 2008)
>
>
[...]
> @@ -1519,7 +1519,6 @@
> }
> }
> $user->incEditCount();
> - $dbw->commit();
> }
> } else {
> $revision = null;
> @@ -1541,6 +1540,7 @@
>
> # Update links tables, site stats, etc.
> $this->editUpdates( $text, $summary, $isminor, $now, $revisionId, $changed );
> + $dbw->commit();
> }
> } else {
> # Create new article
>
This reintroduces a situation we saw a few years ago, when, as a
team, we were still working out how to optimise MySQL locking. Updates
to the site_stats table are now serialized while waiting for various other
updates and PHP code to complete, limiting the total possible edit rate
of the wiki and risking telescoping lock contention and total site
failure. This is exactly what happened on Saturday, except that we were
luckier than in the past: the ES master failed before the core master,
so we still had read-only service.
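In outline, the difference is where the critical section ends (a simplified
sketch, not the real doEdit() code; names are illustrative):

  # Simplified sketch of the edit path (illustrative, not Article::doEdit()).
  function saveEditSketch( $dbw, $doSecondaryUpdates ) {
      $dbw->begin();
      # ...insert the revision, update the page row, bump site_stats...
      $dbw->update( 'site_stats',
          array( 'ss_total_edits = ss_total_edits + 1' ),
          array( 'ss_row_id' => 1 ) );
      # Committing here releases the site_stats row lock almost immediately.
      $dbw->commit();

      # Slow work (links tables, search index, etc.) runs outside the
      # transaction, so it cannot stretch the lock-holding window.
      # Moving commit() to after this call serializes every edit on it.
      $doSecondaryUpdates();
  }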
There are good reasons why the transaction brackets are the way they
are. Please don't change any more of them without discussing the issues
in depth with Brion or Domas or me.
This revision apparently also caused bug 15656. doEdit() is vulnerable
to duplicate revision insertion between the first getContent() call and
the commit. The duration of this critical section was greatly increased
by this revision. The root problem is that commitRollback() determines
the last editor using Title::newFromRevision(), which only uses the
slave, so the window in which duplicate rollbacks are possible can
easily stretch out to seconds. But with a short main transaction, the
effect was a silently ignored null edit, rather than a duplicate entry
in the page history.
-- Tim Starling
This is a triple-crosspost. I suggest you reply to wikitech-l only.
A mistake I made caused the loss of 496 full-resolution images from
Wikimedia servers.
I have recovered as many images as I can, drawing on the following sources:
* Squid cache (pmtpa, knams and yaseo)
* May 8 backup of some wikis on storage1
* Duplicates with the same signature, found on the same or other wikis
That brought the number lost down from about 3000 to the current 496. For
the remaining files, I made a copy of their thumbnail directories:
http://upload.wikimedia.org/lost-image-thumb-backup/
A list of missing images can be found here:
http://noc.wikimedia.org/~tstarling/missing-images-2008-09
If anyone has any ideas about where to find more backup files, I'd be
glad to hear them. Otherwise, the community will just have to reupload
as many as possible.
The technical details were as follows: I fixed a bug in File.php and,
without checking what other changes had been made to it, deployed the most
recent version of that file on the Wikimedia servers without also updating
the rest of MediaWiki. Because FileRepo::$thumbDir was unset,
LocalFile::migrateThumbFile() had the effect of deleting the source image
for any thumbnail request that reached the backend. I reverted the change
after about 20 minutes, following a report on IRC.
My sincere apologies.
-- Tim Starling