Hi,
We're going to be doing some network maintenance this afternoon, which
might cause some intermittent (but hopefully brief) service
interruptions.
- river.
Hi,
JIRA has been upgraded to 4.1. Please report any issues in the TS
project, or to ts-admins@ if that's not possible.
- river.
Wikimedia's MediaWiki version has been upgraded from 1.16alpha-wmf to
1.16wmf3. The diff is
http://svn.wikimedia.org/viewvc/mediawiki/branches/wmf/1.16wmf3/includes/sp…
(r64679 by tstarling).
As a result, login now fails in many bot scripts, including
Pywikipediabot, scripts that use api.php directly, CommonsDelinker,
DotNetWikiBot and (I think) AWB.
How can we fix it?
--devunt
Tim Starling has just announced a fix for a recently discovered
security flaw in MediaWiki on wikitech-l. The fix involves a
non-backwards-compatible change to the MediaWiki API login action.
Details here: https://bugzilla.wikimedia.org/show_bug.cgi?id=23076
While this does not _directly_ affect the toolserver, a large number of
bots running here will be affected. As the fix is already live on
Wikimedia sites, any bot that has not been updated will be unable to log
in using the API. (Some old bots logging in via Special:Userlogin may
also be affected, depending on how they construct the login request.)
The necessary fix is not particularly complex. I only had to add one
extra line of Perl code to my own bot to make it work again:
http://commons.wikimedia.org/w/index.php?diff=37368315&oldid=36496675
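For bots that talk to api.php directly, the change makes login a
two-step handshake: the first request now comes back with result
"NeedToken" and a login token, which has to be resent (with the same
session cookie) in a second request. Below is a minimal Python sketch
of that flow, assuming the requests library; the endpoint, user name
and password are placeholders:

    # Two-step login sketch (see the bug report above for details).
    # The URL and credentials here are placeholders.
    import requests

    API = "http://commons.wikimedia.org/w/api.php"
    creds = {"action": "login", "lgname": "ExampleBot",
             "lgpassword": "secret", "format": "json"}

    session = requests.Session()   # keeps the session cookie

    # Step 1: the server now answers "NeedToken" plus a login token.
    login = session.post(API, data=creds).json()["login"]

    # Step 2: repeat the same request with the token attached.
    if login["result"] == "NeedToken":
        creds["lgtoken"] = login["token"]
        login = session.post(API, data=creds).json()["login"]

    assert login["result"] == "Success", login["result"]

The retry is the only new part; everything after a successful login is
unchanged.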
I expect that most commonly used bot frameworks will soon be updated to
be compatible with the new login syntax. In the meantime, operators of
long-running bots may wish to avoid logging them out until their
frameworks have been fixed, so that the bots can log back in.
--
Ilmari Karonen
Hi,
To resolve the previously announced problems with s4, we will be
switching these clusters to a different server on Tuesday morning UTC
(in a few hours). This will involve a read-only period while user
databases are moved, but access to the replicated databases should not
be interrupted.
After this is complete, we will reimport the databases on the current
s3/s4/s6 server and use this as the redundant/fast server for s3/s4/s6.
These issues are being tracked in JIRA as MNT-444 (server switch) and
MNT-445 (fast server).
- river.
Hello, all.
I'm from ru-wiki and one of the active members of the Connectivity
project. I was very concerned to learn that Golem can no longer run
because of the new limits on the Toolserver. Golem's data is a key part
of the Connectivity project, which works to improve the quality of
Wikipedia. The project is active mostly in the Russian and Ukrainian
editions, but Golem also collects very useful information for every
other language except English, which is still too large to analyse. The
project's code is improving continuously: two years ago, when ruwiki
had about 250k articles and the project had only a few tools, an
analysis of ruwiki took about 2 hours; now, with 500k articles and
several times as many connectivity tools, it takes about 1 hour 40
minutes. Improvement could go faster: the project currently has only
one programmer, Mashiah, and anyone who wants to help him improve the
code is welcome to join. Our project needs any help programmers can
offer.
We have noticed that the number of isolated articles is directly
related to authors' awareness of which articles lack incoming links.
Due to Toolserver problems in February 2009, there were periods when we
could not obtain timely data. During such periods the number of
isolated articles usually grows, and the growth only gradually turns
back into decline once Golem starts working again. In other words, any
idle period for Golem leads to a deterioration in article quality.
The code will be optimized in any case, sooner or later, but we want to
try every way of keeping Golem running during the optimization process.
Can a hardware upgrade resolve this problem? If so, could you please
estimate the models and cost of the required equipment?
Please help us to help Wikipedia.
Hi,
We're about to do some hardware maintenance on cassia, which will cause
about 20 minutes of downtime, during which s3/s4/s6 will be unavailable.
Sorry for the short notice.
- river.
Hi,
Yesterday we noticed an issue with the ruwiki database similar to the
previous issue with commons (specifically, missing rows in
ruwiki.revision). It's not clear what causes this, but the most likely
candidate seems to be mydumper, the tool we use to produce database
dumps. We will therefore be reimporting s3, s4 and s6 (which were all
recently imported with mydumper) to hyacinth using mysqldump instead,
then switching these clusters from the current server (cassia) to
hyacinth.
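For tool owners who want to sanity-check their own data, one rough way
to look for missing rows is to scan for gaps in rev_id with a
self-join. A Python sketch follows; the host name is a placeholder
(point it at whichever server carries your ruwiki_p replica), and note
that legitimately deleted revisions also leave gaps, so any hits are
only candidates to verify against the live wiki:

    # Rough gap scan over ruwiki.revision.  The host is a placeholder;
    # ~/.my.cnf is assumed to hold your Toolserver credentials.
    import os
    import MySQLdb

    conn = MySQLdb.connect(
        host="sql-s6", db="ruwiki_p",
        read_default_file=os.path.expanduser("~/.my.cnf"))
    cur = conn.cursor()

    # A rev_id whose successor is absent marks the start of a gap.
    # Slow on the full table; restrict the rev_id range in practice.
    cur.execute("""
        SELECT r1.rev_id + 1
          FROM revision r1
          LEFT JOIN revision r2 ON r2.rev_id = r1.rev_id + 1
         WHERE r2.rev_id IS NULL
           AND r1.rev_id < (SELECT MAX(rev_id) FROM revision)
         LIMIT 10
    """)
    for (gap,) in cur.fetchall():
        print("possible gap starting at rev_id %d" % gap)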
This issue is being tracked in JIRA as MNT-438.
- river.
Hi,
As our master copy of s4 is missing parts of the database (TS-583), I
will reimport the database today. While this is in progress, there will
be no commonswiki_p database on cassia, the server for s3 and s6. The
import should only take an hour or two. After the import, s4 will be
switched back to cassia, which will fix the problem with user databases
being on the wrong server.
This issue is being tracked in JIRA as MNT-436.
- river.
Hi,
On Monday morning we will switch these clusters from the current server
(hyacinth) to cassia due to previously announced problems with hyacinth.
This will involve a couple of hours of read-only time while the user
databases are copied. There should be no interruption to wiki database
access.
This issue is being tracked in JIRA as MNT-423.
- river.