the WMF had some problems with one of their servers (db45).
Unfortunately that's the server we replicate s5 and Wikidata from. Nosy
has already fixed s5, but for Wikidata it doesn't look as good – as far
as I can see, a re-import is needed.
For that reason I will stop the s5-replication
WEDNESDAY, 20:00 UTC
to create a dump, and will use Thursday to re-import Wikidata everywhere.
You can follow the process at .
Just a short notice to invite you to the workshop taking place at
Wikimania in Hong Kong from 7 to 11 August 2013:
There will be a 90-minute workshop (2×45 min) on Saturday, August 10th,
from 16:00 to 17:30:
Part 1: Presenting the Tool Labs
Part 2: Migrating from the Toolserver to Tool Labs
The makers of Wikimedia Labs and Tool Labs will explain the
infrastructure and answer your questions.
If you are at Wikimania, drop in! I'd be glad to meet you!
If you have a full schedule at Wikimania and won't make it to the
workshop, come talk to me at WMDE's booth or wherever you see me.
Internal IT management and Toolserver project management
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. (030) 219 158 260
Wikimedia Deutschland – Society for the Promotion of Free Knowledge e.V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as charitable by
the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
I've been having login issues for a few days now. When I try ssh
emijrp@login.toolserver.org I get password failures, and I'm almost
sure I have not changed my password...
Can you verify that my account is OK?
Hi all, I noticed today that I had several large core dumps in my home
directory, and two buried under my public_html directory. The
directories I found them in contained no executables and, as far as I
know, were not working directories for executables (just PHP scripts
used by the web server), so I can't imagine how they got there.
-rw------- 1 dcoetzee users 191463424 May 5 06:56 core
-rw------- 1 dcoetzee users 38797312 Sep 29 2012 core
This exceeded my hard quota without my knowledge. Any idea what's up
with this? Can I prevent it from recurring? Thanks.
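If the dumps come from crashing processes started under your account, one possible mitigation (a sketch, not official Toolserver guidance) is to set the core-file size limit to zero in your shell profile, so child processes cannot write core files at all:

```shell
# Sketch: disable core dumps for processes started from this shell.
# Putting this line in ~/.profile (or prefixing cron commands with it)
# would cover jobs started there too; whether the web server honors it
# depends on how it launches your scripts, which is an assumption here.
ulimit -c 0

# Prints the current core-file size limit; "0" means no core files
# will be written by child processes.
ulimit -c
```

Note that this only affects processes started after the limit is set; it does not remove core files that already exist.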
I have to enhance the networking for one of the Xen hosts.
To do that, I need to reconfigure the corresponding switch and the
host's network config.
This means the s2 database cluster will not be available for some time
– worst case, 2 hours.
I will do this tomorrow, Wednesday, 19:00–21:00 UTC.
The Commons copies on s3, s4 and s6 are corrupted. The versions on s1
and s5 are not affected.

mysql -h $server commonswiki_p -e "SELECT page_title FROM page where

On the good copies this returns:
| page_title |
| Dworzec_główny_w_Gdyni.jpg |

On the corrupted ones:
| page_title |
| Dworzec_główny.jpg |
It looks like a range of statements from November 2012 was not
replicated. Other tests:

SELECT log_id, log_title, log_type, log_action FROM
logging_ts_alternative where log_namespace=6 AND

The wrong copies don't have «| 48917341 | Dworzec_główny.jpg | move |
move |» (the other 7 entries appear in both).
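For reference, the truncated logging check above might have looked roughly like this; the log_title value is guessed from the quoted move entry, so treat the WHERE clause as an assumption:

```sql
-- Hypothetical reconstruction: the original WHERE clause is truncated
-- in the mail; log_title is guessed from the quoted «move» entry.
SELECT log_id, log_title, log_type, log_action
FROM logging_ts_alternative
WHERE log_namespace = 6
  AND log_title = 'Dworzec_główny.jpg';
```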
SELECT rev_id, rev_timestamp, rev_comment FROM revision where

The wrong copies don't have the last 4 revisions. Files like
'Dworzec_główny_w_Gdyni_2.jpg' are similarly affected, shown as just
Dworzec_główny_0X.jpg in the wrong dbs.
Comparing the number of revisions per day, the problem seems to have
been fixed on 2012-11-04, with the 3rd being the most noticeable day
(27746 revisions where there should have been 53974), a difference of
80 revisions the previous day, 200 on 2012-10-27...
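The per-day comparison could be done with a query along these lines (a sketch using the standard MediaWiki revision schema, where rev_timestamp is a YYYYMMDDHHMMSS string; the actual query and the exact timestamp range used are assumptions, not shown in the mail):

```sql
-- Sketch: count revisions per day in the suspect window; run against
-- each replica (e.g. via mysql -h sql-s1 commonswiki_p) and diff the
-- results between servers to spot missing replication ranges.
SELECT LEFT(rev_timestamp, 8) AS day, COUNT(*) AS revisions
FROM revision
WHERE rev_timestamp BETWEEN '20121025000000' AND '20121105235959'
GROUP BY day
ORDER BY day;
```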
It should be possible to just copy the db from s1. s1 and s5 are
probably sane thanks to the re-imports in January and April.
Unexpectedly, sql-s2 and sql-s7 don't have a commonswiki_p copy at all,
and sql-s3 and sql-s6 lack the commonswiki_p.revision table/view.
I created https://jira.toolserver.org/browse/TS-1667