Hello all,
between 19:07 and 19:38 (UTC) the toolserver was not accessible from outside,
and using SQL from inside was also not possible (I guess several other things
failed too). The reason was that one of our high-availability servers half-
crashed, the other didn't notice it, and some services were split between
them.
Rebooting the half-crashed server fixed this.
FYI.
Sincerely,
DaB.
--
Userpage: [[:w:de:User:DaB.]] — PGP: 2B255885
Hello all,
the master switch of s5 [1] last night didn't go as smoothly as normal. I had
to restart the replication several times; this COULD have resulted in
corruption of one or more databases on the cluster. If you notice something
strange, please open a JIRA bug and I will look at it. In the worst case I
will have to reimport s5 at some time in the future.
Sincerely,
DaB.
[1] https://jira.toolserver.org/browse/MNT-1099
--
Userpage: [[:w:de:User:DaB.]] — PGP: 2B255885
Would love to know how that managed to take 22 minutes to search 17k
rows; call it a server hiccup, or somebody else's query locking the DB?
---------- Forwarded message ----------
From: <no-reply(a)toolserver.org>
Date: Sun, Sep 25, 2011 at 10:19 AM
Subject: [TS] Killed SQL-Task 36551515 on db-server thyme
To: overlordq(a)toolserver.org
Hello overlordq,
a MySQL query of yours was killed because you didn't mark it as
SLOW_OK and it had run for 1328 seconds, which was longer than
allowed.
You can find the query below. Please also have a look at [1] for
information on how you can avoid your queries being killed. Maybe you can
optimize the query too?
The replication lag at kill-time was 204s.
Sincerely,
Query-Killer.
This eMail was sent automatically, please don't reply.
SELECT ns_id FROM toolserver.namespace WHERE ns_name = '' AND dbname =
'enwiki_p'
[1] https://wiki.toolserver.org/view/Database_access#Slow_queries_and_the_query…
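The wiki page above describes how to exempt intentionally long-running queries
from the killer. A minimal sketch, assuming the killer scans the query text for
a SLOW_OK marker in a comment (the exact syntax is on the wiki page [1]):

```sql
-- Sketch: include SLOW_OK in a comment so the query killer leaves the
-- query alone; marker placement/syntax per the wiki page is assumed.
SELECT /* SLOW_OK */ ns_id
FROM toolserver.namespace
WHERE ns_name = ''
  AND dbname = 'enwiki_p';
```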
I have a question about something that's always annoyed me (not actually a problem so not worthy of a JIRA request IMO).
My MyISAM tables are collated as latin1_swedish_ci. As far as usage in my
HTML reports goes, everything works great, so I'm happy with this setup.
However, if I browse one of these tables in phpMyAdmin (and, under the very
unnoticeable +Options link in the browse view, uncheck "Show binary contents
as HEX"), I don't get the proper display of values.
For instance a table whose column is stored as
dab_title varchar(255) binary NOT NULL default ''
will display as
Jarosław_Dąbrowski
when I want
Jarosław Dąbrowski
and indeed that's what I get when I display it in a report. Again, not a
problem, but I was wondering if anyone knows an easy way to get
phpMyAdmin to display the contents properly.
Thanks,
Jason
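One workaround (a sketch, not specific to phpMyAdmin) is to reinterpret the
bytes in the query itself: MediaWiki-style tables store UTF-8 bytes in
latin1/binary columns, so casting to BINARY and converting to utf8 makes the
text readable. The table name here is hypothetical:

```sql
-- Sketch: reinterpret latin1/binary bytes as UTF-8 for display.
-- "dab" is an assumed name for the table holding dab_title.
SELECT CONVERT(CAST(dab_title AS BINARY) USING utf8) AS dab_title
FROM dab;
```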
Hey all,
I've gotten this failure message the last two times my bot has tried to
run. Does anyone know what might be causing this? I don't recognize the
script it mentions.
----
User:Hersfold
hersfoldwiki(a)gmail.com
-------- Original Message --------
Subject: Cron cronsub -s HersfoldArbClerkBot
Date: Thu, 22 Sep 2011 00:00:31 +0000 (UTC)
From: root(a)toolserver.org (Cron Daemon)
To: hersfold(a)toolserver.org
Unable to run job: got no response from JSV script "/sge62/default/common/jsv.sh".
Exiting.
hello,
I can't connect anymore to frwiki_p with the command line
sql frwiki_p
but I am able to connect to enwiki_p. Is this due to some grants being
removed, or is it a problem with the frwiki_p database?
Regards
Hercule
Hello all,
on Tuesday at 12 o'clock UTC I will move s4-user from its temporary host back
to the correct place. It will take a few hours, but I can't say how many
exactly (it depends on how much data is in the user-databases). During this
time the user-databases on s4 (commons) will be read-only. After the import is
done, a config change will also be needed, which will kill all database
connections (on every host to every database).
s4-rr will not be affected (besides the config-change killing). So if you have
a tool which only reads commons, but doesn't use a user-database, make sure
to use
commons-p.rrdb.toolserver.org instead of
commons-p.userdb.toolserver.org (see [1] for details) and you should not be
affected (besides the config-change killing).
I will send an eMail when the move is done.
If you have any questions, feel free to ask on the mailing list.
Sincerely,
DaB.
[1] https://wiki.toolserver.org/view/Database_access#By_database_name
--
Userpage: [[:w:de:User:DaB.]] — PGP: 2B255885
Hi there,
I have been asked to help with porting MediaWiki to HipHop PHP and setting up
an example. It requires a large amount of CPU and disk, and I would like to
know if we can use the toolserver for development. This is for compiling and
testing, not a full deployment; I would like to see if it can be run with some
form of stability.
Ideally we would have a Debian-based GNU/Linux OS to run on.
thanks,
mike
--
James Michael DuPont
Member of Free Libre Open Source Software Kosova http://flossk.org
A few hours ago I issued the following command in commonswiki_p
mysql> select * from logging where log_namespace=6 and
log_title='Estatuas_y_fuentes_de_La_Granja_de_San_Ildefonso_1.jpg' limit 10;
Empty set (1 hour 5 min 33.53 sec)
Why does it take so long? There should be an index on it, which could be
used to resolve the whole query as empty:
> CREATE INDEX /*i*/page_time ON /*_*/logging (log_namespace,
log_title, log_timestamp);
If we explain the select
> mysql> explain select * from logging where log_namespace=6 and
> log_title='Estatuas_y_fuentes_de_La_Granja_de_San_Ildefonso_1.jpg'
> limit 10;
> +----+-------------+---------+------+---------------+-----------+---------+-------+----------+-------------+
> | id | select_type | table   | type | possible_keys | key       | key_len | ref   | rows     | Extra       |
> +----+-------------+---------+------+---------------+-----------+---------+-------+----------+-------------+
> |  1 | SIMPLE      | logging | ref  | page_time     | page_time | 4       | const | 13607245 | Using where |
> +----+-------------+---------+------+---------------+-----------+---------+-------+----------+-------------+
> 1 row in set (0.00 sec)
it does show the index, but the key_len is only 4. It seems it is using
page_time only for log_namespace and not for log_title, so it needs to
scan 13607245 rows.
Compare that with my local server, where that query provides a key_len
of 261.
Is the index set correctly? Why is mysql not taking log_title from the
index into account?
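For what it's worth, the two key_len values line up with the column sizes if
one assumes latin1 (1 byte per character) and NOT NULL columns; a small sketch
of the arithmetic:

```python
# key_len is the number of bytes of the composite index the optimizer
# actually uses. For page_time (log_namespace, log_title, log_timestamp),
# assuming latin1 and NOT NULL columns:
INT_BYTES = 4                                # log_namespace: INT NOT NULL
VARCHAR_LEN_PREFIX = 2                       # length bytes MySQL adds for VARCHAR
TITLE_BYTES = 255 * 1 + VARCHAR_LEN_PREFIX   # log_title: VARCHAR(255), latin1

namespace_only = INT_BYTES                       # what EXPLAIN shows here
namespace_and_title = INT_BYTES + TITLE_BYTES    # what the local server shows

print(namespace_only, namespace_and_title)  # → 4 261
```

So a key_len of 4 really does mean only log_namespace is being used, while 261
corresponds to log_namespace plus the full log_title prefix.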
I first thought that perhaps the master wasn't using a full index, and
that's what the toolserver replicated,
but the index in the master looks complete: http://pastebin.com/vjyjZ7Y2
I also tried fetching the page_id from page using page_namespace and
page_title, which is fast, and then
searching logging using log_page (indexed by log_page_id_time), but
page_id is missing from the view.
Hello all,
soon the partition where /home is stored will be full (only 5 GB are left).
Below you will find the list of the top 25 space users. If you are on the
list, please really look at why you need so much space and clean up if
possible (the normal quota for home is 500 MB or 1 GB). If there is no
progress by Sunday (or if /home runs full), the roots will look into these
homes and clean them themselves.
If you are not on the list, it doesn't hurt to have a look at your home too
;-).
Clean-up means: delete, compress, or truncate it, or move it to the user-store
if other people can use it too (the user-store is not a waste dump for big
files no-one needs, of course ;-)).
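A quick way to see where the space goes (a sketch using standard coreutils;
set DIR to the directory you want to inspect, it defaults to your home):

```shell
# Sketch: list the largest entries under a directory, biggest first,
# so you can decide what to delete, compress, or move to the user-store.
DIR="${DIR:-$HOME}"
du -sk "$DIR"/* 2>/dev/null | sort -rn | head -25
```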
Sincerely,
DaB.
2.9G joegazz84
2.9G mzmcbride
3.0G danny_b
3.6G flacus
3.7G hydriz
3.9G rriver
4.8G saper
4.9G tparscal
5.1G cbm
6.8G werdna
7.8G mjbmr
8.0G hippietrail
8.4G voj
9.5G myst
11G kolossos
11G rsumi
12G wikitanvir
13G baptiste
17G dschwen
21G prolineserver
40G project
44G sk
70G daniel
90G grimlock
106G alebot
--
Userpage: [[:w:de:User:DaB.]] — PGP: 2B255885