Hi,
just a quick heads up that the replication lag for the enwiki database on the analytics s1 slave (s1-analytics-slave.eqiad.wmnet, db1047.eqiad.wmnet) is again >12 hours [1].
If you run jobs that rely on current enwiki data, you can temporarily switch to using analytics-store.eqiad.wmnet, which /currently/ does not suffer from replication lag. The usual research user has access to the enwiki database there.
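For example, a job can simply point its connection at analytics-store instead of the s1 slave. Below is a minimal sketch in Python, assuming pymysql is available and that the research user's credentials live in a MySQL defaults file such as ~/.my.cnf (adjust to wherever your credentials actually are):

  # Connect to analytics-store.eqiad.wmnet instead of
  # s1-analytics-slave.eqiad.wmnet while the latter is lagging.
  import pymysql

  conn = pymysql.connect(
      host="analytics-store.eqiad.wmnet",
      db="enwiki",
      read_default_file="~/.my.cnf",  # assumed credentials location
      charset="utf8",
  )
  with conn.cursor() as cur:
      cur.execute("SELECT COUNT(*) FROM recentchanges")
      print(cur.fetchone())
  conn.close()

Switching back later should only require changing the host name again.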
Best regards, Christian
P.S.: I have not filed an RT ticket this time, as we already knew that the s1 slave is suffering replication lag for the EventLogging tables. The only new development is that enwiki is now affected as well.
[1] Do not trust Ganglia's report of mysql_slave_lag being 0 for this host. Since the start of the EventLogging database migration, Ganglia has been reporting 0 for all MySQL metrics of this host, even for those that are not expected to be 0. That is likely a separate issue.
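If you want to double-check the lag yourself rather than rely on Ganglia, one rough sketch (assuming the research user can read enwiki.recentchanges on the host, and again assuming pymysql and a ~/.my.cnf defaults file) is to compare the newest recentchanges timestamp with the current UTC time:

  # Estimate replica freshness from the newest recentchanges row.
  import datetime
  import pymysql

  conn = pymysql.connect(
      host="s1-analytics-slave.eqiad.wmnet",
      db="enwiki",
      read_default_file="~/.my.cnf",  # assumed credentials location
  )
  with conn.cursor() as cur:
      cur.execute("SELECT MAX(rc_timestamp) FROM recentchanges")
      (latest,) = cur.fetchone()
  conn.close()

  # rc_timestamp is a MediaWiki timestamp of the form YYYYMMDDHHMMSS.
  if isinstance(latest, bytes):
      latest = latest.decode()
  newest = datetime.datetime.strptime(latest, "%Y%m%d%H%M%S")
  print("approximate replication lag:", datetime.datetime.utcnow() - newest)

This only approximates the lag (a genuine lull in edits would also show up), but it does not depend on the Ganglia metrics.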
Hi,
On Fri, May 02, 2014 at 07:56:58PM +0200, Christian Aistleitner wrote:
> just a quick heads up that the replication lag for the enwiki database on the analytics s1 slave (s1-analytics-slave.eqiad.wmnet, db1047.eqiad.wmnet) is again >12 hours [1].
during the Zürich Hackathon, replication caught up, and we are back to normal for the enwiki replication lag on the analytics s1 slave (s1-analytics-slave.eqiad.wmnet, db1047.eqiad.wmnet).
Best regards, Christian