Of course, I do not mean the current s3 replication problems. The issue is that the replag value has been too unstable lately, ever since the new array was plugged in.

The reason I'm asking is that, according to River's original post, the roots are taking part in the decision making for hardware upgrades. So the question here is: do we really need to split s1 and s3 onto different servers, or can yarrow be sped up enough to keep replication in shape?
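For what it's worth, the replag number I'm watching is simply what MySQL reports as Seconds_Behind_Master on the slave. A rough sketch of how it can be polled on a host like yarrow (the host name, credentials and the 60-second threshold below are only placeholders, not the actual toolserver monitoring):

import time
import pymysql

# Poll SHOW SLAVE STATUS on the slave and watch Seconds_Behind_Master.
# The value is NULL (None) whenever the slave threads are not running.
conn = pymysql.connect(host="yarrow", user="watcher", password="secret",
                       cursorclass=pymysql.cursors.DictCursor)

while True:
    with conn.cursor() as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
    lag = status.get("Seconds_Behind_Master") if status else None
    if lag is None:
        print("replication stopped (or host is not a slave)")
    elif lag > 60:  # placeholder threshold
        print("replag unstable: %d seconds behind master" % lag)
    time.sleep(30)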


2008/3/8, MinuteElectron <minuteelectron@googlemail.com>:


Mashiah Davidson wrote:
> Do you have an idea, Mark, on how to improve the replication process in
> terms of hw or sw, especially for s3 replication?

Another point is that the current replication problems are mainly caused
by the main Wikimedia servers. Replication usually works fine until
something happens on the main servers (such as masters/slaves being
switched or binlogs being deleted), and then everything breaks. Such
problems are difficult to combat; presumably what is needed is better
communication between the toolserver and main server teams -- outside the
realms of hardware and software.

MinuteElectron.
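Incidentally, the kind of breakage MinuteElectron describes (master/slave switches, purged binlogs) is usually easy to tell apart from an ordinary lag spike, because the slave threads stop and leave an error behind. A rough check along the same lines as the sketch above (again, connection details are placeholders):

import pymysql

conn = pymysql.connect(host="yarrow", user="watcher", password="secret",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone() or {}

io_ok = status.get("Slave_IO_Running") == "Yes"
sql_ok = status.get("Slave_SQL_Running") == "Yes"
if io_ok and sql_ok:
    print("replication running, %s s behind" % status.get("Seconds_Behind_Master"))
else:
    # A binlog purged on the master typically shows up in the last error,
    # e.g. "Could not find first log file name in binary log index file".
    print("replication broken: %s" % (status.get("Last_IO_Error")
                                      or status.get("Last_Error")))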



_______________________________________________
Toolserver-l mailing list
Toolserver-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/toolserver-l