Since everybody's so frustrated about this, I'm going to go ahead and force the issue with the upload server. I'll be disabling uploads and turning off the upload.wikimedia.org web server for a few hours so we can get everything moved over and totally copied once and for all.
Alas this'll mean not seeing images for a few hours, but it should finally be nicer after this. :D
http://meta.wikimedia.org/wiki/November_2005_image_server
-- brion vibber (brion @ pobox.com)
Brion Vibber wrote:
> Since everybody's so frustrated about this, I'm going to go ahead and force the issue with the upload server. I'll be disabling uploads and turning off the upload.wikimedia.org web server for a few hours so we can get everything moved over and totally copied once and for all.
> Alas this'll mean not seeing images for a few hours, but it should finally be nicer after this. :D
Data's finally all done copying (gar, what overloaded servers those were :P); arranging stuff now to put the new server into use.
-- brion vibber (brion @ pobox.com)
Brion Vibber wrote:
> Data's finally all done copying (gar, what overloaded servers those were :P); arranging stuff now to put the new server into use.
Up and running!
There's 2.5 terabytes free on the server's disk array so we should be set for a few more months at least. ;)
There is somewhat more load on the server from NFS work than we'd like; there may be some bugs in the image data caching, or extra checks are being made that don't have to be. But for now that's not a big problem, as the machine handles it well.
When we rearrange the image storage system there should be much less need to hit the disk over NFS, so that load should go down in the future, making more room for real growth.
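If anyone wants to poke at the NFS chatter in the meantime, nfsstat breaks it down by operation; a quick sketch (which counters will turn out to matter is a guess until it's actually profiled):

    # On the image server (NFS server side): cumulative per-op counts.
    # A large getattr/access share would point at redundant metadata checks.
    nfsstat -s

    # On an Apache box (NFS client side), the same traffic from the other end:
    nfsstat -c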
The server's currently pumping out about 200 objects per second across 8 lighttpd workers. (Multiple workers keep I/O blocking from halting everything; with 4 CPU cores and a lot of spindles you don't want to stall on just one file!)
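For anyone wanting to reproduce the setup, the worker fan-out is a single directive in lighttpd's config. A minimal sketch assuming lighttpd 1.4 syntax, not a quote of the actual production lighttpd.conf (and note lighttpd's workers are forked processes rather than threads):

    # lighttpd.conf (illustrative excerpt)
    # Fork 8 workers; a worker blocked on a slow disk read then stalls
    # only its own requests instead of the whole server.
    server.max-worker = 8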
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-striped  3.0T  521G  2.5T  18% /export
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free    buff   cache   si   so    bi    bo    in    cs us sy id wa
 5  1    192  27844 1218704 5617736    0    0  3296   108  6282 16359  1  7 52 40
 4  0    192  27328 1217796 5618904    0    0  3500   161  6308 14198  1  7 53 39
 0  0    192  25832 1217348 5620912    0    0  2905    75  6027 14952  1  7 61 31
-- brion vibber (brion @ pobox.com)
Brion Vibber wrote:
>> Data's finally all done copying (gar, what overloaded servers those were :P); arranging stuff now to put the new server into use.
> Up and running!
> There's 2.5 terabytes free on the server's disk array so we should be set for a few more months at least. ;)
That sounds good, and the site feels quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total when there should be about 4.8 TB.
> When we rearrange the image storage system there should be much less need to hit the disk over NFS, so that load should go down in the future, making more room for real growth.
Can I read about that planned rearrangement anywhere?
Regards, Jürgen
Jürgen Herz wrote:
> Brion Vibber wrote:
>> There's 2.5 terabytes free on the server's disk array so we should be set for a few more months at least. ;)
> That sounds good, and the site feels quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total when there should be about 4.8 TB.
The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)
>> When we rearrange the image storage system there should be much less need to hit the disk over NFS, so that load should go down in the future, making more room for real growth.
> Can I read about that planned rearrangement anywhere?
There are some notes scribbled at http://www.mediawiki.org/wiki/1.6_image_storage
-- brion vibber (brion @ pobox.com)
Brion Vibber wrote:
>> That sounds good, and the site feels quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total when there should be about 4.8 TB.
> The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)
Hm, shouldn't 6 of the 8 disks (8 - 1 for parity - 1 hot spare) per RAID be available in RAID 5? So each array should provide 2400 GB (2235 GiB; I don't know whether Ganglia's GB are really GB or mislabeled GiB).
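To spell out the arithmetic I'm assuming (two arrays of eight disks at 400 GB each; the per-disk size is my inference, not something I've seen confirmed):

    $ echo '2 * (8 - 1 - 1) * 400' | bc     # usable GB across both arrays
    4800
    $ echo '4800 * 1000^3 / 1024^3' | bc    # the same expressed in GiB
    4470

Either way that's well above the ~3.3 TB Ganglia reports.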
>> Can I read about that planned rearrangement anywhere?
> There are some notes scribbled at http://www.mediawiki.org/wiki/1.6_image_storage
Thanks
Jürgen