I've tweaked pliny's apache config a bit; keepalive is now off and the max connections setting is turned up to the max of 255. This should keep connections from piling up and forcing new ones to queue up quite so often.
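In Apache 1.3 httpd.conf terms, the tweak above would look something like this (a sketch, not pliny's actual config):

```apache
# Don't hold connections open between requests; under load, idle
# keepalive connections tie up child processes.
KeepAlive Off

# Allow up to 255 simultaneous child processes; Apache 1.3's
# compiled-in HARD_SERVER_LIMIT (256 by default) bounds this.
MaxClients 255
```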
Pliny's still having those 'bad sector' errors occasionally. Obviously it's a little worrying. :) Someone suggested a nice thorough fsck could get the bad sectors appropriately marked and thus worked around; this could help once we've got something to take over for pliny sensibly and we can take it offline...
Ursula is still straining with the databases; this is mostly disk-bound, on a slow disk. I don't really want to add the web serving on top of it.
If we get geoffrin back online and working soon, ursula can take over for pliny while it's being fixed.
And, of course, there's the New Machine; if the overheating problems are resolved and it works in the near future, I'd like to get it to replace larousse in serving en.wikipedia.org. This will get rid of the need for en2 until we get an all-around server farm set up.
-- brion vibber (brion @ pobox.com)
"BV" == Brion Vibber brion@pobox.com writes:
BV> Pliny's still having those 'bad sector' errors BV> occasionally. Obviously it's a little worrying. :) Someone BV> suggested a nice thorough fsck could get it appropriately BV> marked bad and thus worked around; this could help once we've BV> got something to take over for pliny sensibly and we can take BV> it offline...
Gotta replace the disk. Modern hard disks mark bad sectors internally and remap requests to "spare" sectors. Only when a disk runs out of spare sectors will it report any sectors as "bad".
So, if you're getting bad sector errors, this means the disk has seriously degraded, and it's time to replace it.
~ESP
On Fri, 09 Jan 2004 08:36:51 -0500, Evan Prodromou wrote:
> Gotta replace the disk. Modern hard disks mark bad sectors internally
> and remap requests to "spare" sectors. Only when a disk runs out of
> spare sectors will it report any sectors as "bad".
>
> So, if you're getting bad sector errors, this means the disk has
> seriously degraded, and it's time to replace it.
XFS is a good thing to move to while you're at it: much less disk thrashing, better performance than ext2 and ext3, and very stable. Not as quick as ReiserFS at accessing small files, but more stable than all the other filesystems. I'm running it on all my machines and have never had any trouble, even on a notebook that often gets mistreated ;-)
Gabriel Wicke
On Fri, 9 Jan 2004, Evan Prodromou wrote:
> Gotta replace the disk. Modern hard disks mark bad sectors internally
> and remap requests to "spare" sectors. Only when a disk runs out of
> spare sectors will it report any sectors as "bad".
Only for WRITES. Read remapping is almost always disabled -- otherwise you'd never know the disk was failing, or have a chance to repair damaged data. If the sector is readable without error, it's not an error and thus would never be remapped. (See also: the chicken and the egg.)
> So, if you're getting bad sector errors, this means the disk has
> seriously degraded, and it's time to replace it.
I doubt it. I'd replace it as soon as possible, but a full surface scan (e2fsck -c -c) will force those sectors to be remapped. Of course, one could also low-level format the drive and keep on truckin', but that'll destroy all the data on the drive and take something like 3 hours.
(When the disk is out of spare space, it'll return a different error. And most drives have 2 spare sectors per physical track which amounts to a lot of space.)
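A sketch of the surface-scan approach described above; the device name is hypothetical, and the command is constructed rather than invoked, since it has to be run by hand against an unmounted filesystem.

```python
# Hypothetical partition; substitute pliny's real device.
device = "/dev/hda1"

# e2fsck -c -c runs badblocks in non-destructive read-write mode and
# adds any unreadable sectors to the filesystem's bad-block list, so
# they are never allocated again.  Writing to them also gives the
# drive a chance to remap them to spare sectors internally.
cmd = ["e2fsck", "-c", "-c", device]

print(" ".join(cmd))  # run by hand, with the filesystem unmounted
```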
--Ricky
On Fri, 09 Jan 2004 05:30:52 -0800, Brion Vibber wrote:
> Ursula is still straining with the databases; this is mostly
> disk-bound, on a slow disk. I don't really want to add the web serving
> on top of it.
We should get a lot of RAM for the new machines!
> If we get geoffrin back online and working soon, ursula can take over
> for pliny while it's being fixed.

> And, of course, there's the New Machine; if the overheating problems
> are resolved and it works in the near future, I'd like to get it to
> replace larousse in serving en.wikipedia.org. This will get rid of the
> need for en2 until we get an all-around server farm set up.
Great!
On Fri, 09 Jan 2004 05:30:52 -0800, Brion Vibber wrote:
Brion,
how much RAM is taken up by Apache on Larousse and Pliny? We won't need any disk buffering in the Apache cluster, so it's important to know how much Apache/PHP itself takes up.
Do we need 512 MB or 1 GB on the Apaches?
On Fri, Jan 09, 2004 at 03:09:41PM +0100, Gabriel Wicke wrote:
> Brion,
>
> how much RAM is taken up by Apache on Larousse and Pliny? We won't
> need any disk buffering in the Apache cluster, so it's important to
> know how much Apache/PHP itself takes up.
>
> Do we need 512 MB or 1 GB on the Apaches?
This is from larousse:

    15:23 TimStarling:              total       used       free     shared    buffers     cached
    15:23 TimStarling: Mem:      2065112    2000424      64688          0     361848     745996
    15:23 TimStarling: -/+ buffers/cache:    892580    1172532
    15:23 TimStarling: Swap:     1020088     506284     513804
So it's using 900 MB of real memory. Swap is used, but the vmstats I've seen all had very small si/so rates (0 most of the time, 5 maximum), so this is no issue.
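As a sanity check on that figure, the "-/+ buffers/cache" line in the free output is just used minus buffers and cache (numbers in KB, copied from the paste above):

```python
# Numbers (in KB) from the `free` output on larousse quoted above.
total, used, free_kb = 2065112, 2000424, 64688
buffers, cached = 361848, 745996

# Buffers and page cache are reclaimable, so "real" application memory
# is what's left of "used" after subtracting them back out.
app_used = used - buffers - cached   # the "-/+ buffers/cache" used column
app_free = free_kb + buffers + cached

print(app_used, app_free)  # 892580 1172532, i.e. roughly 870 MB in use
```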
Thinking of moving more things over to memcached, 1 GB seems to be the absolute minimum to me, I'd prefer 2GB.
Regards,
JeLuF
On Fri, 09 Jan 2004 15:30:50 +0100, Jens Frank wrote:
> So it's using 900 MB of real memory. Swap is used, but the vmstats
> I've seen all had very small si/so rates (0 most of the time, 5
> maximum), so this is no issue.
OK, but this is on a dual CPU; a single-CPU Apache would run fewer threads. Would this halve Apache's memory?
> Thinking of moving more things over to memcached, 1 GB seems to be
> the absolute minimum to me, I'd prefer 2GB.
What exactly does memcached do? If it's caching the full page, then we have the Squids. If it caches common elements, then it won't need a lot of RAM. Each page should be produced once and then randomly picked by any of the Apaches (especially if we move to ESI at some stage), so I don't see a lot of gain in memcached. Sure it would be faster for a single machine, but I doubt it will be faster than two small machines that cost the same.

From the above I would expect 1 GB to be really on the safe side.
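For what it's worth, memcached stores arbitrary objects (parser output, session data, and so on), not just whole pages. The access pattern can be sketched with a plain dict standing in for the shared daemon; the key scheme and render function here are made up:

```python
# A dict standing in for memcached: the real thing is a network daemon
# shared by all the Apaches, but the get/set pattern is the same.
cache = {}

def render_page(title):
    # Hypothetical expensive step: parsing wikitext, database queries.
    return f"<html><body>{title}</body></html>"

def get_page(title):
    # Get-or-compute: only the first request pays the rendering cost;
    # later requests, from any Apache, are served from the cache.
    key = f"page:{title}"
    page = cache.get(key)
    if page is None:
        page = render_page(title)
        cache[key] = page
    return page

first = get_page("Main_Page")   # rendered and cached
second = get_page("Main_Page")  # cache hit, no re-render
```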
On Fri, Jan 09, 2004 at 03:48:22PM +0100, Gabriel Wicke wrote:
> On Fri, 09 Jan 2004 15:30:50 +0100, Jens Frank wrote:
>> So it's using 900 MB of real memory. Swap is used, but the vmstats
>> I've seen all had very small si/so rates (0 most of the time, 5
>> maximum), so this is no issue.
>
> OK, but this is on a dual CPU; a single-CPU Apache would run fewer
> threads. Would this halve Apache's memory?
Right, I forgot about the dual CPU. 1 GB per new box should do the job.
wikitech-l@lists.wikimedia.org