On Wed, Nov 23, 2005 at 08:28:37PM +0100, Jürgen Herz wrote:
Brion Vibber wrote:
That reads fine and the experience is quite fast. Thanks for the work!
What I don't understand is why Ganglia only shows ~3.3 TB of disk_total
while there should be about 4.8 TB.
The RAID eats up some of the raw disks' space with the redundancy.
Either that
or I've lost track of something. ;)
Hm, shouldn't 6 of 8 disks (8 - 1 for parity - 1 spare) per RAID be
available in a RAID 5? So each array should provide 2400 GB (2232 GiB,
don't know if Ganglia's GB really are GB or wrongly named GiB).
Sorry, the wikitech page about amane was not up to date.
Current layout:
There are three SATA RAID controllers:
Ctl   Model     Ports   Drives   Units   NotOpt   RRate   VRate   BBU
------------------------------------------------------------------------
c0    9500S-8   8       8        1       0        4       4       OK
c1    9500S-8   8       8        1       0        4       4       OK
c2    7006-2    2       2        1       0        2       -       -
c2 is used for the OS, RAID 1, 60 GB.
c0 and c1 are the big array controllers:
Unit   UnitType   Status   %Cmpl   Stripe   Size(GB)   Cache   AVerify   IgnECC
------------------------------------------------------------------------------
u0     RAID-10    OK       -       64K      1490.07    ON      OFF       OFF
Port   Status   Unit   Size        Blocks      Serial
---------------------------------------------------------------
p0     OK       u0     372.61 GB   781422768   WD-WMAMY12171
p1     OK       u0     372.61 GB   781422768   WD-WMAMY12037
p2     OK       u0     372.61 GB   781422768   WD-WMAMY12168
p3     OK       u0     372.61 GB   781422768   WD-WMAMY12168
p4     OK       u0     372.61 GB   781422768   WD-WMAMY12168
p5     OK       u0     372.61 GB   781422768   WD-WMAMY12168
p6     OK       u0     372.61 GB   781422768   WD-WMAMY12025
p7     OK       u0     372.61 GB   781422768   WD-WMAMY12168
Name   OnlineState   BBUReady   Status   Volt   Temp   Hours   LastCapTest
---------------------------------------------------------------------------
bbu    On            Yes        OK       OK     High   0       xx-xxx-xxxx
On top of the two 1.5 TB RAID-10s there's LVM, providing a striped (RAID-0-like)
volume with 3 TB capacity.
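
For reference, a striped LV of that sort is set up roughly like this (just a
sketch; the device, volume group and LV names below are placeholders, not the
ones actually used on amane):

  pvcreate /dev/sda /dev/sdb                # the two RAID-10 units from c0 and c1
  vgcreate vg_storage /dev/sda /dev/sdb
  lvcreate -i 2 -I 64 -l 100%FREE -n lv_data vg_storage
                                            # -i 2 stripes across both PVs,
                                            # -I 64 sets a 64 KB stripe size

The -i 2 is what makes LVM stripe the logical volume across the two physical
volumes instead of simply concatenating them.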
Bonnie reported these figures for this configuration:
              --------Sequential Output--------- ---Sequential Input---  --Random---
              -Per Char- ---Block--- --Rewrite-- -Per Char- ---Block---  --Seeks----
Machine    MB K/sec %CPU K/sec  %CPU K/sec %CPU  K/sec %CPU K/sec  %CPU   /sec %CPU
amane   10000 59312 99.9 337711 84.0 101595 21.8 38235 66.8 179973 20.2 1583.2  2.6
real 10m46.529s
user 5m23.604s
sys 1m36.116s
This was the fastest available configuration with redundancy. Only RAID 0 read faster. The server has 8 GB of memory, so a 10 GB test file size was chosen.
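
For anyone wanting to reproduce the numbers: the output above is the classic
Bonnie format, so the run would have looked roughly like the following (the
test directory is a placeholder and the exact flags may differ from what was
actually used):

  time bonnie -d /mnt/test -s 10000 -m amane   # 10000 MB test file on the striped LV

The -s size is deliberately larger than the 8 GB of RAM so the block I/O
figures measure the disks rather than the page cache.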
Regards,
JeLuF
Jens Frank wrote:
The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)
Hm, shouldn't 6 of 8 disks (8 - 1 for parity - 1 spare) per RAID be available in a RAID 5? So each array should provide 2400 GB (2232 GiB, don't know if Ganglia's GB really are GB or wrongly named GiB).
Sorry, the wikitech page about amane was not up to date.
[...]
Ah yes, that's an explanation and matches the 3,257 GB from Ganglia better.
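
Back of the envelope (assuming the 3ware "GB" column is really GiB and Ganglia
reports decimal GB):

  781422768 blocks x 512 B   ~ 372.6 GiB per disk
  8 disks, mirrored          ~ 1490 GiB usable per RAID-10 unit
  2 units striped via LVM    ~ 2980 GiB ~ 3200 GB
  + 60 GB OS mirror          ~ 3260 GB, close to the 3,257 GB Ganglia shows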
This was the fastest available configuration with redundancy. Only RAID 0 read faster. The server has 8 GB of memory, so a 10 GB test file size was chosen.
That leaves less space available, but I was uneasy about RAID 5 from a data-safety point of view anyway.
Thanks, Jürgen