As the servers are already struggling, have you wondered what server-side compression will cost in CPU consumption?
With the new upload limit of 5 MB this is a more serious issue. A middle ground could be to serve pictures of at most 1 MB where thumbnails are used, and leave the 5 MB pictures to a detail screen. The 1 MB pictures would be saved to disk. This would cost a lot of hard disk space, but it is easier to upgrade hard disks than processors.
On that subject, has a SAN ever been contemplated for Wikimedia?
Thanks. GerardM
Gerard.Meijssen wrote:
> As the servers are already struggling, have you wondered what
> server-side compression will cost in CPU consumption?
Are you referring to scaling of images or compression of HTML output?
> With the new upload limit of 5 MB this is a more serious issue. A
> middle ground could be to serve pictures of at most 1 MB where
> thumbnails are used, and leave the 5 MB pictures to a detail screen.
> The 1 MB pictures would be saved to disk. This would cost a lot of
> hard disk space, but it is easier to upgrade hard disks than processors.
A particular image only has to be scaled to a particular size once, ever. After that the scaled file already exists and is simply shoved out as bits, which is not really CPU-intensive; these files are also cached on the Squids, so frequently accessed images should not burden the Apache machines.
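
Roughly, the serving logic looks like this (a simplified sketch, not the actual MediaWiki code; the cache directory and the use of PIL are just for illustration):

import os
from PIL import Image  # illustrative; the real thing shells out to an image library

THUMB_DIR = "/var/thumbs"  # hypothetical cache directory

def get_thumbnail(source_path: str, width: int) -> str:
    """Return the path of a cached thumbnail, rescaling only on a cache miss."""
    name = os.path.basename(source_path)
    thumb_path = os.path.join(THUMB_DIR, f"{width}px-{name}")
    if not os.path.exists(thumb_path):       # scale once, ever
        img = Image.open(source_path)
        w, h = img.size
        img.resize((width, h * width // w)).save(thumb_path)
    return thumb_path                         # later hits are plain file reads

Every later request for the same size is a plain file read, and the Squid layer in front means most requests never even reach this code.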
> On that subject, has a SAN ever been contemplated for Wikimedia?
Don't know anything about that.
-- brion vibber (brion @ pobox.com)
On Sun, 2004-07-25 at 10:16 +0200, Gerard.Meijssen wrote:
> On that subject, has a SAN ever been contemplated for Wikimedia?
Yes, it has to some extent. I hope that RH gets the network-RAID-1 extensions to CLVM ready soon though, so that one can do a GFS RAID-1 across the Apaches' hard disks. Much cheaper than a SAN (essentially free), easier to scale, same availability.
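
To sketch the idea (illustrative only, not how CLVM/GFS actually work, and the mount points are made up): every write is mirrored to the local disk and to a replica on another Apache, so reads can be served locally and the data survives the loss of any one machine.

import os

REPLICAS = ["/mnt/local/gfs", "/mnt/node2/gfs"]  # hypothetical mount points

def mirrored_write(relpath: str, data: bytes) -> None:
    """Write the same data to every replica, fsyncing each copy,
    so the file survives the loss of any single node's disk."""
    for root in REPLICAS:
        path = os.path.join(root, relpath)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # don't report success until it's on disk

The "essentially free" part is that the disks are already in the Apache boxes; the cost is the extra write traffic across the network.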