Hi,
On Sun, Aug 26, 2012 at 5:53 PM, Dan Fisher <danfisher261@gmail.com> wrote:
> My situation: I'm on a shared server where they don't want me to go above certain CPU limits (CPU seconds per hour). I'm not able to install Squid, APC or memcached. Lately I've been having problems with CPU usage due to traffic surges and malicious bots. I don't want to spend more money on hosting if I don't have to, but that option is open if the server company thinks I should upgrade. I want to be a good client and not affect other users on the server.
Sounds like it's not a suitable host (or plan at that host) for a publicly accessible MediaWiki instance.
What's your budget? There are services like Linode where you can fairly cheaply get root access to your own virtual server (and therefore have no limits on memcached, APC, Squid, Varnish, etc.), and if you don't have the resources to manage that yourself, there are services that will host a MediaWiki instance for you.
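For example, on a virtual server where you control the stack, a typical caching setup in LocalSettings.php might look roughly like this. This is only a sketch, and it assumes memcached is installed locally on its default port; adjust to whatever backends you actually have available:

    # Sketch only -- assumes a local memcached on its default port.
    $wgMainCacheType    = CACHE_MEMCACHED;            # object cache in memcached
    $wgParserCacheType  = CACHE_MEMCACHED;            # cache rendered parser output
    $wgMessageCacheType = CACHE_MEMCACHED;            # cache interface messages
    $wgMemCachedServers = array( '127.0.0.1:11211' );

    # Or, if you only have an opcode/data cache like APC rather than memcached:
    # $wgMainCacheType = CACHE_ACCEL;

Either of those alone tends to cut CPU use noticeably compared with no object cache at all, and a front-end proxy like Squid or Varnish can be layered on top.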
> But I'm thinking if those calls to load.php were cut down, it would make it possible for Wikipedia to use fewer servers and would also make everyone else's sites run faster.
For Wikimedia wikis, those load.php URLs are already served by a cluster of caching proxies with a high hit rate; PHP only runs for cache misses.
I don't know offhand what the state of the file caching feature is (or whether it's actively maintained), but any improvements to it would probably be welcomed. Even so, you're probably better off using a caching strategy that's already used by a significant number of existing deployments; I think it's safe to say deployments using the file caching feature are not common. (But I may be wrong about that too! Please correct me if I am.)
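If you do want to try the file cache anyway, enabling it is only a few lines in LocalSettings.php. A sketch, with the caveats that the directory path here is just an example (it must exist and be writable by the web server) and that the file cache only helps for anonymous page views:

    # Sketch only -- the directory is an example and must be writable by the web server.
    $wgUseFileCache       = true;               # serve cached HTML to anonymous visitors
    $wgFileCacheDirectory = "$IP/cache/html";   # where cached pages are written
    $wgUseGzip            = true;               # store and serve the cached pages gzipped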
-Jeremy