A few quick questions you can ask yourself and basic things to look into:
1. Are you using a PHP opcode cache like APC? Is it configured properly (enabled, enough memory, etc.)?

2. Are you using Memcached (or something similar) for $wgMainCacheType? http://www.mediawiki.org/wiki/Manual:$wgMainCacheType (There's a minimal sketch of the relevant settings just after this list.)

3. Is your Squid caching everything it can? Look at your average Squid cache hit rates. Depending on your mix of anonymous and logged-in traffic, it should be around the 80-90% hit rate.

4. Do you have the MySQL query cache enabled and configured properly? We're around the 60-70% hit rate on the query cache. https://dev.mysql.com/doc/refman/5.0/en/query-cache-configuration.html

5. Are you giving MySQL enough memory? On a busy site MySQL can and will use pretty much as much memory as you give it. On a server shared with other services I'd be concerned about having enough memory for it, especially since you mention that MySQL appears to be the bottleneck.
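Regarding point 2, a minimal sketch of what that looks like in LocalSettings.php, assuming Memcached is already installed and listening on its default port on the wiki box (the server address below is a placeholder; adjust it to your layout):

  // LocalSettings.php -- route MediaWiki's main caches through Memcached
  $wgMainCacheType    = CACHE_MEMCACHED;   // main object cache
  $wgParserCacheType  = CACHE_MEMCACHED;   // rendered page output (parser cache)
  $wgMessageCacheType = CACHE_MEMCACHED;   // interface messages
  $wgMemCachedServers = array( '127.0.0.1:11211' );

If you have an opcode cache like APC installed, CACHE_ACCEL is an alternative for a single-server setup, but Memcached is easier to keep using once you split services across boxes.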
Regarding point 3: looking at your site and the X-Cache/X-Cache-Lookup response headers, you are not caching the actual page content for anonymous users, which is the most expensive part of a page load. I remember having the same issue on our site, and fixing it required mucking about in the MediaWiki code (I don't remember exactly what, but something to do with setting the cache headers for page content) as well as in Squid to implement the X-Cache-Vary header (which might not be required). I know that after doing this our Squid cache hit rate went from 50% to 80-90%.

Actually, looking at my site just now, we're at only a 50% Squid cache hit rate... it appears we somehow reverted our custom changes, so I'll have to look at what changed recently. If I find anything I can post it here, but it may not be for a few weeks, depending on when I have time and how long it takes.
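For reference, the usual Squid-related settings in LocalSettings.php look roughly like the sketch below. The Squid addresses and the max-age value are placeholders for illustration; with these set, MediaWiki marks anonymous page views as cacheable and sends PURGE requests to the listed Squids when pages change:

  // LocalSettings.php -- let Squid cache page content for anonymous users
  $wgUseSquid     = true;                              // emit s-maxage headers, send purges on edit
  $wgSquidServers = array( '10.0.0.2', '10.0.0.3' );   // placeholder addresses of the two Squid boxes
  $wgSquidMaxage  = 18000;                             // seconds a Squid may serve a cached page

After changing this, check the X-Cache headers again on an anonymous page view to confirm you're actually getting HITs.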
On 18 February 2014 17:22, David Chamberlain david@alaskawiki.org wrote:
I am sorry if this seems obvious, as I am new to these lists, but it looks like you are not leveraging caching.
On Tue, Feb 18, 2014 at 5:11 PM, David Gerard dgerard@gmail.com wrote:
rationalwiki.org is getting hammered again. It looks like MySQL is the busiest portion - seriously just doing a lot of work.
Our current arrangement is: one box for MySQL, Apache and Lucene (the latter reindexing weekly); two Squids; a load balancer. These are all virtual machines on Linode (who we like). The Apache and Squid boxes are Ubuntu 12.04 servers.
The *usual* thing when we get hammered is that Reddit discovers an amusing tumbleweed article. The squids take care of this, of course. But then something like the Bill Nye/Ken Ham debate happens; we score pretty highly in Google for skeptical material, so a wide variety of articles gets hit and MySQL has to work for a living.
So, what's a good approach to scaling up MySQL on a VM? Add more memory? Add more cores? (How does MySQL 5.5-ubuntu do for multicore?) We can trivially add more Squids, and we haven't doubled up on Apache, but shirley that won't be entirely unfeasible.
- d.
--
David Chamberlain
http://alaskawiki.org/
http://alaskawiki.org/index.php?title=Alaska
http://about.me/david.chamberlain
Mission: To be the largest and most accurate source of information about Alaska.
_______________________________________________
MediaWiki-l mailing list
MediaWiki-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-l