On Thu, Jun 26, 2008 at 12:02 PM, Travis (wikiHow) <travis(a)wikihow.com> wrote:
> I did some more digging and something odd came up. I loaded the same set
> of 100 pages in 1.9 and 1.12 with profiling enabled, and in the profiling
> data, on average in 1.12 Database::doQuery took up 42.48% of the total
> time to process the page view, while in 1.9 it was more like 11.56%.
> This seems like a big jump. Is this expected for the new version?
I can't see why it would be. Are you running the two on the same
machine? If so, is one of them being run against a production
database and one against a testing database? If one is being used
more than the other, that one is going to have more cache hits and
work faster, so your production 1.9 site is of course going to run
faster than your testing 1.12 site, even on the same server.
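If you want to confirm the cache-warmth difference, MySQL keeps counters for it. A quick sketch (run against each server; add your own -h/-u/-p options — server and db names here are placeholders):

```shell
# Sketch: compare cache-hit counters on the two MySQL servers; the busy
# production instance should show a much higher hit ratio.
mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache_hits'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads'"
```

A high ratio of buffer pool read requests to actual reads means most lookups are being served from memory rather than disk.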
> 1.12 also seemed to have more DB calls per page view. I put in some
> debugging info, and 1.12 made about 72 calls on average while 1.9 made 60.
Well, the current figure on Wikipedia seems to be under 10 queries per
view, so maybe you should be looking into what extra queries are being run.
There are some features that can cause nasty query spam if things
aren't configured right (like interwikis).
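If it helps, MediaWiki can log every query it runs: set $wgDebugLogFile and $wgDebugDumpSql = true in LocalSettings.php, load one page in each version, and diff the logs. Counting is then just a grep; a rough sketch (the log contents below are faked so the snippet stands alone):

```shell
# Sketch: with $wgDebugDumpSql = true, each request's SQL ends up in the
# debug log; count the statements for a single page view like this.
# (Fake log lines stand in for a real request's output.)
printf 'SELECT ...\nSELECT ...\nUPDATE ...\n' > /tmp/mw-debug.log
grep -cE '^(SELECT|INSERT|UPDATE|DELETE|REPLACE)' /tmp/mw-debug.log   # prints 3
```

Clear the log between page loads so each count covers exactly one request.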
> I repaired all of the tables in the 1.12 database
Are you using MyISAM? If so, why? MediaWiki isn't really designed
for it at all; it uses InnoDB-specific features like primary key
clustering.
> and looked through the tables and they appear to have the proper indices
> set. To create the 1.12 db, I just dumped the 1.9 db, imported it into a
> new db, and ran the
Why don't you use maintenance/update.php from a shell? It's almost
certain to be more reliable.
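To check the engine question and redo the upgrade from a shell, something like this should work (the db name is the one from your test; the MediaWiki path is a placeholder):

```shell
# Sketch: see which storage engine each table in the 1.12 db uses
# (first column is the table name, second is the engine).
mysql -e "SHOW TABLE STATUS FROM wikidb_112" | awk 'NR > 1 {print $1, $2}'
# Convert a table to InnoDB if it turns out to be MyISAM:
mysql wikidb_112 -e "ALTER TABLE page ENGINE=InnoDB"
# Re-run the schema updater from the command line:
cd /path/to/mediawiki-1.12 && php maintenance/update.php
```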
> I don't know if this is a valid test, but I captured about 100 queries
> that 1.9 made, and ran them against both the 1.12 db and the 1.9 db:
>
> # time mysql wikidb_19 < /tmp/queries.sql > /dev/null
> # time mysql wikidb_112 < /tmp/queries.sql > /dev/null
>
> It seemed to take about twice as long on 1.12, so maybe this is an
Not sure; that difference might be noise. Is it consistent? Make sure
you repeat them both until you get a consistent figure, preferably on a
server that's doing nothing else; otherwise caching can skew the
results. Also, again, make sure that the databases are *both* cold, not
production databases.
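For the timing itself, a loop like this makes it easy to see whether the numbers settle down (the db names and query file are the ones from your test; this assumes bash for the `time` keyword):

```shell
# Sketch: run the captured query batch five times per database; ignore
# the first (cold) run and compare the steady-state times.
for db in wikidb_19 wikidb_112; do
  for run in 1 2 3 4 5; do
    echo -n "$db run $run: "
    ( time mysql "$db" < /tmp/queries.sql > /dev/null ) 2>&1 | grep real
  done
done
```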