Cool, thanks for all of the info, it's been very helpful.
You could try hitting the 1.12 install with artificial traffic to populate the caches, but it's still not necessarily going to behave quite the same. For performance benchmarking you really want exactly the same load patterns on everything you're testing. How far you want to go in achieving this is up to you. At a minimum I'd copy the 1.9 DB twice to two separate dev databases, and upgrade one of them. You could then do benchmarking with ab or a similar tool.
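To guarantee the "exactly the same load patterns" part, one option alongside ab is a small script that replays an identical request sequence against both dev copies and compares timings. A minimal sketch (the hostnames, paths, and function names here are hypothetical, not part of any MediaWiki tooling):

```python
import time
import urllib.request

def replay(base, paths):
    """Fetch each path in order against `base`; return per-request times in seconds."""
    timings = []
    for path in paths:
        start = time.monotonic()
        with urllib.request.urlopen(base + path) as resp:
            resp.read()
        timings.append(time.monotonic() - start)
    return timings

def summarize(timings):
    """Approximate median and 90th-percentile request times."""
    ordered = sorted(timings)
    def pct(p):
        # Index into the sorted list, clamped to the last element.
        return ordered[min(len(ordered) - 1, int(p * len(ordered)))]
    return {"median": pct(0.5), "p90": pct(0.9)}

if __name__ == "__main__":
    # Hypothetical dev hosts: one upgraded 1.12 copy, one 1.9 copy of the same DB.
    paths = ["/index.php?title=Main_Page"]
    for base in ("http://dev-112.example", "http://dev-19.example"):
        print(base, summarize(replay(base, paths)))
```

ab is still the better choice for concurrency testing; a replay script like this is mainly useful for making sure both installs see the same pages in the same order.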
I'd like to be sure it's going to stand up to traffic before releasing it to
production. We've had upgrades before that had to be aborted because the
load on the servers spiked as soon as they were upgraded. :) Making multiple
copies of a 10GB DB is a bit of a pain, but I'll give it a shot! :)
It's almost certainly a bogus statistic. It could just reflect a long time waiting for a lock or something.
Cool. Would it likely be the same issue if I'm occasionally seeing LoadBalancer::getConnection take over 4 seconds in total on one page view?
In the replaceLinkHolders function, should $res and $varRes eventually be passed to a freeResult call? They don't seem to be; I'm not sure if this is intentional, and it seems to be the same way in 1.9.
It's mostly our skin calling getParentCategoryTree on wgTitle, one call to
set the breadcrumbs at the top of the page, another to set any meta tags
associated with the category, another to determine the top level category
for some UI features. Depending on the category structure, iterating over
the tree can make as many as 10 calls to the DB, so we'll likely start
caching that somewhere. I guess the Monobook skin doesn't show the category
tree or breadcrumbs, so this isn't as much of an issue for a typical
install.
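The caching we have in mind could be as simple as memoizing the computed tree per title with a short expiry. A minimal sketch of the idea (the fetch_tree function, cache shape, and TTL are hypothetical illustrations, not MediaWiki's API; in practice this would more likely go through the wiki's object cache / memcached):

```python
import time

# Hypothetical stand-in for the expensive tree walk that can issue ~10 DB queries.
def fetch_tree(title):
    return {"title": title, "parents": []}

_cache = {}          # title -> (expiry_timestamp, tree)
CACHE_TTL = 300      # seconds; slightly stale breadcrumbs are acceptable

def get_parent_category_tree(title, now=None, fetch=fetch_tree):
    """Return the cached tree for `title`, recomputing at most once per CACHE_TTL."""
    now = time.time() if now is None else now
    hit = _cache.get(title)
    if hit is not None and hit[0] > now:
        return hit[1]            # fresh cache hit: no DB queries
    tree = fetch(title)          # miss or expired: do the expensive walk once
    _cache[title] = (now + CACHE_TTL, tree)
    return tree
```

The same idea maps onto memcached: key on the page title plus a version number, and bump the version whenever category membership changes so stale trees get invalidated.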
Thanks again!