On Fri, Aug 8, 2014 at 4:15 PM, Giuseppe Lavagetto <glavagetto@wikimedia.org> wrote:
On 08/08/14 14:36, Ori Livneh wrote:
On Tue, Aug 5, 2014 at 6:53 PM, Ori Livneh <ori@wikimedia.org> wrote:
On either Thursday or Friday of this week, Giuseppe Lavagetto (of the Wikimedia TechOps team) and I are planning to migrate https://test.wikipedia.org/ (testwiki) to HHVM. [snipped]
{{done}} :)
Some poor man's benchmarks, just to see whether we're on the right path...
I ran some tests on both testwiki and one "traditional" appserver running Zend PHP. As that appserver was out of rotation, it was getting exactly zero traffic; testwiki receives negligible traffic, so that won't affect our results either.
We decided to run the simplest test of all: requesting the same page, testwiki's main page, a lot (a LOT) of times, bypassing all the outer cache layers so that we measure exactly the performance and throughput of the appservers. When reading the results, bear in mind that the HHVM appserver is heavily under-optimized at the moment, and I'm confident that in the coming weeks we'll be able to squeeze quite a bit more performance out of it. Also keep in mind that we still have to road-test HHVM for bugs and stability, so we are not going to roll it out everywhere over the weekend :)
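To make the methodology concrete, here is a minimal sketch of this kind of test. In practice a tool like ab already does the request/concurrency part for you; the backend URL, Host header and request counts below are placeholders for illustration, not our actual setup:

    # Illustrative only: hammer one appserver directly with the same page,
    # bypassing the outer caches by talking to the backend and setting the
    # Host header ourselves. URL and host below are placeholders.
    import concurrent.futures
    import statistics
    import time
    import urllib.request

    BACKEND = "http://appserver.example/wiki/Main_Page"  # placeholder backend URL
    HOST = "test.wikipedia.org"
    REQUESTS = 1000      # 10000 for the load test below
    CONCURRENCY = 10     # 50 for the load test below

    def fetch(_):
        req = urllib.request.Request(BACKEND, headers={"Host": HOST})
        start = time.monotonic()
        with urllib.request.urlopen(req) as resp:
            resp.read()
        return (time.monotonic() - start) * 1000.0   # latency in ms

    t0 = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(fetch, range(REQUESTS)))
    elapsed = time.monotonic() - t0

    print("Mean time (ms):       %.0f" % statistics.mean(latencies))
    print("99th percentile (ms): %.0f" % latencies[int(len(latencies) * 0.99) - 1])
    print("Requests/s:           %.1f" % (REQUESTS / elapsed))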
So, here are the results:
- Speed test: measure the time taken to request the page 1000 times over just 10 concurrent connections:
                        HHVM   Zend   diff
  Mean time (ms):        233    441   -47%
  99th percentile (ms):  370    869   -57%
  Requests/s:             43   22.6   +90%
HHVM is clearly faster, a lot faster: its 99th percentile is below the mean response time for Zend. Note that the load generated in this scenario is comparable to the everyday load on one appserver.
- Load test: measure how much throughput we obtain when hogging the appserver with 50 concurrent requests, for a grand total of 10000 requests. What I wanted to test in this case was the performance degradation and the system's resource consumption:
                         HHVM      Zend     diff
  Mean time (ms):         355       906     -61%
  99th percentile (ms):   791      1453     -45%
  Requests/s:             141      55.1    +156%
  Network (MB/s):          17         7    +142%
  RAM used (GB):         5 (1)    11 (4)
  CPU usage (%):       90 (75)  100 (90)
The two figures for RAM show the total RAM occupied and, in parentheses, the amount actively occupied by MediaWiki; for CPU, the total and the user-dedicated CPU usage. These numbers show that the Zend appserver is clearly over capacity, while the HHVM one is only nearing its limits.
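In case anyone wants to reproduce this, here is a rough sketch of how the split above (total vs. MediaWiki-only RAM, total vs. user CPU) could be sampled during a run. psutil and the process name are assumptions for the sake of illustration, not a description of what we actually ran:

    # Illustrative only: sample overall RAM/CPU and the share attributable to
    # the appserver processes ("hhvm" here; it would be the PHP/Apache workers
    # on a Zend appserver). psutil is an assumption for this sketch.
    import time
    import psutil

    def sample(proc_name="hhvm"):
        vm = psutil.virtual_memory()
        cpu = psutil.cpu_times_percent(interval=1)   # one-second CPU sample
        app_rss = 0
        for p in psutil.process_iter():
            try:
                if proc_name in p.name():
                    app_rss += p.memory_info().rss
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        return {
            "ram_total_gb": (vm.total - vm.available) / 2.0**30,  # RAM in use overall
            "ram_app_gb": app_rss / 2.0**30,                      # RAM held by the appserver
            "cpu_total_pct": 100.0 - cpu.idle,                    # total CPU usage
            "cpu_user_pct": cpu.user,                             # user-dedicated CPU usage
        }

    while True:
        print(sample())
        time.sleep(5)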
This benchmark is very crude and I repeated the measurements just a few times (though the results were pretty stable across runs). Still, I think we can safely conclude that HHVM delivers the kind of performance boost we expected; the gain in requests/s in the load test is probably the most important thing to highlight here. I won't take these numbers as projections of real-world MediaWiki usage, but we're pretty close to an accurate test.
Cheers,
Giuseppe
Giuseppe Lavagetto
Wikimedia Foundation - TechOps Team