... that makes sense ... (on the side, I was looking into a fall-back Ogg
video serving solution that would hit the disk issue) ... but in this
context you're right ... it's about saturating the network port.
Since network ports are generally pretty fast, a test on my laptop may
be helpful (running PHP 5.2.6-3ubuntu4.2 & Apache/2.2.11 on an Intel
Centrino 2GHz):
Let's take a big script-loader request served from "memory", say the
Firefogg advanced encoder JavaScript set
(from the trunk... I made the small modifications Tim suggested, i.e.
don't parse the JavaScript file to get the class list):
#ab -n 1000 -c 100
"http://localhost/wiki_trunk/js2/mwEmbed/jsScriptLoader.php?urid=18&class=mv_embed,window.jQuery,mvBaseUploadInterface,mvFirefogg,mvAdvFirefogg,$j.ui,$j.ui.progressbar,$j.ui.dialog,$j.cookie,$j.ui.accordion,$j.ui.slider,$j.ui.datepicker"
The result is:
Concurrency Level: 100
Time taken for tests: 1.134 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 64019000 bytes
HTML transferred: 63787000 bytes
Requests per second: 881.54 [#/sec] (mean)
Time per request: 113.437 [ms] (mean)
Time per request: 1.134 [ms] (mean, across all concurrent requests)
Transfer rate: 55112.78 [Kbytes/sec] received
So we are hitting nearly 900 requests per second on my two-year-old laptop.
Now if we take the static minified combined file, which is 239906 bytes
instead of 64019, we should of course get a much higher RPS going
directly through Apache:
#ab -n 1000 -c 100
http://localhost/static_combined.js
Concurrency Level: 100
Time taken for tests: 0.604 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 240385812 bytes
HTML transferred: 240073188 bytes
Requests per second: 1655.18 [#/sec] (mean)
Time per request: 60.416 [ms] (mean)
Time per request: 0.604 [ms] (mean, across all concurrent requests)
Transfer rate: 388556.37 [Kbytes/sec] received
Here we get nearly 400 MB/s and around 2x the requests per second...
At the cost of about half as many requests per second, you can send the
content to people at roughly a third of the size (i.e. faster for them).
Of course, none of this applies to the Wikimedia setup, where these
would all be Squid proxy hits.
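To put rough numbers on that trade-off, here is a quick back-of-the-envelope
in PHP using only the ab figures already quoted above (nothing measured
beyond those two runs):

<?php
// Rough ratios from the two ab runs above.
$loaderRps   = 881.54;            // PHP script loader, gzipped output
$loaderBytes = 64019000 / 1000;   // ~64 KB per response
$staticRps   = 1655.18;           // static file straight from Apache
$staticBytes = 240385812 / 1000;  // ~240 KB per response

printf( "Apache serves ~%.1fx more requests per second,\n", $staticRps / $loaderRps );
printf( "but each response is ~%.1fx bigger on the wire.\n", $staticBytes / $loaderBytes );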
I hope this shows that we don't necessarily "have to" point clients to
static files, and that PHP pre-processing of the cached output is not
quite as costly as Tim outlined (if we set up an entry point that first
checks the disk cache before loading all of the MediaWiki PHP).
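To make that concrete, here is a rough sketch of such an entry point. The
"script-cache" directory, cache-key scheme and file names are made up for
illustration (this is not the actual jsScriptLoader.php code); the idea is
just to readfile() a pre-built gzipped bundle before touching any of the
MediaWiki includes:

<?php
// Hypothetical minimal entry point (sketch only): serve a pre-built gzipped
// bundle straight off the disk cache, and only fall through to the full
// MediaWiki include path on a cache miss.
$cacheDir  = dirname( __FILE__ ) . '/script-cache';   // hypothetical cache directory
$urid      = isset( $_GET['urid'] )  ? $_GET['urid']  : '';
$classList = isset( $_GET['class'] ) ? $_GET['class'] : '';
$key       = preg_replace( '/[^0-9a-zA-Z_\-]/', '', $urid ) . '_' . md5( $classList );
$cacheFile = "$cacheDir/$key.js.gz";

$clientAcceptsGzip = isset( $_SERVER['HTTP_ACCEPT_ENCODING'] )
    && strpos( $_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip' ) !== false;

if ( $clientAcceptsGzip && is_file( $cacheFile ) ) {
    header( 'Content-Type: text/javascript; charset=UTF-8' );
    header( 'Content-Encoding: gzip' );
    header( 'Content-Length: ' . filesize( $cacheFile ) );
    header( 'Cache-Control: public, max-age=2592000' );
    readfile( $cacheFile );
    exit;
}

// Cache miss (or client without gzip support): this is where the real entry
// point would load MediaWiki, build/minify the combined script, write
// $cacheFile for next time, and output it.
header( 'HTTP/1.0 404 Not Found' );
echo "/* cache miss: bundle would be built here */";

On a cache hit the per-request work is little more than a stat(), a few
headers and a readfile(), which is why this shouldn't be far off the
static-file case.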
Additionally, most MediaWiki installs out there are probably not serving
thousands of requests per second (and those that are probably have
proxies set up), so a gzipping PHP proxy for JS requests is worthwhile.
--michael
Aryeh Gregor wrote:
On Wed, Sep 30, 2009 at 3:32 PM, Michael Dale
<mdale(a)wikimedia.org> wrote:
Has anyone done any scalability studies comparing a minimal PHP @readfile
script vs. Apache serving the file? Obviously Apache will serve the file
a lot faster, but a question I have is: at what file size does it saturate
disk reads as opposed to saturating the CPU?
It will never be disk-bound unless the site is tiny and/or has too
little RAM. The files can be expected to remain in the page cache
perpetually as long as there's a constant stream of requests coming
in. If the site is tiny, performance isn't a big issue (at least not
for the site operators). If the server has so little free RAM that a
file that's being read every few minutes and is under a megabyte in
size is consistently evicted from the cache, then you have bigger
problems to worry about.