Marc Schütz wrote:
On Saturday, 29 July 2006 at 12:29, Tim Starling wrote:
There were 10000 "foo bar" strings output by each one. On my laptop, with no bytecode caching, string interpolation was 12 times faster than concatenation.
I believe this is because you are concatenating a large number of variables together into one string, which is quite unrealistic for a real-world application. The more frequent case is where you have only a few variables:
$foo = 'foo'; $bar = 'bar'; $start = microtime(TRUE);
for ($j = 0; $j < 100; $j++)
    for ($i = 0; $i < 100000; $i++)
        $s = $foo . ' ' . $bar; // $s = "$foo $bar";
$stop = microtime(TRUE); echo $stop - $start;
On my laptop this example needs about 14 secs with interpolation, but only 9-10 secs with concatenation.
That test measures execution time, not parse time. You're measuring 1us per iteration for concatenation and 1.4us per iteration for interpolation, for execution only. I'm measuring 74us per iteration for concatenation and 6us for interpolation, for parse and execution combined.

Hashar's justification for replacing double quotes with single quotes throughout the MediaWiki codebase was to speed up execution for environments with no oparray caching, since the measured difference for oparray cache hits was said to be negligible.
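Something along these lines measures parse and execution together (a rough sketch for illustration only, not the script I actually used; the eval() approach and the statement counts are assumptions):

// Re-parse the code on every run by handing it to eval(), so the timing
// covers parsing as well as execution. Counts are illustrative only.
$n = 10000;
$interp = str_repeat('$s = "$foo $bar";' . "\n", $n);
$concat = str_repeat('$s = $foo . \' \' . $bar;' . "\n", $n);
$foo = 'foo'; $bar = 'bar';
foreach (array('interpolation' => $interp, 'concatenation' => $concat) as $name => $code) {
    $start = microtime(TRUE);
    eval($code);               // parsed and executed on every call
    $stop = microtime(TRUE);
    printf("%s: %.2f us per statement\n", $name, ($stop - $start) * 1e6 / $n);
}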
Which brings me to this:
Jay R. Ashworth wrote:
"If a programmer can simulate a construct more efficiently than the compiler can implement it, then the compiler writer has blown it *badly*". --Guy L Steele, in Harbisone & Steele.
It's not so easy to optimise compile speed and execution speed at the same time; improving one often means trading off the other. An intermediate representation, even one hand-written in the source language, can escape that tradeoff, because the expensive work is done once and only its precomputed result is loaded at run time.
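For the sake of illustration, here's the sort of thing I have in mind (the file name, the $messages array and build_messages_somehow() are made up):

// Pay the expensive construction cost once, write the result out as plain
// PHP, then let later requests just include the generated file.
$cacheFile = '/tmp/messages_compiled.php';
if (!file_exists($cacheFile)) {
    $messages = build_messages_somehow();   // hypothetical expensive step
    file_put_contents($cacheFile,
        "<?php\nreturn " . var_export($messages, TRUE) . ";\n");
}
// With an oparray cache this include is nearly free; without one it is still
// only a parse of precomputed data, with no work re-done at run time.
$messages = include $cacheFile;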
-- Tim Starling