2011/1/14 Tim Starling tstarling@wikimedia.org
However, I'm not sure how you obtained that result, since {{loop!|100|x}} just expands to {{loop|100|x}} (it hits the default case of the #switch). When I try it, I get a preprocessor node count of 1069, not 193.
:-)
The {{loop!|100|x}} call is deprecated; loop! only handles numbers between 1 and 10 by itself, and larger numbers such as 100 should be obtained by nesting or appending, as suggested in the template's documentation. For backward compatibility, {{loop!|100|x}} simply calls {{loop|100|x}} instead of raising an error, so it's not surprising that the optimization only shows up in the 1..10 range.
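The idea is nothing more than an unrolled #switch; roughly like this (a simplified sketch, not the exact wikitext of the template, with parameter defaults and whitespace handling omitted):

  {{#switch: {{{1}}}
   | 1 = {{{2}}}
   | 2 = {{{2}}}{{{2}}}
   | 3 = {{{2}}}{{{2}}}{{{2}}}
   <!-- ...each case up to 10 unrolled by hand in the same way... -->
   | 10 = {{{2}}}{{{2}}}{{{2}}}{{{2}}}{{{2}}}{{{2}}}{{{2}}}{{{2}}}{{{2}}}{{{2}}}
   | #default = {{loop|{{{1}}}|{{{2}}}}} <!-- back-compatibility: fall back to plain loop -->
  }}

Any count outside 1..10 falls through to #default, so {{loop!|100|x}} gains nothing over {{loop|100|x}}.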
The preprocessor node count of 193 comes from the suggested syntax for 100 (one level of nesting) plus 101 (one nesting and one appended call):
{{loop!|10|{{loop!|10|x}}}}
{{loop!|10|{{loop!|10|x}}}}{{loop!|1|x}}
These calls are running in en:s:Wikisource:Sandbox now, and I got the same metrics from that page's HTML:
<!--
NewPP limit report
Preprocessor node count: 193/1000000
Post-expand include size: 2300/2048000 bytes
Template argument size: 680/2048000 bytes
Expensive parser function count: 0/500
-->
Nevertheless, loop is mainly an example and a test case, not such a useful template in itself. Dealing with the problem of core metadata consistency on Wikisource, which is deeply undermined by redundancy, our way of fixing things really does produce higher metrics than other projects' results... but now I have a small "toolbox" of tricks to evaluate that difference (rendering time plus the existing metrics).
Obviously it would be great to have better metrics for good, consistent performance comparisons; but, as I said, I don't want to overload the servers just to produce new metrics for evaluating server overload. ;-)
Alex