On 14/01/14 10:55, George Herbert wrote:
> On Mon, Jan 13, 2014 at 3:33 PM, Tim Starling <tstarling@wikimedia.org> wrote:
>> In fact, it would slow down individual requests by a factor of 7, judging by the benchmarks of Calxeda and Xeon CPUs at
>> http://www.eembc.org/coremark/index.php
>> So instead of a 10s parse time, you would have 70s. Obviously that's not tolerable.
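The scaling argument here is just a ratio of per-core benchmark scores applied to a single-threaded, CPU-bound task. A minimal sketch with placeholder scores (not actual figures from the EEMBC page):

    <?php
    // Placeholder per-core CoreMark scores; illustrative only,
    // not numbers taken from eembc.org.
    $xeonScore    = 28000.0; // hypothetical Xeon per-core score
    $calxedaScore =  4000.0; // hypothetical Calxeda per-core score

    // A CPU-bound, single-threaded request scales with per-core
    // throughput, so latency grows by the inverse ratio.
    $slowdown = $xeonScore / $calxedaScore;                 // ~7x
    printf( "10s parse becomes ~%.0fs\n", 10 * $slowdown ); // ~70s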
> Question - is that 10s linear CPU core time for a parse, or 10s of average response time given our workloads?
Just an arbitrary number chosen to be within the range of CPU times for slower articles. On average, it is much faster than that.
For actual data, you could look at:
http://tstarling.com/stuff/featured-parse-boxplot.png
> If it is the linear one-core parse processing time, how much of that comes from dependencies on DB lookups and the like (externalities within the infrastructure), rather than the straight-line CPU time needed for the parse itself?
WikitextContent::getParserOutput() profiles at around 1.25s real and 1.17s CPU.
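With those figures, the gap between real and CPU time is only about 0.08s, which suggests waits on externalities (DB, network, disk) account for very little of the total. A minimal sketch of how such a real-vs-CPU split can be measured in plain PHP; the parse call is a placeholder, and this is not the actual profiling setup used for the numbers above:

    <?php
    // Wall-clock vs CPU time around a single call, using standard
    // PHP functions (microtime, getrusage).
    function cpuSeconds() {
        $ru = getrusage();
        return $ru['ru_utime.tv_sec'] + $ru['ru_utime.tv_usec'] / 1e6
             + $ru['ru_stime.tv_sec'] + $ru['ru_stime.tv_usec'] / 1e6;
    }

    $real0 = microtime( true );
    $cpu0  = cpuSeconds();

    // Placeholder for the call under test, e.g. something like:
    // $parserOutput = $content->getParserOutput( $title );
    usleep( 200000 ); // stand-in workload

    $real = microtime( true ) - $real0;
    $cpu  = cpuSeconds() - $cpu0;

    // If real time greatly exceeds CPU time, the difference is
    // time spent waiting on externalities rather than parsing.
    printf( "real: %.2fs  cpu: %.2fs  waiting: %.2fs\n",
        $real, $cpu, $real - $cpu );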
--
Tim Starling