Do you really want to say that reading from disk is faster than
processing the text on the CPU? I don't know how complex MediaWiki's
syntax actually is, but if that's true, even C++ compilers, which are
notoriously slow, are probably much faster than Parsoid.
What takes so much CPU time in turning wikitext into HTML? It sounds
like JS wasn't the best choice here.
On Fri, Nov 6, 2015 at 11:37 PM, Gabriel Wicke <gwicke(a)wikimedia.org> wrote:
We don't currently store the full history of each page in RESTBase, so
your first access will trigger an on-demand parse of older revisions not
yet in storage, which is relatively slow. Repeat accesses will load those
revisions from disk (SSD), which will be a lot faster.
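As a rough illustration (a sketch, not actual client code), retrieving a
single revision's HTML is just a GET against the public
/page/html/{title}/{revision} endpoint; in TypeScript on Node 18+
(global fetch):

    // Fetch the stored (or on-demand parsed) Parsoid HTML for one revision.
    // The first request for an old revision may be slow (on-demand parse);
    // repeat requests should come from SSD-backed storage.
    async function fetchRevisionHtml(title: string, revision: number): Promise<string> {
      const url = 'https://en.wikipedia.org/api/rest_v1/page/html/' +
        `${encodeURIComponent(title)}/${revision}`;
      const res = await fetch(url);
      if (!res.ok) {
        throw new Error(`REST API returned ${res.status} for ${url}`);
      }
      return res.text();
    }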
With a majority of clients now supporting HTTP/2 / SPDY, use cases that
benefit from manual batching are becoming relatively rare. For a use case
like revision retrieval, HTTP/2 with a decent amount of parallelism should
be plenty fast.
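To make that concrete, a minimal sketch reusing the hypothetical
fetchRevisionHtml helper from above: over HTTP/2 the client multiplexes
all of these requests on a single connection, so simply firing them off
in parallel takes the place of a manual batch endpoint:

    // Retrieve many revisions concurrently; with HTTP/2 these requests
    // share one multiplexed connection rather than one connection each.
    async function fetchRevisions(title: string, revisions: number[]): Promise<string[]> {
      return Promise.all(revisions.map((rev) => fetchRevisionHtml(title, rev)));
    }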
Gabriel
On Fri, Nov 6, 2015 at 2:24 PM, C. Scott Ananian <cananian(a)wikimedia.org>
wrote:
I think your subject line should have been
"RESTBase doesn't love me"?
--scott
--
Gabriel Wicke
Principal Engineer, Wikimedia Foundation
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l