Michael Dale wrote:
*snip*
Yes, it's been filed before and WONTFIXed because parsing dozens or 
hundreds of pages in one request is kind of scary performance-wise

but clearly it would be more resource-efficient than issuing 30 separate 
additional requests... maybe we could enable it with a low row-return 
limit, say 30? It should be able to grab the output from the parser 
cache, no?
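For reference, the "30 separate additional requests" pattern looks roughly like this: one action=parse call per search-result title. This is a minimal sketch; the api.php endpoint and parameters are the standard MediaWiki ones, but the helper name and the example titles are illustrative.

```python
# Sketch: one action=parse request per search result (the costly pattern
# discussed above). Each call can be served from the parser cache server-side.
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def parse_request_url(title):
    """Build the api.php URL that returns the parsed HTML for one page."""
    return API + "?" + urlencode({
        "action": "parse",
        "page": title,
        "format": "json",
    })

# 30 search results -> 30 extra round trips, one URL each:
urls = [parse_request_url(t) for t in ["Foo", "Bar", "Baz"]]
```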

For my use case of returning search-result descriptions, it does not 
really need HTML; it just needs stripped wikitext, or even a stripped 
segment of wikitext.

So here are a few possible ways forward:

* I can switch on the 30 extra requests if we need to highlight the problem...
* I could try to use one of the JavaScript wikitext -> HTML converters
* Maybe we could support outputting stripped wikitext (really what we 
want for search results) ...

It appears both Lucene and the internal MySQL search store the index in 
stripped form; if we could add access to that from the API, that would 
be the ideal way forward, I think.

--michael

Since you only want to display a small amount of text from each page, you could take just the text you need from each page and send it all together, with some sort of separator, to
http://en.wikipedia.org/w/api.php?action=parse&format=xml&text=This is some [[text]] to parse
Of course, this turns "[[text]]" into an HTML anchor tag and expands templates. If that is not what you want, stripping the text yourself would probably be best.
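"Stripping the text yourself" can be approximated with a few regular expressions. This is a rough sketch only, and the function name and patterns are my own; it handles plain and piped links, non-nested templates, and bold/italic quotes, but not the many edge cases a real wikitext parser covers.

```python
import re

def strip_wikitext(text):
    """Roughly strip wikitext markup down to plain display text."""
    # [[target|label]] or [[target]] -> keep only the display text
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)
    # Drop {{template}} transclusions (non-nested; a rough heuristic)
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Remove bold/italic quote markup ('' and ''')
    text = re.sub(r"'{2,}", "", text)
    return text

print(strip_wikitext("This is some [[text]] to parse"))
# -> This is some text to parse
```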

Ben aka Humbug