I am writing a Java program to extract the abstract of a Wikipedia page
given the title of the page. I have done some research and found
out that the abstract will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, then I
query the API in the following way,
and parse the XML data we get back, taking the wikitext in the tag <rev
xml:space="preserve">, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way I can remove the infobox data
and get only the wikitext related to the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
Looking forward to your help.
Thanks in advance.
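One way to strip a leading infobox from the section-0 wikitext is to track {{ / }} nesting depth and cut off the first balanced template. This is only a sketch under the assumption that the infobox is the first template in the text (class and method names here are hypothetical, not part of any library):

```java
public class InfoboxStripper {
    // Remove a leading {{...}} template (e.g. an infobox) from wikitext by
    // tracking {{ / }} nesting depth. Templates nested inside the infobox
    // (like {{nowrap|...}}) are handled by the depth counter.
    public static String stripLeadingTemplate(String wikitext) {
        String text = wikitext.trim();
        if (!text.startsWith("{{")) {
            return text; // no leading template to remove
        }
        int depth = 0;
        for (int i = 0; i < text.length() - 1; i++) {
            if (text.charAt(i) == '{' && text.charAt(i + 1) == '{') {
                depth++;
                i++; // skip the second brace of the pair
            } else if (text.charAt(i) == '}' && text.charAt(i + 1) == '}') {
                depth--;
                i++;
                if (depth == 0) {
                    // everything after the closing braces is the abstract text
                    return text.substring(i + 1).trim();
                }
            }
        }
        return text; // unbalanced braces: give up and return the input
    }
}
```

As an alternative to parsing wikitext at all, the TextExtracts extension (enabled on Wikipedia) can return a plain-text abstract directly, e.g. `action=query&prop=extracts&exintro&explaintext&titles=Eiffel%20Tower`.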
The documentation on the score property says "Adds the score (if any) from
the search engine". This would be a very useful property for me to get the
relevance of each page to the search string, but it never seems to be
returned. Why so? Is there any way I can get the query to return it?
Secondly, I'd like the option to search titles only with the addition of
the &srwhat=title argument but whenever I try this I get the error result
code "srsearch-title-disabled". Is this temporary or permanent? How can I
search on titles only?
I want to search the mediawiki database and get the page with the closest
match to my search text. For example, on entering the search string "igor
stravinsky persephone" I want to get a link to this page:
The following query
almost gets what I want, but the data returned does not include the pageid:
<p ns="0" title="Perséphone (Stravinsky)" snippet="(<span
class='searchmatch'>Persephone</span> ) is a
musical work (mélodrame) for speaker, solo singers, chorus, dancers
and orchestra with music by <span
class='searchmatch'>Stravinsky</span> and a
<b>...</b> " size="2500" wordcount="313"
Is this the right way to go about it? Is the title returned a unique ID
which I can use to get more detail of the page?
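The title returned by list=search is unique within a namespace, so it can be fed back into a titles= query (with prop=info) to get the pageid; generator=search is another option. A small sketch of building such a follow-up URL, assuming the standard api.php endpoint (the class and method names are illustrative only):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SearchUrlBuilder {
    // Build a follow-up query that resolves a title (as returned by
    // list=search) into full page info, including its pageid.
    public static String pageInfoUrl(String apiBase, String title) throws Exception {
        return apiBase + "?action=query&prop=info&format=xml&titles="
                + URLEncoder.encode(title, StandardCharsets.UTF_8.name());
    }
}
```

URLEncoder takes care of spaces, parentheses, and non-ASCII characters such as the accented "é" in the title.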
Today I approved and merged a change by Brad Jorsch that changes
the way continue parameters are used in some API modules. These
changes do not break backwards compatibility unless you were doing
things you weren't supposed to, but I figured I should announce them anyway.
There were a few modules that were using continuations like
<query-continue><allcategories acfrom="Foobar" /></query-continue>;
some of these were changed to use a dedicated accontinue parameter
instead of reusing acfrom (acfrom is kept, as it has a legitimate
purpose). This shouldn't be a problem for clients that use values from
query-continue verbatim and don't make any assumptions about which
parameters will be used for continuations. The following modules are affected:
* allcategories (acfrom -> accontinue)
* allimages (aifrom -> aicontinue)
* alllinks now consistently uses alcontinue instead of alfrom in all
cases; previously, alfrom was used in some cases and alcontinue in others
* allpages (apfrom -> apcontinue)
* filearchive (fafrom -> facontinue)
Additionally, the values provided in query-continue will no longer be
normalized for presentation, and the values read from the continue
parameters will no longer be normalized for database usage. This won't
be a problem if you're just passing query-continue values back in
verbatim, but it might present problems if you were using continue
parameters in hacky ways to jump to certain parts of the result. This
change was needed to fix bug 36987 and bug 29290. The following
modules and parameters are affected:
* allcategories (accontinue)
* allimages (aicontinue)
* alllinks (alcontinue)
* allpages (apcontinue)
* categories (clcontinue)
* deletedrevs (dlcontinue)
* duplicatefiles (dfcontinue)
* filearchive (facontinue)
* iwbacklinks (iwblcontinue)
* iwlinks (iwcontinue)
* images (imcontinue)
* langbacklinks (lblcontinue)
* links (plcontinue)
* templates (tlcontinue)
* watchlistraw (wrcontinue)
This change will be deployed as part of the 1.20wmf8 deployment. It
will be rolled out to different wikis on different days over the course
of the next two weeks; see the deployment schedule.
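The safe pattern the announcement describes, passing query-continue values back verbatim, can be sketched as below. This assumes the client has already parsed the <query-continue> element into a name→value map (the helper class here is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class ContinuationHelper {
    // Merge whatever parameters the server returned in <query-continue>
    // into the next request, verbatim, without assuming which parameter
    // names (acfrom, accontinue, ...) the server chose.
    public static Map<String, String> nextRequest(Map<String, String> baseParams,
                                                  Map<String, String> queryContinue) {
        Map<String, String> next = new HashMap<>(baseParams);
        next.putAll(queryContinue); // server-chosen names take precedence
        return next;
    }
}
```

A client written this way keeps working whether the server continues with acfrom or accontinue, which is exactly why the renames above are not a breaking change for it.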
Mediawiki-api-announce mailing list
I just subscribed because I'm interested in mirroring a few MediaWiki
installations to which I contribute. Do you know of any well-maintained and
stable software that already implements a read-only mirroring service and that
I could install on my own server?
Thomas Koch, http://www.koch.ro