I am writing a Java program to extract the abstract of a Wikipedia page
given the title of the page. I have done some research and found
out that the abstract will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I
query the API in the following way
and parse the XML response, taking the wikitext in the tag <rev
xml:space="preserve">, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way to remove the infobox data
and keep only the wikitext of the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
Looking forward to your help.
Thanks in advance.
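One approach is to strip any leading {{...}} template blocks from the returned wikitext before using it. The sketch below does this with balanced-brace matching; it assumes the infobox sits as a template at the very start of section 0, which is the common layout but not guaranteed. Alternatively, on wikis with the TextExtracts extension installed, prop=extracts with the exintro parameter returns the lead section directly, without the infobox.

```java
public class InfoboxStripper {
    /** Removes leading {{...}} template blocks (e.g. an infobox) from wikitext,
     *  tracking nested braces so inner templates like {{convert|...}} do not
     *  end the match early. */
    public static String stripLeadingTemplates(String wikitext) {
        String s = wikitext.trim();
        while (s.startsWith("{{")) {
            int depth = 0;
            int i = 0;
            while (i < s.length() - 1) {
                if (s.charAt(i) == '{' && s.charAt(i + 1) == '{') {
                    depth++; i += 2;
                } else if (s.charAt(i) == '}' && s.charAt(i + 1) == '}') {
                    depth--; i += 2;
                    if (depth == 0) break;
                } else {
                    i++;
                }
            }
            if (depth != 0) break;      // unbalanced braces: give up safely
            s = s.substring(i).trim();  // drop the template, keep the rest
        }
        return s;
    }

    public static void main(String[] args) {
        String wikitext = "{{Infobox tower|name=Eiffel Tower|height={{convert|300|m}}}}\n"
                + "The '''Eiffel Tower''' is a wrought-iron lattice tower.";
        System.out.println(stripLeadingTemplates(wikitext));
    }
}
```

This keeps only the prose after the leading templates; templates embedded later in the abstract (like inline {{convert}}) are left untouched and would need separate handling.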
When list=allusers is used with auactiveusers, a property 'recenteditcount'
is returned in the result. In bug 67301 it was pointed out that this
property includes various other logged actions, and so should really be
named something like "recentactions".
Gerrit change 130093, merged today, adds the "recentactions" result
property. "recenteditcount" is also returned for backwards compatibility,
but will be removed at some point during the MediaWiki 1.25 development
cycle. Any clients using this property should be updated to use the new
property name. The new property will be available on WMF wikis with
1.24wmf12; see https://www.mediawiki.org/wiki/MediaWiki_1.24/Roadmap for
the schedule.
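During the transition, a client can read the result defensively, preferring the new key and falling back to the deprecated one on wikis that predate the change. A minimal sketch, assuming each per-user result has already been decoded into a Map:

```java
import java.util.Map;

public class ActiveUserCompat {
    /** Prefers the new "recentactions" key, falling back to the deprecated
     *  "recenteditcount" for wikis that predate the rename. Returns 0 if
     *  neither key is present. */
    public static int recentActions(Map<String, ?> user) {
        Object v = user.get("recentactions");
        if (v == null) v = user.get("recenteditcount");
        return v instanceof Number ? ((Number) v).intValue() : 0;
    }
}
```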
Brad Jorsch (Anomie)
Mediawiki-api-announce mailing list
Hello! Sorry for my bad English.
With the current API it is impossible to get a random
article within a category. This greatly reduces the potential of
educational tools built on the API, and of Wikipedia as a whole.
This is usually not a problem with categories of fewer than 500
members, but selecting a random article in categories which have more
than hundreds of thousands of members becomes impossible!
Special:RandomInCategory is not a convenient solution for a
developer, since it does not return a pageid and requires writing a
lot of redundant code to get things that are otherwise easily accessible
via the API.
Can we expect that this feature will be available through the API?
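As a workaround under the current API, a client can page through list=categorymembers with cmcontinue, collect the pageids, and pick one at random itself. A sketch of the two pieces; the en.wikipedia.org endpoint and Category: prefix are illustrative, and the cmcontinue token would come from the previous response's continuation block:

```java
import java.util.List;
import java.util.Random;

public class RandomCategoryMember {
    /** Builds one list=categorymembers request. Pass the cmcontinue token
     *  from the previous response (or null for the first page) to walk
     *  large categories 500 members at a time. */
    public static String membersUrl(String category, String cmcontinue) {
        String url = "https://en.wikipedia.org/w/api.php?action=query"
                + "&list=categorymembers"
                + "&cmtitle=Category:" + category.replace(" ", "%20")
                + "&cmprop=ids&cmlimit=500&format=json";
        return cmcontinue == null ? url : url + "&cmcontinue=" + cmcontinue;
    }

    /** Picks one pageid uniformly at random from the collected members. */
    public static long pickRandom(List<Long> pageIds, Random rng) {
        return pageIds.get(rng.nextInt(pageIds.size()));
    }
}
```

For very large categories this still means many requests to enumerate all members first, which is exactly the pain point the post describes; it is a workaround, not a fix.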
We have decided to officially retire the rest.wikimedia.org domain in
favor of /api/rest_v1/ at each individual project domain. For example,
English Wikipedia's REST API is now served from
https://en.wikipedia.org/api/rest_v1/.
Most clients already use the new path, and benefit from better
performance from geo-distributed caching, no additional DNS lookups,
and sharing of TLS/HTTP2 connections.
We intend to shut down the rest.wikimedia.org entry point around
March, so please adjust your clients to use /api/rest_v1/ soon.
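For clients that stored full URLs, the adjustment can be a simple rewrite. The sketch below assumes the old layout was https://rest.wikimedia.org/{domain}/v1/{path}; adjust the parsing if your client used a different form:

```java
public class RestUrlMigration {
    /** Rewrites an old-style rest.wikimedia.org URL to the per-project
     *  /api/rest_v1/ entry point. Assumes the old form was
     *  https://rest.wikimedia.org/{domain}/v1/{path}. URLs that do not
     *  match the old prefix are returned unchanged. */
    public static String migrate(String oldUrl) {
        String prefix = "https://rest.wikimedia.org/";
        if (!oldUrl.startsWith(prefix)) return oldUrl;  // already migrated
        String rest = oldUrl.substring(prefix.length());
        int slash = rest.indexOf('/');
        if (slash < 0) return oldUrl;                   // no path to rewrite
        String domain = rest.substring(0, slash);
        String path = rest.substring(slash + 1).replaceFirst("^v1/", "");
        return "https://" + domain + "/api/rest_v1/" + path;
    }
}
```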
Thank you for your cooperation,
Principal Engineer, Wikimedia Foundation
We are planning to enable automatic redirect following in all REST API
HTML entry points on April 25th. When responding to a request for
a redirected title, the response will carry an HTTP redirect status,
and the response headers will contain a Location header pointing to
the redirect target.
For most clients, this means that their HTTP client will automatically
follow redirects, simplifying common use cases. The few clients with a
need to retrieve the redirect page content itself have two options:
1) Disable following redirects in the client. For HTML and
data-parsoid entry points, the response still includes the HTML body &
regular response headers like the ETag.
2) Send a `?redirect=false` query string parameter. This option is
recommended for browsers, which lack control over redirect behavior
for historical security reasons.
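With Java 11's java.net.http client, option 1 is a client built with Redirect.NEVER, and option 2 is simply appending ?redirect=false to the request URI. A sketch of both; the page/html path is the REST API's HTML entry point, and en.wikipedia.org in the usage is illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class RedirectAwareClient {
    /** Option 1: a client that never follows redirects, so the redirect
     *  response body and headers (e.g. the ETag) stay visible to the caller. */
    public static HttpClient nonFollowingClient() {
        return HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER)
                .build();
    }

    /** Option 2: ask the server itself not to redirect by sending the
     *  ?redirect=false query parameter, as recommended for browsers. */
    public static HttpRequest htmlRequest(String domain, String title) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + domain + "/api/rest_v1/page/html/"
                        + title + "?redirect=false"))
                .build();
    }
}
```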
If you do have a need to avoid following redirects, you can make these
changes before the feature is enabled. Internally, we have already
done so for VisualEditor and the Mobile Content Service. See also
https://phabricator.wikimedia.org/T118548 for background and related
tasks.
Let us know if you have any concerns or questions about this.
Gabriel Wicke for the Wikimedia Services Team
: https://en.wikipedia.org/api/rest_v1/?doc (using en.wikipedia.org as an example)
Is it possible to obtain the full HTML of a wiki page via the MediaWiki API? I'm looking for the API equivalent of:
Reason: our wiki requires a login in order to read articles, so "wget" produces only a "Login required" page. I can log in via the API, but can't figure out how to obtain the HTML, only the wikitext (action=query, prop=revisions, rvprop=content).
Other solutions are welcome too.
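One option is action=parse, which returns the rendered HTML of a page (in the JSON result under parse.text) and works within an authenticated API session, so the login-required restriction is respected. A sketch of building the request URL; wikiBase here stands for your wiki's script path and is an assumption:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ParseHtml {
    /** Builds an action=parse request, the API-level equivalent of fetching
     *  the rendered page HTML. Send it with the same session cookies used
     *  for the API login. */
    public static String parseUrl(String wikiBase, String title) {
        return wikiBase + "/api.php?action=parse&format=json&prop=text&page="
                + URLEncoder.encode(title, StandardCharsets.UTF_8);
    }
}
```

The returned HTML is the parsed article body, not the full skin-wrapped page, which is usually what an API client wants anyway.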