Hello,
I am writing a Java program to extract the abstract of a Wikipedia page
given its title. I have done some research and found that the abstract
will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page,
then I query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
and parse the XML data we get back, taking the wikitext in the tag <rev
xml:space="preserve">, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way I can remove the infobox data and
get only the wikitext related to the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
Looking forward to your help.
Thanks in advance,
Aditya Uppu
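One alternative worth trying, assuming the wiki has the TextExtracts extension enabled (English Wikipedia does): prop=extracts with the exintro and explaintext options returns just the lead section as plain text, so there is no infobox markup to strip at all. A minimal Java sketch of building that query URL (the class and method names here are my own illustration, not part of any API):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AbstractUrl {
    // Build a query URL for the TextExtracts extension (prop=extracts).
    // exintro limits the extract to the lead section; explaintext strips
    // wiki markup, so the infobox never appears in the result.
    static String buildExtractUrl(String title) {
        String encoded = URLEncoder.encode(title, StandardCharsets.UTF_8);
        return "https://en.wikipedia.org/w/api.php"
             + "?action=query&prop=extracts&exintro&explaintext"
             + "&format=json&titles=" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(buildExtractUrl("Eiffel Tower"));
    }
}
```

Fetching that URL and reading the "extract" field of the JSON response should give the abstract directly, with no wikitext parsing needed.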
When list=allusers is used with auactiveusers, a property 'recenteditcount'
is returned in the result. In bug 67301[1] it was pointed out that this
property is including various other logged actions, and so should really be
named something like "recentactions".
Gerrit change 130093,[2] merged today, adds the "recentactions" result
property. "recenteditcount" is also returned for backwards compatibility,
but will be removed at some point during the MediaWiki 1.25 development
cycle.
Any clients using this property should be updated to use the new property
name. The new property will be available on WMF wikis with 1.24wmf12, see
https://www.mediawiki.org/wiki/MediaWiki_1.24/Roadmap for the schedule.
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=67301
[2]: https://gerrit.wikimedia.org/r/#/c/130093/
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
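One way a client could handle the rename described above is to prefer the new key and fall back to the deprecated one while both are returned. A minimal sketch, assuming the user record has already been parsed into a Map (the helper name and Map-based shape are my own illustration):

```java
import java.util.Map;

public class UserActions {
    // Prefer the new "recentactions" key; fall back to the deprecated
    // "recenteditcount" for wikis that have not picked up the change yet.
    static long recentActions(Map<String, Object> user) {
        Object v = user.getOrDefault("recentactions",
                       user.get("recenteditcount"));
        return v == null ? 0L : ((Number) v).longValue();
    }
}
```

Once "recenteditcount" is removed in the 1.25 cycle, the fallback simply stops firing and the code keeps working.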
Hi!
I know that I'm able to get the "linkshere" prop for multiple pages like
this:
https://en.wikipedia.org/w/api.php?format=json&action=query&prop=linkshere&…
My problem is with the "lhlimit" parameter. I'd like to get 10 results for
each page, but it currently gives me 10 results across ALL of the pages,
which means I have to use "lhcontinue" each time.
Is it possible to get 10 results for each page when asking for multiple
pages?
Or should I make a different call for each article in order to achieve
this?
Thanks,
Tal
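If per-page limits turn out not to be possible in a single request, the one-call-per-article approach can be sketched like this in Java (class and method names are hypothetical; only the query parameters come from the question above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class LinksHereUrls {
    // lhlimit caps the number of "linkshere" entries returned for the
    // whole request, so to get up to `limit` entries per page, build one
    // request URL per title and fetch them separately.
    static List<String> perPageUrls(List<String> titles, int limit) {
        return titles.stream()
            .map(t -> "https://en.wikipedia.org/w/api.php"
                    + "?format=json&action=query&prop=linkshere"
                    + "&lhlimit=" + limit
                    + "&titles=" + URLEncoder.encode(t, StandardCharsets.UTF_8))
            .toList();
    }
}
```

Each URL then yields at most 10 results for exactly one page, at the cost of one HTTP round trip per article.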
Hi!
I have a question regarding the random articles generation in the API.
Let's say I'm using this URL to retrieve two random articles:
https://en.wikipedia.org/w/api.php?format=xml&action=query&generator=random…
I got an API response which contains articles A and B in this order:
1. A
2. B
Is there a chance (I know it's very small, but it's just for my
understanding) that I'll get the exact same articles but in the opposite order?
Like this:
1. B
2. A
I'd like to know whether there is a certain order for the random articles in
the API response, or whether that is also random.
Thanks!
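A defensive option, whatever the answer turns out to be: treat the response order as unspecified and impose your own order client-side. A small Java sketch (the Page record is my own stand-in for the parsed response, not an API type):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortPages {
    record Page(long pageid, String title) {}

    // If a stable order matters, don't rely on the order the API happens
    // to return; sort the parsed pages yourself, e.g. by pageid.
    static List<Page> sortByPageId(List<Page> pages) {
        List<Page> sorted = new ArrayList<>(pages);
        sorted.sort(Comparator.comparingLong(Page::pageid));
        return sorted;
    }
}
```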