Hello,
I am writing a Java program to extract the abstract of a Wikipedia page
given the title of the Wikipedia page. I have done some research and found
out that the abstract will be in rvsection=0.
So, for example, if I want the abstract of the "Eiffel Tower" wiki page, I
query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
and parse the XML response we get, taking the wikitext inside the <rev
xml:space="preserve"> tag, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way to remove the infobox data and get
only the wikitext related to the page's abstract, or if there is any
alternative method by which I can get the abstract of the page directly.
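One alternative worth noting, assuming the target wiki has the TextExtracts extension enabled (en.wikipedia.org does), is the prop=extracts module: with exintro it returns only the lead section, and with explaintext the infobox and other template markup is already stripped. A minimal sketch of building such a query URL (the class and method names are just for illustration):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AbstractQuery {
    // Builds a TextExtracts query URL for the intro (abstract) of a page.
    // exintro limits the extract to the lead section; explaintext strips
    // markup, so infobox and template data never appear in the result.
    static String buildExtractUrl(String title) {
        String encoded = URLEncoder.encode(title, StandardCharsets.UTF_8);
        return "https://en.wikipedia.org/w/api.php"
                + "?action=query&prop=extracts&exintro&explaintext"
                + "&format=json&titles=" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(buildExtractUrl("Eiffel Tower"));
    }
}
```

The response then contains a plain-text extract per page, so no wikitext parsing is needed at all.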
Looking forward to your help.
Thanks in Advance
Aditya Uppu
When list=allusers is used with auactiveusers, a property 'recenteditcount'
is returned in the result. In bug 67301[1] it was pointed out that this
property is including various other logged actions, and so should really be
named something like "recentactions".
Gerrit change 130093,[2] merged today, adds the "recentactions" result
property. "recenteditcount" is also returned for backwards compatibility,
but will be removed at some point during the MediaWiki 1.25 development
cycle.
Any clients using this property should be updated to use the new property
name. The new property will be available on WMF wikis with 1.24wmf12, see
https://www.mediawiki.org/wiki/MediaWiki_1.24/Roadmap for the schedule.
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=67301
[2]: https://gerrit.wikimedia.org/r/#/c/130093/
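During the transition, a client can prefer the new property and fall back to the old one while both are returned. A minimal sketch, assuming the allusers response has already been decoded into a map (the field names are from the announcement above; the class and helper names are hypothetical):

```java
import java.util.Map;

public class ActiveUserCounts {
    // Prefer the new "recentactions" property; fall back to the deprecated
    // "recenteditcount" for MediaWiki versions that only return the old name.
    static long recentActions(Map<String, Object> user) {
        Object value = user.getOrDefault("recentactions",
                user.get("recenteditcount"));
        return value == null ? 0L : ((Number) value).longValue();
    }
}
```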
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
Hi all,
I'm working on a webservice that uses the MediaWiki API, and I'd like to have some stats about the traffic of my API calls to the commons.wikipedia.org domain.
In particular, I'd like to have the number of GET requests by Api-User-Agent, the number of views/edits by Api-User-Agent, and the stats of the Wikipedia traffic from inbound links from a specific domain or URL.
Is this possible somehow?
Thank you in advance,
Kind regards,
Viviana
Hi,
I work on a project that uses the API of a portal based on MediaWiki.
Unfortunately, they changed the MediaWiki version from 1.16 to 1.30.
Earlier, the list=logevents endpoint gave me a response with a 'move' field.
We no longer have the 'move' field in the response, and I don't know what
it was (one thing I do know: the 'move' field had 2 subfields, new_ns and
new_title).
I also know that the response now contains a 'params' field, which likewise
has 2 subfields: target_ns and target_title.
So I have a few questions:
1) What was 'move'?
2) What is 'params'? Is it the same thing as 'move'?
3) Can I get documentation for a MediaWiki version other than the current
one?
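For code that has to cope with both response formats, one approach is to read the new 'params' subfields and fall back to the legacy top-level 'move' object. A sketch, assuming the log event has already been decoded into nested maps (the class and helper names are hypothetical; the field names are the ones described above):

```java
import java.util.Map;

public class LogEventParams {
    // Holds the move target extracted from a logevents entry.
    record MoveTarget(long ns, String title) {}

    // Reads target_ns/target_title from the modern "params" object,
    // falling back to new_ns/new_title in the legacy "move" object
    // returned by older MediaWiki versions.
    @SuppressWarnings("unchecked")
    static MoveTarget moveTarget(Map<String, Object> event) {
        Map<String, Object> params = (Map<String, Object>) event.get("params");
        if (params != null && params.containsKey("target_title")) {
            return new MoveTarget(
                    ((Number) params.get("target_ns")).longValue(),
                    (String) params.get("target_title"));
        }
        Map<String, Object> move = (Map<String, Object>) event.get("move");
        if (move != null) {
            return new MoveTarget(
                    ((Number) move.get("new_ns")).longValue(),
                    (String) move.get("new_title"));
        }
        return null; // not a move event
    }
}
```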
Thanks,
Hi MediaWiki API mailing list!
I have a question about how the pageview metrics for Wikimedia Commons
images accumulate: are they the total view count of that image anywhere, or
just when it is embedded on Commons?
More detail:
I'm working with WikiLovesAfrica on a project to calculate how many times
each image has been viewed over a period of time, and I've been prototyping
data collection using the following endpoint (with the mwviews.api
<https://github.com/mediawiki-utilities/python-mwviews/> PageviewsClient):
https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/commons.wik…
*IMAGE_FILENAME*/monthly/2014010100/2019011800
My assumption is that since Commons images are hosted on Commons, the
returned value will be the total view count of that image, wherever it is
being embedded, including all wikis. Is that correct?
If it's not the across-wiki count, is there a way to get that across-wiki
count?
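For reference, the per-article pageviews endpoint takes the project as a path segment, so the same title can be queried per wiki. A sketch of building the URL with the project passed in by the caller (the class and method names are hypothetical; all-access/all-agents are the usual catch-all values for the access and agent segments):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PageviewsUrl {
    // Builds a REST v1 per-article pageviews URL for a given project and
    // page title (e.g. a "File:..." title). The path layout is
    // /per-article/{project}/{access}/{agent}/{article}/{granularity}/{start}/{end}.
    static String monthlyPageviews(String project, String title,
                                   String start, String end) {
        String encoded = URLEncoder.encode(title, StandardCharsets.UTF_8);
        return "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
                + project + "/all-access/all-agents/" + encoded
                + "/monthly/" + start + "/" + end;
    }
}
```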
All the best,
--
J. Nathan Matias <http://natematias.com/> : Princeton University :
CivilServant <http://civilservant.io> : MIT Media Lab : @natematias
<http://twitter.com/natematias> : blog
<http://civic.mit.edu/blog/natematias/>
Hello Everyone,
Just a reminder: the MediaWiki Action API Technical Documentation Survey
will close at 12:00 AM PST / 8:00 AM UTC.
If you have a few moments today to share your knowledge and opinions to
help improve technical documentation for the MediaWiki Action API, we would
appreciate it!
https://goo.gl/forms/Y5PGILb6b3awC3OJ2
*Notes about the MediaWiki Action API Survey:*
*Survey Period: *December 6, 2018 - January 14, 2019
*Privacy Policy:* This survey will be conducted via a third-party service,
which may subject it to additional terms. For more information on privacy
and data-handling, see the survey privacy statement
https://foundation.wikimedia.org/wiki/MediaWiki_Action_API_Survey_Privacy_S…
.
Thanks for your participation!
Kindly,
Sarah R. Rodlund
Technical Writer, Developer Advocacy
<https://meta.wikimedia.org/wiki/Developer_Advocacy>
srodlund(a)wikimedia.org
Hi all,
For my research (related to reverse engineering) I am interested in traffic
logs of an API.
It would help me a lot if I could get access to the traffic logs of a
web service. My question is: could I acquire such traffic logs from, for
example, Wikipedia.org? I'm interested in a collection of API requests, for
example a list of calls like this:
https://en.wikipedia.org/w/api.php?format=json&action=query&titles=Pigeon
This can simply be a list of urls, but can also be a server log or a
Wireshark capture.
I understand the potential data-privacy issues, but a curated or filtered
list would already be very helpful.
Any pointers on who to contact for such a request? Or other pointers for
gathering such data?
Thank you in advance.
Kind regards,
Willem