I am writing a Java program to extract the abstract of a Wikipedia page
given its title. I have done some research and found out that the abstract
will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page,
I query the API in the following way
and parse the XML response, taking the wikitext in the tag <rev
xml:space="preserve">, which represents the abstract of the page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know whether there is any way to remove the infobox data and
get only the wikitext for the page's abstract, or whether there is an
alternative method by which I can get the abstract of the page directly.
Looking forward to your help.
Thanks in advance
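One way to do this, sketched below under the assumption that the infobox is the first {{...}} template at the start of the section-0 wikitext, is to skip over that leading template by counting {{/}} pairs (the class and method names here are illustrative, not from any library). Alternatively, if the wiki has the TextExtracts extension installed, a query with prop=extracts&exintro returns the intro text directly, without template markup.

```java
// Sketch: strip a leading {{Infobox ...}} template from section-0 wikitext
// by counting {{ / }} pairs. Assumes the infobox is the first template in
// the text; class and method names are illustrative.
final class InfoboxStripper {

    static String stripLeadingInfobox(String wikitext) {
        String trimmed = wikitext.stripLeading();
        if (!trimmed.startsWith("{{")) {
            return wikitext; // no leading template, nothing to strip
        }
        int depth = 0;
        for (int i = 0; i < trimmed.length() - 1; i++) {
            if (trimmed.charAt(i) == '{' && trimmed.charAt(i + 1) == '{') {
                depth++;
                i++;
            } else if (trimmed.charAt(i) == '}' && trimmed.charAt(i + 1) == '}') {
                depth--;
                i++;
                if (depth == 0) {
                    // everything after the closing }} is the abstract text
                    return trimmed.substring(i + 1).stripLeading();
                }
            }
        }
        return wikitext; // unbalanced braces: leave input untouched
    }
}
```

The depth counter is what keeps nested templates inside the infobox (flags, coordinates, etc.) from ending the scan early.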
When list=allusers is used with auactiveusers, a property 'recenteditcount'
is returned in the result. In bug 67301 it was pointed out that this
property includes various other logged actions as well, and so should
really be named something like "recentactions".
Gerrit change 130093, merged today, adds the "recentactions" result
property. "recenteditcount" is also returned for backwards compatibility,
but will be removed at some point during the MediaWiki 1.25 development
cycle.
Any clients using this property should be updated to use the new property
name. The new property will be available on WMF wikis with 1.24wmf12, see
https://www.mediawiki.org/wiki/MediaWiki_1.24/Roadmap for the schedule.
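For clients parsing the result, a minimal sketch of the backward-compatible read, assuming the JSON response has already been parsed into a map (the class and method names are illustrative):

```java
import java.util.Map;

// Sketch: prefer the new "recentactions" result property and fall back to
// the deprecated "recenteditcount" on servers older than 1.24wmf12. The
// Map stands in for whatever parsed-JSON type the client actually uses.
final class ActiveUserCounts {

    static int recentActions(Map<String, ?> user) {
        Object value = user.containsKey("recentactions")
                ? user.get("recentactions")     // MediaWiki >= 1.24wmf12
                : user.get("recenteditcount");  // deprecated fallback
        return value == null ? 0 : ((Number) value).intValue();
    }
}
```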
Brad Jorsch (Anomie)
Mediawiki-api-announce mailing list
I am new to MediaWiki. Any help on the query below is highly appreciated.
I have created a MediaWiki server and want to update a section of one
particular page (a table of values) dynamically.
I am planning to do this using rest APIs referring to
What is the flow I should follow to edit a page? Should I use the API to
log in first, then get a CSRF token using another REST call, and then
finally update the page?
What is the exact sequence?
Thanks in advance
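The usual sequence is: (1) fetch a login token, (2) log in with it, (3) fetch a CSRF token, and (4) POST the edit with that token, with all four requests sharing one cookie session. A sketch of the four parameter sets, assuming bot-password-style action=login (the helper names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the request sequence for editing a page via api.php.
// Every request must be sent within the same cookie session, or the
// tokens will be rejected. Helper names are illustrative.
final class EditFlow {

    // Step 1: GET a login token.
    static Map<String, String> loginTokenRequest() {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("action", "query");
        p.put("meta", "tokens");
        p.put("type", "login");
        p.put("format", "json");
        return p;
    }

    // Step 2: POST the login with the token from step 1.
    static Map<String, String> loginRequest(String user, String pass, String loginToken) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("action", "login");
        p.put("lgname", user);
        p.put("lgpassword", pass);
        p.put("lgtoken", loginToken);
        p.put("format", "json");
        return p;
    }

    // Step 3: GET a CSRF token (still in the same session).
    static Map<String, String> csrfTokenRequest() {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("action", "query");
        p.put("meta", "tokens"); // default token type is "csrf"
        p.put("format", "json");
        return p;
    }

    // Step 4: POST the edit, replacing one section with new wikitext.
    static Map<String, String> editRequest(String title, int section, String text, String csrfToken) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("action", "edit");
        p.put("title", title);
        p.put("section", String.valueOf(section));
        p.put("text", text);
        p.put("token", csrfToken);
        p.put("format", "json");
        return p;
    }
}
```

Steps 1 and 3 are GETs, steps 2 and 4 must be POSTs; the CSRF token from step 3 is what authorizes the edit in step 4.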
I am not sure if anyone has met this problem before. I am calling the
contributors API (https://en.wikipedia.org/w/api.php?action=query&format=
json&prop=contributors& ... ) with a list of titles as the input (fewer
than 50), and then I continue until nothing more is available. However,
across the responses I gather for that single request, I do not get the
same number of distinct pages that I put in. I cannot figure out why.
Could somebody provide some suggestions? Thanks!
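One possible explanation, not confirmed against your exact queries: input titles can collapse through normalization or redirects, missing titles come back flagged rather than silently matched, and when you follow "continue" the same page can reappear in several responses, each carrying another slice of its contributors. Counting raw response entries therefore disagrees with the input count; keying by pageid gives the real number of distinct pages. A sketch (the types are simplified stand-ins for parsed JSON):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: merge contributor lists from a sequence of continuation
// responses, keyed by pageid, so each page is counted exactly once.
final class ContributorMerge {

    static Map<Long, List<String>> mergeBatches(List<Map<Long, List<String>>> batches) {
        Map<Long, List<String>> merged = new LinkedHashMap<>();
        for (Map<Long, List<String>> batch : batches) {
            for (Map.Entry<Long, List<String>> page : batch.entrySet()) {
                // append this batch's slice of contributors to the page's list
                merged.computeIfAbsent(page.getKey(), k -> new ArrayList<>())
                      .addAll(page.getValue());
            }
        }
        return merged;
    }
}
```

After merging, merged.size() is the number of distinct pages actually returned, which you can compare against your input titles once normalization and redirects are accounted for.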
I am using the Wikipedia API
<https://www.mediawiki.org/wiki/API:Parsing_wikitext#parse> to convert
wikitext into HTML.
Everything works fine except that the output also contains the warning
messages normally seen in preview mode. To exclude them, I set the
'preview' parameter to False in the query, but the warning messages are
still in the HTML output.
Here is an example query:
Does anyone have an idea how to get the HTML output without the preview
warning messages?
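One likely cause, worth checking against the API documentation: MediaWiki API boolean parameters are treated as true whenever they are present, regardless of their value, so preview=False still enables preview mode. The parameter has to be omitted from the query entirely, as in this sketch (the helper name is illustrative):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: build an action=parse query string. Because API booleans are
// true whenever the key is present, "preview" is only added to the map
// when preview mode is actually wanted.
final class ParseQuery {

    static String build(String wikitext, boolean preview) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("action", "parse");
        p.put("text", wikitext);
        p.put("contentmodel", "wikitext");
        p.put("format", "json");
        if (preview) {
            p.put("preview", "1"); // omit the key entirely to disable
        }
        StringBuilder qs = new StringBuilder();
        for (Map.Entry<String, String> e : p.entrySet()) {
            if (qs.length() > 0) qs.append('&');
            qs.append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return qs.toString();
    }
}
```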
Thanks a lot,
GESIS - Leibniz Institute for the Social Sciences
Computational Social Science (CSS)
Team Social Analytics and Services
M.Sc. Kenan Erdogan
Unter Sachsenhausen 6-8, 50667 Cologne, Germany
Tel: + 49 (0) 221-47694-211
I'm trying to log into a MediaWiki 1.28 site using the API and a standalone PHP script, but it keeps failing with the error "invalid token." I can successfully retrieve a login token:
$tokenRequest = array(
    'action' => 'query',
    'format' => 'json',
    'meta' => 'tokens',
    'type' => 'login'
);
But when I issue my "action=clientlogin" request, I always get the error "code=badtoken, info = invalid token":
$loginRequest = array(
    'action' => 'clientlogin',
    'format' => 'json',
    'logintoken' => $token,
    'loginreturnurl' => 'https://example.com/',
    'username' => $username,
    'password' => $password,
    'domain' => 'mydomain',
    'rememberMe' => 1
);
I suspect the problem is that the two requests are not explicitly being made in the same session. That is, I'm not adding the header "Cookie: <session cookie>" to my second HTTP POST. How do I retrieve the session cookie after issuing my meta=tokens request so I can hand it to the client login request? In earlier versions of MediaWiki, I could get the cookie information from an API call, "action=login". This has been deprecated but I haven't seen any examples of the new way to do it, just generic instructions like "Clients should handle cookies to properly manage session state."
I'm not operating inside the MediaWiki codebase with its WebRequest, SessionManager, etc. classes -- this is a standalone script.
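In PHP with cURL, the usual approach is to point CURLOPT_COOKIEJAR and CURLOPT_COOKIEFILE at the same file for both requests, so the session cookies round-trip automatically. The underlying logic, sketched here in Java with illustrative names, is simply to capture the Set-Cookie headers from the token response and replay them as one Cookie header on the clientlogin POST:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: turn the Set-Cookie headers from the meta=tokens response into
// a single Cookie header for the clientlogin POST, so both requests share
// one session. Parsing is minimal (name=value before the first ';'); a
// real client would also honor Path/Expires/Secure attributes.
final class SessionCookies {

    static String toCookieHeader(List<String> setCookieHeaders) {
        List<String> pairs = new ArrayList<>();
        for (String header : setCookieHeaders) {
            int semi = header.indexOf(';');
            String pair = (semi >= 0 ? header.substring(0, semi) : header).trim();
            if (!pair.isEmpty()) {
                pairs.add(pair); // keep only the name=value part
            }
        }
        return String.join("; ", pairs);
    }
}
```

Without this (or an equivalent cookie jar), the login token you fetched belongs to a session the server never sees again, which is exactly the "badtoken / invalid token" symptom described above.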