I am writing a Java program to extract the abstract of a Wikipedia page
given the title of the page. I have done some research and found out that
the abstract will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I
query the API in the following way,
and parse the XML data we get back, taking the wikitext in the tag <rev
xml:space="preserve">, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way to remove the infobox data and get
only the wikitext related to the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
Looking forward to your help.
Thanks in advance.
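One workable approach, assuming the infobox appears as a leading {{...}} template block at the top of the section-0 wikitext, is to drop leading templates by tracking {{ / }} nesting depth. This is only a sketch; the class and method names below are illustrative and not part of any MediaWiki library:

```java
public class InfoboxStripper {
    // Remove leading {{...}} template blocks (such as an infobox) from
    // wikitext by tracking the nesting depth of {{ and }} pairs.
    public static String stripLeadingTemplates(String wikitext) {
        String text = wikitext.trim();
        while (text.startsWith("{{")) {
            int depth = 0;
            int end = -1;
            for (int i = 0; i < text.length() - 1; i++) {
                if (text.charAt(i) == '{' && text.charAt(i + 1) == '{') {
                    depth++;
                    i++;
                } else if (text.charAt(i) == '}' && text.charAt(i + 1) == '}') {
                    depth--;
                    i++;
                    if (depth == 0) {
                        end = i + 1;
                        break;
                    }
                }
            }
            if (end < 0) {
                break; // unbalanced braces, leave the text as-is
            }
            text = text.substring(end).trim();
        }
        return text;
    }

    public static void main(String[] args) {
        String wikitext = "{{Infobox building|name=Eiffel Tower}}\n"
                + "The '''Eiffel Tower''' is a tower in Paris.";
        System.out.println(stripLeadingTemplates(wikitext));
    }
}
```

As an alternative, on wikis that have the TextExtracts extension installed (including Wikipedia), action=query&prop=extracts with the exintro parameter returns the lead section directly, without template markup.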
I found that in some of the UPLOAD updates there is no page id:
<rc type="log" ns="6" title="File:Lucian A. Sperta- Nunez.jpg" rcid="114549183" pageid="0" revid="0" old_revid="0" user="Azarel63" oldlen="0" newlen="0" timestamp="2014-01-05T11:09:38Z" comment="User created page with UploadWizard" logid="77242320" logtype="upload" logaction="upload" img_sha1="
<rc type="log" ns="6" title="File:Gingerbread spices (annotated).jpg" rcid="114549185" pageid="30485540" revid="0" old_revid="0" user="SKopp" oldlen="0" newlen="0" timestamp="2014-01-05T11:09:37Z" comment="User created page with UploadWizard" logid="77242318" logtype="upload" logaction="upload" img_sha1="
The first one has no page id, but the second one does.
Can anybody tell me the difference?
Hi, in response to bug 54607, we've changed the semantics of the
mobileformat parameter to action=parse.
== Summary ==
Previously, it accepted the strings 'html' or 'wml' (later just
'html') and modified the structure of the output (see below). This was
problematic because you needed to retrieve the HTML from the output in
different ways, depending on whether mobileformat was specified or not.
Now, mobileformat is a boolean parameter: if a 'mobileformat' parameter
is present in the request, it will be treated as "the output should be
mobile-friendly", regardless of its value, and the output structure will
stay the same. For compatibility with older callers,
mobileformat=(html|wml) will be special-cased to return the older
structure for at least 6 months from now. These changes will start
being rolled out to the WMF sites tomorrow, Tuesday,
October 24th, and the process will be complete by October 31st.
== Examples ==
=== Non-mobile parse ===
<parse title="..." displaytitle="...">
=== Parse that outputs mobile HTML, old style ===
<parse title="..." text="foo" displaytitle="...">
=== Parse that outputs mobile HTML, new style ===
Same as for non-mobile parses.
== FAQ ==
Q: I didn't use mobileformat before, does anything change for me?
A: No.
Q: I use mobileformat=html, will my bot/tool be broken now?
A: No, you will have 6 months to switch to the new style.
Q: I'm only planning to use mobileformat, what should I do?
A: Just use the new style.
Q: How did this format discrepancy appear in the first place?
A: To err is human.
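Under the new semantics, a caller only needs to decide whether to include the parameter at all. A minimal sketch of building the request URL (the endpoint and helper names are illustrative):

```java
public class ParseRequestBuilder {
    // With the new boolean semantics, the mere presence of the mobileformat
    // parameter (any value, even empty) requests mobile-friendly HTML;
    // omitting it entirely requests the regular output.
    public static String buildParseUrl(String endpoint, String page, boolean mobile) {
        StringBuilder url = new StringBuilder(endpoint)
                .append("?action=parse&format=xml&page=")
                .append(java.net.URLEncoder.encode(page,
                        java.nio.charset.StandardCharsets.UTF_8));
        if (mobile) {
            url.append("&mobileformat="); // value is ignored; presence alone enables it
        }
        return url.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildParseUrl(
                "https://en.wikipedia.org/w/api.php", "Eiffel Tower", true));
    }
}
```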
Max Semenik ([[User:MaxSem]])
Greetings! I'm back with another question. :)
I know that there are several MediaWiki configuration values available in
fair bit as necessary. Right now, I want to access the enums for
exposed anywhere in the mediawiki object?
I'm trying to make a simple extension that moves a page on my MediaWiki
site, but whether using curl or FauxRequest, I always get a bad token
response. I tried urlencoding, not encoding, with %2B, without +, etc.; it
doesn't matter. The token shown in the output looks identical to the one
returned when requesting it from the API. I'm a total programming noob, so
it could be something simple, but I feel like I've been through everything
at this point.
The code currently looks like this, using FauxRequest, with param1/2/3
coming from the parser function I'm creating.
$token = $wgUser->editToken();
$params = new FauxRequest(
	array(
		'action' => 'move',
		'from' => $param1,
		'to' => $param2,
		'format' => 'php',
		'reason' => $param3,
		'token' => $token
	),
	true // treat the request as a POST
);
$api = new ApiMain( $params, true );
$api->execute(); // without this call, no result data is produced
$data = & $api->getResultData();
$output = "moved $param1 to $param2 - $token";
I also tried the code below using curl instead, which results in a bad
token as well:
$token = $wgUser->editToken();
// Note: the token is tied to the current session; an external curl request
// carries no session cookies, so the API will reject it with a bad token
// error unless the session cookies are sent along as well.
$url = 'http://www.wikiocity.com/api.php';
$myvars = 'action=move&format=xml&from=' . urlencode( $param1 ) . '&to=' .
	urlencode( $param2 ) . '&reason=' . urlencode( $param3 ) . '&token=' . urlencode( $token );
$ch = curl_init( $url );
curl_setopt( $ch, CURLOPT_POST, 1 );
curl_setopt( $ch, CURLOPT_POSTFIELDS, $myvars );
curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, 1 );
curl_setopt( $ch, CURLOPT_USERAGENT, 'MyCoolTool/1.1' );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, 1 );
$response = curl_exec( $ch );
curl_close( $ch );
$output = "moved $param1 to $param2 - $myvars - $response";
Am I missing something in the code, or could I have a setting wrong
somewhere? Any help would be hugely appreciated!
I am Mayank Desai, working on query disambiguation. I am using your
MediaWiki API to get articles. How do I increase the number of pages
returned by the API? Its default value is 10, but I want to increase it.
Can you please guide me on how to do it?
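Most list modules take a module-prefixed limit parameter; for list=search it is srlimit, whose default is indeed 10 (the maximum is 50 for normal users and 500 for bots). A minimal sketch, assuming list=search is the module in use, with illustrative class and method names:

```java
public class SearchUrlBuilder {
    // srlimit raises the number of search results per request above the
    // default of 10; other list modules use their own prefix (e.g. cmlimit
    // for list=categorymembers).
    public static String buildSearchUrl(String endpoint, String query, int limit) {
        return endpoint + "?action=query&list=search&format=xml"
                + "&srsearch=" + java.net.URLEncoder.encode(query,
                        java.nio.charset.StandardCharsets.UTF_8)
                + "&srlimit=" + limit;
    }

    public static void main(String[] args) {
        System.out.println(buildSearchUrl(
                "https://en.wikipedia.org/w/api.php", "Eiffel Tower", 50));
    }
}
```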
Greetings! I am a relatively new MediaWiki extension developer, and I'm
having trouble with getting the MediaWiki API to capture all links coming
into and out of a given wiki page.
I know we can use *action=query&prop=extlinks *to get *external *links from
our wiki page to other (non-wiki) web pages, *action=query&prop=links *to
get internal links from our wiki page *out *to other wiki pages, and
*to get internal links from other wiki pages *into *our wiki page. However,
in the case where a link is *external *but points *to an internal wiki page*,
I can't find the proper API call. These links seem to be falling through
the cracks somehow.
I'm inclined to believe that I'm just missing something simple, especially
because this MediaWiki help page includes "external links to internal
pages" as a category of links:
Can anyone shed light on this issue? Is there an easy way to capture
external links to internal wiki pages through an API call?
Remember that clients should not depend on the specific query string
data returned inside the query-continue or continue nodes. Clients should
treat the returned key-value pairs as opaque data to be sent back to
the server with the subsequent query.
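In practice, that means a client loop should simply merge the returned pairs into its next request verbatim. A minimal sketch (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ContinuationDemo {
    // Merge the key-value pairs from the response's continue node into the
    // next request's parameters verbatim, treating the values as opaque.
    public static Map<String, String> nextRequest(Map<String, String> baseParams,
                                                  Map<String, String> continueNode) {
        Map<String, String> next = new HashMap<>(baseParams);
        next.putAll(continueNode); // never parse or reconstruct these values
        return next;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("action", "query");
        base.put("list", "allimages");
        Map<String, String> cont = new HashMap<>();
        cont.put("aicontinue", "20140105110938|Some_file.jpg");
        System.out.println(nextRequest(base, cont));
    }
}
```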
Interested users may also wish to review the discussion thread from when a
similar change was made last year.
To finish fixing bug 24782, Gerrit change 103589 changes the names
and values of various items under the continue or query-continue node in
the API response. Specifically:
* list=allimages will always use aicontinue, rather than sometimes using
aistart. The formatting of the value for the modes that formerly used
aistart will differ from that used by the modes that already used
aicontinue.
* list=blocks will now use bkcontinue rather than bkstart.
* list=categorymembers will always use cmcontinue, rather than sometimes
using cmstart. The formatting of the value for the modes that formerly used
cmstart will differ from that used by the modes that already used
cmcontinue.
* list=deletedrevs will always use drcontinue, rather than sometimes using
drstart. The formatting of the value for drcontinue is also changing for
the modes that did use it, and the formats differ between modes.
* list=logevents will now use lecontinue rather than lestart.
* list=protectedtitles will now use ptcontinue rather than ptstart.
* list=recentchanges is changing the formatting of the value of rccontinue.
* list=usercontribs will always use uccontinue, rather than sometimes using
ucstart. The formatting of the value for uccontinue is also changing for
the modes that did use it previously, and the formats differ between modes.
* list=watchlist will now use wlcontinue rather than wlstart.
These changes should be deployed to WMF wikis with 1.23wmf23, see
https://www.mediawiki.org/wiki/MediaWiki_1.23/Roadmap for the schedule.
Also, if anyone is aware of other modules that use a bare timestamp or
other non-unique value as their continuation, please reopen bug 24782 (if
they are in MediaWiki core) or file a new bug (if they are in an
extension).
: Note prop=imageinfo uses a timestamp, but this is OK, as in this case
it is apparently a unique identifier.
Brad Jorsch (Anomie)
Mediawiki-api-announce mailing list
It was noticed recently that these blockinfo properties return the internal
value from the database for the blockexpiry field, rather than formatting
it as list=blocks does. This means the formatting is more difficult to
parse, that it may differ between installations of MediaWiki depending
on the database backend in use, and that it doesn't match the formatting of
other timestamps returned by the API.
If anyone objects to this field being changed to be formatted in the same
manner as the expiry field returned by list=blocks, speak up on the
Brad Jorsch (Anomie)