Hello,
I am writing a Java program to extract the abstract of a Wikipedia page
given the title of the page. I have done some research and found
out that the abstract will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I
query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
I then parse the XML data we get back and take the wikitext in the <rev
xml:space="preserve"> tag, which represents the abstract of the Wikipedia
page. But this wikitext also contains the infobox data, which I do not need.
I would like to know if there is any way to remove the infobox data and get
only the wikitext of the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
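The API itself won't strip templates out of raw wikitext, so one common approach is to do it client-side. Below is a minimal Python sketch (Python rather than Java, for brevity) that drops leading {{...}} template blocks such as infoboxes by counting nested braces. The function name and sample text are illustrative, and real pages have edge cases (HTML comments, unbalanced braces) this does not handle:

```python
def strip_leading_templates(wikitext):
    """Remove leading {{...}} template blocks (e.g. infoboxes),
    tracking nested braces, and return the remaining wikitext."""
    text = wikitext.lstrip()
    while text.startswith("{{"):
        depth = 0
        i = 0
        while i < len(text):
            if text.startswith("{{", i):
                depth += 1
                i += 2
            elif text.startswith("}}", i):
                depth -= 1
                i += 2
                if depth == 0:
                    break
            else:
                i += 1
        text = text[i:].lstrip()
    return text

# Illustrative wikitext, not an actual API response:
sample = ("{{Infobox building\n|name = Eiffel Tower\n}}\n"
          "The '''Eiffel Tower''' is a lattice tower in Paris.")
print(strip_leading_templates(sample))
```

The same brace-counting logic ports directly to Java; the key point is that a simple regex is not enough, because infobox parameters can themselves contain nested {{...}} templates.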
Looking forward to your help.
Thanks in Advance
Aditya Uppu
Hello. I'm using version 1.16, and I'm trying to use curl to call the
API to upload files specified in a form by the user. So, the user
submits the file they want to upload via a POST, and the PHP copies that
file to a temporary spot on the server. I then use curl to tell the API
to upload that file, but it says:
<error code="uploaddisabled" info="Uploads are not enabled...
Even though I can totally upload through Special:Upload just fine.
Here's the curl (this is after obtaining an edit token in code that
works fine as a setup for creating new articles):
$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookiefile.txt');
curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookiefile.txt');
curl_setopt($ch, CURLOPT_HTTPHEADER, Array("Content-Type: multipart/form-data"));
curl_setopt($ch, CURLOPT_URL,
"http://$server_name/w/api.php?action=upload&token=$edittoken&filename=$filename&url=http://$server_name/upload_tmp/$filename");
I'm also confused as to how to call it using the 'file' option instead,
as the manual says it takes 'file contents', and the examples have
'file=file_contents_here'. When I try that method with the curl:
curl_setopt($ch, CURLOPT_URL,
"http://$server_name/w/api.php?action=upload&token=$edittoken&filename=$filename&file=file_contents_here");
then I get the error:
<error code="missingparam" info="One of the parameters sessionkey, file,
url is required"
So, what's 'file contents' supposed to contain? Thanks for the help.
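For what it's worth, the 'file' parameter expects the raw bytes of the file sent as a file part in a multipart/form-data POST body, not a literal string in the URL query; putting it in the query string is why the API behaves as if the parameter is missing. A rough stdlib Python sketch of what such a body looks like (the field names follow the upload module's parameters; the token and file bytes below are placeholders, not real values):

```python
import uuid

def build_multipart(fields, file_field, filename, file_bytes):
    """Hand-roll a multipart/form-data body (stdlib only).
    Returns (content_type_header_value, body_bytes)."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f'{value}\r\n').encode())
    # The actual file goes in as a named file part with raw bytes:
    parts.append(
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="{file_field}"; filename="{filename}"\r\n'
         'Content-Type: application/octet-stream\r\n\r\n').encode()
        + file_bytes + b'\r\n')
    parts.append(f'--{boundary}--\r\n'.encode())
    return f'multipart/form-data; boundary={boundary}', b''.join(parts)

# Placeholder values -- a real token comes from the edit-token query:
ctype, body = build_multipart(
    {"action": "upload", "format": "xml",
     "filename": "Example.png", "token": "abc+\\"},
    "file", "Example.png", b"\x89PNG...")
```

In PHP, curl can do the equivalent by passing an array as CURLOPT_POSTFIELDS with an @-prefixed file path for the 'file' entry, rather than building the URL by hand.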
Will Preston
Network Administrator
OLSON KUNDIG ARCHITECTS
206.624.5670
olsonkundigarchitects.com <http://www.olsonkundigarchitects.com/>
Hi all,
How can I use the wikipedia "query" API to retrieve the display title of a page with the proper lowercase / uppercase? For instance, if I call:
http://en.wikipedia.org/w/api.php?action=query&prop=info&titles=IPad&format…
I get back:
<?xml version="1.0"?>
<api>
<query>
<pages>
<page pageid="25970423" ns="0" title="IPad" touched="2010-05-26T13:22:43Z" lastrevid="364291567" counter="0" length="56739" />
</pages>
</query>
</api>
The raw page title is indeed "IPad", but the display one should be "iPad". Is there any way I can get the display title?
I don't want to use the "parse" API (which has a nice "displaytitle" property) as it returns way too much data.
In any case, it seems a "prop" option for "displaytitle" in the "query" API would be a nice addition and it would be consistent with the properties you can retrieve with the "parse" API.
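As a possible alternative: later MediaWiki versions accept inprop=displaytitle on prop=info, which returns the display title without the full parse output. Whether the servers you are targeting already support it is an assumption worth verifying. A sketch of the request:

```python
from urllib.parse import urlencode

# inprop=displaytitle is an assumption here: it exists in later
# MediaWiki releases, but support depends on the wiki's version.
params = {
    "action": "query",
    "prop": "info",
    "inprop": "displaytitle",
    "titles": "IPad",
    "format": "xml",
}
url = "http://en.wikipedia.org/w/api.php?" + urlencode(params)
print(url)
```

If the wiki does support it, the response carries the display title as an extra attribute on the page element, so the rest of the prop=info output stays as small as in the example above.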
-Pol
________________________________
Pierre-Olivier Latour
pol(a)cooliris.com
Hi,
I've set up mediawiki-1.15.1 with the following hook in LocalSettings.php:
$wgHooks['EditPage::attemptSave'][] = 'foo';
foo is a simple function:
function foo($editpage) {
return true;
}
With this setup I've noticed that I cannot save any page with mwclient (ver
0.6.4). The call:
page.save(newText)
raises the following exception:
File "/lib/python2.6/json/decoder.py", line 338, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
When I comment out the hook, the save command works properly. Any
suggestions?
My system is OS X 10.6.3.
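A "No JSON object could be decoded" error from mwclient usually means the API returned something other than JSON, e.g. a PHP notice or warning emitted around the hook before the JSON output. A small Python sketch (a debugging aid, not mwclient internals) that surfaces the raw body instead of the bare decode error:

```python
import json

def decode_api_response(raw):
    """Try to decode an API response as JSON; on failure, show the
    raw body so stray PHP warnings/HTML become visible."""
    try:
        return json.loads(raw)
    except ValueError as e:
        raise ValueError("API returned non-JSON output: %r" % raw[:200]) from e

# A PHP notice leaking before the JSON would make decoding fail, e.g.:
# decode_api_response('<b>Notice</b>: Undefined index ... {"edit": ...}')
```

Capturing the raw response this way (or checking the server's PHP error log) should show what the hook is injecting into the output.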
Thank you
Alex.
Hello,
I cannot figure out how to delete old versions of an image through the
API. http://www.mediawiki.org/wiki/API:Edit_-_Delete only talks about
deleting pages. Is this possible?
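Depending on the MediaWiki version, action=delete accepts an oldimage parameter for deleting a single old revision of a file, taking the archive name reported by prop=imageinfo&iiprop=archivename; whether your version supports it is worth checking against its own api.php help. A Python sketch of the POST parameters, with placeholder archive name and token:

```python
from urllib.parse import urlencode

# Placeholders: a real oldimage value comes from
# action=query&prop=imageinfo&iiprop=archivename, and the token
# from the usual delete-token request.
params = {
    "action": "delete",
    "title": "File:Example.png",
    "oldimage": "20100501123456!Example.png",
    "token": "abc+\\",
}
post_body = urlencode(params)
print(post_body)
```

Without oldimage, action=delete removes the whole page (and all file revisions), so the parameter is the difference between pruning history and deleting the file outright.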
Thanks,
-Mike
________________________________
hi,
Sorry, I've been through the documentation, but I can't seem to find a
method to return a rendered section of a page.
I see the methods for editing a section, but not for obtaining one.
Any help would be greatly appreciated.
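One way to get a rendered section is action=parse with the page and section parameters, where the section numbers match the ones used for editing (0 is the lead section) and prop=text returns the rendered HTML. A minimal Python sketch building such a request (the page title is just an example):

```python
from urllib.parse import urlencode

# Section numbers here are the same ones action=edit uses;
# "Eiffel Tower" is only an example title.
params = {
    "action": "parse",
    "page": "Eiffel Tower",
    "section": "2",
    "prop": "text",
    "format": "xml",
}
url = "http://en.wikipedia.org/w/api.php?" + urlencode(params)
print(url)
```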
Many apologies if this is a really basic question I'm too dim to figure
out.
Thanks very much
Don