Hello,
I am writing a Java program to extract the abstract of a Wikipedia page,
given the title of that page. I have done some research and found out that
the abstract will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I
query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
and then parse the XML data we get back, taking the wikitext inside the
<rev xml:space="preserve"> tag, which represents the abstract of the page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way to remove the infobox data and keep
only the wikitext of the page's abstract, or if there is an alternative
method by which I can get the abstract of the page directly.
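For context, here is a rough sketch of the brace-counting approach I have in
mind for stripping a leading {{Infobox ...}} template from the wikitext (just
a sketch; the method name is my own placeholder, not from any library):

// Rough sketch: drop a leading "{{Infobox ...}}" template by counting
// matching "{{" / "}}" pairs. Only handles the common case of a single
// infobox at the very top of the wikitext.
public static String stripLeadingInfobox(String wikitext) {
    String trimmed = wikitext.trim();
    if (!trimmed.regionMatches(true, 0, "{{Infobox", 0, "{{Infobox".length())) {
        return wikitext; // no leading infobox, return unchanged
    }
    int depth = 0;
    for (int i = 0; i < trimmed.length() - 1; i++) {
        if (trimmed.charAt(i) == '{' && trimmed.charAt(i + 1) == '{') {
            depth++;
            i++;
        } else if (trimmed.charAt(i) == '}' && trimmed.charAt(i + 1) == '}') {
            depth--;
            i++;
            if (depth == 0) {
                // everything after the closing "}}" should be the abstract
                return trimmed.substring(i + 1).trim();
            }
        }
    }
    return wikitext; // unbalanced braces, give up
}

Even with something like this, I would prefer an API parameter that returns
just the intro text directly, if one exists.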
Looking forward to your help.
Thanks in Advance
Aditya Uppu
Hello,
How can I fetch images from Wikimedia Commons? I'm able to get a list of all images in an article through this query:
http://en.wikipedia.org/w/api.php?action=parse&page=Norway&prop=images&form…
What query should I use to fetch thumbnails and full-size images of the listed image titles?
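From reading the docs, prop=imageinfo with iiprop=url (plus iiurlwidth for a
thumbnail) looks like the right direction. Here is a small Java sketch of the
request I have in mind (the file title is only a placeholder):

// Sketch: build an imageinfo query for one file title. iiurlwidth asks the
// API to also return the URL of a thumbnail scaled to that width.
import java.net.URLEncoder;

public class ImageInfoQuery {
    public static void main(String[] args) throws Exception {
        String title = "File:Example.jpg"; // placeholder for a title from the image list
        String url = "http://en.wikipedia.org/w/api.php"
                + "?action=query&prop=imageinfo"
                + "&iiprop=url&iiurlwidth=200&format=xml"
                + "&titles=" + URLEncoder.encode(title, "UTF-8");
        System.out.println(url);
        // The <ii> element in the response should carry a "url" attribute
        // (full size) and a "thumburl" attribute (scaled).
    }
}

Is that the intended way, or is there a better query for this?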
Regards,
Siteshwar Vashisht
Hi everyone,
It appears that, since today, the "iiurlwidth" and "iiurlheight" parameters of the query API / imageinfo are ignored, and these elements are missing from the returned XML:
- thumburl
- thumbwidth
- thumbheight
The doc still says:
iiurlwidth - If iiprop=url is set, a URL to an image scaled to this width will be returned.
Only the current version of the image can be scaled
Default: -1
iiurlheight - Similar to iiurlwidth. Cannot be used without iiurlwidth
Default: -1
I don't remember seeing any "BREAKING CHANGE" email on this list about this: did I miss anything?
In any case, this badly breaks our free Discover app for iPad, which has close to 1,000,000 downloads and a ton of enthusiastic users (http://itunes.apple.com/us/app/id384224429?mt=8). Any help on what's happening would be very valuable. Thanks in advance!
PS: Sample API request done by the app and observed result:
http://en.wikipedia.org/w/api.php?action=query&prop=imageinfo&titles=File:B…
<?xml version="1.0"?>
<api>
<query>
<normalized>
<n from="File:Bundesarchiv_Bild_146-2004-0099,_Kaiser_Friedrich_III..jpg" to="File:Bundesarchiv Bild 146-2004-0099, Kaiser Friedrich III..jpg"/>
</normalized>
<pages>
<page ns="6" title="File:Bundesarchiv Bild 146-2004-0099, Kaiser Friedrich III..jpg" missing="" imagerepository="shared">
<imageinfo>
<ii timestamp="2008-12-12T22:17:03Z" size="44861" width="553" height="800" url="http://upload.wikimedia.org/wikipedia/commons/7/79/Bundesarchiv_Bild_146-20…" descriptionurl="http://commons.wikimedia.org/wiki/File:Bundesarchiv_Bild_146-2004-0099,_Kai…" metadata="" mime="image/jpeg"/>
</imageinfo>
</page>
</pages>
</query>
</api>
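For completeness, here is a purely illustrative Java sketch of the check that
shows "thumburl" missing from the <ii> element (the width value is arbitrary,
and any HTTP/XML library would do):

// Sketch: request imageinfo with iiurlwidth and report whether the
// response contains a "thumburl" attribute (at the moment it does not).
import java.net.URL;
import java.net.URLEncoder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ThumbUrlCheck {
    public static void main(String[] args) throws Exception {
        String title = "File:Bundesarchiv Bild 146-2004-0099, Kaiser Friedrich III..jpg";
        String query = "http://en.wikipedia.org/w/api.php?action=query&prop=imageinfo"
                + "&iiprop=url&iiurlwidth=300&format=xml"
                + "&titles=" + URLEncoder.encode(title, "UTF-8");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new URL(query).openStream());
        NodeList iis = doc.getElementsByTagName("ii");
        for (int i = 0; i < iis.getLength(); i++) {
            Element ii = (Element) iis.item(i);
            System.out.println("url      = " + ii.getAttribute("url"));
            System.out.println("thumburl = " + ii.getAttribute("thumburl")); // empty if missing
        }
    }
}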
-Pierre
________________________________
Pierre-Olivier Latour
pol(a)cooliris.com
Hello,
I'm trying to learn how to convert MediaWiki markup to HTML. One way I've figured out is by sending a query to api.php:
http://en.wikipedia.org/w/api.php?action=parse&text={{Project:Sandbox}}&for…
In the response I get back, all the HTML special characters are escaped; for
example, < comes back as &lt;, so <a> becomes &lt;a&gt;.
Is there any way to disable this behaviour?
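If I understand the output correctly, the escaping is simply the XML encoding
of the <text> element in the response, so reading that element with an XML
parser should give the HTML back unescaped. A small Java sketch of what I mean
(I am assuming format=xml here):

// Sketch: the HTML returned by action=parse is XML-escaped only because it
// sits inside the <text> element; an XML parser undoes that escaping.
import java.net.URL;
import java.net.URLEncoder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParseToHtml {
    public static void main(String[] args) throws Exception {
        String query = "http://en.wikipedia.org/w/api.php?action=parse&format=xml"
                + "&text=" + URLEncoder.encode("{{Project:Sandbox}}", "UTF-8");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new URL(query).openStream());
        // getTextContent() returns the element content with entities decoded,
        // so this prints real HTML tags rather than &lt;a&gt; and friends.
        String html = doc.getElementsByTagName("text").item(0).getTextContent();
        System.out.println(html);
    }
}

If that is all it is, parsing the XML instead of treating the response body
as raw text might be enough for my case.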
Regards,
Siteshwar Vashisht
Dear All,
I'm new to MediaWiki; sorry if my question is trivial, but I still can't find an answer on any page.
In MediaWiki API:Properties (http://www.mediawiki.org/wiki/API:Properties), we can know if a page is a redirect.
BUT
How can I detect WHERE a page redirects TO, and WHERE it is redirected FROM? And how can I check whether a page is the start, the middle, or the end of a redirect chain?
Could you please point me in the right direction?
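For illustration, here is a small Java sketch of the two queries I have been
looking at, in case they are the right direction (the page title is just an
example):

// Sketch of two queries (not sure these are the right ones):
//  1) adding "&redirects" to a query resolves redirects and lists
//     <r from=".." to=".."/> pairs, i.e. where a page redirects TO;
//  2) list=backlinks with blfilterredir=redirects should list the pages
//     that redirect TO a given title, i.e. what it is redirected FROM.
import java.net.URLEncoder;

public class RedirectQueries {
    public static void main(String[] args) throws Exception {
        String title = URLEncoder.encode("Einstein", "UTF-8"); // example title
        String whereTo = "http://en.wikipedia.org/w/api.php?action=query&format=xml"
                + "&redirects&titles=" + title;
        String whereFrom = "http://en.wikipedia.org/w/api.php?action=query&format=xml"
                + "&list=backlinks&blfilterredir=redirects&bltitle=" + title;
        System.out.println(whereTo);
        System.out.println(whereFrom);
        // Following the "to" value repeatedly would walk a redirect chain.
    }
}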
Regards,
Kevin.
Hello,
I am extracting Wikipedia articles via the MediaWiki API (example:
http://en.wikipedia.org/w/api.php?action=parse&prop=text&page=Olive&format=…
)
and it works quite well most of the time, but sometimes the API is slow to
answer or, worse, I get no response at all and my request hits the timeout.
I have tried many different CURL timeout parameters to work around it, but
nothing is absolutely safe (e.g. 2 retries + a 7-second execution
timeout)...
The problem occurs randomly on articles of any size (big or small), and I
noticed that after a failure it always works on a second manual attempt...
TECH: I am using the SxWiki package (SxWiki.inc.php) with its included CURL methods.
CURL ERROR: Operation timed out after 7000 milliseconds with 0 bytes
received
***
Are these kinds of problems known with the API? Could it be Wikipedia API
overload? I have been trying to find a solution for weeks now, and no method
is 100% reliable!
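For reference, here is a rough retry-with-backoff sketch of what I mean by
"2 retries + 7 sec timeout" (written in Java purely as an illustration; the
maxlag parameter and the descriptive User-Agent are ideas I am considering,
not something SxWiki sets for me):

// Rough sketch: fetch a URL with connect/read timeouts, retrying with a
// growing pause between attempts. All values are illustrative.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RetryFetch {
    public static String fetch(String url, int maxAttempts) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setConnectTimeout(7000); // 7 s, like my CURL setting
                conn.setReadTimeout(7000);
                conn.setRequestProperty("User-Agent", "MyExtractor/0.1 (contact@example.org)");
                try (InputStream in = conn.getInputStream()) {
                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    byte[] chunk = new byte[8192];
                    int n;
                    while ((n = in.read(chunk)) != -1) {
                        buf.write(chunk, 0, n);
                    }
                    return buf.toString("UTF-8");
                }
            } catch (IOException e) {
                if (attempt == maxAttempts) throw e;
                Thread.sleep(1000L * attempt); // back off before retrying
            }
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) throws Exception {
        // maxlag=5 asks the API to return an error right away when the
        // database servers are lagging, so the client can back off.
        String url = "http://en.wikipedia.org/w/api.php"
                + "?action=parse&prop=text&page=Olive&format=xml&maxlag=5";
        System.out.println(fetch(url, 3).length() + " characters received");
    }
}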
Thank you for your help.
Oskar