It appears that, since today, the "iiurlwidth" and "iiurlheight" parameters of the query API's imageinfo module are being ignored, and the corresponding elements are missing from the returned XML.
The doc still says:
iiurlwidth - If iiprop=url is set, a URL to an image scaled to this width will be returned.
Only the current version of the image can be scaled
iiurlheight - Similar to iiurlwidth. Cannot be used without iiurlwidth
I don't remember seeing any "BREAKING CHANGE" email on this list about this: did I miss anything?
In any case, this badly breaks our free Discover app for iPad, which has close to 1,000,000 downloads and a ton of enthusiastic users (http://itunes.apple.com/us/app/id384224429?mt=8). Any help understanding what's happening would be very valuable. Thanks in advance!
PS: Sample API request made by the app and the observed result:
<n from="File:Bundesarchiv_Bild_146-2004-0099,_Kaiser_Friedrich_III..jpg" to="File:Bundesarchiv Bild 146-2004-0099, Kaiser Friedrich III..jpg"/>
<page ns="6" title="File:Bundesarchiv Bild 146-2004-0099, Kaiser Friedrich III..jpg" missing="" imagerepository="shared">
<ii timestamp="2008-12-12T22:17:03Z" size="44861" width="553" height="800" url="http://upload.wikimedia.org/wikipedia/commons/7/79/Bundesarchiv_Bild_146-20…" descriptionurl="http://commons.wikimedia.org/wiki/File:Bundesarchiv_Bild_146-2004-0099,_Kai…" metadata="" mime="image/jpeg"/>
I'm trying to learn how to convert MediaWiki markup to HTML. One of the ways I've figured out is by sending a query to api.php:
After parsing, the response I get has all the HTML special characters escaped; e.g. < comes back as &lt;, so <a> becomes &lt;a&gt;.
Is there any way to disable this behaviour?
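In case it helps to see it end to end, here is a minimal sketch (assuming the query is action=parse; your exact request may differ). The escaping is presumably just the XML/JSON transport envelope around the HTML; once the response is decoded by a proper parser, the HTML comes out unescaped.

# Sketch: fetch parsed HTML via action=parse; the JSON decoder undoes the escaping.
import requests

API = "http://en.wikipedia.org/w/api.php"  # placeholder endpoint

resp = requests.get(API, params={
    "action": "parse",
    "format": "json",
    "text": "'''Hello''' [[world]]",  # some sample wikitext
})

html = resp.json()["parse"]["text"]["*"]  # already plain HTML after decoding
print(html)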
I'm new to MediaWiki; sorry if my question is trivial, but I still can't find an answer on any page.
In MediaWiki API:Properties (http://www.mediawiki.org/wiki/API:Properties), we can know if a page is a redirect.
How can I detect WHERE a page redirects TO, and WHERE it is redirected FROM? How can I check whether a page is the start, middle, or end of a redirect chain?
Could you please point me in the right direction?
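Not an authoritative answer, but a sketch of the two queries that I believe cover this: the redirects parameter on action=query reports each from -> to hop it resolves (where a page redirects to), and list=backlinks with blfilterredir=redirects lists the pages that redirect to a given title (where it is redirected from). Chaining the two should let you walk a redirect chain in either direction. Endpoint and title below are just placeholders.

# Sketch: checking redirect direction for a title (Python + requests).
import requests

API = "http://en.wikipedia.org/w/api.php"
TITLE = "IPad"  # example title

# 1) Where does this page redirect TO?  The "redirects" flag makes the query
#    follow redirects and report each from -> to hop it resolved.
r = requests.get(API, params={
    "action": "query", "format": "json",
    "titles": TITLE, "redirects": "",
}).json()
for hop in r["query"].get("redirects", []):
    print("redirects:", hop["from"], "->", hop["to"])

# 2) What redirects FROM other pages TO this title?
r = requests.get(API, params={
    "action": "query", "format": "json",
    "list": "backlinks", "bltitle": TITLE,
    "blfilterredir": "redirects", "bllimit": 50,
}).json()
for bl in r["query"]["backlinks"]:
    print("redirected from:", bl["title"])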
I am extracting Wikipedia articles via the MediaWiki API (example:
and it works quite well most of the time, but sometimes the API is slow
to answer or, worse, I get no response at all from the API and my
request falls into a timeout (I have tried many different CURL timeout
params to resolve it, but nothing is absolutely safe (i.e. 2 retries + 7 sec exec
The problem occurs randomly with articles of any size (big or small), and I
noticed that after a failure it always works on a second manual attempt...
TECH: I am using the SxWiki package (SxWiki.inc.php) + the CURL methods it includes
CURL ERROR: Operation timed out after 7000 milliseconds with 0 bytes
Are you aware of this kind of problem with the API? Wikipedia API overload??
I've been trying to find a solution for weeks now and no method is 100% reliable!
Thank you for your help.
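For what it's worth, a rough sketch of the kind of retry-with-backoff loop that can paper over intermittent timeouts (shown in Python rather than the PHP/SxWiki setup, purely as an illustration; maxlag is a standard API parameter that asks the servers to refuse the request when they are lagging, so the client can back off and retry):

# Sketch of a retry-with-backoff fetch; not SxWiki code, just the pattern.
import time
import requests

API = "http://en.wikipedia.org/w/api.php"

def fetch(params, retries=3, timeout=7, backoff=2.0):
    params = dict(params, format="json", maxlag=5)  # maxlag: back off when servers lag
    last_error = None
    for attempt in range(retries):
        try:
            resp = requests.get(API, params=params, timeout=timeout)
            data = resp.json()
            if "error" in data and data["error"].get("code") == "maxlag":
                raise RuntimeError("server lagged")  # treat as retryable
            return data
        except (requests.RequestException, RuntimeError, ValueError) as e:
            last_error = e
            time.sleep(backoff * (attempt + 1))  # wait longer before each retry
    raise last_error

data = fetch({"action": "query", "prop": "revisions", "rvprop": "content",
              "titles": "IPad"})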
prop=pageprops formerly added all the page properties directly to the page node, for example:
<page pageid="25970423" ns="0" title="IPad" defaultsort="Ipad"
In bug 27479, it was pointed out that this can cause API errors if the
name of a page property conflicts with a property or node generated by
another module; for example, both prop=info&inprop=displaytitle and
prop=pageprops will try to add displaytitle to the page node.
In r82312 the output of pageprops was changed to place the properties
on a subnode named "pageprops" to avoid such conflicts, in much the same
way that prop=categoryinfo works. For example,
<page pageid="25970423" ns="0" title="IPad">
<pageprops defaultsort="Ipad" displaytitle="iPad" />
This is tagged for backporting to 1.17 and 1.17wmf1, although I couldn't
say when that might be done.
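For client code that has to cope with both outputs, here is a small sketch (Python/ElementTree; the attribute names are taken from the examples above) that reads properties from the <pageprops> subnode when it is present and falls back to the page attributes otherwise. The fallback is inherently ambiguous, which is exactly the conflict that motivated the change.

# Sketch: read page properties from either the new <pageprops> subnode
# or the old attributes-on-<page> format.
import xml.etree.ElementTree as ET

NON_PROP_ATTRS = {"pageid", "ns", "title"}  # attributes that are not page props

def page_props(page_elem):
    sub = page_elem.find("pageprops")
    if sub is not None:                      # new (r82312) format
        return dict(sub.attrib)
    # old format: props were mixed into the <page> attributes themselves,
    # so anything added by other modules cannot be told apart reliably
    return {k: v for k, v in page_elem.attrib.items() if k not in NON_PROP_ATTRS}

new_xml = ('<page pageid="25970423" ns="0" title="IPad">'
           '<pageprops defaultsort="Ipad" displaytitle="iPad" /></page>')
print(page_props(ET.fromstring(new_xml)))  # {'defaultsort': 'Ipad', 'displaytitle': 'iPad'}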