Hi, in response to bug 54607 [1], we've changed the semantics of the
mobileformat parameter to action=parse.
== Summary ==
Previously, it accepted the strings 'html' or 'wml' (later just
'html') and modified the structure of the output (see below). This was
problematic because you needed to retrieve the HTML from the output in
different ways, depending on whether mobileformat was specified or not.
Now, mobileformat is a boolean parameter: if a 'mobileformat' parameter
is present in the request, it will be treated as "the output should be
mobile-friendly", regardless of its value, and the output structure
will stay the same. For compatibility with older callers,
mobileformat=(html|wml) will be special-cased to return the older
structure for at least 6 months from now. These changes will start
being rolled out to the WMF sites tomorrow, Tuesday October 24th, and
the rollout will be complete by October 31st.
== Examples ==
=== Non-mobile parse ===
api.php?action=parse&format=json
{
    "parse": {
        "title": "...",
        "text": {
            "*": "foo"
        }
    }
}
api.php?action=parse&format=xml
<?xml version="1.0"?>
<api>
  <parse title="..." displaytitle="...">
    <text xml:space="preserve">foo</text>
  </parse>
</api>
=== Parse that outputs mobile HTML, old style ===
api.php?action=parse&format=json&mobileformat=html
{
    "parse": {
        "title": "API",
        "text": "foo"
    }
}
api.php?action=parse&format=xml&mobileformat=html
<?xml version="1.0"?>
<api>
  <parse title="..." text="foo" displaytitle="...">
  </parse>
</api>
=== Parse that outputs mobile HTML, new style ===
api.php?action=parse&format=...&mobileformat
The output structure is the same as for non-mobile parses.
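For example, with the new behaviour a caller can extract the HTML in
exactly the same way whether or not it asks for mobile output. A rough
Python sketch (assuming the requests library; the endpoint and the
'page' parameter are just placeholders for whatever you already pass):

import requests

def get_parsed_html(page, mobile=False):
    # With the new semantics the output structure is identical whether or
    # not mobile output is requested; the mere presence of 'mobileformat'
    # asks for mobile-friendly HTML.
    params = {'action': 'parse', 'format': 'json', 'page': page}
    if mobile:
        # Any value other than 'html' or 'wml' (including an empty one)
        # gets the new-style output.
        params['mobileformat'] = ''
    r = requests.get('https://en.wikipedia.org/w/api.php', params=params)
    # Same extraction path in both cases:
    return r.json()['parse']['text']['*']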
== FAQ ==
Q: I didn't use mobileformat before, does anything change for me?
A: No.
Q: I use mobileformat=html, will my bot/tool be broken now?
A: No, you will have 6 months to switch to the new style.
Q: I'm only planning to use mobileformat, what should I do?
A: Just use the new style.
Q: How did this format discrepancy appear in the first place?
A: To err is human.
-----
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=54607
--
Best regards,
Max Semenik ([[User:MaxSem]])
Dear All,
I am working on an application which requires the total number of Wikipedia pages that contain a specific word or pair of words. I have tried to use the advanced search to get these page counts and retrieve the HTML page to extract them. Please see the attached snapshot: is the information pointed to by the red arrow the actual total number of pages containing the name David Beckham? Any advice is highly appreciated.
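To be concrete, the kind of query I have in mind is roughly the
following (a Python sketch, assuming the requests library; I am not
sure these are the right parameters):

import requests

params = {
    'action': 'query',
    'format': 'json',
    'list': 'search',
    'srsearch': '"David Beckham"',  # quoted to search for the exact phrase
    'srinfo': 'totalhits',
    'srlimit': 1,                   # only the count matters, not the results
}
r = requests.get('https://en.wikipedia.org/w/api.php', params=params)
print(r.json()['query']['searchinfo']['totalhits'])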
Cheers,
...................................................................
Muhidin A. Mohamed,
School of Electrical, Electronic and Computer Engineering,
University of Birmingham,
Pritchatts Road,
Edgbaston,
B15 2SA
Hello,
is there any minimum number of votes needed for a bug to be fixed?
I am also wondering what the average time is - if there is one - for a
normal bug to get fixed, and whether I may help.
How would it be possible to help without installing the MediaWiki
software? (I am a bit skilled in Python, not other languages.)
Thanks!
Hi,
My name is Kenrick, I'm an Indonesian Wikipedia administrator trying to
experiment with the MediaWiki API.
I am currently doing a project to list new articles using the
MediaWiki API with these parameters (a rough sketch of the full request
follows the list):
action = query
list = recentchanges
rctype = new
rcshow = !redirect
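Roughly, the full request I am making looks like this (a Python sketch,
assuming the requests library; the wiki URL and the extra rc*
parameters are my own additions):

import requests

# New pages on the wiki, excluding pages that were redirects when created.
params = {
    'action': 'query',
    'format': 'json',
    'list': 'recentchanges',
    'rctype': 'new',
    'rcshow': '!redirect',
    'rcnamespace': 0,                 # my addition: article namespace only
    'rclimit': 50,
    'rcprop': 'title|ids|timestamp',  # my addition
}
r = requests.get('https://id.wikipedia.org/w/api.php', params=params)
for rc in r.json()['query']['recentchanges']:
    print(rc['title'], rc['timestamp'])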
But this method also includes articles whose current revision is a
redirect.
Is there any way to list new articles exactly like Special:Newpages
does?
Thank you.
Hi, All,
There are 2 move operations in recent changes:
"move/move" and "move/move_redir".
In "move/move" changes, there may be an attribute suppressedredirect=""
which means the page was moved without creating a redirect
(http://www.mediawiki.org/wiki/Help:Moving_a_page).
Here comes my question:
is a "move/move" change without the suppressedredirect="" attribute
equivalent to "move/move_redir", i.e. is it a move-with-redirect
operation?
Thanks.
Hi,
I am writing a mobile client that can show Wikipedia content. My
approach is to download the raw MediaWiki markup instead of the
generated HTML. This gives me more control and I avoid using an HTML
parser/viewer.
The approach is quite successful except when I encounter images.
In the markup I can see something like:
File:1945-P-Jefferson-War-Nickel-Reverse.JPG
I use the API to fetch some metadata:
en.wikipedia.org/w/api.php?action=query&prop=imageinfo \
&iilimit=1&format=xml&iiprop=dimensions%7Cmime&titles=[foo]
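In code, that metadata call looks roughly like the following (a Python
sketch, assuming the requests library and using format=json instead of
xml for easier parsing; the title is the one I found in the markup
above):

import requests

title = 'File:1945-P-Jefferson-War-Nickel-Reverse.JPG'  # from the markup
params = {
    'action': 'query',
    'format': 'json',
    'prop': 'imageinfo',
    'iilimit': 1,
    'iiprop': 'dimensions|mime',
    'titles': title,
}
r = requests.get('https://en.wikipedia.org/w/api.php', params=params)
for page in r.json()['query']['pages'].values():
    info = page['imageinfo'][0]
    print(info['width'], info['height'], info['mime'])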
The piece of the puzzle I am still missing is how to find out the
actual download URL for any given image.
I've seen images start with:
http://upload.wikimedia.org/wikipedia/en/6/6d/
and with:
http://upload.wikimedia.org/wikipedia/commons/d/d0/
But I don't really understand how to decide which URL to prefix to my
image name.
Anyone can shed some light on this?
Thanks!
--
Thomas Zander
Hi All,
we've been developing a product to access the knowledge in Wikipedia in
other ways... and of course we make use of the MediaWiki API.
We are using the GeoData extension to construct a "nearby" feature
similar to the wiki's, but I'd like the possibility of retrieving
articles beyond the 10000 m radius limit: this is the case when I drag
the map.
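For reference, the nearby query we currently make looks roughly like
this (a Python sketch, assuming the requests library; the coordinates
and limit are just examples):

import requests

# GeoData "nearby" search around a point; gsradius is capped at 10000
# metres, which is the limit I would like to get past when the map is
# dragged or zoomed out.
params = {
    'action': 'query',
    'format': 'json',
    'list': 'geosearch',
    'gscoord': '52.45|-1.93',  # example coordinates (lat|lon)
    'gsradius': 10000,         # maximum allowed radius, in metres
    'gslimit': 50,
}
r = requests.get('https://en.wikipedia.org/w/api.php', params=params)
for place in r.json()['query']['geosearch']:
    print(place['title'], place['dist'])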
As an example, you can zoom the map out to "nations" scale and drag it
around the globe, and you'll see other articles pinned on the map.
I've seen something similar in Google Maps and other mobile apps, so I
read the documentation again: http://www.mediawiki.org/wiki/Extension:GeoData
I figured out this feature should be available from geohack, which is
already included, by the use of the parameter "scale".
However, I didn't understand how to use it; also, geohack is not
mentioned in the main doc
http://en.wikipedia.org/w/api.php
Should I pipe it with a generator? Any example of how to do it?
Once again, thank you so much to all the devs for your help!
Luigi
Hi,
As part of the visualisation tool I'm building, I'm fetching the parsed
revisions of an article. When the article is of considerable size, e.g.
the latest revisions of Barack Obama, a single request takes 10+
seconds. As the tool is interactive and shows the edits made to an
article as an animation, the time taken by the server does not bode
well. (The requests are read-only.)
I'm currently not making parallel requests. What would be a reasonable
degree of parallel requests? Are there other ways to get around this
latency issue?
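To be concrete, by a limited degree of parallelism I mean something
like the following (a Python sketch, assuming the requests library; the
revision IDs and worker count are placeholders):

import requests
from concurrent.futures import ThreadPoolExecutor

API = 'https://en.wikipedia.org/w/api.php'

def fetch_parsed(revid):
    # Parse one old revision and return its rendered HTML.
    params = {'action': 'parse', 'format': 'json', 'oldid': revid}
    return revid, requests.get(API, params=params).json()['parse']['text']['*']

revids = [580000000, 580000001, 580000002]  # placeholder revision IDs

# Keep the number of concurrent read requests small to stay polite to
# the servers.
with ThreadPoolExecutor(max_workers=4) as pool:
    for revid, html in pool.map(fetch_parsed, revids):
        print(revid, len(html))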
https://meta.wikimedia.org/wiki/Grants:IEG/Replay_Edits talks about the
tool and the project.
Thanks
Jeph
Dear All,
I need to integrate the number of Wikipedia pages containing a specific word into my application. For example, if I search for "car", I want to receive the total count of Wikipedia pages containing that word anywhere in the article content. I have tried to use the special search and retrieving the raw wikitext of articles, but it is not efficient. I would appreciate it if you could forward any links or information leading to that.
Cheers,
Muhidin A.