I need some help - I've implemented the API using CodeIgniter for a
client's site, and everything works on the site except for the wiki pages,
which are sluggish.
Here is my controller which lists the function calls being made and in
what order:
http://pastebin.com/m4021c1a7
And here is my model where the actual API calls are made:
http://pastebin.com/dd7c504c
The api.php file exists on the same server that is making the calls, so
responses should be nearly instant, but the pages are very slow, and only
when the MW API is being used.
Any help or insight anybody could give would be MUCH appreciated.
- Mark
We're heavily using the MediaWiki API in our open-source project mwlib (http://code.pediapress.com/
), so first of all: thanks to you all for implementing this
functionality in MediaWiki!
You may be following the discussion initiated by Erik Möller on
Foundation-l about appropriate attribution. As a consensus has yet to be
found, we plan to include all authors (minus minor edits, minus bots)
after each article in documents (PDFs, ODFs) rendered from article
collections.
Currently we're using an API query with prop=revisions, requesting
rvprop=user|ids|flags. Afterwards we filter out minor edits,
anonymous/IP edits and bot edits (via regular expressions on username
and comment) and combine edits by the same author. Retrieving the
data for all revisions of heavily edited articles (e.g.
[[en:Physics]]) requires lots of API requests, even with rvlimit=500.
Is there a way (or a plan to implement one) to retrieve the list of
unique contributors for a given article (from a given revision down to
the first one)? Ideally this would accept parameters for the mentioned
filtering. I guess inside of MediaWiki code this can be handled very
efficiently (using appropriate database queries) and would eliminate
the need to transfer lots of redundant data over the socket.
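To make this concrete, here is roughly what we do now, as a simplified
sketch (not mwlib's actual code; the endpoint, the bot heuristic and the
modern "continue" handling - older MediaWiki versions used rvstartid - are
illustrative only):

import re
import requests

API = "https://en.wikipedia.org/w/api.php"
BOT_RE = re.compile(r"bot\b", re.I)   # crude bot heuristic, as described above

def unique_contributors(title):
    authors = set()
    params = {
        "action": "query",
        "titles": title,
        "prop": "revisions",
        "rvprop": "user|ids|flags",
        "rvlimit": "500",
        "format": "json",
    }
    while True:
        data = requests.get(API, params=params).json()
        for page in data["query"]["pages"].values():
            for rev in page.get("revisions", []):
                # skip minor edits, anonymous/IP edits and (apparent) bots
                if "minor" in rev or "anon" in rev:
                    continue
                user = rev.get("user", "")
                if BOT_RE.search(user):
                    continue
                authors.add(user)
        if "continue" not in data:
            break
        params.update(data["continue"])   # follow the continuation
    return authors

print(sorted(unique_contributors("Physics")))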
-- Johannes Beigel
Before I decide to work on it sometime in the future: is anyone else
interested in creating a LocalFileRepo for Amazon's API?
Unless someone corrects me, the best way of dealing with Amazon S3
for storing images would be to make use of S3's API, rather than
mounting buckets onto the filesystem. The former should be more reliable
(^_^ trying to use a mountpoint will probably drive someone up the wall
like NFS does for Brion), and the API should also be better suited to
handling multiple buckets, since as I recall the Amazon docs say
that buckets can only hold up to 5 GB each.
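(For what it's worth, talking to S3's REST API directly is not much code.
Below is an illustrative Python sketch of a signed GET using Signature
Version 2, which is what S3 used at the time; the bucket, key and
credentials are made up, and newer S3 regions require Signature Version 4.)

import base64, hmac, hashlib, urllib.request
from email.utils import formatdate

ACCESS_KEY = "AKIAEXAMPLE"            # hypothetical credentials
SECRET_KEY = b"secret"
BUCKET = "my-wiki-images"             # hypothetical bucket and key
KEY = "thumb/Example.png"

date = formatdate(usegmt=True)
# Signature Version 2 signs "VERB\nContent-MD5\nContent-Type\nDate\nResource"
string_to_sign = "GET\n\n\n%s\n/%s/%s" % (date, BUCKET, KEY)
signature = base64.b64encode(
    hmac.new(SECRET_KEY, string_to_sign.encode("utf-8"), hashlib.sha1).digest()
).decode("ascii")

req = urllib.request.Request(
    "https://%s.s3.amazonaws.com/%s" % (BUCKET, KEY),
    headers={"Date": date,
             "Authorization": "AWS %s:%s" % (ACCESS_KEY, signature)},
)
data = urllib.request.urlopen(req).read()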
Though considering everything involved in handling multiple buckets,
and the fact that the best approach will probably also need some URL
redirect handling to keep the standard URLs, it might be best as
an extension rather than something put into core.
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire)
~Profile/Portfolio: http://nadir-seen-fire.com
-The Nadir-Point Group (http://nadir-point.com)
--It's Wiki-Tools subgroup (http://wiki-tools.com)
--The ElectronicMe project (http://electronic-me.org)
-Wikia ACG on Wikia.com (http://wikia.com/wiki/Wikia_ACG)
--Animepedia (http://anime.wikia.com)
--Narutopedia (http://naruto.wikia.com)
Hello,
I don't know if this is the right place to ask, but I got a strange parse
response from the wiki parser today.
Maybe someone can have a look at it. (I am from Germany and I used the
wiki's api.php to parse the entry "Baum".)
Here is the request I used:
http://de.wikipedia.org/w/api.php?action=parse&prop=text&format=xml&page=ba…
This should be the last of the parsed text: "<p><span id="interwiki-he-fa" class="FA"></span></p>"
However, api.php adds more text at the end of the response:
<p><a href="/w/index.php?title=Af:Boom&....................class="new" title="Zh-yue:樹 (Seite nicht vorhanden)">zh-yue:樹</a></p>
In the browser, this is shown as very strange HTML text.
Did I do something wrong?
This only happens with page=Baum.
Greetings
Raenaet
Hi all,
I haven't found an answer to this elsewhere, so I'm posing the question
here.
Is it possible to use a newer version of the MediaWiki API (perhaps by
copying api.php from a newer version) with an older MW installation? I am
on MW 1.12 and would like to use the editing feature introduced in MW 1.13,
but without the effort of upgrading our entire wiki. Is this possible?
Thanks for your help.
Matthew
As of r42471 [1], prop=revisions&rvprop=content will no longer throw an
error when too many titles or revisions are specified, but will throw a
warning and ignore the superfluous titles/revisions. The warning message
is identical to the one issued when too many values are specified for
the titles or revids parameter.
This change was made to fix bug 16074 [2], which occurred when a
generator with gXXlimit=max (or any sufficiently high limit, really) was
used to feed prop=revisions&rvprop=content, which would then throw an
error because it was fed too many titles or revisions. However, the
generator is not aware that prop=revisions threw away most of its
results, and will set a query-continue as if this didn't happen. If this
query-continue value is used by the client, a (potentially large) number
of results will be skipped. When continuing such a request (i.e. one
with a generator feeding prop=revisions&rvprop=content with a high or
maximum limit), you have to set gXXlimit to a sufficiently low value
first, so prop=revisions doesn't receive too many results and doesn't
throw stuff away. The right number can be found in the text of the
warning message, which is always something like "Too many values
supplied for parameter 'titles': the limit is 50" (note that both
'titles' and the number 50 may vary).
Finally, it should be noted that this behavior can only occur with
prop=revisions&rvprop=content and only when a generator is used to feed
it. All other modules and all uses of prop=revisions not involving both
rvprop=content and a generator are not affected.
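As an illustration only (generator=allpages with gaplimit is just one
example of the "gXX" prefix, and the warning parsing is a rough sketch), a
client-side workaround might look like this in Python:

import re
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "generator": "allpages",
    "gaplimit": "max",
    "prop": "revisions",
    "rvprop": "content",
    "format": "json",
}
data = requests.get(API, params=params).json()

# Look for "Too many values supplied for parameter '...': the limit is N"
limit = None
for warning in data.get("warnings", {}).values():
    m = re.search(r"the limit is (\d+)", warning.get("*", ""))
    if m:
        limit = int(m.group(1))
        break

if limit is not None:
    # Lower the generator limit so prop=revisions no longer discards titles;
    # otherwise the query-continue values would skip results.
    params["gaplimit"] = str(limit)
    data = requests.get(API, params=params).json()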
Roan Kattouw (Catrope)
[1] http://www.mediawiki.org/wiki/Special:Code/MediaWiki/42471
[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=16074
I am slightly confused about editing a page and avoiding an edit conflict
using the API.
The documentation says
"To edit a page, an edit token is required. This token is the same for all
pages, but changes at every login. If you want to protect against edit
conflicts (which is wise), you also need to get the timestamp of the last
revision. You can obtain these as follows:"
I had implemented this literally, so that when submitting the edit I first
fetch the timestamp of the very latest revision and then submit my changes.
But I assume what I really need to do is: get the page contents and store
the timestamp, make my changes to the page, and then, when I submit the
edit, pass that stored timestamp back to the API.
Is this correct? If so, maybe the documentation would be better to say
"you also need the timestamp of the revision your edits are based on" ?
Best Regards
Jools
I looked through the API docs I found online, and I didn't see any
straightforward way to access template fields on a page. As an example, I
want to fetch a page and pull in the values of a number of different
pre-defined wiki-page-template fields. Is there any way to do this, or am I
stuck doing regex on a big blob of text?
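(To make it concrete, the "regex on a blob of text" approach I have in mind
looks roughly like this in Python; the page title and template name are
placeholders, and this breaks on nested templates:)

import re
import requests

API = "https://en.wikipedia.org/w/api.php"

def template_fields(title, template):
    r = requests.get(API, params={
        "action": "query",
        "titles": title,
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
        "formatversion": "2",
    }).json()
    text = r["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

    # Grab the body of the first {{template|...}} call and split it on "|".
    m = re.search(r"\{\{\s*%s\s*\|(.*?)\}\}" % re.escape(template), text, re.S)
    fields = {}
    if m:
        for part in m.group(1).split("|"):
            if "=" in part:
                name, value = part.split("=", 1)
                fields[name.strip()] = value.strip()
    return fields

print(template_fields("Some page", "Infobox person"))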
Thanks,
Brendan
Hi all,
How can I use longitude and latitude to get the nearby articles from the
wiki?
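(One way to do this, assuming the wiki runs the GeoData extension - as
Wikipedia does - which adds list=geosearch; the coordinates and radius
below are just example values:)

import requests

API = "https://en.wikipedia.org/w/api.php"

r = requests.get(API, params={
    "action": "query",
    "list": "geosearch",
    "gscoord": "52.5200|13.4050",   # latitude|longitude (example values)
    "gsradius": "10000",            # search radius in metres
    "gslimit": "10",
    "format": "json",
}).json()

for hit in r["query"]["geosearch"]:
    print(hit["title"], hit["dist"])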
--
Kind Regards,
Neil
Skype: anim510
Twitter: anim510
Email: lvjiajun(a)nibirutech.com
Email: anim510(a)163.com