I want to add a widget to a web page that will allow users to enter search
terms and search Wikimedia for images that match the terms. I have
implemented a similar widget for Flickr, using their API, but am having
trouble doing the same with Wikimedia.
Basically, I would like to replicate the functionality of the
commons.wikimedia.org search page. Ideally I would like to be able to get a
Category listing (ex. http://commons.wikimedia.org/wiki/Chartres_Cathedral)
or true search results (ex.
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=%22c…),
but at this point I would be happy with either.
I've tried using the allimages list, but that is not adequate. Is there any
other way to search images using the API?
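For what it's worth, one API avenue that may get closer than allimages (a hedged, untested sketch): list=search used as a generator, restricted to the File namespace, combined with prop=imageinfo to pull direct image URLs. The endpoint and search term below are just examples.

```python
from urllib.parse import urlencode

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def commons_image_search_url(terms, limit=20):
    # generator=search feeds the search hits into prop=imageinfo, so each
    # matching File: page comes back with a direct image URL (iiprop=url).
    params = {
        "action": "query",
        "generator": "search",
        "gsrsearch": terms,
        "gsrnamespace": 6,  # namespace 6 is File:, i.e. images
        "gsrlimit": limit,
        "prop": "imageinfo",
        "iiprop": "url",
        "format": "json",
    }
    return COMMONS_API + "?" + urlencode(params)
```

Whether this matches the ranking of the Special:Search page is another question, but it is a pure-API route to search-driven image results.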
I have also been looking at Freebase and DBPedia. These seem like they
might do what I want, but RDF is completely new to me and I'm still trying
to figure out the basics of it. If anyone can point me in the right
direction for either of those resources, I would appreciate it.
Regards.
Tim Helck
Hi,
I am looking for a convenient way to convert individual documents returned
from the MediaWiki API to standalone HTML documents. Currently I am
retrieving documents via the action=render construct:
http://en.wikipedia.org/w/index.php?action=render&title=
I am encountering two problems:
1) I have to put a "shell" of crudely cut and pasted <html><head><body>
etc. around the rendered html.
2) I need to strip out the external hrefs from the results, and I don't
have a good way to do this.
I am taking a fresh look and wondering whether I should be retrieving the
docs in JSON or XML and then using a conversion program to turn those into
nice clean HTML docs. Does anyone have any suggestions or working examples?
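A minimal sketch of the wrap-and-strip approach (assuming the fragment from action=render has already been fetched, and relying on the `external` class that MediaWiki puts on external links; a real HTML parser would be more robust than this regex):

```python
import re

def wrap_and_strip(fragment, title):
    # Replace external anchors (MediaWiki marks them with class="external")
    # with their inner text, removing the external hrefs.
    fragment = re.sub(r'<a [^>]*class="[^"]*external[^"]*"[^>]*>(.*?)</a>',
                      r'\1', fragment, flags=re.S)
    # Put a standalone-document shell around the rendered fragment.
    return ('<!DOCTYPE html>\n<html><head>'
            '<meta charset="utf-8"><title>%s</title></head>\n'
            '<body>%s</body></html>' % (title, fragment))
```

Internal wiki links lack the `external` class, so they survive; only the external hrefs are flattened to their link text.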
FredZ
Is it possible to spoof the user IP address during action=edit? It
sounds like an unsavory question, but let me explain. Let's say an
anonymous user edits a page in my application, which, in turn, uses a
MediaWiki API client to make the necessary token/edit requests. The
resulting edit is attributed not to the IP address of the anonymous
user but to the IP address of the API client. As a result, every anonymous
user of my application is truly, irrevocably anonymous. This is
why I wonder if it's possible to send an arbitrary user IP address
along with the edit request.
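For what it's worth, MediaWiki can honour an X-Forwarded-For header when the request comes from an address the wiki trusts as a proxy (e.g. one listed in $wgSquidServers); whether that applies to a given setup needs confirming. The client side would just add the header, sketched here (function name and parameters are illustrative):

```python
import urllib.request

def edit_request(api_url, post_body, end_user_ip):
    # Hypothetical: the wiki must trust this client's address as a proxy
    # (e.g. via $wgSquidServers) for the forwarded IP to be honoured.
    req = urllib.request.Request(api_url, data=post_body)
    req.add_header("X-Forwarded-For", end_user_ip)
    return req
```

Without that server-side trust configuration, the header is ignored and the edit is still attributed to the client's own IP.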
Jim
I invite you to the yearly Berlin hackathon.
This is the premier event for the MediaWiki and Wikimedia technical
community. We'll be hacking, designing, and socialising.
Our goals for the event are to bring 100-150 people together, with
lots of people who have not attended such events before. User
scripts, gadgets, API use, Toolserver, Wikimedia Labs, mobile,
structured data, templates -- if you are into any of these things, we
want you to come!
Some financial assistance will be available -- more details soon.
This event will be hosted by Wikimedia Germany (WMDE) and supported by
the Wikimedia Foundation. Thank you, WMDE!
Dates: June 1-3 2012. Barely-started wiki page, no registration details
yet: https://www.mediawiki.org/wiki/Berlin_Hackathon_2012 . Organizers:
me and WMDE's Nicole Ebber with assistance from Lydia Pintscher and
Daniel Kinzler.
Mark your calendars!
--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
Most of the PHP bot frameworks, including some which are heavily used on
en.WP, use serialized PHP.
I do not believe MW should make such a decision because IE is a
poorly-behaved browser, which is the only argument that has been presented.
Amgine
Hi, this idea has been floating around for quite some time, but now that
bug 34257[1] was added to the long list of problems, I would like to
step up and start some progress. We[2] propose to remove the following
formats[3]:
* WDDX - doesn't seem to be used by anyone. Doesn't look sane either.
* YAML - we don't serve real YAML anyway, currently it's just a subset
of JSON.
* rawfm - was created for debugging the JSON formatter aeons ago, not
useful for anything now.
* txt, dbg, dump - the only reason they were added is that it was
possible to add them, they don't serve the purpose of
machine/machine communication.
So, only 3 formats would remain:
* JSON - *the* recommended API format
* XML - evil and clumsy but sadly used too widely to be removed in the
foreseeable future
* php - this one is used by several extensions and probably by some
third-party reusers, so we won't remove it this time. However,
any new uses of it should be discouraged.
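For consumers migrating off one of the formats slated for removal, the JSON route is usually a one-line change on the client; a minimal sketch (the response body below is a hand-made stand-in for a real api.php reply, not live output):

```python
import json

# Stand-in for the body returned by e.g.
# api.php?action=query&meta=siteinfo&format=json
body = '{"query": {"general": {"sitename": "Wikipedia"}}}'

data = json.loads(body)
print(data["query"]["general"]["sitename"])  # -> Wikipedia
```

The same structure is available in every format; only the serialization on the wire changes, which is why dropping the exotic ones costs so little.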
We plan to remove the aforementioned formats as soon as MediaWiki 1.19
is branched so that these changes will take effect in 1.20, but would
like to hear from you first if there are good reasons why we shouldn't
do it or postpone it. Please have your say.
------
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=34257
[2] Me and Roan Kattouw, one of the API's primary developers
[3] https://www.mediawiki.org/wiki/API:Data_formats
--
Best regards,
Max Semenik ([[User:MaxSem]])
I posted this on the talk page for the API, trying to find an answer.
I did try with 16.5, but still the same error; it is reporting a doubled Content-Length. Where
in the code is the Content-Length header formed? Or is it relying on the server?
Doubled Content-Length in HTTP Header
MediaWiki version: 16.0 - also tried 16.5
PHP version: 5.2.17 (cgi)
MySQL version: 5.0.91-log
URL: www.isogg.org/w/api.php
I am trying to track down a bug in the API which is causing a doubled Content-Length header.
This is causing a lot of issues with a Python bot. Here is the report from web-sniffer
showing the content of the api.php call from this wiki. All other pages, i.e. the
Main page, etc., report only one Content-Length when called. Is the API forcing the
headers? Why is only this one doubled?
Status: HTTP/1.1 200 OK
Date: Mon, 30 Jan 2012 14:31:25 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Server: Nginx / Varnish
X-Powered-By: PHP/5.2.17
MediaWiki-API-Error: help
Cache-Control: private
Content-Encoding: gzip
Vary: Accept-Encoding
Content-Length: 16656
Content-Length: 16656
As you can see this is a Nginx server. On an Apache server with 16.0, only one content-length is
sent. Could that be the issue and how do I solve it? Thanks.
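While tracking this down, it may help to confirm from the raw header block exactly which headers are duplicated, rather than trusting a single sniffer; a small diagnostic sketch (not a fix):

```python
from collections import Counter

def duplicated_headers(raw_headers):
    """Return the lowercased names of headers that appear more than once."""
    names = [line.split(":", 1)[0].strip().lower()
             for line in raw_headers.splitlines() if ":" in line]
    return sorted(n for n, c in Counter(names).items() if c > 1)
```

Running it over the header block above should report only content-length, which would point at whatever layer (PHP-CGI, Nginx, or Varnish) appends a second copy instead of replacing the first.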
Tom
Dear Members,
I'm using the MediaWiki API to get a result with the following code:
http://en.wikipedia.org/w/api.php?action=opensearch&search=pc&limit=20&form…
But if you look at this link: http://en.wikipedia.org/wiki/PC
there is description like this:
PC most commonly refers to:
* Personal computer, a computer whose original sales price, size, and capabilities make it useful for individuals
* Political correctness, language or behavior that appears calculated to provide a minimum of offense
but with my API call, I cannot get such a result. Please help me: how can I change my API call to get a result like the above?
Actually, I want to send an acronym (e.g. PC) via the API and get the suggestions (e.g. Personal computer, Political correctness).
I searched a lot and could not find any solution for this.
Yours Sincerely
Sasan Moshksar
Hi!
Does anybody know if it is possible to upload a base64 encoded file via the API? I've already examined the SVGEdit extension by Brion Vibber (http://www.mediawiki.org/wiki/Extension:SVGEdit) to get behind the API file upload magic. But as SVG is just text, it doesn't quite fit my use case.
When I try to upload a base64 encoded file (i.e. a JPEG) via the API, I just get a file containing the base64 string (i.e. plain text with base64 code) written to the MediaWiki file repo.
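A guess at the cause, sketched below: action=upload expects the raw file bytes as the multipart "file" part, so a base64 string sent as-is gets stored verbatim as text, which matches the behaviour described. Decoding client-side first should help (the data-URL handling is an assumption about where the string originates):

```python
import base64

def decode_for_upload(b64_string):
    # Strip an optional data-URL prefix such as "data:image/jpeg;base64,"
    # (assumption: the string may come from a browser canvas/FileReader).
    if b64_string.startswith("data:") and "," in b64_string:
        b64_string = b64_string.split(",", 1)[1]
    # The decoded bytes are what belongs in the multipart "file" parameter.
    return base64.b64decode(b64_string)
```

The resulting bytes can then be posted as a normal multipart file upload, exactly as a binary JPEG would be.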
Any hints?
Greetings,
Robert Vogel
Social Web Technologien
Softwareentwicklung
Hallo Welt! - Medienwerkstatt GmbH
______________________________
Untere Bachgasse 15
93047 Regensburg
Tel. +49 (0) 941 - 66 0 80 - 198
Fax +49 (0) 941 - 66 0 80 - 189
www.hallowelt.biz
vogel(a)hallowelt.biz
Sitz: Regensburg
Amtsgericht: Regensburg
Handelsregister: HRB 10467
E.USt.Nr.: DE 253050833
Geschäftsführer: Anja Ebersbach, Markus Glaser, Dr. Richard Heigl, Radovan Kubani