Hi,
I am looking for a query that returns a list of subcategories of a given
category. For instance, I would like to retrieve all 25 subcategories (Golf
by Country, Amateur golf, ...) of http://en.wikipedia.org/wiki/Category:Golf
Thanks for your help
Christian
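For reference, a minimal Python sketch of one way this is commonly done, using the list=categorymembers module with cmtype=subcat to return only subcategories; the category title and limit below are just illustrative.

# Sketch: list the subcategories of Category:Golf via list=categorymembers.
import json
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "list": "categorymembers",
    "cmtitle": "Category:Golf",
    "cmtype": "subcat",      # only subcategories, not pages or files
    "cmlimit": "500",
    "format": "json",
}
url = API + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for member in data["query"]["categorymembers"]:
    print(member["title"])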
Hello,
I would like to know if it is possible to use MediaWiki API to get the
International Phonetic Alphabet of a word.
For example, I would like to retrieve the IPA "niːs" from the word
Nice as found in http://en.wiktionary.org/wiki/Nice
If so, how can this be done?
If not, is there a way I can help to make it possible?
Cheers,
Xavier
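For reference: the API has no dedicated pronunciation module, so one possible workaround is to fetch the entry's wikitext and pick out {{IPA|...}} templates. A rough Python sketch, assuming the pronunciation is marked up with the {{IPA}} template (its parameter layout varies between entries and languages, so the regex is only an illustration):

# Sketch: fetch the wikitext of a Wiktionary entry and extract IPA templates.
import json
import re
import urllib.parse
import urllib.request

API = "http://en.wiktionary.org/w/api.php"

params = {
    "action": "query",
    "titles": "Nice",
    "prop": "revisions",
    "rvprop": "content",
    "format": "json",
}
url = API + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

page = next(iter(data["query"]["pages"].values()))
wikitext = page["revisions"][0]["*"]

# Collect the first positional parameter of every {{IPA|...}} template.
# Depending on the entry, this parameter may be a language code instead
# of the transcription, so real code needs a closer look at the template.
for match in re.finditer(r"\{\{IPA\|([^}|]+)", wikitext):
    print(match.group(1))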
As of r75274, patrol tokens accepted by action=patrol and generated by
list=recentchanges are no longer equal to edit tokens and are no
longer the same within a session. Instead, they are now different for
every recentchanges row (i.e. they depend on the rcid).
It may take a while for this change to be deployed to the Wikimedia
cluster, but in the meantime changing your clients to no longer depend
on the patrol token to stay the same can't hurt.
Roan Kattouw (Catrope)
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
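For clients adapting to this, a rough Python sketch of the new flow: request a patrol token alongside each recentchanges row and pass that row's own token to action=patrol. The rctoken=patrol parameter and the patroltoken field name are assumptions here; check the api.php help on your wiki for the exact spelling.

# Sketch: per-row patrol tokens, as implied by the r75274 change.
import json
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

def api_get(params):
    url = API + "?" + urllib.parse.urlencode({**params, "format": "json"})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

rc = api_get({
    "action": "query",
    "list": "recentchanges",
    "rctoken": "patrol",   # assumed parameter name; one token per row
    "rclimit": "10",
})

for row in rc["query"]["recentchanges"]:
    rcid = row["rcid"]
    token = row.get("patroltoken")  # no longer reusable across rows
    if token is None:
        continue
    body = urllib.parse.urlencode({
        "action": "patrol", "rcid": rcid, "token": token, "format": "json",
    }).encode()
    # action=patrol is POSTed with the row-specific token.
    with urllib.request.urlopen(API, data=body) as resp:
        print(json.load(resp))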
Hi,
I want to use the MediaWiki API in order
to get some medical information. For now I use only wget, and once
I see that it fits what I need I will probably switch to a Java or Perl
API in order to benefit from the 500-result search limit that bots have.
Here is my question: I want to restrict my search only to pages that
belong to medical categories (I mean, I am not interested in any pages
that don't belong to medical categories), but I still want to use
list=search&srsearch=<myQuery> and not title=<myQuery>.
If restricting the query to a certain category is not possible, is it possible to get
the list of categories to which each search result belongs? I found out
how to do that when I use action=query&titles=...
For instance:
http://en.wikipedia.org/w/api.php?action=query&titles=Albert%20Einstein&pro…
How can I get the same info by using list=search? For instance, when I do:
http://en.wikipedia.org/w/api.php?action=query&list=search&srwhat=text&srse…
I don't get any category info.
Any help is really appreciated.
Thanks!
--Yassine.
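For reference, a minimal Python sketch of one way to attach category information to search results in a single request, by using search as a generator (generator=search) together with prop=categories; the search term below is illustrative, and filtering to medical categories would then happen on the client side.

# Sketch: search results with their categories in one query.
import json
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "generator": "search",
    "gsrsearch": "myocardial infarction",  # illustrative query
    "gsrlimit": "10",
    "prop": "categories",
    "cllimit": "500",
    "format": "json",
}
url = API + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for page in data.get("query", {}).get("pages", {}).values():
    cats = [c["title"] for c in page.get("categories", [])]
    print(page["title"], "->", cats)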
Hi everyone,
Is it possible to retrieve via the Wikipedia API (or index.php) a thumbnail
of a picture given its file name? All I know currently is that there's the
...index.php?title=Special:FilePath/<FileName> page, but I do not know how I
could use it to get in my application a thumbnail with the specified
dimensions instead of the full-resolution picture.
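For reference, a minimal Python sketch using prop=imageinfo with iiurlwidth, which asks the API to return a thumbnail URL of the requested width; the file name and width below are illustrative.

# Sketch: get a thumbnail URL for a file at a given width.
import json
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "titles": "File:Example.jpg",   # illustrative file name
    "prop": "imageinfo",
    "iiprop": "url",
    "iiurlwidth": "200",            # requested thumbnail width in pixels
    "format": "json",
}
url = API + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

page = next(iter(data["query"]["pages"].values()))
info = page["imageinfo"][0]
print("thumbnail:", info["thumburl"])
print("full size:", info["url"])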
Hello all,
I want to get just the first line of an article.
For example, for the article
http://en.wikipedia.org/wiki/Article_(grammar)
I want to obtain "An article (abbreviated art) is a word that combines with
a noun to indicate the type of reference being made by the noun."
Is there any way in which I can do that elegantly?
Best Regards
Prateek
--
Prateek Jain
Research Assistant, Kno.e.sis Center
Wright State University
Fairborn,Ohio 45435
http://knoesis.wright.edu/students/prateek/
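For reference, a rough Python sketch of one possible approach: parse only the lead section (section=0) to HTML and keep the first sentence. Real cleanup (hatnotes, footnote markers, periods inside parentheses, and so on) needs more care than this.

# Sketch: first sentence of the lead section via action=parse.
import json
import re
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

params = {
    "action": "parse",
    "page": "Article (grammar)",
    "prop": "text",
    "section": "0",
    "format": "json",
}
url = API + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

html = data["parse"]["text"]["*"]
# Keep only paragraph contents, drop all other markup (crude).
paragraphs = re.findall(r"<p>(.*?)</p>", html, re.S)
text = re.sub(r"<[^>]+>", "", paragraphs[0]) if paragraphs else ""
first_sentence = text.strip().split(". ")[0] + "."
print(first_sentence)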
Hello everyone,
I am trying to obtain, through a web request from an application I'm working
on, the raw wikitext version of an arbitrary Wikipedia article. However, I
prefer the templates to be already expanded into their constituent wikitext
and/or HTML, and from what I read in the MediaWiki API documentation this
should be done by appending "&templates=expand" to the query string; thus,
if I wanted the English version of the article about Barcelona in wikitext
with template expansion, the URL would be:
http://en.wikipedia.org/w/index.php?action=raw&title=Barcelona&templates=ex….
However, I soon found out that not all of the templates are actually
expanded, the exception being those templates enclosed in wiki tags, which
are left unexpanded (perhaps the most common example being {{cite ...}}
templates inside <ref></ref> tags).
I eventually learned that the API provides an individual template expansion
module, accessible through
...api.php?action=expandtemplates&text={{template_to_expand}}
. This was not really the solution I hoped for, actually, since it is
impractical for long templates (I assume there's a limit imposed on the
query string) and for each unexpanded template I'd have to send a
separate request to the server, which could prove highly expensive in terms
of performance. I've also seen there's the special Wikipedia template
expansion page (http://en.wikipedia.org/wiki/Special:ExpandTemplates), but
I don't know if and how I could access its functionality from my application,
or how I could use it for several templates at once.
Could anyone please suggest a solution for expanding all the templates, or at
least a viable workaround for this issue?
Thanks in advance,
Gabriel S.
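For reference, a rough Python sketch of one possible workaround: fetch the page's wikitext once and POST the whole text to action=expandtemplates in a single request, which sidesteps the URL-length problem and avoids one request per template. Whether templates nested inside extension tags such as <ref> come back expanded is worth verifying on a sample page before relying on it.

# Sketch: expand all templates of an article in one POST request.
import json
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

def api_call(params, post=False):
    data = urllib.parse.urlencode({**params, "format": "json"}).encode()
    if post:
        req = urllib.request.Request(API, data=data)
    else:
        req = urllib.request.Request(API + "?" + data.decode())
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1. Raw wikitext of the article.
pages = api_call({
    "action": "query", "titles": "Barcelona",
    "prop": "revisions", "rvprop": "content",
})["query"]["pages"]
wikitext = next(iter(pages.values()))["revisions"][0]["*"]

# 2. Expand every template in a single POST (POST avoids URL-length limits).
expanded = api_call({
    "action": "expandtemplates", "text": wikitext, "title": "Barcelona",
}, post=True)["expandtemplates"]["*"]

print(expanded[:500])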
Greetings,
I'm delighted to announce that there's now a new Ruby library for
accessing the MediaWiki API. It's reasonably complete and up to date,
with support for these API operations:
create, delete, download, export, extensions, get, image_info,
import, list, login, render, search, semantic_query, undelete, upload
Open source, of course, and patches and suggestions more than welcome.
Download yours here:
Gem: http://rubygems.org/gems/mediawiki-gateway
Source: http://github.com/jpatokal/mediawiki-gateway
Cheers,
-jani
Hi,
sorry for bothering you again with the MediaWiki API. I have the following
problem / question: I'm looking for a query that returns all internal links
on a specific wiki page. The query
http://en.wikipedia.org/w/api.php?action=query&pageids=328877&generator=links&prop=info&gpllimit=500
returns a set of results that looks exactly like what I want, i.e. it returns
a pageid and title for each "subpage".
However, there are numerous subpages in this list that are not listed on the
corresponding wiki page (http://en.wikipedia.org/wiki/Alan_Jackson). E.g.,
the query result contains a subpage "Ace in the Hole", but the wiki page
does not list it.
Thanks for your help and patience.
Bye
Christian
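For reference, a minimal Python sketch of the equivalent prop=links query. Note that the link table also includes links contributed by transcluded templates (navboxes and the like), which is one way an entry can appear in the result without being visible in the article body text.

# Sketch: list the internal links recorded for a page via prop=links.
import json
import urllib.parse
import urllib.request

API = "http://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "pageids": "328877",      # Alan Jackson
    "prop": "links",
    "pllimit": "500",         # pages with more links need continuation
    "format": "json",
}
url = API + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

page = data["query"]["pages"]["328877"]
for link in page.get("links", []):
    print(link["title"])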