On Thu, Oct 27, 2011 at 12:04 PM, Robert Stojnic <rainmansr(a)gmail.com> wrote:
> Could be that one of the hosts has a stale copy of an index?
How do I check for that, and how do I fix it?
I'm seeing a problem here when paging through large result sets. It
appears as though the order and count are changing as I walk through
a given result set.
For instance, I make a call:
And I see I get 689 results. But wait: I make the call again, and there
are 690. OK, you mirror your lucene indexes across a cluster and one is
slightly out of sync. Not a big deal. I adjust the offset and move on.
But now, whenever I get the 689-result set, the order appears to have
changed. I can't page through the results unless the ordering is deterministic!
#1 Is there a way to specify the order for this command?
#2 Did I find a bug in one of your lucene indexes?
Thanks in advance for any insights.
I want to do a reverse lookup for the article name using the curid
(as indicated by the URL
http://en.wikipedia.org/w/index.php?curid=1194195 ). Is there any method
in the API to achieve this?
If not, is there any way to use the operations for fetching
interwiki links with the curid?
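For what it's worth, a minimal sketch of such a lookup, assuming the standard
action=query module with the pageids parameter (the response parsing is shown;
the HTTP fetch itself is left to the caller, and the sample title below is
purely illustrative):

```python
import json

def title_from_query_response(response_text, curid):
    # Extract a page title from a MediaWiki API action=query&pageids=...
    # JSON response. The "pages" map is keyed by the page ID as a string.
    data = json.loads(response_text)
    return data["query"]["pages"][str(curid)]["title"]

# A payload shaped like the API's response (title here is illustrative only).
sample = '{"query": {"pages": {"1194195": {"pageid": 1194195, "ns": 0, "title": "Example"}}}}'
```

The full request would be something like
action=query&pageids=1194195&format=json against the wiki's api.php.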
Thanks for the help,
Research Assistant, Kno.e.sis Center
Wright State University
Hello, I'm a Computer Science Engineering Student.
I'm working on MediaWiki dumps and I would like to know if there is a
way to obtain pagelinks and categorylinks by category (to avoid
downloading the huge dump). I'm using Special:Export and I have an XML
file, but it has no pagelinks (which I discovered after using mwdumper to
convert the XML to SQL).
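One possible workaround, assuming the live API is acceptable instead of the
SQL dumps: use list=categorymembers to enumerate a category, then prop=links
for the pagelinks of those pages. A sketch that just builds the query URLs
(the base API URL is an assumption):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # assumption: querying the live API

def categorymembers_url(category, limit=500):
    # List the pages in a category without downloading the full dumps.
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmlimit": limit,
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"

def pagelinks_url(titles):
    # Fetch the outgoing links (pagelinks) for the given page titles.
    params = {
        "action": "query",
        "prop": "links",
        "titles": "|".join(titles),
        "pllimit": "max",
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"
```

For large categories you would still need to follow the API's continuation
parameters to page through the results.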
Thanks in advance,
Per the below, protocol-relative URLs are now enabled on
test.wikipedia.org and will be rolled out to the rest of the wikis
over the course of the next few weeks. What this means is that URLs
used in the interface will now look like //example.com instead of
http://example.com, so we can support both HTTP and HTTPS without
splitting our cache.
The API, in most cases, will not output protocol-relative URLs, but
will continue to output http:// URLs no matter whether you call it
over HTTP or HTTPS. This is because we don't expect API clients to be
able to resolve these correctly, and because the context of these URLs
(which is needed to resolve them) will frequently get lost along the
way. And we don't wanna go breaking clients, now, do we? :)
The exceptions to this, as far as I am aware, are:
* HTML produced by the parser will have protocol-relative URLs in <a
href="..."> tags etc.
* prop=extlinks and list=exturlusage will output URLs verbatim as they
appear in the article, which means they may output protocol-relative URLs.
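For clients that do encounter protocol-relative URLs in those places, a
sketch of resolving them against the scheme the request was made over,
using Python's standard urljoin (the base URL here is an assumption):

```python
from urllib.parse import urljoin

def resolve(url, base="https://en.wikipedia.org/"):
    # A protocol-relative URL (//host/path) inherits the scheme of the
    # base URL; absolute and relative URLs are handled the usual way.
    return urljoin(base, url)
```

So //example.com/page resolves to https://... when the client talked to the
wiki over HTTPS, and to http://... when it used plain HTTP.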
If you are getting protocol-relative URLs in some other place, that's
probably a bug (or maybe it's intentional and I forgot to list it
here), so please let me know, e-mail this list, or file a bug if you
see that happening.
Roan Kattouw (Catrope)
---------- Forwarded message ----------
From: Ryan Lane <rlane32(a)gmail.com>
Date: Thu, Jul 14, 2011 at 8:55 PM
Subject: [Wikitech-l] Protocol-relative URLs enabled on test.wikipedia.org
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Over the past couple days Roan Kattouw and I have been pushing out
changes to enable protocol-relative URL support. We've gotten to a
point where we think it is stable and working.
We've enabled this on test.wikipedia.org, and plan on running it for
two weeks before enabling it elsewhere. Please test whether everything is
working properly, especially with regard to the API and bots. Report
bugs in Bugzilla if any are found.
Is it normal that a namespace has been added to the XML answers of the API
in MW 1.18?
I was quite busy lately, so I may have missed the announcement about this.
It means that answers from MW 1.18 are not compatible with answers from
previous versions. How can we keep our tools working with both 1.18 and
earlier versions?
<?xml version="1.0" encoding="UTF-8"?>
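One way to cope with both variants is to strip the namespace from element
tags after parsing, so the same lookups work against 1.18 (namespaced) and
earlier (plain) responses. A sketch with the standard library; the xmlns
value used in the sample below is illustrative, not necessarily the one
MW 1.18 emits:

```python
import xml.etree.ElementTree as ET

def parse_api_xml(text):
    # Strip any {namespace} prefix from element tags so find()/findall()
    # behave the same whether or not the response declares an xmlns.
    root = ET.fromstring(text)
    for el in root.iter():
        if "}" in el.tag:
            el.tag = el.tag.split("}", 1)[1]
    return root

# Pre-1.18-style (plain) and 1.18-style (namespaced; xmlns is illustrative).
plain = '<?xml version="1.0"?><api><query/></api>'
spaced = '<?xml version="1.0"?><api xmlns="http://www.mediawiki.org/xml/api/"><query/></api>'
```

After stripping, root.find("query") works for both documents, so existing
tools need no per-version code paths.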