> Roan Kattouw wrote:
>> 2010/11/26 Bryan Tong Minh <bryan.tongminh(a)gmail.com>:
>>> Somehow I think that publishing an entire dump violates the "do not
>>> publish significant parts of an article" rule.
>> Surely the toolserver admins could be asked to consider waiving that
>> in this case, given the public nature of the dumps and the downtime
>> situation with download.wm.o.
>> Roan Kattouw (Catrope)
> It's not that the toolserver admins are being eccentric in adding such a
> rule, but an issue of WM-DE liability if such information is published.
> However, I think that providing such a file to just a few selected people
> would be acceptable.
I am also waiting to download the XML dumps of de.wp and en.wp, since the servers are down. If you provide them on a mirror or an alternative server, I would appreciate it if you could give me access, too.
How likely do you think it is that the server will be running again next week? According to http://wikitech.wikimedia.org/view/Dataset1 it sounds as if the server should be back once the firmware problem is solved, right? In that case I would simply wait until the official server is running again.
We don't have very many participants in the project-wide community at the
English Wikibooks, and very few of those have the technical knowledge to
look into this, so I am asking here: the vertical border between tabs
under the default Vector skin is missing
entirely under Internet Explorer 8, but only when logged out. Strangely
enough the borders appear when logged in. No problems under Firefox either
way. I don't experience any problems at any other Wikimedia site. I was
wondering if anyone had any insights as to what could cause this.
I am forwarding your request to wikitech-l, in the hope that there are
more people on there who can comment on this issue.
For those who did not follow the entire thread: the user does not send
an Accept-Encoding: gzip header, but nevertheless gets a gzipped response.
On Thu, Nov 25, 2010 at 8:19 PM, Anand Ramanathan <rcanand(a)gmail.com> wrote:
> Bryan: No, I didn't set the Accept-Encoding header explicitly - I found the
> following related issue on Bugzilla: 7098
> Andrew: Yes, thanks. I see that curl can support this, and so can open-uri.
> I wanted to clarify if I should be handling this in the client:
> As per HTTP 1.1 (section 14.3), for non-browser user agents, if no
> Accept-Encoding is explicitly set, the response should be the document
> itself (identity) if the server can return it in that form. However, if
> the server is unable to return the document itself, it is preferable to
> return gzip or otherwise compressed content.
> I think this issue is happening whenever I hit a cache node that has the
> gzip, but not the identity, cached. From a server standpoint, that seems
> like the right behavior. So it is up to the client, which needs to do one
> of the following:
> a) Set Accept-Encoding to make gzip not-acceptable, and identity as
> acceptable. In this case, a cache node containing only gzip encoded document
> will miss, and eventually a node that contains the identity will return it.
> (This is a leap of faith, as I cannot target such a cache node explicitly.
> If a node has both gzip and identity content, and is responding with gzip
> for a request with no explicit Accept-Encoding set, then it violates the
> spec and is a bug. Can anyone comment on this?)
> b) Set Accept-Encoding to accept gzip or identity (or leave it unset), and
> on the client, if Content-Encoding is gzip, unzip it explicitly.
> I am fine with either of these approaches. Is this an accurate assessment of
> the issue and options?
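For illustration, a minimal Python sketch of option (b) above, using only the standard library; the URL, User-Agent string, and variable names are placeholders rather than anything taken from this thread:

    # Option (b): advertise both encodings, then gunzip explicitly if the
    # cache node answered with a gzipped body.
    import gzip
    import urllib.request

    url = ("https://en.wikipedia.org/w/api.php"
           "?action=query&titles=India&format=json")

    req = urllib.request.Request(url, headers={
        "Accept-Encoding": "gzip, identity",
        "User-Agent": "gzip-handling-example/0.1",  # placeholder UA
    })

    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)

    print(body[:200])

Option (a) would be the same request with "Accept-Encoding: identity" and no decompression step; whether a gzip-only cache node then misses as hoped is exactly the open question above.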
> On Thu, Nov 25, 2010 at 4:23 AM, Andrew Dunbar <hippytrail(a)gmail.com> wrote:
>> On 25 November 2010 19:41, Anand Ramanathan <rcanand(a)gmail.com> wrote:
>> > Yes, confirmed that they are. It is gzip - what is the best way to deal
>> > with this? Is this a bug that is tracked, or is this something worth
>> > handling in client code (checking if gzip and manually unzipping)?
>> > Thanks
>> > Anand
>> Curl can definitely handle gzipped responses. Here's something about
>> it from a very quick Google search:
>> Andrew Dunbar (hippietrail)
>> > On Thu, Nov 25, 2010 at 12:12 AM, Bryan Tong Minh
>> > <bryan.tongminh(a)gmail.com>
>> > wrote:
>> >> On Thu, Nov 25, 2010 at 9:02 AM, Anand Ramanathan <rcanand(a)gmail.com>
>> >> wrote:
>> >> > OK, I got it again: here is my curl output (headers + first few
>> >> > characters) for the garbled India Wikipedia page (and the proper
>> >> > China Wikipedia page for comparison below that):
>> >> Can you verify that the first two characters are 0x1f and 0x8b
>> >> respectively? Looks like gzip.
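For reference, the magic-byte test Bryan describes can be written as a short helper; raw_body is a placeholder name for the bytes read from the HTTP response:

    def looks_gzipped(raw_body: bytes) -> bool:
        # gzip streams begin with the magic bytes 0x1f 0x8b
        return raw_body[:2] == b"\x1f\x8b"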
For some types of resources, it's desirable to upload source files
(whether it's Blender, COLLADA, Scribus, EDL, or some other format),
so that others can more easily remix and process them. Currently, as
far as I know, there's no way to upload these resources to Commons.
What would be the arguments against allowing administrators to upload
arbitrary ZIP files on Wikimedia Commons, allowing the Commons
community to develop policy and process around when such archived
resources are appropriate? An alternative, of course, would be to
whitelist every possible source format for admins, but it seems to me
that it would be a good general policy to not enable additional
support for formats that aren't officially supported (reduces
confusion among users about what's permitted -- there's only one file
format they can't use).
Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
I used mwdumper to import the latest current XML dump,
enwiki-20101011-pages-meta-current.xml.bz2, into my MediaWiki. Everything
seemed fine; however, I found only 6,669,091 pages in the database,
while mwdumper stopped working and exited at the number 21,894,705.
I am not sure whether I have successfully imported all the current pages into
MediaWiki. Is there any method for me to verify that? Is there any data on the
number of pages in each dump for cross-referencing purposes? Any method for me
to track what errors were encountered (other than viewing the huge log file)?
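One rough way to cross-check, assuming the dump file is still available locally, is to count the <page> elements in it and compare that number with SELECT COUNT(*) FROM page; in the wiki's database. A minimal sketch (the file name is the one mentioned above; adjust the path as needed):

    import bz2

    pages = 0
    with bz2.open("enwiki-20101011-pages-meta-current.xml.bz2", "rt",
                  encoding="utf-8") as dump:
        for line in dump:
            # <page> opens each page element; literal '<' inside article
            # text is XML-escaped, so this substring only matches the tag.
            if "<page>" in line:
                pages += 1

    print("pages in dump:", pages)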
On the other hand, I found that the parsing rate drops from time to time
during the import process: it drops from 345.12/sec to 79.125/sec. Is that
a normal phenomenon? Is there any method for me to boost this performance?
The strange part is that this figure rises again to around 200/sec after the
six-million-something pages are imported (maybe because nothing is being
inserted into the DB at that point).
Any sharing of thoughts would be appreciated. Thank you.
Would you consider putting coordinates in the "page" table for each page
(of course, only for the pages which have coordinates)? Is it possible?
The reason I'm asking you is that we want to know which Wikipedia pages are
marked in Google Maps.
I'm thinking about these dump files: the kernel panic on the server
occurred while I was downloading the last dump file (at 69.7%) of the Spanish
version. Is there a remote chance that that download took the server
down? I'm very worried about that.
> Date: Sat, 13 Nov 2010 22:39:05 +0300
> From: Max Semenik <maxsem.wiki(a)gmail.com>
> Subject: [Wikitech-l] CategoryFeed
> This extension has received no significant updates in the last 5 years
> and doesn't work with anything newer than 1.4. It doesn't even have a
> description page on mw.org. Nevertheless, numerous developers
> was^H^H^H spent their time on it while doing batch improvements to
> the whole extensions directory.
> Is someone interested in reviving it, or can we delete it right away?
> Max Semenik ([[User:MaxSem]])
This is off-topic... but that extension's name sounds oddly familiar.
Did it use to be enabled on Wikimedia (en.wikinews specifically) a
long time ago?