I don't know if this issue has come up already - in case it did and was
dismissed, I beg your pardon. In case it didn't...
I hereby propose that pbzip2 (https://launchpad.net/pbzip2) be used
to compress the XML dumps instead of bzip2. Why? Because its sibling
(pbunzip2) has a bug that bunzip2 doesn't. :-)
Strange? Read on.
A few hours ago, I filed a bug report for pbzip2 (see
https://bugs.launchpad.net/pbzip2/+bug/922804), together with some test
results obtained a few hours before that.
The results indicate that bzip2 and pbzip2 are mutually compatible for
compression: each one can create archives the other one can read. When it
comes to decompression, however, only pbzip2-compressed archives are
handled well by pbunzip2.
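For reference, a minimal sketch of the kind of round-trip test behind these
results might look as follows. It assumes the bzip2 and pbzip2 binaries are
on PATH and uses a placeholder input file name ("sample.xml"); it simply
checks that every compressor/decompressor pairing restores the original bytes.

    # Round-trip compatibility sketch (assumption: bzip2 and pbzip2 on PATH,
    # "sample.xml" is a placeholder for any test file).
    import filecmp
    import subprocess

    SOURCE = "sample.xml"  # placeholder input

    def compress(tool, src, dst):
        # -z compress, -k keep the input, -c write to stdout so we pick the output name
        with open(dst, "wb") as out:
            subprocess.run([tool, "-z", "-k", "-c", src], stdout=out, check=True)

    def decompress(tool, src, dst):
        # -d decompress, -c write to stdout
        with open(dst, "wb") as out:
            subprocess.run([tool, "-d", "-c", src], stdout=out, check=True)

    for compressor in ("bzip2", "pbzip2"):
        for decompressor in ("bzip2", "pbzip2"):
            archive = f"{SOURCE}.{compressor}.bz2"
            restored = f"{SOURCE}.{compressor}.{decompressor}.out"
            compress(compressor, SOURCE, archive)
            decompress(decompressor, archive, restored)
            ok = filecmp.cmp(SOURCE, restored, shallow=False)
            print(f"{compressor} -> {decompressor}: {'OK' if ok else 'MISMATCH'}")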
I propose compressing the archives with pbzip2 for the following reasons:
1) If your archiving machines are SMP systems, this could lead to a
better usage of system resources (i.e. faster compression).
2) Compression with pbzip2 is harmless for regular users of bunzip2,
so everything should run for these people as usual.
3) pbzip2-compressed archives can be decompressed with pbunzip2 with a
speedup that scales nearly linearly with the number of CPUs in the system.
So to sum up: it's a no-lose, two-win situation if you migrate to
pbzip2. And all that just because pbunzip2 is slightly buggy. Isn't that
strange?
Dipl.-Inf. Univ. Richard C. Jelinek
PetaMem GmbH - www.petamem.com - Human Language Technology Experts
69216618 Mind Units
Managing Director: Richard Jelinek
Registered office: Fürth
Commercial register: AG Fürth, HRB-9201
The Add/Change dumps for April 19, 2015 seem to be missing for all wikis.
Can someone have a look at what went wrong?
The dumps are working fine for April 18, 2015 and April 20, 2015.
[ re-arranged due to top-posting ]
On Sat, Apr 18, 2015 at 09:13:47AM -0400, Andrew Otto wrote:
> > On Apr 18, 2015, at 04:04, Hydriz Scholz <admin(a)alphacorp.tk> wrote:
> > The media file request count files for upload.wikimedia.org has
> > got a missing file for April 14, 2015.  There should be a file
> > called "mediacounts.top1000.2015-04-14.v00.csv.zip", but it was
> > apparently not generated and skipped.
> > Can someone look into this? Thank you.
> I have been fighting with some cluster issues all week, and will get this sorted out this coming week.
It seems the file appeared in the meantime.
---- quelltextlich e.U. ---- \\ ---- Christian Aistleitner ----
Kefermarkterstraße 6a/3, 4293 Gutau, Austria
Companies' registry: 360296y in Linz
Email: christian(a)quelltextlich.at
Phone: +43 7946 / 20 5 81
Fax: +43 7946 / 20 5 81
We are a couple of undergraduate students at IIT Bombay working on the
entity linking problem. It is the process of annotating a piece of text
with entities from a knowledge base. A common test set for this task
comes from the Knowledge Base Population (KBP) track of the Text Analysis
Conference (TAC). The reference knowledge base for the task was extracted from an
October 2008 dump of Wikipedia. Unfortunately, when the TAC knowledge base
was being created, a lot of important information concerning the Wikipedia
category hierarchy was lost, since only links between entity pages were
retained. Beyond this, the TAC knowledge base also does not have the PageIDs
of the entities extracted from Wikipedia, which makes matching the entities
in TAC against the current version of Wikipedia hard due to renames and
deletions. We were wondering if there was any way we could gain access to a
dump from October 2008. We found that the dump from January 2008 was not
complete as far as the TAC knowledge base is concerned. Any help will be
appreciated.
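(Not a substitute for the October 2008 dump itself, but for the matching
problem described above, one possible workaround is to resolve the TAC entity
titles against the live MediaWiki API with redirect resolution, which also
flags deleted pages. The sketch below assumes the standard English Wikipedia
api.php endpoint; the title list is made up for illustration.)

    # Sketch: resolve (possibly renamed) article titles to current page IDs
    # via the MediaWiki API, following redirects. The titles are placeholders.
    import json
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"
    TITLES = ["Barack Obama", "Bombay"]  # "Bombay" redirects to "Mumbai"

    params = {
        "action": "query",
        "titles": "|".join(TITLES),
        "redirects": 1,
        "format": "json",
    }
    req = urllib.request.Request(
        API + "?" + urllib.parse.urlencode(params),
        headers={"User-Agent": "entity-linking-sketch/0.1 (research use)"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    # Deleted (or never-existing) pages come back flagged as "missing".
    for page in data["query"]["pages"].values():
        if "missing" in page:
            print(f"{page['title']}: no current page")
        else:
            print(f"{page['title']}: pageid {page['pageid']}")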