> Then I further split it for ops and general tech.
> Let me know how well you think that has worked.
I would favor combining the two. They are both very low traffic,
and I have noticed other users being confused by the split in the past.
But if keeping them separate is handier for ops, it's no big deal.
> I thought the file size would grow fairly linearly with the page count,
> but for the last 10% or so of the pages the file size hardly grew at all.
Pages in the dump are in order of page id, and thus more or less in order of
creation. Pages at the end of the dump are newer and smaller, more often
stubs with few revisions, so they add little to the overall file size.
Tomasz Finc wrote:
> New full history en wiki snapshot is hot off the presses!
> It's currently being checksummed which will take a while for 280GB+ of
> compressed data but for those brave souls willing to test please grab it
> and give us feedback about its quality. This run took just over a month
> and gained a huge speed-up after Tim's work on re-compressing ES. If we
> see no hiccups with this data snapshot, I'll start mirroring it to other
> locations (internet archive, amazon public data sets, etc).
> For those not familiar, the last successful run that we've seen of this
> data goes all the way back to 2008-10-03. That's over 1.5 years of
> people waiting to get access to these data bits.
> I'm excited to say that we seem to have it :)
We now have an md5sum for enwiki-20100130-pages-meta-history.xml.bz2.
Please verify against it before filing issues.
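For anyone who wants to script the check, here is a minimal sketch in Python.
The dump file name is the one announced above; the expected digest is a
placeholder you would paste in from wherever the published md5sum is posted
(that location is not specified here, so treat it as an assumption):

    import hashlib

    # Dump file from the announcement; EXPECTED is a placeholder for the
    # published md5 hex digest -- paste the real value before running.
    DUMP = "enwiki-20100130-pages-meta-history.xml.bz2"
    EXPECTED = "<published md5 hex digest>"

    md5 = hashlib.md5()
    with open(DUMP, "rb") as f:
        # Read in 1 MB chunks so the 280GB+ file never has to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)

    digest = md5.hexdigest()
    print(digest)
    if digest != EXPECTED:
        print("Checksum mismatch -- re-download before filing a bug.")

Plain md5sum on the command line does the same job; the point is simply to
confirm the download is intact before reporting problems with the snapshot.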