Way cool - Look forward to a brown bag on this project - Diederik? :-)
On Wed, Aug 17, 2011 at 10:05 AM, Tomasz Finc <tfinc(a)wikimedia.org> wrote:
On Wed, Aug 17, 2011 at 9:58 AM, Diederik van Liere <dvanliere(a)gmail.com> wrote:
> Over the last few weeks, Yusuke Matsubara, Shawn Walker, Aaron Halfaker and
> Fabian Kaelin (who are all Summer of Research fellows [0]) have worked hard
> on a customized stream-based InputFormatReader that allows parsing of both
> bz2 compressed and uncompressed files of the full Wikipedia dump (dump file
> with the complete edit histories) using Hadoop. Prior to WikiHadoop and the
> accompanying InputFormatReader it was not possible to use Hadoop to analyze
> the full Wikipedia dump files (see the detailed tutorial / background for an
> explanation why that was not possible).
> This means:
> 1) We can now harness Hadoop's distributed computing capabilities in
> analyzing the full dump files.
> 2) You can send either one or two revisions to a single mapper, so it's
> possible to diff two revisions and see what content has been added /
> removed (a sketch of such a mapper follows after this list).
> 3) You can exclude namespaces by supplying a regular expression.
> 4) We are using Hadoop's Streaming interface, which means people can use this
> InputFormatReader from different languages such as Java, Python, Ruby, and so on.
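>
> As a rough illustration (not part of WikiHadoop itself), here is a minimal
> Python streaming mapper. It assumes each record reaches stdin as a
> single-line, namespace-free XML <page> fragment holding one or two
> <revision> elements; the exact record framing is documented in the
> tutorial, so treat those details as assumptions. It emits, per page, the
> number of lines added between the two revisions:
>
>   #!/usr/bin/env python
>   # Sketch of a Hadoop Streaming mapper over WikiHadoop records.
>   # Assumption: one XML <page> fragment per stdin line, with one or
>   # two <revision> children (see point 2 above).
>   import sys
>   import difflib
>   import xml.etree.ElementTree as ET
>
>   def text_of(rev):
>       node = rev.find('text')
>       return (node.text or '') if node is not None else ''
>
>   for line in sys.stdin:
>       line = line.strip()
>       if not line:
>           continue
>       try:
>           page = ET.fromstring(line)
>       except ET.ParseError:
>           continue  # skip records that are not well-formed fragments
>       revs = page.findall('revision')
>       if len(revs) == 2:
>           old = text_of(revs[0]).splitlines()
>           new = text_of(revs[1]).splitlines()
>           added = sum(1 for d in difflib.ndiff(old, new) if d.startswith('+ '))
>           # standard streaming output: key <tab> value
>           print('%s\t%d' % (page.findtext('title', ''), added))
>
> A mapper like this would be wired up with Hadoop Streaming's usual -mapper
> option, with WikiHadoop's InputFormat passed via -inputformat; the tutorial
> covers the exact invocation.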
> The source code is available at: https://github.com/whym/wikihadoop
> A more detailed tutorial and installation guide is available at:
> (Apologies for cross-posting to wikitech-l and wiki-research-l)
> [0] http://blog.wikimedia.org/2011/06/01/summerofresearchannouncement/