I think it's a good way to go - having a browser render the data. The issue is that the only true parser is the one built into MediaWiki! I'll have a look at your C code.

Cheers,
Brian

On 10/2/05, Wanchun <qpro11@yahoo.com> wrote:
I want to know if anyone is interested in writing an
offline wiki browser.

WikiFilter (http://wikifilter.sourceforge.net/)
is a small program I wrote, and both the binary and
the source code (in C) are uploaded there. It is not a
complete "reader" but a background "browser": it
manages the dump file and does the parsing, but
relies on a web browser to display the HTML output.

While managing the dump file is straightforward (build
an index, search it for an article by title, etc.),
parsing wiki text to HTML is quite tricky. I wonder if
anyone has tried this before.
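For the index, one straightforward layout (names here are illustrative, not taken from the WikiFilter source) is a sorted array of (title, byte-offset) pairs, built once and then searched with the standard library's binary search:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical index entry: an article title plus the byte offset of
   that article within the dump file. */
typedef struct {
    const char *title;
    long offset;
} IndexEntry;

static int cmp_entry(const void *a, const void *b)
{
    return strcmp(((const IndexEntry *)a)->title,
                  ((const IndexEntry *)b)->title);
}

/* Sort the index once after building it; lookups are then O(log n). */
void index_sort(IndexEntry *idx, size_t n)
{
    qsort(idx, n, sizeof *idx, cmp_entry);
}

/* Returns the article's byte offset, or -1 if the title is absent. */
long index_lookup(const IndexEntry *idx, size_t n, const char *title)
{
    IndexEntry key = { title, 0 };
    const IndexEntry *hit =
        bsearch(&key, idx, n, sizeof *idx, cmp_entry);
    return hit ? hit->offset : -1;
}
```

A real index would of course be persisted to disk and would also need to normalize titles (case, underscores vs. spaces, redirects), but the search-by-title step itself is this simple.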

In particular, are there any robust algorithms to
parse templates and wiki tables?
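One common starting point (only a sketch, not MediaWiki's actual algorithm) is to locate the end of a {{...}} construct with a depth counter, since templates nest:

```c
#include <stddef.h>

/* Sketch of one piece of template parsing: finding the matching "}}"
   for an opening "{{" while honoring nesting. This is only a depth
   counter; MediaWiki's real parser also has to handle parameter
   defaults ({{{1|...}}}), parser functions, and tables embedded
   inside templates. */

/* `start` must point at an opening "{{" in `s`. Returns the index
   just past the matching "}}", or -1 if the braces never balance. */
long template_end(const char *s, long start)
{
    long i = start;
    int depth = 0;
    while (s[i] != '\0') {
        if (s[i] == '{' && s[i + 1] == '{') {
            depth++;
            i += 2;
        } else if (s[i] == '}' && s[i + 1] == '}') {
            depth--;
            i += 2;
            if (depth == 0)
                return i;
        } else {
            i++;
        }
    }
    return -1;  /* unbalanced input */
}
```

Once the span is isolated, the body can be split on top-level "|" into the template name and its arguments; wiki tables ({| ... |}) can be bracket-matched the same way, though mixing the two is where it gets genuinely hairy.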




_______________________________________________
Wiki-offline-reader-l mailing list
Wiki-offline-reader-l@Wikipedia.org
http://mail.wikipedia.org/mailman/listinfo/wiki-offline-reader-l



--
Brian Mingus
AIM, Skype: BReflection
Google talk: reflection@gmail.com
Phone: (720) 771-2599
Web: http://www.qwikly.com