I want to know if anyone is interested in writing an
offline wiki browser.
<a href="http://wikifilter.sourceforge.net/">WikiFilter</a>
is a small program I wrote; both the binary and the
source code (in C) are uploaded there. It is not a
complete "reader" but a background "browser": it
manages the dump file and does the parsing, but
relies on a web browser to display the HTML output.
While managing the dump file is straightforward (build
an index, search it for an article by title, etc.),
parsing wiki text into HTML is quite tricky. I wonder
whether anyone has tried this before.
In particular, are there any robust algorithms to
parse templates and wiki tables?