On Wed, 21 Nov 2007 12:46:32 -0500, Jay R. Ashworth wrote:
> On Wed, Nov 21, 2007 at 05:38:18PM +0000, MinuteElectron wrote:
> > Jay R. Ashworth wrote:
> > > On Wed, Nov 21, 2007 at 02:00:34PM +1100, Steve Bennett wrote:
> > > > Also, the fact that performance has become an issue probably
> > > > argues even more for a hand-built solution rather than a
> > > > generated parser.
> > > It does for WMF, maybe.
> > >
> > > What *I* want to take away is a parser that handles the first 80%
> > > of wikitext syntax and that I can drop into my CMS. And I'm
> > > certainly not alone.
> > If you want this, you, or someone with a similar goal, will have to
> > code it yourself; it is unlikely anybody will do it for you,
> > especially if they just contribute to MediaWiki. However, I have
> > been told it is fairly simple to override some of the functions to
> > remove the database dependencies, which would make the parser work
> > in any CMS.
> Well, I'm not at all sure that my desire is unreasonable, nor that
> your response is reasonable.
>
> Very early in Steve's work, it was pointed out that one of the goals
> driving the effort to define the language well enough for the parser
> to be reimplemented was that it couldn't be a bad thing if people
> needing a lightweight markup language for other purposes could easily
> reuse mwtext.
> Cheers,
> -- jra
My perspective is mostly: what is the data set tied to? That is, a
specification, a reusable component, or a set of applications?

In that sense, a parser you can drop into other apps is probably not
so much useful for its own sake as it is complementary to less
encumbered data.
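
To make Jay's "first 80% of wikitext" idea concrete, here is a minimal
sketch of what such a standalone, database-free parser might look
like. It is in Python rather than MediaWiki's PHP purely for brevity,
and the handful of rules and their HTML mappings are my own rough
approximations of the common cases (bold, italics, headings, links),
not MediaWiki's actual grammar:

    import re

    # Order matters: bold ('''...''') must run before italics (''...'').
    INLINE_RULES = [
        (re.compile(r"'''(.+?)'''"), r"<b>\1</b>"),    # bold
        (re.compile(r"''(.+?)''"), r"<i>\1</i>"),      # italics
        # [[Target|label]] piped link, then [[Target]] plain link.
        (re.compile(r"\[\[([^|\]]+)\|([^\]]+)\]\]"), r'<a href="\1">\2</a>'),
        (re.compile(r"\[\[([^\]]+)\]\]"), r'<a href="\1">\1</a>'),
    ]

    # == Heading == through ====== Heading ======, balanced delimiters.
    HEADING = re.compile(r"^(={2,6})\s*(.+?)\s*\1$")

    def parse(wikitext):
        """Convert a small wikitext subset to HTML, line by line."""
        out = []
        for line in wikitext.splitlines():
            m = HEADING.match(line)
            if m:
                level = len(m.group(1))
                out.append("<h%d>%s</h%d>" % (level, m.group(2), level))
                continue
            for pattern, repl in INLINE_RULES:
                line = pattern.sub(repl, line)
            out.append(line)
        return "\n".join(out)

    if __name__ == "__main__":
        print(parse("== Demo ==\n'''bold''', ''italic'', [[Main Page|a link]]"))

A real drop-in component would of course also need link-target
resolution, templates, tables, and HTML escaping; that long tail is
exactly what the "first 80%" framing sets aside.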