Jim Wilson wilson.jim.r@gmail.com wrote:
I agree wholeheartedly with Jared. A rewrite shouldn't invalidate everyone's collective time spent learning the current syntax. I mean, that's a whole lot of people's time you could potentially be wasting.
I agree with you on this point. People should definitely still be able to input and edit articles using the same wikitext markup to which they've grown accustomed. Increasingly, however, people are wanting something easier. I know people who find wikitext difficult and are asking for WYSIWYG. Unfortunately, there's no easy way to use wikitext as a basis for WYSIWYG. The people who are trying to do it are going through something like the following:
wikitext -> HTML -> a JavaScript WYSIWYG editor
When they save their changes, they have to go back through HTML on the way to wikitext, and since there's no one-to-one correspondence between wikitext and HTML, the results are inconsistent. It's hard to imagine a good fix for this problem because the only people interested in working on wikitext-to-HTML conversion are a subset of the relatively small number who actually write code for MediaWiki. Moreover, it's a moving target. Wikitext syntax changes every time someone writes a new parser function, and with the proliferation of MediaWiki-powered websites outside Wikimedia, it's looking more and more like a language with numerous dialects rather than a single consistent markup standard.
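The lack of one-to-one correspondence can be made concrete with a toy sketch (Python, purely illustrative, not MediaWiki's actual parser): two different wikitext spellings render to identical HTML, so the HTML -> wikitext leg of the round trip has no way to recover which form the author originally wrote.

```python
def toy_render(wikitext: str) -> str:
    """Toy wikitext renderer covering only bold markup.

    Both '''bold''' and raw <b>bold</b> are legal wikitext and
    produce the same HTML -- exactly the ambiguity that breaks
    the HTML -> wikitext conversion on save.
    """
    out = wikitext
    while "'''" in out:
        # Replace an opening and a closing ''' pair with HTML tags.
        out = out.replace("'''", "<b>", 1).replace("'''", "</b>", 1)
    return out

a = "'''MediaWiki'''"    # wikitext bold markup
b = "<b>MediaWiki</b>"   # raw HTML, also valid inside wikitext
assert toy_render(a) == toy_render(b)  # same HTML, two possible sources
```

A converter going back from `<b>MediaWiki</b>` must simply pick one of the two sources, which is why round-tripped pages come back subtly rewritten.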
I realize that it's ambitious to contemplate putting XML under the hood of MediaWiki -- just as it was ambitious in its day for Apple to contemplate putting Unix under the hood of its graphical user interface. The result, however, was a better, more extensible operating system. If they hadn't done it, they'd probably have gone the way of Atari or Amiga.
IMHO, MediaWiki is currently the best software in existence for web-based wikis, and it has the considerable advantage of serving as the content management system for Wikipedia, which alone will guarantee its place in the world for the foreseeable future. However, there are other competitors in the wings that are getting serious about WYSIWYG, and MediaWiki might start to look dated if other platforms manage to offer a significantly more user-friendly experience.
By the way, Jim...I like your RSS extension. I'm probably going to install it on my own wiki.
--------------------------------
Sheldon Rampton
Research director, Center for Media & Democracy (www.prwatch.org)
Author of books including:
  Friends In Deed: The Story of US-Nicaragua Sister Cities
  Toxic Sludge Is Good For You
  Mad Cow USA
  Trust Us, We're Experts
  Weapons of Mass Deception
  Banana Republicans
  The Best War Ever
--------------------------------
Subscribe to our free weekly list serve by visiting:
http://www.prwatch.org/cmd/subscribe_sotd.html

Donate now to support independent, public interest reporting:
https://secure.groundspring.org/dn/index.php?id=1118
--------------------------------
Thanks Sheldon - those are excellent points. The conversion back-and-forth through HTML seems to be a sticky spot.
I wonder if there's any happy medium? If pages could be flagged as "Wikitext" or "HTML", then using a WYSIWYG editor on the HTML pages and the traditional edit box on wikitext articles could make sense.
The idea being that _some people_ could write "content" pages with their pretty WYSIWYG editor, and _other people_ could wire them all together via templates and transclusion. Something akin to the separation of concerns between HTML and CSS. You could even section-off portions of pages with parser tags (<wysiwyg>some text</wysiwyg> etc), and a wysiwyg editor would only touch those portions, leaving the rest unscathed.
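As a rough illustration of the sectioning idea, here's a Python sketch. The `<wysiwyg>` tag name is the one proposed above, not an existing MediaWiki parser tag: only the tagged regions are handed to the WYSIWYG editor, and everything outside them (templates, categories, transclusion plumbing) is spliced back untouched.

```python
import re

# Matches the proposed <wysiwyg>...</wysiwyg> regions; DOTALL lets a
# region span multiple lines. Everything outside a match is off-limits
# to the WYSIWYG editor.
TAG = re.compile(r"<wysiwyg>(.*?)</wysiwyg>", re.DOTALL)

def extract_editable(page: str) -> list[str]:
    """Return just the regions a WYSIWYG editor may touch."""
    return TAG.findall(page)

def replace_editable(page: str, edited: list[str]) -> str:
    """Splice edited regions back in order, leaving the rest unscathed."""
    parts = iter(edited)
    return TAG.sub(lambda m: "<wysiwyg>%s</wysiwyg>" % next(parts), page)

page = "{{Infobox}}\n<wysiwyg>Some '''content''' here.</wysiwyg>\n[[Category:X]]"
regions = extract_editable(page)          # what the WYSIWYG editor sees
page2 = replace_editable(page, ["Edited content."])
assert regions == ["Some '''content''' here."]
assert "{{Infobox}}" in page2 and "[[Category:X]]" in page2
```

A real extension would hook this into the edit form rather than use regexes on raw page text, but the contract is the point: the wiring stays wikitext, the content becomes WYSIWYG-safe.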
The benefit of this approach is that it could be started right away and probably completed through extensions alone. (Though admittedly it'd be easier if the Massive Hook Proposal were implemented.)
Of course the obvious retort is "boy it sure would be nice if we could just pick which editor we want and have them seamlessly interoperate" - but I hope we can all agree that this is a ways off from where we are today.
Would anyone else find a split solution like this acceptable? Am I alone in thinking this is a good mid-term approach?
P.S. - Thanks Sheldon for plugging WikiArticleFeeds (the RSS extension) - if you run into any issues, have enhancement requests etc, I'd love to hear about them. Thanks!
-- Jim
On 3/7/07, Sheldon Rampton sheldon@prwatch.org wrote:
On Wed, 07 Mar 2007 16:35:25 -0600, Jim Wilson wrote:
Thanks Sheldon - those are excellent points. The conversion back-and-forth through HTML seems to be a sticky spot.
I wonder if there's any happy medium? If pages could be flagged as "Wikitext" or "HTML", then using a WYSIWYG editor on the HTML pages and the traditional edit box on wikitext articles could make sense.
Since the wikitext is more structured than the HTML, rendering it that way would be a lossy conversion, which is why it would be useful to have an intermediate XML that keeps the structure of the wikitext without its ambiguities.

Initially, I think this XML would be more useful as a transfer encoding, to export the full structure of the data to third-party software that doesn't need to emulate our Parser, and it would take some time to mature before being used for storage.
Of course, we could consider WYSIWYG to be one of those programs: encoding what it understands into XML, flagging what it doesn't, editing, then reversing the process. So the medium could be a partial conversion, but done on the fly, and dealt with by the application.
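A minimal sketch of that partial conversion, with invented element names (this is not a real MediaWiki schema): markup the converter understands becomes structured XML, anything it doesn't is flagged verbatim in an `<unparsed>` element, and the reverse pass reproduces the original wikitext byte for byte.

```python
import xml.etree.ElementTree as ET

def to_xml(wikitext: str) -> ET.Element:
    """Partial wikitext -> XML: parse what we understand, flag the rest."""
    root = ET.Element("page")
    for line in wikitext.splitlines():
        if line.startswith("== ") and line.endswith(" =="):
            # A construct this toy converter understands: level-2 headings.
            ET.SubElement(root, "heading").text = line[3:-3]
        else:
            # Everything else is carried through verbatim, unharmed.
            ET.SubElement(root, "unparsed").text = line
    return root

def to_wikitext(root: ET.Element) -> str:
    """Reverse the conversion, emitting flagged chunks untouched."""
    out = []
    for el in root:
        if el.tag == "heading":
            out.append("== %s ==" % el.text)
        else:
            out.append(el.text or "")
    return "\n".join(out)

src = "== Title ==\n{{funky|template}} the converter can't parse"
assert to_wikitext(to_xml(src)) == src  # round-trips losslessly
```

The editor would then operate only on the structured elements, and the `<unparsed>` chunks survive the edit session untouched, which is what makes the on-the-fly partial conversion safe.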
On 3/8/07, Steve Sanbeg ssanbeg@ask.com wrote:
Of course, we could consider WYSIWYG to be one of those programs: encoding what it understands into XML, flagging what it doesn't, editing, then reversing the process. So the medium could be a partial conversion, but done on the fly, and dealt with by the application.
Are you suggesting that you might end up with a chunk like:
<XML>
  ...
  <UNRECOGNISED>
    <CHUNK text="Some unrecognised code with {{funky!syntax|not..supported|by]]] wikiwyg" />
  </UNRECOGNISED>
  ...
</XML>
The concept sounds vaguely doable, but pretty much any comments not from the main mediawiki developers aren't even worth $0.02.
Steve
On 3/8/07, Steve Bennett stevagewp@gmail.com wrote:
Are you suggesting that you might end up with a chunk like:
<XML>
  ...
  <UNRECOGNISED>
    <CHUNK text="Some unrecognised code with {{funky!syntax|not..supported|by]]] wikiwyg" />
  </UNRECOGNISED>
  ...
</XML>
The concept sounds vaguely doable, but pretty much any comments not from the main mediawiki developers aren't even worth $0.02.
I think that's kind of excessive. :) "Main" MediaWiki developers are unlikely to be the ones to do this, and the odds that MediaWiki developers do it at all are maybe fifty-fifty at best, assuming it ever gets done. Remember that this is an open-source project.
Overall, I think this is probably a case of too much talk, not enough work. If this is going to get done, someone needs to get cracking. And I don't exclude myself here, although this isn't something I plan to try in the immediate future anyway.
On 3/8/07, Steve Bennett stevagewp@gmail.com wrote:
Isn't there a simpler way, something like:
{{subst:tilde}}{{subst:tilde}}{{subst:tilde}}{{subst:tilde}} ?
On 3/8/07, Alexander Ian Smith ais523@bham.ac.uk wrote:
{{subst:NonExistingPage}} doesn't transform on save.
Both of these are definitely bugs, and they should be fixed (although they aren't high-priority, obviously). Pre-save transforms should be done recursively with sanity checks and cleaning to preserve idempotence. For the most part, they *are* idempotent, and any cases where they are not are ipso facto bugs.
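The idempotence property being asked for can be sketched like this (the substitution table is invented; real pre-save transforms are far richer): applying the transform a second time must leave the text unchanged. Note that leaving an unknown subst untouched keeps this sketch idempotent, even though the thread above counts that behaviour as one of the bugs.

```python
import re

# Toy substitution table -- real MediaWiki resolves subst: against
# actual pages and magic words; "tilde" here is purely illustrative.
SUBSTS = {"tilde": "~"}

def pre_save_transform(text: str) -> str:
    """Expand {{subst:...}} markers once, at save time."""
    def expand(m):
        name = m.group(1)
        # Unknown targets are passed through unchanged (the sketch's
        # stand-in for {{subst:NonExistingPage}} surviving the save).
        return SUBSTS.get(name, m.group(0))
    return re.sub(r"\{\{subst:(\w+)\}\}", expand, text)

once = pre_save_transform("{{subst:tilde}}{{subst:tilde}}")
twice = pre_save_transform(once)
assert once == "~~"
assert twice == once  # idempotent: a second pass changes nothing
```

The expanded output contains no remaining `{{subst:...}}` markers, so re-running the transform is a no-op, which is the sanity check a recursive implementation would need to preserve.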
On 3/8/07, Timwi timwi@gmx.net wrote:
To be honest, I keep seeing talk about "defining the parser's behaviour", but I haven't seen any actual work on it at all. (I'm not saying there hasn't been any, only that I haven't seen it.)
There's been some work that I've seen, mostly abortive. I think your flexbisonparse probably constitutes most of the applied work people have done on it, though. I think other things I've seen have been people trying to work out grammars, but not actually getting them to work with bison or whatever.
On 3/8/07, Timwi timwi@gmx.net wrote:
Sheldon Rampton wrote:
(1) The XML would be close to the HTML version that most people see when they view articles in their web browser, so less parsing would need to be done when presenting articles for viewing.
This conclusion is incorrect. It doesn't matter how "close" they are to each other, you would still need to parse it.
But you would need to parse it "less", i.e., you could parse it more simply and quickly, with fewer rules and no informal rules. You'd also be parsing it in a more standard way, using XML functions, so you could get C-like (or at least C++-like) speed even in, say, JavaScript, which I believe now ships with XML functions as standard in most implementations. So something standard like XML has advantages. You could even use XSL for the transform rules, to avoid having to write your own transformations.
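To make the "less parsing" point concrete, here's a sketch with invented element names (not a real MediaWiki schema): rendering a stored-XML article to HTML is just a mechanical tree walk using standard XML functions, with no wikitext grammar involved. An XSL stylesheet could express the same mapping declaratively.

```python
import xml.etree.ElementTree as ET

# Hypothetical article-XML element -> HTML tag mapping.
HTML_FOR = {"heading": "h2", "para": "p", "bold": "b"}

def render(el: ET.Element) -> str:
    """Recursively render one XML element to an HTML string."""
    tag = HTML_FOR.get(el.tag, "div")
    # Standard ElementTree semantics: el.text is the text before the
    # first child, c.tail is the text following each child.
    inner = (el.text or "") + "".join(render(c) + (c.tail or "") for c in el)
    return "<%s>%s</%s>" % (tag, inner, tag)

doc = ET.fromstring(
    "<article><heading>Hi</heading><para>Some <bold>text</bold>.</para></article>"
)
html = "".join(render(child) for child in doc)
assert html == "<h2>Hi</h2><p>Some <b>text</b>.</p>"
```

The whole renderer is a lookup table plus a recursive walk; that is the gap between "parsing XML" and parsing wikitext's informal rules.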