- Is there a definition / "complete" example of the JSON output of the
new parser somewhere? I didn't see it on the parser pages...
Working model of this is here:
We have an example document which includes various kinds of content. There's also a bunch of unit tests against the HTML and Wikitext serializers. Finally (and most importantly) there's a visual editor which can manipulate some of that DOM (soon all of it) with a graphical user interface. All of this code is what I'm working on. Inez from Wikia is also working with us on this 4 days a week.
- Will there be multiple "resolutions" of parsing? One would be the
template name and key-value-pair parameters, another would be the
template replaced with the corresponding wikitext, another one the
template replaced with the corresponding wikitext parsed into JSON.
Either an all-in-one large JSON object, or one of those "on demand"?
Also, extension tag/attributes/contents, rendered extension output,
WikiSource transclusions etc.
I can answer part of your question by explaining our plan for how the WikiDom will look when there are templates. A template call in WikiDom is just a template name with some parameters. The parameters are document-like: each is a series of blocks, just as a document is. The server could (and ideally will) render the templates into HTML, and pass that along with the parameter information. This will allow previews in the editor to be true to the final output, but also let the editor get at the parameters, change them, send them to the server for re-rendering, and then update the HTML representation. In this way, there can be different resolutions within a WikiDom structure.
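As a rough sketch of that idea, a template call in WikiDom might look something like the object below. All of the field names here are illustrative, not the actual WikiDom schema:

```javascript
// Hypothetical shape of a WikiDom template call. The field names are
// made up for illustration; they are not the real WikiDom schema.
const templateCall = {
    type: 'template',
    name: 'Infobox settlement',
    // Each parameter value is document-like: a series of blocks,
    // just as a document is.
    params: {
        name: {
            blocks: [
                { type: 'paragraph', content: { text: 'Springfield' } }
            ]
        }
    },
    // Server-rendered HTML, passed along with the parameter
    // information so the editor can show a preview that is true to
    // the final output.
    html: '<table class="infobox">…</table>'
};
```

The editor could edit `params`, send the template call back to the server for re-rendering, and then swap in the new `html` — different resolutions of the same template living side by side in one structure.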
- One of the functions I have issues with in WYSIFTW is copy&paste.
Besides making it work in the new editor, would it be worth adding
special behaviour for (cut|copy)/paste between articles? Like,
automatically adding the source article link to the edit description,
so the source of text can be traced, even if only manually?
So far copy-paste is working well, thanks to our approach to handling input. Underneath the EditSurface is a text input, which is visible but obscured. The text input is focused when the mouse interacts with the surface. When you type, we read the text from the input and insert it into the surface. When you select text, we fill the input with the plain text version of what you selected and set the input's selection to all. When you copy, well, nothing special needs to happen at all. When you paste, we treat it like typing.

This works for plain text, but copy-pasting rich text will involve an extra couple of steps. When you copy, we will remember what the copied plain text looked like, and keep a formatted version around in memory. When you paste, if the pasted plain text is identical to the copied plain text then we can just use the in-memory formatted version. With some trickery, we may even be able to support this between tabs/windows. The neat thing about how we are handling this is that we have full control over what the plain text version of the text is, resolving lots of issues with browser and operating system copy/paste inconsistencies.
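The rich-text half of that flow can be sketched as below. The function names and the shape of the formatted value are made up for illustration; this is not the actual EditSurface code:

```javascript
// Sketch of the rich copy/paste idea described above. Names are
// hypothetical, not the real EditSurface implementation.
let clipboard = null; // last copy: { plain: string, formatted: object }

function onCopy(selectedPlain, selectedFormatted) {
    // Remember both the plain text that went to the system clipboard
    // and an in-memory formatted version of the same selection.
    clipboard = { plain: selectedPlain, formatted: selectedFormatted };
}

function onPaste(pastedPlain) {
    // If the pasted plain text is identical to what we copied, restore
    // the formatted version from memory; otherwise treat the paste
    // like typing plain text.
    if (clipboard && clipboard.plain === pastedPlain) {
        return clipboard.formatted;
    }
    return { type: 'plain', text: pastedPlain };
}
```

A paste coming from another application (or a stale copy) fails the plain-text comparison and safely degrades to plain text, which is what makes the trick robust.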
- Will there be an interface to the parser for JavaScript tools
/outside/ edit mode? I'm thinking "Add a reference", "insert image"
etc. Just getting a char-based WikiText position from a mouse click
would be very helpful indeed, so the user can click where he wants the
reference in the rendered HTML, and JS can insert it at the
corresponding WikiText position.
Once we have a fully-featured WikiDom representation that we can safely round-trip Wikitext through, the sky is the limit for what kinds of APIs could be wrapped around it.
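Purely as an illustration of one such API: if the parser recorded the source character range each node came from, mapping a click in the rendered HTML back to a Wikitext position becomes a lookup. Nothing below is actual parser API; the data and function are hypothetical:

```javascript
// Illustrative only: suppose each rendered node carries the character
// range of the Wikitext it was generated from. These ranges are
// invented sample data, not real parser output.
const nodes = [
    { id: 'p1', sourceRange: [0, 24] },  // first paragraph
    { id: 'p2', sourceRange: [26, 80] }  // second paragraph
];

// Given the id of the node the user clicked, return a Wikitext offset
// where an insertion (e.g. a reference) could go: the end of that
// node's source range. Returns -1 for an unknown node.
function wikitextInsertOffset(nodeId) {
    const node = nodes.find(n => n.id === nodeId);
    return node ? node.sourceRange[1] : -1;
}
```

With such a mapping, an "Add a reference" gadget outside edit mode could turn a click on rendered HTML into an edit at the corresponding Wikitext position.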
</questions-and-answers>
Thanks for the questions! I have been working really hard on the visual editor code, and probably need to spend a bit more time talking to people about it and documenting my work. If anyone wants to get involved, please let me know - we mostly need more JavaScript experts.
- Trevor