Rowan Collins wrote:
On 21/12/05, Uwe Brauer <oub(a)mat.ucm.es> wrote:
>What is the *problem* in using HTML constructs?
>Speed in parsing?
I think the main problem is *consistency* - if you're trying to let as
many people edit as possible, having multiple variants of markup for
doing [nearly] the same thing just makes everything that much harder
to learn, and gives people that much more chance to be baffled when
looking at the source.
HTML was invented as a human-writable language. Unfortunately, the
humans in question were computer-literate physicists at CERN, and less
technically-minded humans have a lot of trouble with it. Do you remember
looking at web page source code in 1995/1996? "Tag soup." People used
tag soup because they wanted a result on the page, and never mind the
programming formalisms.
The advantage of wikitext for mere humans is that it starts working as
plain text, then you can add wikitext soup to it and you still get a
result that nontechnical people can read and write.
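To make that concrete, here's a rough side-by-side (standard MediaWiki
wikitext on the left, the HTML you'd otherwise have to type on the
right - the exact HTML output the parser emits may differ):

```
''italic''          <i>italic</i>
'''bold'''          <b>bold</b>
* a bullet          <ul><li>a bullet</li></ul>
== Heading ==       <h2>Heading</h2>
[[Page name]]       <a href="/wiki/Page_name">Page name</a>
```

The left column degrades gracefully: even unrendered, a reader can
still make sense of it, which is exactly the "plain text plus soup"
property described above.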
Wikitext is formally horrible, but seems to work with human editors.
Wikipedia is one of those wonderful pieces of technology that works for
technophobes *and* advanced geeks (e.g. Mac OS X, LiveJournal, Firefox)
- which I think should be a goal of all programs where possible, btw.
People learn wikitext like learning a language. They try stuff and it
gets meaning across. They learn more and it gets more of the meaning
across. They gain proficiency as they go.
- d.