On 08.02.2012 2:52, Platonides wrote:
At the beginning, the intersection was huge even if the ability to
edit was low, just because there was a lot of knowledge missing. So as
the knowledge increases (e.g. linearly), "people" appear to be more and
more stupid at editing.
This is well put. Perhaps this calls for RTFM, but have there been
studies that examined the trend from this POV?
Ability to edit and knowledge of rules are probably orthogonal. And
users have an immense rule-blindness. They won't want to read pages
and pages of rules or tutorials. They just want to get things done
(e.g. change a birth date). I think most people act the same way.
When was the last time you read the VCR manual?
Yes, I agree with you completely; I don't even remember if I have read
a manual for any of my cellphones.
And we should take that into account too, not making rules/tutorials
that look like EULAs. Nonetheless, such a thing would be hard to do.
Indeed.
On 07.02.2012 1:23, Mihály Héder wrote:
What I imagine is a system which does not bother me until I use a
cite template for the first time, for example - but then it tries to
evaluate whether I use it according to the guidelines - and probably
explains the guidelines to me.
I missed this part yesterday, and it's clearly interesting. I think
it's even possible to implement and would certainly improve usability.
An editor without a license checkbox? Very nice, very nice indeed.
I know most of this is science fiction, but I hope we will have
something like this in the far future :)
I don't see any rocket science in showing a "When external links are
acceptable" textbox when the editor finds that a user has added an
external link. In fact, the current MediaWiki editor already does this
kind of thing (it shows a CAPTCHA).
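
As a rough illustration of how little machinery such a contextual hint
needs, here is a minimal C sketch that checks whether an edit introduces
a new external link (adds_external_link and the scheme list are invented
for illustration; MediaWiki's real check lives in PHP and is far more
careful about URL syntax):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Return true if `new_text` contains more external links than
 * `old_text`, i.e. the edit introduces a new external link. */
static bool adds_external_link(const char *old_text, const char *new_text)
{
    static const char *schemes[] = { "http://", "https://", "ftp://" };
    for (size_t i = 0; i < sizeof schemes / sizeof schemes[0]; i++) {
        size_t old_count = 0, new_count = 0;
        for (const char *p = old_text; (p = strstr(p, schemes[i])); p++)
            old_count++;
        for (const char *p = new_text; (p = strstr(p, schemes[i])); p++)
            new_count++;
        if (new_count > old_count)
            return true;
    }
    return false;
}

int main(void)
{
    const char *before = "Some article text.";
    const char *after  = "Some article text. See http://example.org for more.";

    if (adds_external_link(before, after))
        puts("Show 'When external links are acceptable' next to the edit box.");
    return 0;
}

If the check fires, the editor would show the relevant guideline excerpt
next to the edit box instead of (or alongside) the CAPTCHA.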
But my point is that the guidelines would be much easier to digest if
they were always presented in context, and only the relevant part of
them.
This is the best definition.
Maybe I'm a utopian, but I can imagine a Wikipedia where fresh editors
just start typing in their knowledge with zero education and are still
able to immediately produce valuable output, provided they have good
intentions.
And combined with a visual editor, even a housewife can make simple
edits, right. This might be a way to go.
Still, back to our reality...
2012/2/6 Amgine <amgine(a)wikimedians.ca>:
As I understand it, for the foreseeable future there will be a raw
wiki syntax interface available. I hope contributors can be reassured
on this point.
Combined with: 2012/2/6 Trevor Parscal <tparscal(a)wikimedia.org>:
Make significant changes to what Wikitext is and can do
The problem with this is that if the present "raw wiki syntax" is
kept, it will ensure that the decline in edits continues.
I disagree. Its existence in the backend shouldn't influence it.
In the backend - no, it won't; in the frontend/user space - yes, it
will. But we seem to agree here.
Your proposal is like adopting a new, highly improved C² programming
language and throwing away all C code (which would be incompatible
with the new one). You are not only proposing to create a new C²
language, but also to stop all support for C.
Yes, this is my proposal. However, your phrasing misses one crucial
point: a compatibility layer can and will be written, and it will work
even better than the current wikitext parser. I say better because when
we write it:
1. The old wikitext syntax will be frozen, which will let devs create a
well-rounded parser, knowing they won't need to add patches and fixes
all around later because of 'new features'.
2. The compatibility layer will work on the new parser framework and
thus will be easier and faster to create.
Moreover, since the current wikitext syntax is more or less based on
regular expressions, it is not a big deal to process it with a more
advanced tokenizer.
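
To give a feel for what "a more advanced tokenizer" could mean, here is
a minimal C sketch that splits a tiny subset of wikitext - internal
links and bold markers - into a flat token stream (TokType, emit and
tokenize are invented names; a real compatibility layer would cover the
whole syntax and build a tree rather than print tokens):

#include <stdio.h>
#include <string.h>

typedef enum { TOK_TEXT, TOK_LINK_OPEN, TOK_LINK_CLOSE, TOK_BOLD } TokType;

/* Emit one token; a real parser would append to a token list instead. */
static void emit(TokType type, const char *start, size_t len)
{
    static const char *names[] = { "TEXT", "LINK_OPEN", "LINK_CLOSE", "BOLD" };
    printf("%-10s '%.*s'\n", names[type], (int)len, start);
}

/* Tokenize a tiny subset of wikitext: [[, ]] and ''' markers. */
static void tokenize(const char *s)
{
    const char *text_start = s;
    while (*s) {
        TokType type;
        size_t marker_len;

        if (strncmp(s, "[[", 2) == 0)       { type = TOK_LINK_OPEN;  marker_len = 2; }
        else if (strncmp(s, "]]", 2) == 0)  { type = TOK_LINK_CLOSE; marker_len = 2; }
        else if (strncmp(s, "'''", 3) == 0) { type = TOK_BOLD;       marker_len = 3; }
        else { s++; continue; }

        if (s > text_start)                 /* flush pending plain text */
            emit(TOK_TEXT, text_start, s - text_start);
        emit(type, s, marker_len);
        s += marker_len;
        text_start = s;
    }
    if (s > text_start)
        emit(TOK_TEXT, text_start, s - text_start);
}

int main(void)
{
    tokenize("'''Bold''' text with a [[link|label]].");
    return 0;
}

Running it prints the marker and text tokens in order; the point is only
that even a hand-written scanner already beats a pile of regular
expressions in clarity.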
So you say: "Oh, no problem. I will make this wonderful C-to-C²
converter that will seamlessly produce equivalent C² code from the
original C one, so you don't need to rewrite things from scratch. It
will be automatically taken care of."
Correct, but not completely - see below.
Yes. Until that inline assembly that worked in C code makes a random
memory overwrite in the kernel. And that other function, which was
compatible only by the luck of sizeof(int) == sizeof(void*), now in C²
makes the application die horribly... and so on.
A really precise comparison. Code relying on language tricks is hard to
migrate. But is that meant to prevent computer languages from evolving?
Backward compatibility is important, but it shouldn't act like a weight
on your leg. If you feel that old code and syntax are suffocating the
project, will you push on instead of throwing them away and manually
replacing the incompatible, tricky pieces of code with new ones?
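
To make the quoted sizeof trick concrete, here is a minimal C sketch
(pointer_as_int is an invented name; the snippet is safe to run, since
it only reports the damage instead of dereferencing the mangled
pointer):

#include <stdio.h>

/* Classic trick: smuggle a pointer through an int. It only round-trips
 * correctly where sizeof(int) == sizeof(void*) (e.g. many 32-bit ABIs);
 * on common 64-bit ABIs the high bits of the pointer are lost. */
static int pointer_as_int(void *p)
{
    return (int)(long)p;               /* truncates when int is narrower */
}

int main(void)
{
    int value = 42;
    int smuggled = pointer_as_int(&value);
    int *back = (int *)(long)smuggled; /* may no longer point at value */

    printf("sizeof(int)=%zu, sizeof(void*)=%zu\n",
           sizeof(int), sizeof(void *));
    printf("original %p, after round-trip %p (%s)\n",
           (void *)&value, (void *)back,
           back == &value ? "intact" : "pointer was truncated");
    return 0;
}

On a typical 32-bit ABI the round trip is intact; on 64-bit LP64 systems
the pointer is truncated - exactly the kind of code that a migration,
whether to C² or to a new wikitext, has to flag rather than convert
silently.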
Similarly, nobody denies that Wikipedia and many other MediaWiki-based
projects use tricky markup equivalent to inline assembler in old C. Of
course, such code will be troublesome to support (albeit possible with
enough effort), and thus it might be better to raise an error when
converting such documents.
How many pages exist that contain tricky wikitext? Will it be possible
to convert them manually to the new syntax? Not to offend anyone, but
it is already a fault of the wikitext devs that the markup wasn't
improved in step with growing needs; why cut even deeper and let the
old markup lag behind like a ghost?
The point of a human-readable/writable markup is to be crisp: it can't
allow tricky constructs, and if it doesn't, it will be a piece of cake
to convert it automatically to suit any future needs. And once the
current pages are converted to proper markup, little to no manual
transformation will be necessary in the future.
Signed,
P. Tkachenko