Hi all,
A bit of brainstorming here. Our two basic tools to help editors
track changes to articles they have an interest in are the watchlist
and 'my contributions' lists.
Summarised:
Watchlist: Show me every change recently made to any article I've
noted as "interesting", most recent first.
My contributions: Show me every change I've recently made, and tell me
whether anyone has touched it since then, sorted by when I made the
change.
However, what I really want is:
Show me every change made to any article I've worked on recently. Let
me "approve" changes so they don't show up again.
It's kind of halfway between the two: "my contributions" is primarily
interesting for the presence or absence of "(top)" which means that no
one has touched it - but it doesn't show who changed it, when, or how
often. Watchlist is more detailed, but quickly gets out of control and
you end up seeing changes on articles you haven't touched in months,
but haven't bothered to remove from your list.
I gather that there exists a mechanism for 'approving' changes at the
personal level (I don't want/need anyone else to see which changes
I've approved). If that were combined with a way of sorting the
watchlist in order of the last time I made a change to that article,
this would almost satisfy the need I see (apart from the watchlist
getting out of control...). Is this possible?
To be quite specific, let's imagine there are three articles, A, B and
C, which I've worked on in that order - C most recently. However,
since I've worked on them, editors have made changes to them in the
order B, C, A. Watchlist currently would show them:
A
C
B
I would like to see them:
C
B
A
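In pseudo-code, the ordering I'm after is something like this (a toy sketch only; the structures and field names here are invented, not MediaWiki's actual schema):

```python
# Toy sketch of the proposed watchlist ordering. All names invented.

def order_watchlist(articles, my_last_edit, approved):
    """Show articles others have touched since my last edit, sorted by
    when *I* last edited them (most recent first), hiding ones whose
    changes I've already 'approved'."""
    changed = [
        a for a in articles
        if a["last_changed"] > my_last_edit[a["title"]]  # touched since my edit
        and a["title"] not in approved                   # not yet approved
    ]
    return sorted(changed, key=lambda a: my_last_edit[a["title"]], reverse=True)

my_last_edit = {"A": 1, "B": 2, "C": 3}   # I edited A, then B, then C
articles = [                              # others then edited B, C, A
    {"title": "B", "last_changed": 4},
    {"title": "C", "last_changed": 5},
    {"title": "A", "last_changed": 6},
]
print([a["title"] for a in order_watchlist(articles, my_last_edit, set())])
# → ['C', 'B', 'A'], rather than the current ['A', 'C', 'B']
```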
Does anyone agree that this would be a more useful and relevant way of
showing changes? Are there further enhancements one could add to
quickly see what's changed since the last time one checked - within
the narrow sphere of one's own interests?
Steve
An automated run of parserTests.php showed the following failures:
Running test TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test TODO: Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test TODO: Template with thumb image (with link in description)... FAILED!
Running test Template infinite loop... FAILED!
Running test TODO: message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test TODO: message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test TODO: HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test TODO: HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test TODO: Parsing optional HTML elements (Bug 6171)... FAILED!
Running test TODO: Inline HTML vs wiki block nesting... FAILED!
Running test TODO: Mixing markup for italics and bold... FAILED!
Running test TODO: 5 quotes, code coverage +1 line... FAILED!
Running test TODO: HTML Hex character encoding.... FAILED!
Running test TODO: dt/dd/dl test... FAILED!
Passed 412 of 429 tests (96.04%) FAILED!
Hi all,
I've just started trying to play with MediaWiki's code, and am a little
lost as to where to start with this task. What I'd like is to have a
form in an article that can be filled out by any wiki viewer; following
submission of this form, I want to modify the contents of an article in
a specific way and update the article with the contents of the form.
Can anyone point me in the proper direction for this? Thanks a bunch.
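To make it concrete, the text-transformation step I have in mind is roughly this (a toy sketch only - the marker comment and function name are made up, and the actual form handling would presumably need to live in an extension or special page):

```python
# Toy sketch: append a submitted form entry at a fixed spot in an article.
# The marker comment and function name are invented for illustration.

def handle_form_submission(article_text, form_data):
    marker = "<!-- form-entries -->"
    entry = "* %s: %s" % (form_data["name"], form_data["comment"])
    # Insert the new entry right after the marker comment.
    return article_text.replace(marker, marker + "\n" + entry)

page = "== Guestbook ==\n<!-- form-entries -->\n* Alice: hello"
print(handle_form_submission(page, {"name": "Bob", "comment": "hi"}))
```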
Dajoo is a SourceForge project that aims to build a personal wiki-based
platform to help people manage their knowledge. It was designed with a
plugin system in the kernel, through which we can develop various tools
for different purposes.
The markup it adopts is compatible with MediaWiki's, and
interoperability with MediaWiki is now under development. On my personal
computer it can already access Wikipedia freely. I will release the next
version, with the interoperability features, in September.
The project is now in the early stage, and you can download and try it
at http://sourceforge.net/projects/dajoo/
Thanks.
[[zh:User:Mountain]]
This hasn't been done for a while, so I'll try to sum up changes in
our operations since November, 2005.
There has been much less insane headless-chicken running, and we've
seen quite steady operation lately (apart from a few hiccups).
First of all, we could for a while afford to order hardware before we
were completely overloaded - in previous years we were constantly
playing catch-up.
There have been lots of system architecture changes lately too - the
way we store data, and the way we serve and cache images and text.
==Hardware==
One piece of good news is that we can still stay with the same class of
database servers, which are even getting much cheaper than before.
Database server cost per unit went from $15000 in June 2005 to $12500
in October 2005, to $9070 in March 2006.
We got four of these servers in March and called them... db1, db2,
db3 and db4.
For the application environment we made a single $100000 purchase,
which provided us with 40 high-performance servers (each with two
dual-core Opteron processors and 4GB of RAM).
This nearly doubled our CPU capacity, and also provided enough space
for revision storage, in-memory caching, etc.
For our current caching-layer expansion we ordered 20 high-performance
servers (8GB of memory, four fast disks, $3300 each), which should
appear in production in about a month.
We're investigating the possibility of adding more hardware to the
Amsterdam cluster. We might end up with 10 additional cache servers
there too.
We also purchased $40000 worth of Foundry hardware, based on their
BigIron RX-8 platform.
We will use it as our highly available core routing layer, as well as
for connectivity for the most demanding servers.
It will also allow flexible networking with upstream providers.
Our next purchase will be image hosting/archival systems; we are still
investigating whether to use our previous approach (a big cheap server
with lots of big cheap disks) or to deploy a storage appliance.
We have reallocated some aging servers to the search cluster and other
auxiliary work, and continue this practice, so that we end up with a
more homogeneous application environment.
==Software==
There were lots of improvements in MediaWiki itself, but additionally
Tim and Mark ended up on the Squid authors list - the changes made to
its code were critical to proper Squid performance.
We split the database cluster, with the English Wikipedia ending up on
a separate set of boxes.
Some of the old database servers got a new life as slaves for just a
few languages, which compensates for their lack of memory or of a fast
disk system.
Additionally, revision storage was moved from our core database boxes
to 'external storage clusters' - our application servers, utilizing
their idle disks.
Our optimization work addresses multiple factors: "make it faster"
means not only serving more requests per second, but also reducing
response times, and both are worked on constantly.
And of course, as always, the team has been marvelous ;-) Thanks!
--
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
Hi,
I was nosing through the source, and noticed this variable:
$wgEnableScaryTranscluding = false;
Does this feature actually work properly? Obviously its silly name
makes me think it hasn't been finished (and I couldn't find any docs on
it via Google), but then one of the other functions is called
getContentWithoutUsingSoManyDamnGlobals, and is used everywhere...
The way I understand it, it should allow templates to be pulled in
from other wikis - am I correct?
Kind regards,
Alex
Simetrical wrote:
> > The most interesting revelation of the above tests, for those who
> > missed it, is that it *is* possible to link to a page named after a
> > URL, but [[http://foo.com]] won't do it (that generates a, what was
> Any reason that we explicitly ban pages from having titles that look like URLs?
I was not involved in developing the program, so I cannot say why this
choice was made, but I can think of at least two good reasons why such
a ban may be a good thing:
1) It avoids advertising vandalism (it is not difficult to imagine
that, if such names were allowed, many web sites would want an article
with the name of their URL - well, actually they may still want an
article with the name of their web site, or with the name of their URL
with the leading http:// stripped).
2) Since [http://www.example.com] is the standard way of inserting an
external link, a person too used to inserting internal links as
[[Foo]] may easily, and wrongly, try to insert an external link as
[[http://www.example.com]]. If links and page names like this were
allowed, the wiki would be full of such links and pages.
AnyFile
On 8/18/06, Jay R. Ashworth <jra(a)baylink.com> wrote:
> I very strongly suspect that no one who hasn't lived intimately with
> the parser code (that's, what, 4 or 5 people? :-) could predict what
> those things would do; they all seem implementation defined to me.
>
> Or almost all...
>
> They do illustrate why making a late pass to hotlink URLs might not be
> a safe approach, though.
(oops, I should have changed the subject earlier)
Depends what you mean by a "late pass". Any "early pass" is wrong -
basically, a URL should only match if absolutely nothing else does -
no normal links, for instance. But what kind of "late pass"? Is there
a parse tree that you can check, to see whether the token has been
matched against anything fancier than plain text?
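To illustrate what I mean, here's a toy sketch of a "late pass" (not the real parser, obviously): explicit bracketed links are consumed first, and bare URLs only get to match in whatever is left over as plain text.

```python
import re

# Toy "late pass" linkifier: bracketed external links win first; bare
# URLs are only auto-linked in the leftover plain text. Not the real
# MediaWiki parser, just an illustration of the ordering.

URL = re.compile(r"https?://\S+")

def parse(text):
    tokens = []
    # Early pass: explicit [http://...] external links are consumed first.
    for part in re.split(r"(\[https?://[^\]\s]+\])", text):
        if part.startswith("["):
            tokens.append(("extlink", part[1:-1]))
            continue
        # Late pass: bare URLs match only in the remaining plain text.
        pos = 0
        for m in URL.finditer(part):
            if m.start() > pos:
                tokens.append(("text", part[pos:m.start()]))
            tokens.append(("autolink", m.group()))
            pos = m.end()
        if pos < len(part):
            tokens.append(("text", part[pos:]))
    return tokens

print(parse("see [http://a.com] and http://b.com here"))
```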
The most interesting revelation of the above tests, for those who
missed it, is that it *is* possible to link to a page named after a
URL, but [[http://foo.com]] won't do it (that generates a, what was
it, "direct link"). However, [[ http://foo.com]] works, although the
page ends up being called "Http://foo.com". It's not completely
inconceivable to me that one day we might want to write an article
about a URL, like if some postmodern band names an album
"http://stupid.com" or something.
Steve
A lot of comments have been made about whether wiki markup is a barrier to
entry, and if it is, whether it (the barrier to entry) is a good thing or
not.
On the first part, it would be interesting to measure edit attempts - how
many people clicked edit, but did not click save or preview. Of course this
does not tell you if the markup is a barrier to entry, but it may be some
kind of rough indicator. The experience of editing a well-developed page
can be like looking at source code: if you don't know the language, there's
an awful lot of gobbledygook in there, and I'd think it intimidates some
people.
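Concretely, with entirely made-up numbers, the figure I mean is just:

```python
# Made-up numbers, purely to illustrate the proposed "abandoned edit" figure.
edit_clicks = 1000        # people who opened the edit form
saved_or_previewed = 620  # of those, people who went on to save or preview
abandoned = edit_clicks - saved_or_previewed
print("abandoned edits: %d (%.0f%%)" % (abandoned, 100.0 * abandoned / edit_clicks))
# → abandoned edits: 380 (38%)
```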
So... it would just be an interesting figure, to hold up next to other
figures, like completed edits.
Aerik