> ------- Additional Comments From robchur(a)gmail.com 2006-11-13 07:58 UTC -------
> We *want* users to be able to discuss bugs in length (when there are larger
> implications or the implementation needs discussing) on wikitech-l.
This is good news for me, as I cannot hang around on costly connections
leisurely entering bugs. For my weekly connection to the Internet, I
will submit new bugs with my Perl LWP script, and comment on existing
bugs by hitting "r" on the email bugzilla sends me and changing the
To: header to wikitech-l instead of the useless
bugzilla-daemon(a)mail.wikimedia.org.
As for others who can add comments properly to bugs whilst I must use
the above method, chalk that up to bugzilla's expectation that everyone
has a permanent Internet connection. Compare the Debian Bug Tracking
System, where one is still allowed -- indeed preferred -- to use email.
Indeed, I did try to write an LWP script for process_bug.cgi, but one
must fetch the page first to avoid "mid-air collisions".
Wait, here's something I was going to send before being told that it
was OK to reply via wikitech-l:
Totally unfair. Bugzilla emails comments to the user, but the user
cannot email back.
Indeed, apparently nobody ever hit the R key on one of its messages in
their mailer to see what would happen (Bug #7901). (We end up sending
mail to Bugzilla, since that is what is in the From: line, and it bounces.)
Anyway, even if there were a non-bouncing Reply-To, I'm sure it wouldn't
append anything to the bug itself. And sending to wikitech-l isn't as
good as appending a comment to the bug.
Bugzilla assumes everybody just hangs around online with first-rate
Internet connections. Compare the Debian Bug Tracking System, where one
need not have a satellite connection from one's mountaintop, and can
still use email.
OK, I've made a Perl LWP script to submit new Wikimedia bugs, and one to
add "+" comments to Wikipedia Talk pages. Now I will attempt to make a
script to append comments to a given bug number.
However, I see too many scary parameters in the
http://bugzilla.wikimedia.org/process_bug.cgi form, while all I am
interested in is (id, comment).
simetrical(a)svn.wikimedia.org wrote:
> (bug 3315) Allow rollback directly from history page.
[snip]
> + $extraRollback .= '&token=' . urlencode(
> + $wgUser->editToken( array( $wgTitle->getPrefixedText(), $rev->getUserText() ) ) );
> + $s .= ' ['. $this->mSkin->makeKnownLinkObj( $wgTitle,
> + wfMsg('rollbacklink'),
> + 'action=rollback&from=' . $rev->getUserText() . $extraRollback ) .']';
Don't forget URL encoding; this will fail for usernames with various
special characters.
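For example, something along these lines (just a sketch of the suggested
fix, not committed code) would cover the from= parameter as well:

  $s .= ' [' . $this->mSkin->makeKnownLinkObj( $wgTitle,
      wfMsg( 'rollbacklink' ),
      // URL-encode the username so '&', '=', '+' and spaces don't
      // break or truncate the rollback query string
      'action=rollback&from=' . urlencode( $rev->getUserText() ) . $extraRollback ) . ']';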
It might also be wise to refactor some of the rollback bits; if the edit
token usage for rollback is changed, for instance, we've got several
places it has to be tweaked.
-- brion vibber (brion @ pobox.com)
I get a timeout when connecting to mail.wikimedia.org / mail.wikipedia.org.
All the other *.wiki*edia.org pages are up and running for me.
Is anything wrong with the mail server?
Michael
Hi
We are currently trying to integrate Yulup (http://www.yulup.org), or
rather the Neutron protocol, into MediaWiki, and need to implement the
following three steps:
1) Add a Neutron introspection link to the (X)HTML head, e.g.
<head>
<title>Main Page - Wiki</title>
<link rel="neutron-introspection" type="application/neutron+xml"
href="MainPage-introspection.xml"/>
</head>
2) Make *-introspection.xml pages available, e.g.
<introspection>
<edit mime-type="application/xhtml+xml" name="MainPage">
<checkout url="?title=MainPage&amp;action=editneutron" method="GET"/>
<checkin url="?title=MainPage&amp;action=saveneutron" method="PUT"/>
</edit>
</introspection>
3) Implement actions
-- editneutron
-- saveneutron
so we have three questions ;-)
1) Which file generates the layout, or rather the HTML head, of a
MediaWiki page?
2) How can we generate XML pages for the introspection files (e.g.
index.php?introspection=MainPage.xml)?
3) How can we implement new actions, and where do we have to hook them
in? (A rough sketch of what we have in mind follows below.)
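For question 3, here is a rough sketch of what we have in mind, assuming
MediaWiki's UnknownAction hook is the right extension point (the function
name and details are just placeholders):

  $wgHooks['UnknownAction'][] = 'wfNeutronAction';

  function wfNeutronAction( $action, $article ) {
      global $wgOut;
      if ( $action == 'editneutron' ) {
          # checkout: hand the raw page content to the Neutron client
          $wgOut->disable();
          header( 'Content-Type: application/xhtml+xml' );
          echo $article->getContent();
          return false;   # action handled, suppress the "no such action" page
      } elseif ( $action == 'saveneutron' ) {
          # checkin: take the PUT body and save it as a new revision
          $text = file_get_contents( 'php://input' );
          $article->doEdit( $text, 'Neutron checkin' );
          return false;
      }
      return true;        # not ours, let MediaWiki carry on normally
  }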
Any pointers or hints are very much appreciated.
Thanks
Michi
--
Michael Wechner
Wyona - Open Source Content Management - Apache Lenya
http://www.wyona.com  http://lenya.apache.org
michael.wechner(a)wyona.com michi(a)apache.org
+41 44 272 91 61
We've been having quite a few complaints about false positives from the
AntiSpoof extension -- an extension which attempts to prevent registration
of names which are confusingly similar to names already registered. Brion
responded to these complaints with "get a sysop to make the account for
you", but I don't think that's a very good solution. So I've been working on
the AntiSpoof extension today, attempting to make it a bit more relaxed.
The most fundamental problem is that of merging sets. Say we want to
treat visually similar characters as part of a set, and we also want to
treat letters which differ only in case as part of a set. So, for
example, say we have the following pairs:
Η (capital eta) = H (latin)
Η (capital eta) = η (lowercase eta)
η (lowercase eta) = n (latin)
If we merge all these pairs into a set, following the relations, we obtain
the result that latin n is the same as latin H. This is incorrect, and is
the cause of most of the bizarre false positives that we see with AntiSpoof.
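To illustrate (a sketch only, not the actual AntiSpoof code -- just a
naive union-find over the three pairs above):

  $parent = array();

  function find( $x ) {
      global $parent;
      if ( !isset( $parent[$x] ) ) {
          $parent[$x] = $x;
      }
      if ( $parent[$x] !== $x ) {
          $parent[$x] = find( $parent[$x] );  # path compression
      }
      return $parent[$x];
  }

  function merge( $a, $b ) {
      global $parent;
      $parent[ find( $a ) ] = find( $b );
  }

  merge( 'Η', 'H' );  # capital eta = latin H
  merge( 'Η', 'η' );  # capital eta = lowercase eta
  merge( 'η', 'n' );  # lowercase eta = latin n

  # By transitivity, latin H and latin n end up with the same canonical form:
  var_dump( find( 'H' ) === find( 'n' ) );  # bool(true)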
The problem is that merging sets is fairly fundamental to the way AntiSpoof
works -- i.e. by calculating a canonical representation of the username,
storing it and indexing it. So it's not going to change any time soon unless
we get really clever. But there are some things we can do to minimise its
effects.
The first and most obvious thing to do was to remove the transliteration
pairs. These are pairs of characters where one member of the pair is a
common phonetic transliteration of the other, e.g. cyrillic en "Н" = latin
N. This was the cause of most of the spurious conflations between latin
characters. This should now be done.
There are now three remaining categories of conflated character pairs: case
folding, visual similarity and chinese traditional/simplified conversion.
The second thing to do is to minimise cross-script pairs. Since cross-script
usernames are disallowed, cross-script pairs are mostly redundant. You could
make a case to leave some of them in, for example some latin usernames can
be spoofed entirely using cyrillic characters. And some communities may have
a special need for allowing a certain pair of scripts in a username (e.g.
latin and hiragana). It's best if we can just keep the pairs which are
visually very similar, and consciously avoid including cross-script pairs
which will cause false conflations within scripts.
I've done some work on this, but I think it's time to hand over the job to
the community, if the community wants it. I've created a page with a big
list of pairs, at:
http://www.mediawiki.org/wiki/AntiSpoof/Equivalence_sets
You can edit this page. I will update the live copy on request.
Really clever ideas on how to avoid merging sets while maintaining good
performance would be appreciated.
Another misfeature in AntiSpoof which was causing false positives was the
fact that it merged sequences of repeated characters. For example, Yuma was
considered to be equal to Uma, because Y=U (from a transliteration pair),
and UUma = Uma. I've removed this behaviour.
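In code, the removed step amounted to something like this (my own sketch,
not the extension's actual implementation):

  # With the (now removed) transliteration pair Y=U, "Yuma" became "Uuma";
  # after case folding, collapsing the run of repeated characters then
  # made it collide with "Uma".
  $folded    = strtolower( 'Uuma' );                        # "uuma"
  $collapsed = preg_replace( '/(.)\1+/u', '$1', $folded );  # "uma"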
I should really get a blog...
-- Tim Starling
An automated run of parserTests.php showed the following failures:
Reading tests from "/home/brion/src/wiki/phase3/maintenance/parserTests.txt"...
Running test TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test TODO: Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test TODO: Template with thumb image (with link in description)... FAILED!
Running test TODO: message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test TODO: message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test TODO: HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test TODO: HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test TODO: Parsing optional HTML elements (Bug 6171)... FAILED!
Running test TODO: Inline HTML vs wiki block nesting... FAILED!
Running test TODO: Mixing markup for italics and bold... FAILED!
Running test TODO: 5 quotes, code coverage +1 line... FAILED!
Running test TODO: dt/dd/dl test... FAILED!
Running test TODO: Images with the "|" character in the comment... FAILED!
Running test TODO: Parents of subpages, two levels up, without trailing slash or name.... FAILED!
Running test TODO: Parents of subpages, two levels up, with lots of extra trailing slashes.... FAILED!
Running test TODO: Don't fall for the self-closing div... FAILED!
Running test TODO: Always escape literal '>' in output, not just after '<'... FAILED!
Reading tests from "/home/brion/src/wiki/phase3/extensions/Cite/citeParserTests.txt"...
Passed 449 of 469 tests (95.74%)... FAILED!
Hi,
I noticed this on the Edit page on mediawiki.org:
* Pages in the Help: namespace all have a thick blue border around
them, indicating that their content is released into the public
domain. Do not make edits to these pages unless you are
comfortable releasing your contributions under this license.
There are far too many misconceptions about copyright and public domain
in circulation already; MediaWiki shouldn't spread them even further.
Public domain is not a license. Please correct this as soon as possible.
Timwi
Hi,
I'm using MediaWiki 1.5.8.
At http://meta.wikimedia.org/wiki/Category#Category_page I read that
from 1.5 on, the namespace prefixes on category pages are no longer
shown.
In my case the prefix is shown, and even at wikipedia.org it is shown.
How and where in the config can I change this, so that the prefix is no
longer shown?
And is it possible to change the displayed text so that the namespace
prefix is placed after the article name, for example
'myArticle (myNamespace)'?
Thanks
Hi,
It seems that some articles are missing from the full dump of
Wikipedia-en with revisions, enwiki-latest-pages-meta-history.xml.7z
(dated August 26): I can't find an entry for "France" or "United
States", for instance. The fact that it is 5.1 GB, while previous dumps
from July are more than 6 GB, may be another indication of this (I
haven't downloaded them yet, so I can't be sure).
Does anyone know why these entries are missing? More importantly, would
there be a way to be certain that the dump is truly complete before
making it available for download? Is there a dump somewhere that has all
articles with their revisions?
Thanks,
--
Pierre Senellart
pierre(a)senellart.com
http://pierre.senellart.com/
Tel. (work) : +33 1 72 92 59 29
Tel. (home) : +33 1 42 55 13 38
Tel. (mobile) : +33 6 73 96 82 89