I have been doing range blocks to identify the new accounts that the
vandal General Tojo is creating. He makes about 20 new accounts a day
including sleepers. The trouble is that doing a checkuser on the
range of a popular provider not only takes a lot of time, but also does
a lot of grinding as it goes back 30 days. Could an option for
checkuser be coded to check only the last 24 hours? That
would suffice for an ongoing problem like a persistent vandal.
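A time-limited lookup would mostly come down to adding a timestamp
cutoff to whatever query the check already runs. A rough sketch of the
idea in PHP - the table and field names used here (cu_log_entries,
cue_timestamp, cue_ip_hex and so on) are made up for illustration and
are not the actual CheckUser schema:

<?php
// Hypothetical: restrict a checkuser range query to the last 24 hours.
$window = 24 * 3600;                              // seconds to look back
$cutoff = wfTimestamp( TS_MW, time() - $window ); // MediaWiki timestamp format

$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select(
    'cu_log_entries',   // imagined table of logged edits with their IPs
    array( 'cue_user_text', 'cue_ip', 'cue_timestamp' ),
    array(
        // $rangeStart / $rangeEnd: hex bounds of the IP range, computed elsewhere
        'cue_ip_hex >= ' . $dbr->addQuotes( $rangeStart ),
        'cue_ip_hex <= ' . $dbr->addQuotes( $rangeEnd ),
        'cue_timestamp >= ' . $dbr->addQuotes( $cutoff ),  // the 24-hour limit
    ),
    __METHOD__
);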
Fred
An automated run of parserTests.php showed the following failures:
Running test TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test TODO: Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test TODO: Template with thumb image (with link in description)... FAILED!
Running test TODO: message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test TODO: message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test TODO: HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test TODO: HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test TODO: Parsing optional HTML elements (Bug 6171)... FAILED!
Running test TODO: Inline HTML vs wiki block nesting... FAILED!
Running test TODO: Mixing markup for italics and bold... FAILED!
Running test TODO: 5 quotes, code coverage +1 line... FAILED!
Running test TODO: HTML Hex character encoding.... FAILED!
Running test TODO: dt/dd/dl test... FAILED!
Passed 413 of 429 tests (96.27%) FAILED!
hi rob,
welcome back :-)
cheers,
jimmy
> -----Original Message-----
> From: Wikimedia developers <wikitech-l(a)wikimedia.org>
> Sent: 08.08.06 17:25:24
> To: "Wikimedia developers" <wikitech-l(a)wikimedia.org>
> Subject: Re: [Wikitech-l] Fwd: HOWTO MediaZilla: MediaWiki extensions - New component
> On 08/08/06, Cyril DANGERVILLE <cyril.dangerville(a)gmail.com> wrote:
> > Hello,
> > I'd like to register a new extension (DynamicPageList2) on the
> > MediaZilla for bug reports. So my question is: how do I add a new
> > "Component" under Product "MediaWiki extensions"?
> > Thanks for your help.
>
> Ask Brion.
>
>
> Rob Church
This arose from a conversation on wikien-l.
There is some pretty strange behaviour involving the ISBN "magic word"
when used as the label in a piped link.
---- copied from that thread ----
In summary, it looks like:
* Plain text like "ISBN 123" gets mapped to
http://en.wikipedia.org/wiki/Special:Booksources/isbn123
* Plain text with slashes or hyphens or whatever is mapped to
different URLs (slashes, fwiw, are mapped to underscores), which all
behave identically - extraneous characters are treated as though
they're not there
http://en.wikipedia.org/wiki/Special:Booksources/isbn1_2-3
* You can access this page via [[Special:Booksources/isbn123]]
* When a link to an ISBN page like that is piped via an "ISBN" text,
the actual link is ignored, and the piped label is used instead:
[[Special:Booksources/isbn123|ISBN 456]] actually links to the ISBN
page for 456, and the link reads "ISBN 456".
So far so good? Well, it's odd, but explainable by the "ISBN ..." text
being parsed and treated later in the cycle. Now, here's the killer:
*[[Test|ISBN 456]] links to Test!
(see the example at http://en.wikipedia.org/wiki/User:Stevage/sandbox )
That's the one I can't understand - putting "ISBN xxx" in the piped
part of a link only overrides the actual link if it's to an ISBN
page. This is most unexpected. I suppose it has some strange "prevent
misleading ISBN links" benefit, but it's just...odd.
To be honest, the fact that the piped label gets "interpreted" for
magic words at all just looks like a bug, or an oversight, or a
misfeature or whatever. From some more testing it looks like some
"magic word" behaviour gets interpreted in piped links and some
doesn't - http:// links *don't* override the actual link for instance.
---- end copy ----
Stephen Bain and I were also wondering about the history of the ISBN
magic word. When did it appear, what was the reasoning etc? Since it
can be implemented perfectly well as a normal template
([[Special:Booksources/isbn{{{1}}}|ISBN {{{1}}}]]), was the decision
to make it a magic word just a legacy thing to make thousands of
existing ISBNs suddenly "come to life"?
Also, is it documented anywhere? Haven't looked anywhere except
http://meta.wikimedia.org/wiki/Help:Magic_words .
Thanks very much,
Steve
Thank you! I found it! I'll update the page at
http://meta.wikimedia.org/wiki/MediaWiki_extensions so the next person
can find them.
Thanks again,
Tim
> -------- Original Message --------
> Subject: Re: [Wikitech-l] Where are the Extensions
> From: jf(a)mormo.org (Jens Frank)
> Date: Mon, August 07, 2006 4:56 pm
> To: Wikimedia developers <wikitech-l(a)wikimedia.org>
>
> On Mon, Aug 07, 2006 at 02:53:25PM -0700, tim(a)greenscourt.com wrote:
> > I am looking for the "Reviews" extension which I believe was written by
> > Magnus Manske and feel lost in a maze.
> >
> > Several messages posted to the MediaWiki-L mailing list refer to updates
> > to this code as follows:
> > Update of /cvsroot/wikipedia/extensions/Review
> > In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv22161
> >
> > The page at http://meta.wikimedia.org/wiki/MediaWiki_extensions links to
> > http://cvs.sourceforge.net/viewcvs.py/wikipedia/extensions/#dirlist, but
> > this location just times out for me.
> >
> > I've tried going to http://sourceforge.net/projects/wikipedia/ but I
> > can't seem to find where the extensions are hiding there.
>
> We don't use sourceforge's CVS any more. Check our SVN repository at
> http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/
>
> Regards,
>
> JeLuF
Hi Julien,
> I have written a "google suggest" like service for wikipedia under GPL
> licence [1].
> By the way, I am not sure if this project will interest you, but I am
> open to all comments from your community.
Yes!! Good stuff!
== UI stuff ==
The only two constructive suggestions for the User-Interface that I
would make are:
1) If the user presses 'Enter' in the search textbox whilst typing out
a query, automatically choose/open/redirect to the first item in the
list. That way I can type out what I want, and press Enter to open the
first link once I've typed enough to specify it well enough to get it
to the top of the list, all without using the mouse.
2) Allow the user to press the down/up arrows to select/highlight a
specified entry on the list (including but not limited to the first
item), and press Enter to open it. That way again the user can be lazy
and can select a link without using the mouse, and without typing out
the full title.
== More Technical stuff ==
1) How do you handle pages with the same title, but different
capitalization? They're rare, but they do occur. My suspicion from
scanning Analyzer.cpp is that you just take the most popular. However,
if it's for search, it would be best to include everything (I think).
2) It doesn't seem to include redirects. For example, when I search
for "Formula weight", it's not listed, but on the EN Wikipedia
"Formula weight" is a redirect to "Atomic mass". It would definitely
be better to include redirects (in my personal opinion).
However, the downside of including these two things is that the amount
of data that you need to store goes up. I've actually had a go at a
very similar problem (storing a memory index of all article names, but
meeting the two conditions specified above, plus for redirects I would
also store the name of the article it redirected to [something which
could potentially also be useful for your suggest service if you
wanted to show this information too]). However, this was in PHP, and it
was a complete memory hog (think > 1 GB for the memory index). My
solution (since it was only for me) was to just "buy more RAM", but I
like your approach of making it more efficient. By the way, the reason
I was doing this was for suggesting links that could be made in wiki
text - just so you know I'm not in competition with you, but the
problems we face are similar in some ways, and could maybe benefit
from a common solution.
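To give a concrete picture, here is a trimmed-down sketch in PHP of the
kind of in-memory index I mean (the structure matches what I tried, but
the names and the sample data are invented for the example):

<?php
// Every NS:0 title maps either to null (an ordinary article) or to the
// title it redirects to; capitalisation variants are kept as separate keys.
$titleIndex = array(
    'Atomic mass'    => null,
    'Formula weight' => 'Atomic mass',   // redirect, with its target stored
    'Formula Weight' => 'Atomic mass',
);

function suggestTitles( array $index, $prefix ) {
    $matches = array();
    foreach ( $index as $title => $redirectTarget ) {
        if ( strncasecmp( $title, $prefix, strlen( $prefix ) ) === 0 ) {
            // For redirects, show where they point as well.
            $matches[] = ( $redirectTarget === null )
                ? $title
                : "$title -> $redirectTarget";
        }
    }
    return $matches;
}

// suggestTitles( $titleIndex, 'formula' ) returns both capitalisations,
// each pointing at 'Atomic mass'.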
How big would a memory index be that had these properties (i.e.
including all NS:0 articles/redirects, and maybe including the targets
of redirects)?
All the best,
Nick.
Last month, someone (from the US Navy, actually) wrote me to check on
the status of my stable versions extension. I told him it basically
should work, but had never been tested in a live environment. He got it
to work (on a 1.6 MediaWiki) and tells me "everything is working fine",
with some minor hacks he made, which I checked into SVN for him.
There is an issue with caching, though, which should prove to be no real
obstacle, just a little work to fix.
Sadly, I didn't have time to come to Wikimania this year, but I listened
to Jimbo's opening speech today. I was very pleased to hear that
finally, we're going to get stable versions, and as a German, I concur
that de.wikipedia is a perfect testing environment for this (we've
banned all non-free images, and stubs; stable versions are the logical
next step). I also agree with Jimbo to just start with the simplest
possible solution, see how it works, and modify it from there. Releasing
software very early has been my motto from the start, much to the pain
of our CTO ;-)
Anyway, I'd like to know if there'd be a point in me fixing the caching
issue. Brion, will you roll your own implementation, or use mine? If
it's the latter, should I apply final polish to it so you can get single
login up and running first?
Apart from that, some general thoughts:
While the "simplest possible solution" would, of course, be another
field slapped to the page table, it seems to me that any expansion from
there (multiple stable versions, different kinds of stable versions -
vandalism-free to peer-reviewed, etc.) will end up using its own table.
Should there be a new group of "stable version editors"? Should these be
initially identical with admins to give it a kick-start? But that would
already be covered with the code we have, right?
Don't forget to implement "oldid=stable" to return the stable revision.
What if there's no stable revision? Return a blank page, a note, or just
the current version?
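Just to make that fallback concrete: showing the current version when
nothing has been marked stable yet could be as simple as the sketch
below (the page_stable_rev field and the function name are invented
for the example; page_latest is the existing field):

<?php
// Hypothetical resolution of "oldid=stable" for a page row.
function getRevisionToShow( $pageRow, $showStableByDefault ) {
    if ( $showStableByDefault && !empty( $pageRow->page_stable_rev ) ) {
        return $pageRow->page_stable_rev;   // a revision has been marked stable
    }
    // No stable revision yet (or the user prefers the current version):
    // fall back to the latest revision instead of a blank page.
    return $pageRow->page_latest;
}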
Who's gonna see the stable version, if there is any? Anons, of course.
What about the google bots? Probably the same. And new users? Yes, to
prevent culture shock ;-) And add a user option to change that. Current
users, however, should have the "show stable version by default" option
turned /off/ by default to keep the current behaviour working. Least
surprise, right?
Oh, and, of course, an additional dump with just the stable versions
instead of the current ones. Also a "mixed" one, with the stable version
where one exists, and the current one otherwise?
There should be a special page to list pages
* with stable versions
* without stable versions
* with a stable version, sorted by how much time/how many edits/how
many bytes of difference there are between the stable and the current
version (a rough query for that is sketched below)
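Something along these lines could sit behind that last listing as a
starting point - again using the invented page_stable_rev field, and
only finding the pages that have changed since being marked stable
(the sorting would need more work):

<?php
// Hypothetical query: pages whose stable revision exists but is no
// longer the latest one.
$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select(
    'page',
    array( 'page_namespace', 'page_title', 'page_latest', 'page_stable_rev' ),
    array(
        'page_stable_rev > 0',
        'page_stable_rev <> page_latest',   // something changed since marking
    ),
    __METHOD__
);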
So much from Germany, and have fun at the last Wikimania day,
Magnus