Is it just me, or is Wikipedia horribly bogged down tonight, with
molasses-slow editing and numerous server errors? (I'm prepared
to believe it's just me, as I'm having trouble elsewhere, too.)
There's quite a bit of call for greater civility as a stronger rule on
wikien-l, and the listmods would be very happy to put this into
action. The trouble is, the current Mailman interface isn't very
fine-grained.
wikien-l is a problematic beast: it's an official forum for en:wp
discussion, and it's also the forum of last resort for many banned or
disfavoured editors. This creates an interesting dynamic. We'd love
to cut the rubbish down further, but we need better tools for
managing the posts of people who are on moderation.
A poster can be (a) free to post, (b) have their posts held for
moderation, or (c) kicked off the list. The second category, moderated
posts, show up randomly amongst a ridiculously large daily pile of
penis spam. This makes it laborious to keep even one or two frequent
contributors on strict moderation. Newbies always start moderated, so
as to keep address-morphing trolls at bay. (We've taken it off before
and they come right back.)
Things that would be nice:
* Some "reason" field for moderation, e.g. "first-time poster",
"warned about querulousness", "chronically uncivil".
* The moderation interface to put moderated posts at the beginning of
the list. Then we just have to check through the penis spam for
messages from unsubscribed posters.
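Pending proper Mailman support for a moderation "reason" field, a minimal side-file workaround could be sketched like this (purely illustrative: `modnotes.csv` and the helper names are invented here and are not part of Mailman; the listmods would maintain the file by hand alongside the moderation queue):

```python
import csv
from pathlib import Path

# Hypothetical side file kept by the listmods, one (address, reason) row per line.
NOTES_FILE = Path("modnotes.csv")

def add_note(address, reason):
    """Record why an address is on moderation, e.g. 'first-time poster'."""
    with NOTES_FILE.open("a", newline="") as f:
        csv.writer(f).writerow([address.lower(), reason])

def reason_for(address):
    """Look up the recorded moderation reason for an address, if any."""
    if not NOTES_FILE.exists():
        return None
    with NOTES_FILE.open(newline="") as f:
        for addr, reason in csv.reader(f):
            if addr == address.lower():
                return reason
    return None
```

This does not change what the Mailman interface shows, but it at least gives moderators a shared record to consult when a held post turns up in the queue.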
Does anyone have Mailman knowledge that might be helpful on either of
these? (The Mailman 2.1 documentation is disorganised and incomplete,
so I might easily have missed something that says "tick here to make
these two things happen.")
- d.
> Earlier: There's quite a bit of call for ...
> stronger rule on wikien-l, and the listmods
> would be very happy to put this into action...
Peter Blaise ALWAYS responds: May I courteously suggest that those of
us who have NO PROBLEM operating our own inbox filters, and our own
scroll down arrow keys, and our own delete keys, probably outnumber the
ones who call for someone else to do that kind of filtering for them.
More importantly, I object to ANYONE filtering anything but spam out of
my inbox under the guise that they are doing me a favor.
All I expect any moderator to do is prevent or delete spam, delete and
warn about off topic posts (pretty much like spam, actually), and then
relax and let me do my own reading, searching, sorting, and selecting.
There are many threads I am not interested in; however, I do not expect
any moderator to delete those threads just because they don't suit my
interests today. May I (perennially) suggest that others who feel
burdened by threads they are not interested in get over it. Scroll
on.
However, if anyone, including a moderator, wants anyone else to write
and contribute differently, please, by all means, set an example of the
kinds of posts you prefer, and contact the person you find offensive on
or off list to resolve the misunderstanding. Email is perennially
accused of delivering errant emotive messages, and banning has never
resolved anything. We end up endlessly discussing the fine tuning of
any banning rules, discussions which I find even more repulsive than the
posts objected to in the first place!
Scroll on!
I found this thread about SVG in the Firefox <img>-tag bug
"external SVG not loaded from img tag":
https://bugzilla.mozilla.org/show_bug.cgi?id=276431
It cites some interesting ways to render SVG on Wikipedia that might
be useful inspiration for the Wikimedia developers here:
--- Comment #42 from Jeff Walden (remove +bmo to email) <jwalden+bmo(a)mit.edu>
2007-10-24 14:14:19 PDT ---
While SVG may in reality have been designed as a document format, its uses
in practice are primarily as an image/graphics format, whose natural home in
the mind of a web developer has always been in <img>. I don't think you can
reasonably fight this intuition, particularly without support from other
browsers.
--- Comment #43 from Guilherme Fonseca <fonseca(a)cs.umd.edu> 2007-10-24
14:42:29 PDT ---
Let's look, for example, at the following wikipedia image page:
http://en.wikipedia.org/wiki/Image:Hexahedron.svg
It contains a PNG preview of an SVG image and a link to it. I believe that it
would be cleaner if the preview used the SVG image itself, and left rendering
of the SVG for the web browser.
--- Comment #44 from Jeff Walden (remove +bmo to email) <jwalden+bmo(a)mit.edu>
2007-10-24 14:58:06 PDT ---
(In reply to comment #43)
> I believe that it would be cleaner if the preview used the svg image itself,
> and left rendering of the SVG for the web browser.
It would indeed, but I don't think that's quite the distinction being made.
The question is whether it's worthwhile to support <img> when perfectly
adequate support already exists in <iframe>, <object>, etc. with understood
and effective security mechanisms in place to prevent the SVG (or any other
document format loaded instead) from escaping its prison. (There's no
reason Wikipedia couldn't do that now, with Accept or user-agent detection.)
Those security mechanisms would have to be modified or reinvented for
supporting SVG in <img>, or the document-support code would have to be
modified to provide an <img>-style context; implementing either is a
decent-sized task with lots of potential for regressions (and security ones,
at that).
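The Accept-header detection mentioned in the comment above could be sketched roughly like this (an illustration only, not actual MediaWiki code; the function name is made up). A client that advertises `image/svg+xml` in its Accept header gets the SVG, everyone else gets the rasterised PNG preview:

```python
def pick_image_format(accept_header):
    """Return 'svg' if the client's Accept header advertises SVG support,
    otherwise fall back to the rasterised 'png' preview.

    This deliberately ignores q-values for simplicity; a production
    implementation would need full content-negotiation parsing.
    """
    for part in accept_header.split(","):
        # Strip parameters like ';q=0.9' and normalise the media type.
        mime = part.split(";")[0].strip().lower()
        if mime == "image/svg+xml":
            return "svg"
    return "png"
```

For example, `pick_image_format("image/svg+xml,image/png;q=0.9")` selects the SVG, while a header listing only raster types falls through to PNG.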
Added op_andi, op_joerg, op_markus, op_moritz: Ontoprise developers
working on Project HALO, interested in making commits to Semantic MediaWiki.
Daniel Schwen (dschwen): WikiMiniAtlas extension.
Paul Grinberg (gri6507): Various extensions currently published on
mediawiki.org.
-- Tim Starling
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.12alpha (r26940).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
Reading tests from "extensions/LabeledSectionTransclusion/lstParserTests.txt"...
17 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* Table security: embedded pipes (http://lists.wikimedia.org/mailman/htdig/wikitech-l/2006-April/022293.html) [Has never passed]
* Link containing double-single-quotes '' (bug 4598) [Has never passed]
* message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* HTML nested bullet list, open tags (bug 5497) [Has never passed]
* HTML nested ordered list, open tags (bug 5497) [Has never passed]
* Inline HTML vs wiki block nesting [Has never passed]
* Mixing markup for italics and bold [Has never passed]
* dt/dd/dl test [Has never passed]
* Images with the "|" character in the comment [Has never passed]
* Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 527 of 544 tests (96.88%)... 17 tests failed!
Is anyone here running MediaWiki on PHP 5.2.4? I tried upgrading from
5.1.2 to 5.2.4 (same config, Apache 2.2) and the load went straight
through the roof as soon as I restarted Apache. I had to go back to
5.1.2.
It's not likely something has changed in PHP to warrant such an effect, is it?
Travis
My apologies on that last blank email.
I will add to my explanation that yesterday I was able to extract over 2
million legitimate redirects. By legitimate I mean that both the source
and the target were pages existing in the page.sql file. I obtained
these by extracting them from the large pages-articles.xml file.
This 2 million number is very consistent with the 1 million which exist in
the redirect.sql file plus the 1 million more which I see listed in the
page.sql file (pages flagged as redirects). So my question stands: why are
these missing from the redirect.sql file?
Is this the right place to ask this question, or is there a more direct
contact I should make or bug I should file?
thanks!!
John
Date: Tue, 23 Oct 2007 10:43:58 -0500
From: "John Lehmann" <john.lehmann(a)gmail.com>
Subject: [Wikitech-l] 1 million redirects missing from redirect.sql
file?
To: wikitech-l(a)lists.wikimedia.org
Message-ID:
<91ba8f10710230843n3407bf4vefedc42269987ecb(a)mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
It looks to me like there are a large number (as many as 1 million)
redirects missing from the redirect.sql file.
My script extracts redirects from the redirect.sql file and resolves the
page IDs using the page.sql file. Most of these pages can be resolved
(about 1 million). However, when I scan the page.sql file for page names
which are flagged as redirects but were never resolved to any row in the
redirect.sql file, there are about 1 million more.
Here are some examples (the ones on the left are missing from redirect.sql),
derived from the 20070908 dump, though I believe the problem is not limited
to this date:
Alstrom's syndrome -> Alstrom syndrome
Tito's Handmade Vodka -> Tito's Vodka
Titov_Drvar -> Drvar
Another experiment which seems to confirm this: I can extract 2.4 million
redirects from the pages-articles.xml file, which is approximately the
number of redirects I get from redirect.sql plus the number which seem to
be missing according to page.sql.
Am I misunderstanding something?
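The cross-check described above can be made concrete once the dumps are parsed into ID sets (a sketch; it assumes you have already extracted the page IDs flagged by `page_is_redirect` from page.sql and the `rd_from` values from redirect.sql — the dump-parsing step itself is elided here):

```python
def missing_redirects(flagged_ids, redirect_rows):
    """Return the page IDs that page.sql flags as redirects but that have
    no corresponding row in redirect.sql.

    flagged_ids   -- iterable of page_id values where page_is_redirect = 1
    redirect_rows -- iterable of rd_from values taken from redirect.sql
    """
    return sorted(set(flagged_ids) - set(redirect_rows))
```

If the reporter's numbers hold, this set difference should come out at roughly a million IDs for the 20070908 dump.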
A related question: why does the redirect.sql file store the destination
link as a string rather than a page ID? The categorylinks.sql file does
this too. Is it just for readability, since it takes more effort to
construct linked tables?
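Resolving the string destinations yourself amounts to a join against page.sql (a sketch, under the assumption that the title-to-ID mapping built from page.sql fits in memory):

```python
def resolve_targets(redirects, title_to_id):
    """Map each (source_id, target_title) redirect to (source_id, target_id),
    dropping redirects whose target page does not exist, e.g. red links or
    pages deleted between dump passes.

    redirects   -- iterable of (rd_from, rd_title) pairs
    title_to_id -- dict of page_title -> page_id built from page.sql
    """
    resolved = []
    for source_id, title in redirects:
        target_id = title_to_id.get(title)
        if target_id is not None:
            resolved.append((source_id, target_id))
    return resolved
```

The dropped pairs are themselves informative: counting them tells you how many of the "missing" redirects point at titles that simply have no page row.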
I hope I have posted this in the right place.
thanks!!
John
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.12alpha (r26919).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
Reading tests from "extensions/LabeledSectionTransclusion/lstParserTests.txt"...
17 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* Table security: embedded pipes (http://lists.wikimedia.org/mailman/htdig/wikitech-l/2006-April/022293.html) [Has never passed]
* Link containing double-single-quotes '' (bug 4598) [Has never passed]
* message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* HTML nested bullet list, open tags (bug 5497) [Has never passed]
* HTML nested ordered list, open tags (bug 5497) [Has never passed]
* Inline HTML vs wiki block nesting [Has never passed]
* Mixing markup for italics and bold [Has never passed]
* dt/dd/dl test [Has never passed]
* Images with the "|" character in the comment [Has never passed]
* Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 527 of 544 tests (96.88%)... 17 tests failed!