This works for me. But isn't it a little too technical for the ordinary user?
Dominik
Add the following into your user CSS (e.g. [[User:Dominik/monobook.css]]):
/* make the link to the English Wikipedia bold */
.interwiki-en { font-weight: bold }
/* hide the links to the Catalan, Bosnian and Korean Wikipedias */
.interwiki-ca,
.interwiki-bs,
.interwiki-ko { display: none }
(of course, customize the language codes according to your wishes)
HTH,
-- [[cs:User:Mormegil | Petr Kadlec]]
I'd like to have semi-protection enabled at de.wikisource. I thought it
would come automatically with new software updates. Will it be enabled
in all wikis some day? Could somebody enable it at de.wikisource? Or am
I able to enable it myself? And if not, will there be a special page for
Bureaucrats to enable extensions some day? Then it wouldn't always be
the busy developers who have to do it.
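For reference: semi-protection is normally switched on per wiki in the site
configuration rather than via a special page. A minimal sketch of the relevant
LocalSettings.php lines (the values shown are illustrative assumptions, not
de.wikisource's actual settings):

// Add the 'autoconfirmed' protection level, which is what
// "semi-protection" uses under the hood.
$wgRestrictionLevels = array( '', 'autoconfirmed', 'sysop' );

// Minimum account age (in seconds) before a user counts as
// autoconfirmed; 4 days here is an illustrative value only.
$wgAutoConfirmAge = 4 * 24 * 3600;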
Jofi
Bureaucrat de.wikisource
Hello,
I committed a change which no longer invokes tidy for interface
messages. That means that every broken interface message (be it the
edit tools or whatever) will break the XHTML of the whole page, so
broken interface messages had better be fixed. This will affect many,
many pages. There's no need for us to call tidy for every small
snippet that is edited once a year. Which is great! :)
Some pages will see up to a 40% improvement in response time.
If anyone becomes especially tired of this change, set:
$wgAlwaysUseTidy = true;
Domas
> This is one template that could *really* do with something built into
> the software. If it was even slightly possible that I could work this
> myself, I would attempt to do so.
Agreed. Citations and references should really be built into the
software. Lack of sources in articles is a *much* bigger problem than
any of this meta-template nonsense.
Infoboxes and taxoboxes could be built-in, too.
In the meantime, "complex" single templates are better than 15
different forks of the same template.
I just had an inspiration! Now that I'm thinking about it, if this
counting-URLs-posted-in-the-last-X-hours thing works out, maybe the best way
to implement it would be to make it part of the captcha code: if you
try to post the same URL anywhere in the wiki more than X times in Y hours,
you have to answer a captcha. Perhaps then, at some higher threshold, it
either a) adds the URL to the spam blacklist or b) operates as a short-term
spam blacklist, so you can't post it more than some number of times (10? 25?)
in Y hours. This gives the community time to respond and blacklist the URL if
it is indeed spam. This still does not address the possibility that some
legitimate need may arise to post a URL onto 100 pages, but it is a much
"softer" yet still effective solution. I'm liking it better and better...
maybe set different thresholds for logged-in users vs. anonymous users...
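As a rough sketch of the escalation logic described above (the url_log table,
the helper functions and the thresholds are all hypothetical, not existing
MediaWiki code):

// Count how often $url has been added anywhere in the wiki during the
// last $hours hours, assuming a log table of (ul_url, ul_timestamp) rows.
function countRecentPostings( $db, $url, $hours ) {
    $cutoff = time() - $hours * 3600;
    return $db->selectField( 'url_log', 'COUNT(*)',
        array( 'ul_url' => $url, 'ul_timestamp > ' . $cutoff ) );
}

// On each edit that adds $url, escalate from captcha to blacklist.
$count = countRecentPostings( $db, $url, 24 );
if ( $count >= 25 ) {
    // hard stop: treat the URL as short-term blacklisted
    rejectEdit( 'This URL is temporarily blacklisted.' ); // hypothetical helper
} elseif ( $count >= 2 ) {
    // soft stop: require the poster to solve a captcha first
    requireCaptcha(); // hypothetical helper
}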
Best Regards,
Aerik
Phil Boswell wrote:
>Would this be checking for identical URLs, or simply multiple URLs
>referencing the same site?
>
>Because I can think of one concrete example where I have been adding many
>URLs: references to Placeopedia. Does the fact that I am using a small
>template help?
>Sometimes, you might have a WikiProject which is bringing their articles up
>to a standard which includes adding references, and if this means that a
>bunch of articles get similar URLs added in a short space of time, this
>might trigger your filter.
Phil, just to be clear, at this point I'm proposing a feasibility analysis
- but you make some very good points/questions. This would be for identical
URLs posted to different pages. The goal is to develop a filter that would
be able to block spammy behaviour, even if it's edits from multiple
users/IPs. I think if you've posted a template, that would NOT trigger it
(there's a hole - a spammer could create a spam template? but at least you
could quickly negate the effects by deleting the template... nothing's
perfect, I guess); only multiple posts of the same URL would.
I can't imagine where you'd have a project that would cause you to post the
exact same URL to a whole bunch of pages at once, but I'm sure it's
possible. Maybe we'd only want to apply the filter against anonymous
edits. Or, maybe triggering the filter would invoke Brion's captcha? I'm
not trying to nail down the implementation at this point, just the
feasibility. But this is a good discussion.
Thanks,
Aerik
In July 2006, we plan to release a short Esperanto documentary (20-30
min) on DVD and we would also like to include the complete Esperanto
Wikipedia for viewing offline. I looked at
http://static.wikipedia.org/ and read that if this were put on the
web, it would be a trademark violation. So, what would be the best
and easiest way to get a version of the Esperanto Wikipedia that users
could view on a DVD-ROM?
Thank you,
Chuck
I posted previously about this and got no responses, but I think it's a
really good idea, so I'm trying one more time. I've been thinking about a
filter that adds URLs to a blacklist after X number of repeat posts of that
URL within Y amount of time.
The idea is that under normal wiki behaviour it is very unlikely for a URL
to be referenced in edits more than X times in 24 hours (I'll postulate
twice, just for fun), but that is very common behaviour for a spammer.
Therefore, if one could ascertain (do some tests) that some large
percentage (say 99.9%) of valid (non-spam) URLs posted via edits occur fewer
than X times in 24 hours, you could put in a filter that automatically adds
those URLs to a blacklist while inconveniencing only a very small percentage
of users.
I think it's worth testing anyway. What I'm proposing is to collect the URLs
posted via edits over a short period of time (say a week) and do an analysis.
Anybody else like this idea? I'd be willing to write a script, but there'd
have to be a hook, or a small hack in EditPage, to call it.
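To make the data-collection step concrete, here is a rough sketch of what such
a logging hook might look like (the url_log table is made up, and the hook
signature is simplified from the real ArticleSaveComplete hook; treat this as
an illustration only):

// Extract external URLs from saved wikitext and log each one with a
// timestamp, so the posting frequency can be analysed later.
$wgHooks['ArticleSaveComplete'][] = 'logPostedUrls';

function logPostedUrls( $article, $user, $text ) {
    if ( preg_match_all( '!https?://[^\s\]<>"]+!', $text, $matches ) ) {
        $db = wfGetDB( DB_MASTER );
        foreach ( array_unique( $matches[0] ) as $url ) {
            $db->insert( 'url_log',
                array( 'ul_url' => $url, 'ul_timestamp' => time() ) );
        }
    }
    return true; // never block the save, just record
}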
Best Regards,
Aerik
Hi all,
I wrote a JavaScript that allows a quick preview of pages without any
server access.
I don't know about other languages, but many French users use the Preview
option (which costs bandwidth, SQL access and server CPU for parsing) to
reread their text many times before submitting changes, because reading wiki
code is not so easy. The idea is to provide a limited parser that allows
rereading the text without any cost for our servers.
You can test it on the French Wikipedia. Currently, the script only supports
(partly) the most basic features (titles, bold, italic, etc.), but it should
be easy to extend it to support almost all of the wiki syntax, except
missing articles (red links), templates and images. It works on Firefox,
Mozilla and Konqueror, and doesn't work (yet?) on... suspense... MSIE.
I think this optional feature (if JS is not active, the button just doesn't
appear) may save a lot of server resources and is really cheap (parsing
only executes if the Quick Preview button is pressed), and I would like
to have support from other developers to help improve it. It may be good
to include this feature in the MediaWiki software in the future.
Regards,
Aoineko / Guillaume Blanchard