> Hi folks,
>
> I created categories in huwiki for pages with broken file links as written
> on
> https://bugzilla.wikimedia.org/show_bug.cgi?id=33413#c12
> That was 5 days ago:
> https://hu.wikipedia.org/w/index.php?title=MediaWiki:Broken-file-category&a…
>
> The old category is still not divided into the new categories! I tried
> waiting a few hours, then a few days, and purging articles and categories
> several times, but without any result.
> http://hu.wikipedia.org/wiki/Kateg%C3%B3ria:Hib%C3%A1s_f%C3%A1jlhivatkoz%C3…
> still has 1258 members from various namespaces after 5 days. Only about
> 100 pages have trickled into the new categories, one by one, very slowly.
> http://hu.wikipedia.org/wiki/Algoritmus is an example: the article itself
> shows the category Hibás fájlhivatkozásokat tartalmazó szócikkek
> <http://hu.wikipedia.org/wiki/Kateg%C3%B3ria:Hib%C3%A1s_f%C3%A1jlhivatkoz%C3…>
> as expected (a hidden category!), but opening that category you won't see
> the article in it. And so on.
>
> How is that possible?
>
>
>
> --
> Bináris
>
Hi.
Basically the category will only change the next time the page has a
links update. This can happen in the following ways:
* Someone edits the page
* Someone does a null edit (not a purge) of the page [i.e. open the page,
change nothing, hit save]
* Someone edits a template used on the page
In particular, doing ?action=purge does *not* update the categories
used on a page.
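For what it's worth, a null edit can also be scripted through the action
API. The sketch below (plain JavaScript, hypothetical helper names; the
CSRF token would normally be fetched first via action=query&meta=tokens)
only builds the POST parameters you would send to /w/api.php, rather than
sending them:

```javascript
// Build the POST parameters for a null edit via the MediaWiki action API.
// Appending an empty string saves the page unchanged, which triggers a
// links update and refreshes category membership; action=purge does not.
function buildNullEditParams(title, csrfToken) {
  return {
    action: 'edit',
    title: title,
    appendtext: '',   // change nothing
    minor: true,
    token: csrfToken,
    format: 'json'
  };
}

// Example: parameters for a null edit of the article mentioned above.
const params = buildNullEditParams('Algoritmus', '+\\');
```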
https://bugzilla.wikimedia.org/show_bug.cgi?id=28876 is the bug for
making the category change happen quickly. It's on my list of bugs that
I think would be nice to have fixed.
Cheers,
Bawolff
Hi,
We are writing QUnit tests for the Narayam extension and we ran into
some trouble simulating JavaScript events. The extension
transliterates ASCII Latin characters into characters which are hard
to type without installing keyboard layouts, and it's used in some
Wikimedia projects.
The problem is in the
extensions/Narayam/tests/qunit/ext.narayam.rules.tests.js file. It is
successful in testing the keyboard layouts that are currently present
there, but the same method doesn't work for testing layouts that have
to keep a key buffer - to remember the keys that were typed before the
last character. This is the case for German, for example, where '~o'
is supposed to be converted to 'ö'. It works when we actually type,
but if we try to test it using QUnit, it yields an empty string,
because by the time we get to the 'o', the '~' is forgotten.
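The buffer problem can be modelled outside the browser. The following is
a minimal, hypothetical sketch in plain JavaScript (not Narayam's actual
code) of a rule engine that needs a key buffer: the sequence only works
when every key goes through the same stateful instance, which is
presumably what the per-key synthetic events in the test fail to do:

```javascript
// Minimal model of a transliteration scheme with a key buffer:
// '~' followed by 'o' should produce 'ö' (the German example above).
function makeTransliterator() {
  let buffer = '';                  // remembers preceding keystrokes
  return function onKey(ch) {
    if (buffer + ch === '~o') {     // rule: ~o -> ö
      buffer = '';
      return '\u00F6';
    }
    if (ch === '~') {               // start of a multi-key sequence
      buffer = '~';
      return '';                    // emit nothing yet
    }
    buffer = '';
    return ch;
  };
}

// Keys fed through ONE instance keep the buffer and work:
const t = makeTransliterator();
const ok = t('~') + t('o');         // 'ö'

// If each key effectively reaches a fresh state (as a naive synthetic
// keyboard event can do), the '~' is forgotten before the 'o' arrives:
const broken = makeTransliterator()('~') + makeTransliterator()('o');
```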
Does anybody have an idea on how to solve this?
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
See also https://bugzilla.wikimedia.org/show_bug.cgi?id=34763
I'll try to give a concise description of a not-so-easy-to-describe problem.
Background:
The RSS extension has some built-in code that does some parsing internally
(rationale: it uses two templates with which users or admins can
configure the actual feed layout on a wiki page).
My problem, while working on a final solution for bug 34763, is:
==============================================
After encapsulating some sanitized HTML text (I use
$parser->insertStripItem() as suggested by Bawolff),
my parser hook function renderFeed returns with
return $parser->recursiveTagParse( $renderedFeed );
As an alternative I already tried using sandboxParse as described on
https://www.mediawiki.org/wiki/Manual:Special_pages#workaround_.231
but it has the same effect.
Typical (valid and wanted) content of $renderedFeed at that moment is
{{MediaWiki:Rss-feed | title = Test 3 | link =
https://www.example.com/User:WikiSysop | date = (2012-02-22 22:09:00) |
author = WikiSysop | description = UNIQ67172f774bad4ca9-item-1--QINU }}
{{MediaWiki:Rss-feed | title = Test 4 | link =
https://www.example2.com/User:Alice | date = (2012-02-21 22:09:00) |
author = Alice | description = UNIQ67172f774bad1234-item-2--QINU }}
...
The recursiveTagParse (or sandboxParse) above parses the templates
(here: MediaWiki:Rss-feed), which is OK,
but it also _un_strips the formerly intentionally encapsulated HTML, which is not OK.
The early unstripping is unwanted, because the (sanitized) HTML should
be rendered later, at the place where the MediaWiki:Rss-feed template
wants to have it.
The final unstripping should take place only when the wiki page is
finally rendered.
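For readers unfamiliar with strip markers, the mechanism can be modelled
in a few lines. This is a simplified, hypothetical model in JavaScript,
not MediaWiki's PHP implementation: insertStripItem() hides the raw HTML
behind a UNIQ...QINU placeholder, and unstripping substitutes it back.
The bug here is that recursiveTagParse() performs that substitution too
early:

```javascript
// Simplified model of the parser's strip-marker mechanism.
function makeStripState() {
  const items = new Map();
  let n = 0;
  return {
    // Like Parser::insertStripItem(): hide raw HTML behind a marker.
    insertStripItem(html) {
      const marker = `UNIQ-item-${++n}-QINU`;
      items.set(marker, html);
      return marker;
    },
    // Like the final unstrip: substitute the HTML back in.
    unstrip(text) {
      let out = text;
      for (const [marker, html] of items) {
        out = out.split(marker).join(html);
      }
      return out;
    }
  };
}

const state = makeStripState();
const marker = state.insertStripItem('<b>safe HTML</b>');
// The marker should survive template expansion untouched...
const wikitext = `{{MediaWiki:Rss-feed | description = ${marker} }}`;
// ...and be substituted back only at final render time.
const final = state.unstrip(wikitext);
```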
My questions:
==========
1. Is there a way to use recursiveTagParse so that it parses everything
but does not unstripGeneral() the encapsulated items?
2. Do I have to use an additional hook?
3. Your feedback is appreciated on
https://bugzilla.wikimedia.org/show_bug.cgi?id=34763
Tom
[[User:Bolatbek]] has asked for help on kkwiki with styles and
Javascript. Since my ability to help in these areas is limited, I'm
putting this out to wikitech-l:
It looks like (some?) infoboxes are right-aligned now. See, for
example, [[kk:Қазақстан]]: http://hexm.de/g6
You can find reports of more problems on his talk page, especially this
bit: https://kk.wikipedia.org/w/index.php?diff=870529&oldid=495994
Thanks.
--
Mark A. Hershberger
Bugmeister
Wikimedia Foundation
mah(a)wikimedia.org
John Du Hart wrote:
> I really don't understand why we'd rather suffer than use a
> superior proprietary product.
David Gerard pointed out:
> https://wikimediafoundation.org/wiki/Values
>
> Duplicability down to the infrastructure is considered extremely
> important, or the free content isn't free. "Open core" fails this
> test.
Not to mention that open source allows WMF and volunteers to fix bugs
and add features themselves. Non-FOSS software also greatly decreases the
chance of available volunteer expertise and help (unless the software
happens to be extraordinarily popular, e.g. Windows or Word).
--
Greg Sabino Mullane greg(a)endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
Dear Mr James Alexander,
I am Eranga, a 3rd-year Computer Science and Engineering undergraduate
at the University of Moratuwa, Sri Lanka. I am interested in participating
in the GSoC 2012 programme with MediaWiki. I am currently developing a
multimedia extension for MediaWiki which synchronizes videos with other
content. I went through the ideas list on the MediaWiki GSoC 2012 page.
Among those ideas, I'm interested in the "who's been awesome?" idea,
which you published. I would like to know whether someone is already
working on this idea. If it is still available, I'm keen to start
developing it with your support.
Thank you,
Undergraduate Computer Science and Engineering
University of Moratuwa
Student Member IEEE