These blank lines should not - under any circumstances - be here. But I
do know why they're there...
Tim Starling modified the standard distribution of JSMin[1] in some good
and some bad ways. These blank lines are the result of one of those
modifications, which I find misguided: he basically compressed only
horizontal white-space, leaving newline characters in place. The blank
lines you see are where the comments used to be.
I have made this point before, clearly to deaf ears, but I will make
it again.
* ResourceLoader has two modes: production and development (or debug)
mode.
* Production mode should be as fast as possible for users, period.
* Development mode should be as easy as possible for developers, period.
* Any attempt to blend the two only serves to diminish the
effectiveness of either mode.
If you want a version of the script that has not been compressed, add
debug=true to the URL or set $wgResourceLoaderDebug = true; in your
LocalSettings.php.
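For example, either of these does the trick (host and module names here
are just illustrative):

  http://example.org/load.php?debug=true&lang=en&modules=startup

  // In LocalSettings.php:
  $wgResourceLoaderDebug = true;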
This particular change (the "don't delete line breaks" part of r73196)
should be reverted, Tim's good changes should be pushed upstream, and we
should be using a standard JSMin distribution whenever possible.
- Trevor
[1]
http://www.mediawiki.org/w/index.php?title=Special:Code/MediaWiki/author/ts…
On 12/7/10 5:32 AM, jidanni(a)jidanni.org wrote:
> Why so many blank lines in this vector component?
> $ cat yy
> set 'http://transgender-taiwan.org/load.php?debug=false&lang=zh-tw&modules=start…'
> GET $1|perl -nwe 'print " $." if /^$/'
> $ sh yy # These lines are blank:
> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 20 26 27 28 29 32 33 34 37
>
> http://transgender-taiwan.org/load.php?debug=false&lang=zh-tw&modules=site&…
> is affected too.
>
> Yes these aren't meant for human consumption.
>
This may or may not be appropriate to this list -- this is where I
found most of the discussions on the matter, so I'm posting here.
From reading the past couple of weeks of messages, I surmise that
there isn't a way to get a current data dump (for enwiki), while the
server is fubar.
I have the 20100312 dump, which seems to be more recent than the others
available from archive.org, Amazon EC2, and elsewhere. However, even this
dump is significantly behind the current article revisions on
en.wikipedia.org.
I pulled 333 semi-random articles from the live API -- of those, 329
have significant content changes since the 20100312 dump.
Thus, my question:
What is the current preference/recommendation regarding pulling
significant quantities of articles (~250k) from the live API until
the dumps are available again?
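To make the question concrete, here is roughly what I have in mind (a
sketch only; the endpoint and titles are placeholders, and the maxlag
handling follows the API's documented behaviour):

  <?php
  // Batch-fetch current revisions from the live API, honouring the
  // maxlag parameter so requests back off when replication lag is high.
  $endpoint = 'http://en.wikipedia.org/w/api.php';
  $titles   = array( 'Foo', 'Bar', 'Baz' ); // placeholder article titles

  $url = $endpoint . '?' . http_build_query( array(
      'action' => 'query',
      'prop'   => 'revisions',
      'rvprop' => 'content',
      'titles' => implode( '|', $titles ), // up to 50 titles per request
      'maxlag' => 5,                       // fail fast if DB lag exceeds 5s
      'format' => 'php',
  ) );

  $response = unserialize( file_get_contents( $url ) );
  if ( isset( $response['error'] ) && $response['error']['code'] == 'maxlag' ) {
      sleep( 5 ); // lagged; wait, then retry the same batch
  }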
Sidenote 1: I'm in the process of uploading the 20100312 dump to a
public web location, in case it is helpful to others.
Sidenote 2: Is there any discussion regarding ensuring that current dumps
are mirrored in the future, say with archive.org?
--------------------------------------
James Linden
kodekrash(a)gmail.com
--------------------------------------
Could anybody help me locate a dump of mediawiki.org while the dump
server is broken please? I only need current revisions.
Thanks in advance.
Andrew Dunbar (hippietrail)
Hi,
The test plan for the MediaWiki installer is shared at
http://www.mediawiki.org/wiki/New_installer/Test_plan (based on the
functionality of 1.18 alpha).
The high-level test scenarios were broken down into high-, medium- and
low-priority tests. Basically, the high-priority scenarios cover
successful and unsuccessful MediaWiki installation work flows; other,
alternative scenarios are specified as medium priority, and UI-related
test scenarios are categorized as low priority.
Detailed test cases are derived from the high-level scenarios. Test cases
which can be automated are flagged as such in the detailed test cases and
will be automated using Selenium/PHPUnit. Initially the test scripts have
to be executed as independent scripts, since the current framework will
not work without an existing installation. Later, the same scripts could
be integrated with the Selenium framework, with suitable modifications to
the framework.
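As a purely hypothetical illustration of such an independent script
(assuming PHPUnit's autoloader is set up and a checkout is served at
http://localhost/mediawiki):

  <?php
  // Standalone smoke test for the installer entry point; runs without
  // an existing wiki installation.
  class InstallerSmokeTest extends PHPUnit_Framework_TestCase {
      public function testInstallerWelcomePageLoads() {
          $html = file_get_contents(
              'http://localhost/mediawiki/mw-config/index.php' );
          $this->assertContains( 'MediaWiki', $html,
              'Installer welcome page should mention MediaWiki' );
      }
  }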
Thanks & Regards,
Nadeesha Weerasinghe
Software QA Engineer
Calcey Technologies - www.calcey.com
Voice: +9411 282 7560, +1 415 462 1561 (US)
Fax: +9411 282 7561, +1 831 597 3678 (US)
Hi there -- I don't post much here, but I was the programmer on the
Multimedia Usability Project, which primarily focused on making uploads
easier. The outside funding for that project just ended, so I think it's
a good time to talk about what (if anything) we will do in the future
along these lines.
Going forward, we ought not to think of usability as the
responsibility of a few people in San Francisco. I have been asking myself
how we could end the need for usability projects, and instead make that
part of everyone's practice.
What makes you a usability engineer? My personal belief is that it isn't
(primarily) a matter of having special knowledge.
You become a usability software engineer when you see five average users
utterly fail to accomplish the task you wanted them to be able to
accomplish.
Programming is a hubristic enterprise, but for UI, these negative
feelings are essential: watching ordinary users get angry and frustrated
dealing with what you've created, even feeling a certain shame and
embarrassment that you got it so wrong. Only then do you see how large
the conceptual gap is between you and the average user -- but you also
usually come out of the experience with an immediate understanding of
how to fix things.
So is there a way to have *everybody* who develops software for end
users in our community have that experience? Maybe.
At the WMF, for these Usability Projects, we had to do formal studies
with expert consultants, because these were grant-funded projects and we
needed to present scientific data. Doing those studies is expensive and
time-consuming.
But as a developer, it was more valuable to do "hallway usability
testing" in an informal way. There are lots of startups these days that
try to deliver such lightweight user testing over the web; could we do
the same? Or, possibly we don't even need software; maybe what we need
is a tradition of doing this for everything we release.
So how about that? If there were an easy way to integrate usability
testing into your process as a developer, would you be interested? And
what should that look like?
--
Neil Kandalgaonkar <neilk(a)wikimedia.org>
Hello,
Lately on fr.wiki we have had a disruptive person who vandalizes daily
using transparent proxies. We have blocked the IP range he is on, but he
can still edit using those transparent proxies. With the CheckUser
tools we can see his IP address in the XFF headers. Would it be possible
to implement a way to block an IP address from editing through
transparent proxies using the XFF headers?
BTW there is an open bug for that:
https://bugzilla.wikimedia.org/show_bug.cgi?id=23343
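For what it's worth, I imagine a stopgap in LocalSettings.php could look
something like this (a rough, untested sketch: the getUserPermissionsErrors
hook is real, but the blocklist variable and address are made up):

  $wgXffBlockList = array( '198.51.100.23' ); // hypothetical proxy address

  $wgHooks['getUserPermissionsErrors'][] = 'checkXffBlock';
  function checkXffBlock( $title, $user, $action, &$result ) {
      global $wgXffBlockList;
      if ( $action !== 'edit' || !isset( $_SERVER['HTTP_X_FORWARDED_FOR'] ) ) {
          return true;
      }
      // XFF is a comma-separated chain of client/proxy addresses.
      $chain = explode( ',', $_SERVER['HTTP_X_FORWARDED_FOR'] );
      foreach ( array_map( 'trim', $chain ) as $ip ) {
          if ( in_array( $ip, $wgXffBlockList ) ) {
              $result = array( 'badaccess-group0' ); // deny the edit
              return false;
          }
      }
      return true;
  }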
Thanks in advance,
N.
You probably noticed, or heard in advance that it would happen, but I
figured I'd announce it anyway: 1.17 was branched today. The branch is
in /branches/REL1_17 and the branch point is r77974.
When you commit or find a revision post-r77974 that you feel should be
in 1.17 (bugfixes, typically, no new features), tag it with "1.17" in
CodeReview. Please do this even if you're not afraid of SVN merging,
so we can merge stuff in batches and keep the commit noise down.
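(For reference, a batch merge into the branch looks something like this;
the revision numbers are made up:)

  $ cd branches/REL1_17/phase3
  $ svn merge -c 78123,78150 \
        http://svn.wikimedia.org/svnroot/mediawiki/trunk/phase3 .
  $ svn commit -m "REL1_17: merge r78123, r78150 from trunk"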
Nothing has been merged yet, although a few things seem to have been
tagged already.
When you find something pre-r77974 that you think should be backed out
of 1.17 (i.e. reverted in the branch), tag it with "revert1.17". This
shouldn't happen very often, so feel free to do this yourself if you
know how. The only thing that's been backed out so far is a
refactoring of the skin system in r77893.
After merging/reverting, the tag should be removed.
I'd also like to call upon everyone to help out with code review. Of
course not everyone feels comfortable reviewing everything, but by all
means review what you feel comfortable reviewing. Even if you're not
an experienced reviewer/developer and only review trivial changes or
changes to one specific component you're familiar with, you're taking
work out of the hands of other reviewers.
Roan Kattouw (Catrope)
Can I again ask that the powers that be please assist in rolling out a
bug fix that the Wikisource sites need for usability:
https://bugzilla.wikimedia.org/show_bug.cgi?id=21526
The bug was reported over a year ago,
Reported: 2009-11-15 21:21 UTC by Simon Lipp
and fixed five months ago
ThomasV 2010-07-07 11:07:00 UTC
Thanks for patch and the detailed explanation. I commited it (r69139)
Implementation to servers? Unknown
Requests for implementation? Ignored
Any issues with means of implementation? Not known, not visible
Systematic route to implementation? Seemingly not followed, and forgotten
It is so incredibly frustrating to have to continually ask for a
simple fix to be put in place for a whole set of sister sites. It is
so incredibly humiliating to almost have to get down and beg that some
consideration be given to some of the smaller sites.
The utter silence that pervades these matters has moved beyond disregard
and into the disgraceful. That there is no communication, nor any
ability to even get an understanding of what can be expected, is now
well past disappointing and into unprofessional.
I understand that there is a big picture to consider; however, there
is also the matter of consideration, courtesy and respect, and these very
sadly seem to be missing. The situation seems to be moving from
neglect towards something approaching culpability on the part of the
management system.
If being noisy, squeaky and disruptive is the means to get
something implemented, we can go there; however, surely, surely,
surely, what we want is the polite and considerate.
billinghurst
----------------------------------------------------------------
This message was sent using iSage/AuNix webmail
http://www.isage.net.au/
I ran into an issue while experimenting with something today.
I created a voidbook skin with a SkinVoidBook class, which was going to
be a test skin duplicating MonoBook in a compiled template language.
But I did it using the extension technique of setting $wgValidSkinNames
and $wgAutoloadClasses, like any other 3rd-party skin.
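Concretely, the registration looked something like this (a sketch; the
path is simplified):

  $wgValidSkinNames['voidbook'] = 'VoidBook';
  $wgAutoloadClasses['SkinVoidBook'] = dirname( __FILE__ ) . '/VoidBook.php';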
I ran into an issue with the skin not being loaded.
After I debugged it I found out that for several years our skin system
has been doing something utterly screwed up...
Here's how Skin::newFromKey works with monobook.
With $key = "monobook" passed to it, the method calls Skin::getSkinNames()
to get the fully filled $wgValidSkinNames data.
$skinName becomes "MonoBook" and $className becomes "SkinMonobook"
`$className = 'Skin' . ucfirst( $key );`
The method does a class_exists check, triggering the autoloader to look
for a "SkinMonobook" class.
SkinMonobook is not found, so the method loads up skins/MonoBook.php
(after optionally loading skins/MonoBook.deps.php); this loads the
SkinMonoBook class.
class_exists returns true, and so `new $className` is called... This
creates an instance of SkinMonoBook from the $className SkinMonobook.
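In code, the flow described above is roughly this (paraphrased from the
behaviour I observed, not the verbatim source):

  $skinNames = Skin::getSkinNames();     // fully filled $wgValidSkinNames
  $skinName  = $skinNames[$key];         // "MonoBook" for $key = "monobook"
  $className = 'Skin' . ucfirst( $key ); // "SkinMonobook" -- lowercase 'b'!

  if ( !class_exists( $className ) ) {
      // Autoloader miss; fall back to requiring the skin file directly.
      if ( file_exists( "skins/{$skinName}.deps.php" ) ) {
          require_once( "skins/{$skinName}.deps.php" );
      }
      require_once( "skins/{$skinName}.php" ); // defines SkinMonoBook
  }
  return new $className; // case-insensitive, so we get SkinMonoBook anyway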
See the screw-up there? For a long time we've been sticking to naming
our skin classes by the SkinMonoBook convention... while really, the
skin system has been trying to load SkinMonobook, and it only succeeds
because we use require_once to directly load SkinMonoBook, and PHP's
class system happens to be case-insensitive, so when we ask for
SkinMonobook it gives us SkinMonoBook.
Problem! Our autoloader is NOT case insensitive.
So anyone who follows our internal conventions while creating a skin
inside an extension, and who happens to pick a skin name with a capital
letter in the middle of the word, gets a nasty surprise.
Instead of the way it played out with MonoBook, VoidBook gets this result.
$key = "voidbook"; Skin::newFromKey sets $skinName = "VoidBook";
$className = "SkinVoidbook";
class_exists calls our autoloader asking for "SkinVoidbook"; the class is
actually SkinVoidBook, so our autoloader does NOT load the class.
class_exists returns false, Skin::newFromKey continues along, doesn't
see skins/VoidBook.deps.php so it skips it... then it tries to
require skins/VoidBook.php; because this is an extension-based
skin, that trips up and we get a cryptic fatal PHP error.
Now, we've defined $wgValidSkinNames as an array mapping skin ids to the
names of those skins... however, from what I can see, convention violates
this notion: "cologne" is "CologneBlue", yet the skin's actual name is
"Cologne Blue"; "standard" is "Standard", yet the skin's actual name is
"Classic".
Despite the array being documented as a list of skin names, it really
appears to map lower-case skin ids to their proper-cased counterparts,
which, when prefixed with "Skin", give you the skin's class name, while
we use the i18n system for the real skin "name". This is fairly well
supported by the fact that we use that same value when we require a skin
file from the skins folder.
So to fix this bug, I propose we change the documented format of
$wgValidSkinNames to be an array mapping skin ids like "monobook" to the
proper-cased key used for building class names and requiring files, and
change `$className = 'Skin' . ucfirst( $key );` to
`$className = "Skin{$skinName}";` so that "monobook" will try to load
SkinMonoBook instead of SkinMonobook.
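Under that proposal the array would read, for example:

  $wgValidSkinNames = array(
      'monobook' => 'MonoBook',    // class SkinMonoBook, skins/MonoBook.php
      'cologne'  => 'CologneBlue', // i18n "name" is "Cologne Blue"
      'standard' => 'Standard',    // i18n "name" is "Classic"
  );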
As a side effect, it would also theoretically become possible to
alias skins by doing something like
`$wgValidSkinNames['whitebook'] = 'MonoBook';`, which, considering there
are already cases in the wild where people have duplicated MonoBook to
get varying styles (ReferenceBook, WhiteBook, etc.), would probably be a
desirable feature.
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]