I ran into our coding conventions for creating elements in JS:
https://www.mediawiki.org/wiki/Manual:Coding_conventions/JavaScript#Creatin…
var $hello = $('<div>').text( 'Hello' );
// Not '<div/>'
// Not '<div></div>'
This looks like some really bad advice.
This dates back to an issue I ran into with jQuery 3 years ago:
https://forum.jquery.com/topic/ie-issue-with-append#14737000000469433
https://forum.jquery.com/topic/ie-issue-with-append#14737000000469445
Basically $( '<span class="foo">' ) will break completely in IE7/IE8.
jQuery supports /> on all elements (eg: $( '<span class="foo" />' ))
intentionally. It does internal replacements to support it as a syntax for
specifying elements. But besides that special case jQuery wants anything
passed to it to be valid markup. It just leaves the parsing of it up to
the browser and all the quirks the browser may have.
jQuery does special-case attribute-less $( '<div />' ), but this is a
performance enhancement. The fact that $( '<div>' ) does not break in
IE7/IE8 is an unintentional side effect of jQuery's lazy support for
special cases like $( '<img>' ), where the tag is self-closing and the
browser will not require a /. This is the ONLY case where jQuery
intentionally supports leaving out a closing tag or a self-closing /.
When devs consider `$( '<div>' )` OK they typically believe that `$( '<div
class="foo">' )` should be OK too. It looks like a special-cased way to
define an element, attributes and all. However the former is a
performance-enhancement side effect, and the latter, while it will look
like it works in Chrome and Firefox, actually relies entirely on quirky
error-correction behavior which differs between engines and breaks in the
IE7/IE8 engine.
The jQuery documentation also does not state that `$( '<div>' )` for
non-self-closing elements is supported:
http://api.jquery.com/jQuery/#jQuery2
Hence, I think we should change our coding conventions to always use `$(
'<div />' )`.
((And for anyone who suggests that developers should simply know to add
a / or </div> to <div> when they add any attributes to it: I bring up the
fact that our code style requires {} blocks and does not allow a bare
`if ( foo ) foo();`. This is the same thing.))
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
I'd like to give a shout-out to some people who made my day better:
+ Brian Wolff (bawolff), for thoughtful commentary on code and on our
social infrastructure
+ Lydia Pintscher, for diligent, thoughtful and cheerful shepherding of
Wikidata conversation
+ Daniel Zahn, for adding Bugzilla tracking to
http://status.wikimedia.org/8777/263658/Bugzilla
How about a little email thread for us to say nice things about each
other? Rules: be kind, thank someone, and say why you're thanking them.
[If you don't get thanked and you feel mopey, email me and I'll comfort
you. :-)]
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
Hello,
Recently I noticed that keywords in bugzilla get
updated more and more often, mostly with keywords
like "patch", "patch-need-review", etc.
I am wondering what to do in the following situations
(like https://bugzilla.wikimedia.org/show_bug.cgi?id=39635
for example):
- user A posts a patch
- the bug gets "patch", "patch-need-review"
- user B posts a different patch and says
he does not like A's patch
- user B submits change to gerrit
When "need-review" should be removed? What are replacements
if any? What if I believe that core ideas behind the
patch are wrong? What if I just think the implementation
should be improved? What it it's more or less okay?
I see only "patch-reviewed" in the keywords - which can be
both negative and positive.
Before I open a whole can of worms by asking how those
keywords relate to the Gerrit workflow
we have, maybe the current bugmeisters could explain
how they use those keywords and how we can help?
//Saper
Hello all,
test.wikipedia.org (hostname srv193) is currently running Ubuntu 12.04
precise, as well as a slightly updated configuration package and a new
(minor update only) version of php.
Tomorrow morning PST, I will be putting two apaches, srv194 and srv281,
into the production apache pool with precise and the new
packages/configurations.
I have already done some manual testing against them, as recommended by
Tim, and they seem to be behaving well, but please let me know if you
notice anything unusual going on with them.
Once this is concluded, I will move on to building the Apaches in our
Equinix datacenter.
Please note that these are only "regular" apaches, not imagescalers. They are
next on my todo list and I will send an update when testing/deployment of
them begins.
Questions? Comments? Concerns?
--peter
Hi,
There are a number of services running on our wikis, such as English
Wikipedia, which, for example, generate some information and
frequently update certain pages with that data (statistics etc.). That
creates tons of revisions that will never be needed in the future and
which will sit in the database with no option to remove them.
For this reason it would be nice to have an option to create a special
page on a wiki which would have only 1 revision whose data simply gets
overwritten, so that it wouldn't eat any extra space in the DB. I know it
doesn't look like a big issue, but there are a number of such pages,
and the bots that update them generate a lot of megabytes every month. Bot
operators could, for example, use an external web server to display this
data, but that can't be transcluded using templates, like here:
http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Articles_for_creation/Su…
What do you think about this feature? Is it worth working on, or
is this not a big issue? Keep in mind that even if we have a lot of
storage, this unneeded data makes the dumps larger and makes the
processes that parse the XML files slower.
Hello,
Platonides requested that Jenkins run unit tests using
$wgDevelopmentWarnings. That will trigger an error in the test suite
and mark the build as failing whenever a test uses a deprecated method.
As mentioned in the bug report, if the test is legit and is actually
testing a deprecated method, you can override the warning by using:
$this->hideDeprecated('deprecatedMethod').
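For illustration, here is a rough sketch of what such a test could look
like (untested; SomeClass::oldMethod is a made-up placeholder, only the
hideDeprecated() call itself is the method mentioned above):

  // Hypothetical test case; SomeClass::oldMethod stands in for whatever
  // deprecated method is intentionally under test.
  class SomeClassTest extends MediaWikiTestCase {
      public function testOldMethodStillWorks() {
          // Suppress the deprecation warning so the build does not fail.
          $this->hideDeprecated( 'SomeClass::oldMethod' );

          $obj = new SomeClass();
          $this->assertSame( 'expected value', $obj->oldMethod() );
      }
  }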
Daniel Kinzler fixed several of these in the Wikidata branch; you can
look at https://gerrit.wikimedia.org/r/#/c/21732/ | 96f7db3f
Bug report: https://bugzilla.wikimedia.org/38882
--
Antoine "hashar" Musso
Hi,
It seems like some people are saying that getting workable code out of the
Google Summer of Code is a relatively minor aspect of the program - maybe
the third or fourth most important outcome of it, behind things like
finding new developers, getting better at mentoring, teaching software
development, etc. I disagree with this view - I think creating useful
projects is the most important part of GSoC, and that this aspect of it
should be taken much more seriously than it is. That's for a few reasons:
1) It's Google's money that's paying for all of this, and in their listing
of goals for GSoC, they do list most of these things in some form, but the
#1 goal they list is "Create and release open source code for the benefit
of all". [1]
2) It's a real waste to not use this resource to get actual improvements to
the software, given the (relatively) limited resources that the Wikimedia
Foundation and MediaWiki have. GSoC offers free money, and a framework, for
getting development done - it's a tremendous resource that should be taken
full advantage of.
3) As MZ notes, even from the vantage point of recruiting developers,
having unsuccessful projects is a problem - putting all that effort in,
with nothing in the end to show for it, can be demoralizing. And that sort
of thing can accumulate: if talented young open-source developers are
considering doing a GSoC project for Wikimedia, and they look at the track
record of the WMF's previous GSoC projects, they may draw negative
conclusions about it, and maybe about doing MediaWiki development in
general.
So how can this be improved? Having mentored three successful GSoC projects
for the WMF, I have some perspective on this. As MZ also notes, WMF
projects so far have tended to be too large in scope for the length of time
allotted. But that's not something that's easy to correct - estimating
development time is always a challenge, even when the developers involved
are experienced and not newbies. And smaller projects tend not to inspire
anyone, mentors or students.
So here is my proposal: there should be in place some plan of action,
ideally for every project, in case it goes overlong and doesn't get
completed in time. That can take several forms: a commitment by the mentor
or another experienced developer to finish up the project; a commitment by
the WMF to fund the student to finish it, if the student turns out to be
serious and it's just a simple lack of time that was the issue; a
commitment by the WMF to fund someone else to finish it; or just a
commitment by the student to finish it themselves, on a volunteer basis.
The last of those is tricky, because that tends to be the conclusion at the
end of uncompleted projects anyway - the student keeps working on it, but
that usually only lasts for a few months before the student's school work
and/or regular work get in the way, and then the project often falls by the
wayside. A commitment on the WMF and/or mentor side would be the ideal
thing - and if there's no such guarantee for a specific project, then that
should be taken into consideration when deciding on whether to accept it.
Of the three projects I mentored, none of them produced something that was
fully usable on the final day of GSoC. In all three cases, more work was
put in - the first time by the student, the other two times by me. The
post-GSoC work was significant in all cases, but it was still a lot less
than it would have been to try to do the project from scratch. It was a
comparatively small amount of effort, to get the code to at least
"beta"-level or higher, that ended up making a huge difference. Something
like that, from what I've seen, is almost always needed - so it should be
factored into the planning.
[1]
http://www.google-melange.com/document/show/gsoc_program/google/gsoc2012/fa…
-Yaron
--
WikiWorks · MediaWiki Consulting · http://wikiworks.com
Hello,
Is it possible to customise the section numbers displayed by the wiki
(with the magic word __NUMBEREDHEADINGS__, or through the user
preference)?
We are developing an open educational resource for African teachers,
consisting of several units, e.g.
http://orbit.educ.cam.ac.uk/wiki/OER4Schools/3.2_Supporting_reasoning_and_m…
We'd like to prefix each section within the wiki page with the unit
number (3.2 in this case), i.e. change the numbering from 1, 2, 3, ...
to 3.2.1, 3.2.2, 3.2.3, etc.
One solution would be to implement numbering manually (turning wiki
numbering off with __NONUMBEREDHEADINGS__; then using parser functions
and the Variables extension to reinsert custom numbering), e.g. as
= {{unitnumber}}.{{sectionnumber}} My section =
== {{unitnumber}}.{{sectionnumber}}.{{subsectionnumber}} My subsection ==
However, that's obviously cumbersome. More elegant would be something like
= {{thenumber}} My section =
== {{thenumber}} My sub section ==
where the {{thenumber}} template works out which (sub)section it's in
and generates the number accordingly. Even more elegant would be
something like
{{sectionnumberprefix|3.2}}
= My section =
== My sub section ==
Any thoughts on how one might implement that?
Many thanks!
Bjoern
Aaron, thanks for your reply. On a technical level you are simply saying that it is not currently possible. I'll report that at the wiki so we can vote on a different option.
But I respectfully disagree regarding the value of the function, and hope you will take an alternative view into account. You wrote:
>You cannot have pages "use the latest quality version" as the default version.
>This would create a very confusing interface that takes a mouth full to explain.
But there would be nothing confusing about it at all:
"When a page reaches X level of quality, that version becomes the default."
What is confusing about that? Not only is it completely straightforward, but it would seem to be a very basic function for an extension of this sort. At the very least as an option.
You also correctly write that:
>Also, it's hard enough to keep "checked" versions up to date, even harder for "quality" ones. You don't want
>to end up with people having their edits take weeks (sometimes months) to show to readers because they
>haven't been highly proofed yet.
I agree of course that this can be a problem for any installation of "Flagged Revisions", no matter what the configuration. But a configuration such as the one I'm asking about wouldn't make that problem any worse. On the contrary, it would make it far easier to deal with, because only a minority of highly feted pages would ever wait to be proofed. It would certainly be far *less* of a problem than the option that is currently offered, namely that the latest reviewed version can be the default for *all* pages, no matter how high or low the quality.
Did the option I'm asking about ever exist in any previous version of the extension? Since this is such a critical and important Wikimedia extension, is there any appropriate forum for focused discussion of it? (Or is here at Wikitech the right place?)
Hi all
I think it would be extremely useful to allow nested database transactions - or
simulate them using a counter that would only do the actual commit after
commit() has been called as many times as begin() was called before.
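(To illustrate what I mean by counting, here is a rough, untested sketch;
the wrapper class and its names are made up, and only begin()/commit()
methods like those on Database are assumed:)

  class NestingTransactionWrapper {
      private $db;
      private $level = 0; // begin() calls not yet matched by commit()

      public function __construct( $db ) {
          $this->db = $db;
      }

      public function begin() {
          if ( $this->level === 0 ) {
              // Only the outermost begin() actually starts a transaction.
              $this->db->begin( __METHOD__ );
          }
          $this->level++;
      }

      public function commit() {
          $this->level--;
          if ( $this->level === 0 ) {
              // Only the outermost commit() actually commits.
              $this->db->commit( __METHOD__ );
          }
      }
  }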
This actually used to be the case, according to the comment on Database::trxLevel:
* Historically, transactions were allowed to be "nested". This is no
* longer supported, so this function really only returns a boolean.
This means that currently, if you call begin() while a transaction is already in
progress, the previous transaction is inadvertently committed, possibly causing
inconsistencies (at least on MySQL).
Why was this feature removed? Not counting transaction levels is causing a world
of pain for us on the Wikidata project, and I'm sure the same problem arises
elsewhere. Here's the problem:
* Before saving a change using WikiPage::doEdit(), I want to perform some checks
on the database, enforcing some global consistency constraints.
* The check should be in the same transaction, so the DB can't change after the
check but before the save.
* I can't open a transaction before my check, because WikiPage::doEdit() already
opens a transaction, which would in turn abort mine, causing the save to be
performed in a separate transaction after all.
* I could try to inject my check into WikiPage::doEdit() using some hook. That
may work, but it is cumbersome and annoying, and I have to hope that the hook is
never moved outside the transaction (hooks generally don't guarantee whether they
are called in a transaction or not).
Basically, any code that needs transactional logic needs to first check whether
a transaction is already ongoing, open a transaction if not, remember whether it
owns the transaction, and commit the transaction in the end only if it's owned
locally. This essentially implements the is-in-transaction-counter based on the
call stack.
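(Roughly, every such caller ends up writing something like the following
sketch; this is from memory and the exact begin()/commit() signatures may
differ:)

  $dbw = wfGetDB( DB_MASTER );

  // Only "own" the transaction if nobody further up the call stack
  // has already opened one.
  $ownTransaction = ( $dbw->trxLevel() == 0 );
  if ( $ownTransaction ) {
      $dbw->begin( __METHOD__ );
  }

  // ... consistency checks and the actual write go here ...

  if ( $ownTransaction ) {
      $dbw->commit( __METHOD__ );
  }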
Looking at the code, this kind of check is hardly ever done. So the code just
*assumes* that it is, or is not, called within a transaction. This is bad.
So... why was the nice and simple trxLevel counting removed? What would break if
we put it back? Is there some other magic method to do this safely and nicely?
Thanks,
Daniel
PS: btw, if the non-counting behavior is to be kept, Database::begin() should
really fail if a transaction is already in progress. Silently committing the
previous transaction is very likely to cause creeping, low-volume and
hard-to-track-down database corruption.