I was trying to use this old extension: http://www.mediawiki.org/wiki/Extension:UserLoginLog to log login attempts. It used $wgMessageCache, so it worked fine until we moved to a 1.18 wiki. I tried to modernize it by creating a UserLoginLog.i18n.php file to hold the messages.
I have $wgExtensionMessagesFiles['userLoginLog'] = dirname( __FILE__ ) . "/UserLoginLog.i18n.php"; in the extension's setup file.
A sample method is:
function wfUserLoginLogSuccess( &$user ) {
    wfLoadExtensionMessages( 'userLoginLog' );
    // Record a 'success' entry in the 'userlogin' log for this user.
    $log = new LogPage( 'userlogin', false );
    $log->addEntry( 'success', $user->getUserPage(), wfGetIP() );
    return true;
}
The call to wfLoadExtensionMessages( 'userLoginLog' ) does not seem to do anything: the log entry shows <userlogin-success> instead of the message text.
I am not sure how to pass in the log's message name, or how to make it global now that $wgMessageCache is no longer available. Is there a document on how to use the logging feature in the wiki?
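For reference, the i18n file I created looks roughly like this -- the structure is what I pieced together from other extensions' i18n files, and the key name is the one that shows up unresolved in the log above:

<?php
// UserLoginLog.i18n.php -- structure only; 'userlogin-success' is the
// key that appears unresolved as <userlogin-success> in the log.
$messages = array();

$messages['en'] = array(
    'userlogin-success' => 'logged in successfully',
);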
Thanks,
Mary Beebe
Hi everyone,
I probably mistitled my last message in the "How to avoid a
post-branch code slush" thread. There seems to be a misconception
that the code slush is over.
Please go back and read the thread. The status quo is *there is still
a code slush*. What this means:
1. No big architectural changes without discussion. Not merely "hey,
I sent a message, and no one responded", but as in: you send a
message, and two or three people say "yeah, you're right, that needs
to happen, make it so" without serious dissent.
2. No big omnibus whitespace cleanups. These are of debatable use
any time, but now they make backporting a big pain-in-the-butt, and
we'll need to do a lot of backporting to 1.19.
3. Have a reviewer lined up ready to ok your revision *before you
commit it*. If you can't get a tentative commitment from a reviewer,
don't commit it.
We have a couple of different options in this interim period between
now and when we start using Git:
a. we let people commit as was previously normal into trunk, but we
only migrate the code that's been formally reviewed.
b. we insist on pre-commit review from now until we go live on Git.
So far, there hasn't been a lot of discussion on either option. Brion
and Roan both pointed out problems with option "a", while no one has
raised a serious objection to option "b".
Rob
On Wed, Feb 8, 2012 at 10:23 PM, Rob Lanphier <robla(a)wikimedia.org> wrote:
> On Wed, Feb 8, 2012 at 3:43 PM, Brion Vibber <brion(a)pobox.com> wrote:
>> This is one of the reasons I've been hoping we'd move to a more pre-commit
>> review model. Especially for big refactorings and cleanups that have
>> limited immediate value, we tend to get a lot of breakages and not a lot of
>> interest in fully reviewing them (eg actually checking all the code paths
>> to make sure they really work).
>
> Here's the thinking that led to where we are: the cutover to Git is
> the point at which we want to fully move to precommit. It seems like
> an enormous pain-in-the-butt to move to full precommit with our
> current toolset (SVN + CodeReview tool).
>
> However, we're much closer than we've ever been to having Git, and it
> may be worth dealing with some short-term pain.
>
>> To a certain degree, I'd actually consider it desirable to have a permanent
>> 'slush' to the extent that destabilizing work should *always* be talked out
>> a bit and tested before it lands on trunk/head/master.
>
> Yup, agree 100%.
>
> Let's all just pretend this has always been the status quo starting
> right now. I think we've already established that there used to be
> more liberal reversion, and that when that went away, so too went our
> ability to stay on top of the review queue.
>
>> If we're not ready to go fully git the instant we branch 1.19,
>
> Given that the branch just happened, and we're not ready yet, that's
> the case. Chad can give you more of an update, but my understanding
> is:
> * A (hopefully) final test migration of core is slated for this week.
> Chad believes he's got all of the blocking problems sorted out.
> * Extensions migration isn't going so smoothly. The same tools that
> work splendidly with core seem to crash with the very small subset of
> extensions that he's tried them on. Could be a minor problem
> that's easy to fix, or could be gawd-awful. TBD
> * We'd like a two-week window of warning/testing/playing around
> before making the cutover.
>
> All told, the current plan is beginning of March for core, middle of
> March for extensions. More details here:
> https://www.mediawiki.org/wiki/Git/Conversion
>
> ...and in the email that I hear Chad is writing :-)
>
>> we may wish
>> to consider applying more formal review to things proposed to go into trunk
>> on SVN. This may be simpler than attempting to synchronize SVN and git via
>> post-SVN-pre-git reviews...
>
> I'd be perfectly fine with either outcome (more formal pre-commit
> review, or picking our SVN->Git cutover point based on what's
> reviewed).
>
> Rob
I'd like to invite you to the annual Berlin hackathon.
This is the premier event for the MediaWiki and Wikimedia technical
community. We'll be hacking, designing, and socialising.
Our goals for the event are to bring 100-150 people together, with
lots of people who have not attended such events before. User
scripts, gadgets, API use, Toolserver, Wikimedia Labs, mobile,
structured data, templates -- if you are into any of these things, we
want you to come!
Some financial assistance will be available -- more details soon.
This event will be hosted by Wikimedia Germany (WMDE) and supported by
the Wikimedia Foundation. Thank you, WMDE!
Dates: June 1-3 2012. Barely-started wiki page, no registration details
yet: https://www.mediawiki.org/wiki/Berlin_Hackathon_2012 . Organizers:
me and WMDE's Nicole Ebber with assistance from Lydia Pintscher and
Daniel Kinzler.
Mark your calendars!
--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
I have been working with internal wikis for a while. We have several wikis that we edit within our company and then hand off to the client. The client just searches the information and does not make any further edits.
We now have a wiki that we want to make external so I need to make sure I am addressing everything.
As far as user groups, I have it set so that everyone can read the wiki and all logged-in users can edit. Only administrators can create a new user account. We have provided an email address for people to request authoring privileges. The server we are using for the wiki does not have a mail server. I am assuming that we would need a mail server on that machine for the wiki to email passwords to people. Is that true, or can we set it to use another mail server?
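From what I can tell, $wgSMTP in LocalSettings.php lets MediaWiki send mail through an external SMTP server instead of a local one; a minimal sketch, with placeholder host and credentials:

// In LocalSettings.php; host, port, and credentials are placeholders.
$wgSMTP = array(
    'host'     => 'smtp.example.com', // the external mail server
    'IDHost'   => 'example.com',      // domain used in Message-ID headers
    'port'     => 25,
    'auth'     => true,
    'username' => 'wikimailer',
    'password' => 'secret'
);

Is that the right approach, or is there a better one?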
Thanks,
Mary Beebe
Battelle Charlottesville, VA
434-951-2149
Hi, this idea has floated around for quite some time, but now that
bug 34257[1] was added to the long list of problems, I would like to
step up and make some progress. We[2] propose to remove the following
formats[3]:
* WDDX - doesn't seem to be used by anyone. Doesn't look sane either.
* YAML - we don't serve real YAML anyway; currently the output is just
JSON (which happens to be valid YAML).
* rawfm - was created for debugging the JSON formatter aeons ago, not
useful for anything now.
* txt, dbg, dump - the only reason they were added is that it was
possible to add them, they don't serve the purpose of
machine/machine communication.
So, only 3 formats would remain:
* JSON - *the* recommended API format
* XML - evil and clumsy, but sadly used too widely to be removed in the
foreseeable future
* php - this one is used by several extensions and probably by some
third-party reusers, so we won't remove it this time. However,
any new uses of it should be discouraged.
We plan to remove the aforementioned formats as soon as MediaWiki 1.19
is branched, so that these changes will take effect in 1.20, but we
would first like to hear whether there are good reasons to not do
this, or to postpone it. Please have your say.
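For anyone who needs to update client code, switching to the
recommended format is usually just a matter of asking for format=json;
a minimal PHP sketch (the endpoint URL is illustrative):

<?php
// Fetch siteinfo in the recommended JSON format; URL is illustrative.
$url = 'https://www.mediawiki.org/w/api.php'
    . '?action=query&meta=siteinfo&format=json';
$data = json_decode( file_get_contents( $url ), true );
echo $data['query']['general']['sitename'];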
------
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=34257
[2] Me and Roan Kattouw, one of API's primary developers
[3] https://www.mediawiki.org/wiki/API:Data_formats
--
Best regards,
Max Semenik ([[User:MaxSem]])
Hey all,
Over the last few extensions I've been creating, I got annoyed at having
to write the same database interaction code over and over again, and
ended up creating a DataObject class that encapsulates most of the work
and does some nice abstraction. The current version, as I have it in the
Education Program extension, is documented here:
https://www.mediawiki.org/wiki/User:Jeroen_De_Dauw/DBObject
An earlier version of this class which is included in the Contest extension
has been reviewed and is in use on mediawiki.org.
Since this is a very generic utility and is clearly useful in many
extensions (and probably in core as well), I would like to put it into
core for MW 1.20. If you see issues that need to be resolved before this
is done, please describe them.
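To give a rough idea of what it replaces, usage looks something like
this -- the subclass and method names below are illustrative only, not
the actual API (see the documentation page for that):

<?php
// Hypothetical subclass; the real table/field declarations are
// described on the documentation page linked above.
class ContestEntry extends DBObject {
    // table name, field definitions, defaults, ...
}

// Create and persist a row without hand-written SQL.
$entry = new ContestEntry( array( 'name' => 'Example', 'score' => 42 ) );
$entry->save(); // INSERT, or UPDATE if the object already has an id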
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil.
--
This is a question about an infrastructural detail of ResourceLoader and how it interacts with Internet Explorer. (It's my first post to wikitech-l, so apologies if it's the wrong forum.)
Our MediaWiki 1.17.0 site recently installed a bunch of extensions that use ResourceLoader, such as Extension:WikiEditor. To our surprise, some of our site's unrelated CSS styles stopped working. This was happening only in Internet Explorer. After some detective work, we discovered the problem is Internet Explorer's limit of 31 stylesheets:
http://support.microsoft.com/kb/262161
With so many extensions calling $wgOut->addModule [PHP] and mw.loader.load [JavaScript], the limit of 31 stylesheets is quickly exceeded. I removed a few mw.loader.load calls - it didn't matter which ones - and the problem went away.
Obviously this is an IE problem, not MediaWiki's, but it's going to cause issues on MediaWiki sites. WikiEditor itself loads about 10 stylesheets, for example, taking the site ~30% of the way toward a CSS failure.
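To make the arithmetic concrete: on the PHP side each extension typically pulls in its modules with something like the following (module names are illustrative), and every module can contribute one or more stylesheets:

// Each module registered here may add one or more <link> stylesheets.
$wgOut->addModules( array(
    'ext.wikiEditor',
    'ext.wikiEditor.toolbar',
    'ext.wikiEditor.dialogs',
) );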
So my questions are:
1. Is there a workaround for sites like mine, with many stylesheets from separate extensions loaded by ResourceLoader?
2. Should ResourceLoader address this IE problem? Maybe start combining stylesheets (with @import) automatically?
Thanks,
DanB
Earlier this week
(http://lists.wikimedia.org/pipermail/wikitech-l/2012-January/057638.html),
I wrote about the upcoming 1.19 release and the changes that need to be
made on-wiki to provide a consistent experience.
I've been thinking about this and getting input from on-wiki
administrators, but I'm interested in your thoughts.
I have talked to Krinkle and I understand his concerns -- JavaScript
dependencies really should be enumerated. Still, I'm concerned about
the experience for users who have installed several gadgets (which we
can test in Krinkle's "Tour de Wiki") as well as possible UserScripts
(which we really can't test, at least not as easily or as quickly).
I adjusted the [[MediaWiki:Gadgets-definition]] on enwiki.beta and let
the people on enwiki know what I had found, but I think this sort of
adjustment will be needed in more places. For proof, just look at
http://de.wikipedia.beta.wmflabs.org/wiki/Wikipedia:Hauptseite?debug=true
in FireBug.
You'll see (or, at least, I see) two unfulfilled dependencies on
mw.util.
Some sort of dependency needs to be added on mw.util -- either just
preload it or make it log a message when there is an unenumerated
dependency on it (and other necessary js dependencies).
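For gadgets specifically, I believe the dependency can be declared
right in [[MediaWiki:Gadgets-definition]] with a line along these
lines (the gadget name and file are illustrative):

* MyGadget[ResourceLoader|dependencies=mediawiki.util]|MyGadget.js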
This, plus a message on WP:VPT or the like, would be a way for users and
gadget authors to update their JavaScript. It would be a great way to
notify users of the deprecation from 1.18 to 1.20 (or 1.21) without
providing a horribly shocking experience after we upgrade the cluster.
Let's ensure that the Wiki* experience is consistent. We can avoid some
of the mistakes like those that happened when we introduced
ResourceLoader. Backwards compatibility is important. If we upgrade
MediaWiki and we know that people are going to complain because a
widespread dependency (like mw.util) disappeared, let's eliminate that
"experience gap".
Mark.
On Wed, Feb 8, 2012 at 4:40 AM, Petr Bena <benapetr(a)gmail.com> wrote:
> Hi, is there any update on branching? Thank you
Sam will be doing it soon.
After that, in theory, trunk will be open for post-deploy commits.
However, we *cannot* let the backlog build back up the way it did
before, and there's no way we can keep up with everything we need to do
for deployment (last-minute bugfixes, addressing fixmes regardless of
committer) while at the same time dealing with a flood of new commits.
A big problem with our current post-commit review regime is that it is
exactly at times like these that really regrettable changes can, and
probably do, get made. Many refactoring exercises happen without much
discussion on the mailing list. The code doesn't get reviewed, and
then it gets entangled with a lot of other important code to the point
that we're forced to forge ahead with a suboptimal refactoring
decision. In addition to building up a large review backlog, we also
find ourselves chasing pockets of breakage due in part to incomplete
refactoring and backwards-incompatible changes.
We're migrating to Git very soon after this release. It would really
suck to have a huge pile of unreviewed commits going into trunk. So,
I'm going to suggest a Git migration strategy that will avoid having a
monstrous backlog. Instead of cutting over trunk at the very latest
revision, we cut over at the latest revision that is fully reviewed
and ok'd. Everything before the 1.19 branch point would be
grandfathered in, but everything after would need to be reviewed and
ok'd. So, for example, if r111000 is the 1.19 branch point, and
r111000-r111020 are reviewed, but r111021 is a huge omnibus change
that sits unreviewed or fixme'd, then r111020 would be the cutover
point, even if r111022-r120000 are fully reviewed. That's hopefully
an extreme example, but the goal is to make sure that trunk is always
fully reviewed.
What would happen to everything after the cutover point is TBD. I
haven't talked to Chad about this, but I think it's conceivable that
we could import the remainder of the commits into a branch that we can
cherry-pick from.
The dynamic that this will create is that it will motivate more
peer-to-peer scrutiny of code, rather than waiting for one of the
reviewers to play bad cop.
Until we agree on this strategy or some other strategy that everyone
agrees is workable, we'll need to keep the code slush in place.
Thanks
Rob