Hi,
By popular demand, mediawiki/ cloaks are now available for all
developers with access to commit to svn. Exceptions will probably be
made in future if we have lots of artists and translators who want
cloaks too, but initially we're using commit access as the indicator
of contributions.
To get such a cloak, make an edit somewhere on mediawiki.org saying "I
am [[User:Xyz]] and my nick on IRC is xyz". Your userpage is a good
choice. You can then revert this edit and send a link to the diff
to one of the IRC Group Contacts when you request your cloak. That is
any one of seanw, Rjd0060, kibble or dungodung on IRC. Please check
idle times and contact one who is active. Once they have confirmed
your access and identity (you'll need to state your on-wiki username
on IRC too and be identified to NickServ) you'll be cloaked with
mediawiki/On-Wiki-Username or MediaWiki/On-Wiki-Username (your
choice).
The reason this has taken so long to come through is that we were
waiting on our developer to add features to the cloak request system
to verify this sort of cloak automatically. Unfortunately we seem to
have lost touch with him, so we are switching to doing all cloaks
manually.
S
--
Sean Whitton / <sean(a)silentflame.com>
OpenPGP KeyID: 0x25F4EAB7
Hello,
Just out of curiosity: if a security hole in MediaWiki were
identified, how does Wikipedia manage to roll out the new patch to
all servers?
Are there any formal steps? Testing? Regression? UAT?
Can any build/deployment scripts be shared?
Thanks.
On Fri, Aug 7, 2009 at 5:29 PM, <dale(a)svn.wikimedia.org> wrote:
> http://www.mediawiki.org/wiki/Special:Code/MediaWiki/54611
>
> Revision: 54611
> Author: dale
> Date: 2009-08-07 21:29:26 +0000 (Fri, 07 Aug 2009)
>
> Log Message:
> -----------
> added a explicit keyframeInterval per gmaxwell's mention on wikitech-l. (I get ffmpeg2theora: unrecognized option `--buf-delay for adding in buf-delay)
I thought firefogg was tracking j^'s nightly? If the encoder has
two-pass, it has --buf-delay. Does firefogg perhaps need to be changed
to expose it?
I'm starting a new thread because I noticed my news reader has glued together messages with the title "A potential land mine" and "MW test infrastructure architecture," which may confuse someone coming into the discussion late. Also, the previous thread has branched into several topics and I want to concentrate on only one, specifically what can we assume about the system environment for a test infrastructure? These assumptions have direct impact on what test harness we use. Let me start by stating what I think can be assumed. Then people can tell me I am full of beans, add to the assumptions, subtract from them, etc.
The first thing I would assume is that a development system is less constrained than a production system in what can and cannot be installed. For example, people shot down my proposal to automatically discover the MW root directory because some production systems have administrators without root access, without the ability to load code into the PEAR directory, etc. Fair enough (although minimizing the number of places where $IP is computed is still important). However, if you are doing MW development, then I think this assumption is too stringent. You need to run the tests in /tests/PHPUnitTests, which in at least one case requires the use of $wgDBadminuser and $wgDBadminpassword, something a non-privileged user would not be allowed to do.
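To make that concrete, here is the sort of thing I mean: a hypothetical excerpt from a development LocalSettings.php, with placeholder account name and password, not something a production admin would (or should) have configured.

    <?php
    # Development-only database maintenance credentials; at least one test
    # under /tests/PHPUnitTests relies on these being set. The values below
    # are placeholders for illustration only.
    $wgDBadminuser     = 'wikiadmin';  // DB account allowed to alter the schema
    $wgDBadminpassword = 'secret';     // never reuse a production password here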
If a developer has more system privileges than a production admin, to what extent? Can we assume he has root access? If not, can we assume he can get someone who has to do things like install PHPUnit? Can we assume the availability of Perl, or should we only assume PHP? Can we assume *AMP (e.g., LAMP, WAMP, MAMP, XAMPP)? Can we assume PEAR? Can the developer install into PEAR?
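If it helps to make those questions testable rather than assumed, here is a rough, hypothetical probe a test-harness bootstrap could run; the particular checks and include paths (PEAR.php, PHPUnit/Framework.php for PHPUnit 3.x) are just my assumptions about a PEAR-style setup.

    <?php
    # Hypothetical environment probe; the checks are illustrative only.
    $hasPear    = @include_once 'PEAR.php';                // PEAR on include_path?
    $hasPhpUnit = @include_once 'PHPUnit/Framework.php';   // PHPUnit 3.x entry point?
    $hasPerl    = trim( (string)@shell_exec( 'perl -e "print 1" 2>&1' ) ) === '1';

    printf( "PHP version: %s\n", PHP_VERSION );
    printf( "PEAR:        %s\n", $hasPear    ? 'yes' : 'no' );
    printf( "PHPUnit:     %s\n", $hasPhpUnit ? 'yes' : 'no' );
    printf( "Perl:        %s\n", $hasPerl    ? 'yes' : 'no' );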
Dan
Hi all,
I've been trying to patch MediaWiki for a bug for the past 4 days, but
some weird stuff is happening: the wiki seems unresponsive to any
patches. Please help me out.
So far I've tried the following, but the wiki on my localhost functions
as though no changes have taken place:
1) Applied the first hack given on
http://meta.wikimedia.org/wiki/Customizing_edit_toolbar [not even the table
icon appears]
2) Set the predicate (preg_match) to zero in Parser.php (line 689) [it still
parses it as a table]
3) Removed all the content of index.php!! [it still runs]
and a lot of other things, but no success.
I've made no progress because of this unexpectedly weird behaviour.
Please give a hint or a pointer in the right direction.
Thanks
Taja
Hello!
You are receiving this email because your project has been selected to
take part in a new effort by the PHP QA Team to make sure that your
project still works with to-be-released PHP versions. With this we
hope to make sure that you are aware of things that might break, and
that we don't introduce any strange regressions.
With this effort we hope to build a better relationship between the
PHP Team and the major projects.
If you do not want to receive these heads-up emails, please reply to
me personally and I will remove you from the list; but we hope that
you want to actively help us make PHP a better and more stable tool.
The first release candidate of PHP 5.2.11 has just been released and can be
downloaded from http://downloads.php.net/ilia/; the win32 binaries are
available at http://windows.php.net/qa/. Please try this release
candidate against your code and let us know should you find any
regressions. The goal is to have 5.2.11 out within two to three weeks'
time, so timely testing would be extremely helpful.
In case you think that other projects should also receive these kinds
of emails, please let me know privately, and I will add them to the
list of projects to contact.
Best Regards,
Ilia Alshanetsky
5.2 Release Master
I am investigating how to write a comprehensive parser regression test. What I mean by this is something you wouldn't normally run frequently, but rather something that we could use to get past the "known to fail" tests now disabled. The problem is that no one understands the parser well enough to have confidence that if you fix one of these tests you will not break something else.
So, I thought, how about using the guts of DumpHTML to create a comprehensive parser regression test. The idea is to have two versions of phase3 + extensions, one without the change you make to the parser to fix a known-to-fail test (call this Base) and one with the change (call this Current). Modify DumpHTML to first visit a page through Base, saving the HTML, then visit the same page through Current and compare the two results. Do this for every page in the database. If there are no differences, the change in Current works.
Sitting here I can see the eyeballs of various developers bulging from their faces. "What?" they say. "If you ran this test on, for example, Wikipedia, it could take days to complete." Well, that is one of the things I want to find out. The key to making this test useful is getting the code in the loop (rendering the page twice and testing the results for equality) very efficient. I may not have the skills to do this, but I can at least develop an upper bound on the time it would take to run such a test.
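Just to make the shape of that inner loop concrete, here is a minimal sketch, not the DumpHTML-based version I have in mind: it assumes two installs of the same database, "base" and "current", reachable at placeholder localhost URLs (with allow_url_fopen enabled), and a titles.txt listing the pages to check.

    <?php
    # Rough comparison loop; the URLs and title list are assumptions for
    # illustration. A real run would render through DumpHTML, not HTTP.
    $base    = 'http://localhost/base/index.php?action=render&title=';
    $current = 'http://localhost/current/index.php?action=render&title=';
    $titles  = file( 'titles.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );

    $diffs = 0;
    foreach ( $titles as $title ) {
        $a = file_get_contents( $base . urlencode( $title ) );
        $b = file_get_contents( $current . urlencode( $title ) );
        if ( $a !== $b ) {
            $diffs++;
            echo "DIFF: $title\n";
        }
    }
    echo "$diffs page(s) rendered differently\n";

Even a naive loop like this would give an upper bound on the per-page cost; the real question is how far below that bound the DumpHTML-based approach can get.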
A comprehensive parser regression test would be valuable for:
* fixing the known-to-fail tests.
* testing any new parser that some courageous developer decides to code.
* testing major releases before they are released.
* catching bugs that aren't found by the current parserTest tests.
* other things I haven't thought of.
Of course, you wouldn't run this thing nightly or, perhaps, even weekly. Maybe once a month would be enough to ensure the parser hasn't regressed out of sight.