I keep seeing references in WMF documents to using Redis for session storage and as the backing store for the job queue going forward. However, LocalSettings doesn't have any references to Redis. It looks like the session changes for Redis were in 1.20, and I thought that the change for the job queue to be backed by Redis was coming in 1.21. It also seems that Notifications (Echo) may require Redis, but that's not really clear.
I'm wondering what the general status of this is. As a third-party MediaWiki admin, I'm eager to get Redis going, primarily for the job queue improvements, but with nothing noted in the LocalSettings documentation I'm wondering what the plan is for these Redis-based features. Will $wgMainCacheType be getting a CACHE_REDIS option?
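For concreteness, this is roughly the kind of LocalSettings.php configuration I would expect, assuming the RedisBagOStuff object cache class that has been going into core; the 'redis' cache key and the option names here are my guesses and may differ from what actually ships:

    // Hypothetical LocalSettings.php sketch; RedisBagOStuff and its option
    // names are assumed from memory and may differ between versions.
    $wgObjectCaches['redis'] = array(
        'class'   => 'RedisBagOStuff',
        'servers' => array( '127.0.0.1:6379' ),
        // 'password' => 'your-redis-password',
    );
    $wgMainCacheType    = 'redis';  // rather than a CACHE_REDIS constant
    $wgSessionCacheType = 'redis';

Even confirmation that something along these lines is the intended setup would help.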
I'm testing a new rendering option for the <math /> element and had problems storing MathML elements in the database field math_mathml, which is of type text.
The MathML elements contain a wide range of Unicode characters, like INVISIBLE TIMES, which is encoded as 0xE2 0x81 0xA2 in UTF-8, or even 4-byte characters like MATHEMATICAL BOLD CAPITAL A (0xF0 0x9D 0x90 0x80).
In some rare cases I had problems retrieving the stored value correctly from the database.
To fix that problem I'm now using the PHP functions utf8_encode / utf8_decode to re-encode the data before storing it and decode it after reading it back, which is not a very intuitive solution.
Do you know a better method to solve this issue without changing the database field type?
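For reference, here is roughly what my current workaround looks like; as far as I can tell it only works because MySQL's 3-byte 'utf8' charset cannot store 4-byte characters, while the re-encoded bytes all fit into 1- or 2-byte sequences (variable names are just for illustration):

    // Workaround sketch: treat the UTF-8 bytes as if they were Latin-1 and
    // re-encode them, so every byte becomes a 1- or 2-byte UTF-8 sequence
    // that a 3-byte 'utf8' MySQL column can store without mangling anything.
    $stored = utf8_encode( $mathml );   // before writing to math_mathml
    $mathml = utf8_decode( $stored );   // after reading the row back

If changing the schema turns out to be acceptable after all, a binary column (blob/varbinary) or a utf8mb4 column would presumably avoid the re-encoding entirely; MediaWiki's own MySQL tables generally use binary columns for this reason.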
I was thinking about past hackathons and realized that interacting with each other is often the best way to learn during these events; it's so much quicker than using Gerrit/Bugzilla/email/IRC.
I also remember that during these events we do a lot of informal review and assessment of problems.
This time we have a very nice "How to get your code deployed on Wikimedia" workshop. Perhaps in addition to that, it might be a nice idea to do a live "office hour" dedicated to bug assessment and code review?
People could submit bug reports and Gerrit changesets (Etherpad?) and then we pick one hour where a group of us simply tries to help people with these issues in any form. We'd have multiple disciplines and areas of expertise able to chip in, which should be great for attendees with "why does no one pay attention to my bug report/patch" issues.
Does anyone else think something like that might be a nice idea?
We'd need to find a timeslot though; that's probably gonna be the hardest part. I'm guessing that many WMF folks will have quite a few meetings again.
We've enabled JGit's recursive merger for all repositories using content merge strategies (basically any repository not using fast-forwarding, which is most of them). The goal is to lower the number of trivial conflicts that people are having to resolve by hand.
This is considered experimental by Gerrit, but it is now the default for JGit itself, so I believe it's stable enough for us to use. That said, it's still new-ish, so there's always a chance we'll hit some bug. If you see anything, ANYTHING related to merge problems, I'd like to know about it so we can either get it fixed or turn the feature back off (if it's patently broken).
I have implemented an idea for the WikiEditor extension: replacing the "step-by-step publish" feature with another one, "publish staying in edit mode via AJAX". You can see a demo at http://wiki.4intra.net/ if you want. It works simply by sending an API save request for the article while NOT closing the article being edited. It also handles section edits correctly by re-requesting the section content after editing, so you keep a consistent edit form even if you add sections.
The idea is to give authors the ability to save intermediate results.
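For anyone curious about the mechanics: the save is just a standard action=edit API request sent from the edit page, roughly with the parameters below (sketched as a PHP-style array purely for illustration; the real extension code is JavaScript and the variable names here are made up):

    // Illustration only: the standard action=edit parameters the feature sends.
    $request = array(
        'action'  => 'edit',
        'title'   => $pageTitle,      // the page currently open in the editor
        'section' => $sectionNumber,  // only present when editing a single section
        'text'    => $editboxText,    // current contents of the edit box
        'summary' => $editSummary,
        'token'   => $editToken,      // edit token of the current user
        'format'  => 'json',
    );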
My question is: does anyone really need the "step-by-step publishing" feature that is in WikiEditor? I think it's useless because it just duplicates existing functionality, merely submits the form using a normal POST request, and makes editing harder as you have to do more clicks. I would submit a patch to Gerrit if you're interested in replacing it with "publish-staying-in-editmode".
With best regards,
In trying to submit https://gerrit.wikimedia.org/r/#/c/63907/18, I get a Jenkins build failure due to PHPUnit failing with a fatal error:
PHP Fatal error: Class 'MFMockRevision' not found in
on line 500
The thing is, 'MFMockRevision' should be made available by its file being included in efExtMobileFrontendUnitTests() (our hook handler for the UnitTestsList hook).
Unit tests execute fine for me and at least one other person on the
mobile team. I do not want to manually merge that patchset since I
imagine it would cause build failures on subsequent patchset +2's.
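For reference, the registration goes through the standard UnitTestsList hook, roughly like this (the file paths below are placeholders, not the exact ones in the extension):

    // Standard UnitTestsList hook pattern (paths are placeholders).
    $wgHooks['UnitTestsList'][] = 'efExtMobileFrontendUnitTests';

    function efExtMobileFrontendUnitTests( array &$files ) {
        // Test files appended here are picked up by the core PHPUnit suite.
        $files[] = __DIR__ . '/tests/MobileFrontendTest.php';
        // If MFMockRevision lives in a separate file, it may need to be
        // require_once'd here or added to $wgAutoloadClasses so the Jenkins
        // runner can always find it.
        return true;
    }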
Anyone know what might be going on?
Software Engineer, Mobile
I do a lot of maintenance tasks on Commons, and many tasks require some sort of database query to find the oddball cases. The queries can be done in one of several ways:
1) Using CatScan and CatScan2 tools
2) Database query service 
3) Weekly Database reports 
Unfortunately, lately some of those ways are breaking down. CatScan and CatScan2 rarely work, failing in many different ways: usually by exceeding 'max_user_connections' (30 for Magnus's CatScan2 and 15 for Daniel's CatScan), otherwise with timeout or no-connection errors, or they work on a query for hours (or days if you let them) and never return anything. I developed some CatScan2-based queries for Creator template maintenance that worked fine 2-3 years ago but have always timed out since, which might be due to more and more images on Commons. Similarly, the Database query service also seems very inactive: there are many requests and few replies, like my request from April 2.
For example, lately I was searching for images on Commons that do not have any license templates (sometimes since 2007 or earlier), see . At some point Magnus was helping me with that query; however, after it failed several times with a "server not found" error, we gave up. It seems like less and less can be done with the current infrastructure.
So are there any non-toolserver-based alternatives for database queries? I have been trying to read about Wikimedia Labs, looking for tools based there. Ideally there would be some CatScan2-like tool based on a different database, with a higher number of allowed user connections.
Hello! So I tried converting https://github.com/wikimedia/qa-browsertests/pull/1 into a Gerrit changeset (https://gerrit.wikimedia.org/r/#/c/54097/), and was mostly successful. It is also a relatively painless process - at least for single commits.
This assumes you (person doing the GitHub -> Gerrit bridge) have a Gerrit
account. I wrote a small script that sort of makes this easy:
This only does things one time - it moves a set of commits in a pull
request to a squashed single commit on gerrit, assuming your current
directory is a cloned version of the gerrit repo you want to commit to. It
should not be too hard to write an actual, idempotent sync script that
maintains a 1-to-1 correspondence between Pull Requests and Gerrit
Changesets, and I'll attempt to do that tomorrow.
Note that this is a shitty bash script (to put it mildly) - but that seems
to be all I can write at 5:30 AM :) I'll probably rewrite it to be a proper
python one soon. That should also allow me to use the GitHub API to mirror the GitHub pull request title / description to Gerrit.
I also offer to manually sync pull requests into gerrit as they come until
the automatic Gerrit integration is ready. I shall write another small script tomorrow to let me 'watch' all the wikimedia/* GitHub repositories.
Thank you :) I'll update this thread as the script gets less shitty. Do let
me know if you have built a far more complete script :)
Yuvi Panda
Hi, we now have a mailing list dedicated to QA-specific topics:
Please forward to your colleagues interested in testing and in
contributing to Wikimedia!
See the rationale and background below.
-------- Original Message --------
Welcome to the QA list!
This mailing list is an umbrella to host people and discussions focusing
on software quality assurance in all its aspects: exploratory testing,
browser automation testing, unit testing, continuous integration, the
beta cluster, bug management and community QA activities.
We hope this list becomes useful for integrating and retaining people primarily interested in testing / QA. We are seeing many people interested in testing from different angles. Some are current developers willing to learn and discuss more about this topic, best practices, etc. Some are people new to the community who see testing / QA as a way to contribute to technical tasks other than development.
This specialized list follows the good results offered by precedents like the Analytics, Design, and Editor Engagement lists. wikitech-l subscribers will still receive QA-related announcements of wide interest.
Teams that are comfortable with the status quo will not be pushed to change their current communication practices because of this list.
There is a bit more background at
Now, who's next?
Technical Contributor Coordinator @ Wikimedia Foundation