I would like to introduce to the Wikimedia community WikiToLearn, a FOSS
project in which I participate and which has lately been gaining a lot of
contributions and momentum.
It is a KDE project sponsored (among others) by Wikimedia Italy and
recently joined by institutions such as the HEP Software Foundation (CERN,
Fermilab, Princeton...) and universities such as the University of Pisa and
Milano-Bicocca. These institutions are already populating the website with content.
We aim to provide a platform where learners and teachers can complete, refine
and re-assemble lecture notes in order to create free, collaborative and
accessible textbooks, tailored precisely to their needs.
Although the project is quite young (only a few months old), it is already
attracting interest at an unexpected rate. Thanks to this we can now count on
nearly 40 developers (including content developers), and the number is growing.
We are different from Wikipedia and other WMF projects in several ways, and in
a sense, complementary. Our focus is on creating complete textbooks (and not
encyclopedic articles), drawing from a professor's or a student's own notes,
whether they already exist or still have to be written down.
We also have a strong focus on offline use: all the content of WikiToLearn
should be easily printable by any student for offline use and serious study.
Besides a good team for content development, we can count on a small but
motivated team of developers, and we would like to improve communication with
upstream (a.k.a. you ;-) ), because we found ourselves developing a few
features which could probably be made available to the general public, with
some generalization and polishing. ;-)
Is this the right place to start such a discussion?
We would like to help as much as we can, but we might need some mentoring in
how to best approach MediaWiki development, as many of us are relatively new
to OSS/Web development.
I ran into an issue trying to save MediaWiki:Common.js: I get a "page not
available" error (index.php?title=MediaWiki:Common.js&action=submit).
MediaWiki:Common.css saves fine.
Any idea what I'm doing wrong?
The XML database dumps are missing all through May, apparently
because of a memory leak that is being worked on, as described
elsewhere. However, that information doesn't reach the person who
wants to download a fresh dump and looks here.
I think it should be possible to make a regular schedule for
when these dumps should be produced, e.g. once each month or
once every second month, and treat any delay as a bug. The
process to produce them has been halted by errors many times
in the past, and even when it runs as intended the interval
is unpredictable. Now when there is a bug, all dumps are
halted, i.e. much delayed. For a user of the dumps, this is
extremely frustrating. With proper release management, it
should be possible to run the old version of the process
until the new version has been tested, first on some smaller
wikis, and gradually on the larger ones.
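
As a purely illustrative sketch of what "treat any delay as a bug" could look
like from the consumer side, a small script could flag a stale dump. The URL
and the 31-day threshold below are assumptions for illustration, not an
existing Wikimedia tool:

  <?php
  // Hypothetical freshness check for a dump consumer. The dump URL and
  // the 31-day threshold are assumptions, not an agreed schedule.
  $url = 'https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2';
  $maxAgeDays = 31;

  // Only fetch headers, not the (very large) file itself.
  stream_context_set_default( [ 'http' => [ 'method' => 'HEAD' ] ] );
  $headers = get_headers( $url, 1 );

  if ( $headers === false || empty( $headers['Last-Modified'] ) ) {
      fwrite( STDERR, "Could not read Last-Modified for $url\n" );
      exit( 2 );
  }

  $ageDays = ( time() - strtotime( $headers['Last-Modified'] ) ) / 86400;
  if ( $ageDays > $maxAgeDays ) {
      printf( "Latest dump is %.0f days old (expected <= %d): treat as a bug.\n",
          $ageDays, $maxAgeDays );
      exit( 1 );
  }
  printf( "Latest dump is %.0f days old: within schedule.\n", $ageDays );
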
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
As part of T113210, which is a broader discussion on track for the
developer summit, I am hosting an IRC office hour this Thursday at
Since shared hosting is a broad topic, this session will focus specifically
on brainstorming ways to reconnect with the shared hosting community.
Shared hosting MediaWiki users are currently underrepresented in the
greater MediaWiki community. We rarely run into them in Phabricator, on
Gerrit, or on the mailing lists, which means that people often have to think
on their behalf about their use cases and issues, instead of getting direct
feedback.
There must be practical ways to bring those thousands of MediaWiki users
back into the fold, so to speak. Hopefully we can come up with interesting
ideas to achieve that.
And if you happen to be a shared hosting user, by all means, please join
this IRC office hour :)
I want to implement something similar to ~~~~ expansion, or to the pipe
trick in internal links, in an extension.
That is, I need to execute code
- on save and preview only,
- on wikitext only,
- on page content only,
- before the pre-save-parser treats links in the wikitext,
- altering the wikitext of the page.
I tried several hooks and ideas, but was unable to make it specific enough.
Help is appreciated.
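
For reference, here is a minimal sketch of the general pattern for rewriting
wikitext from an extension before links are processed, using the core
ParserBeforeInternalParse hook. It does not by itself satisfy the "save and
preview only" constraint: the hook fires on every parse, so the preview check
below only narrows it to previews, and rewriting the stored wikitext at save
time would need a different mechanism. The extension name, class, and marker
string are hypothetical.

  <?php
  // MyExtension.php - hypothetical extension setup file.
  $wgHooks['ParserBeforeInternalParse'][] =
      'MyExtensionHooks::onParserBeforeInternalParse';

  class MyExtensionHooks {
      /**
       * Runs at the start of Parser::internalParse(), i.e. before links
       * in the wikitext are processed.
       *
       * @param Parser $parser
       * @param string &$text Wikitext being parsed, passed by reference
       * @param StripState $stripState
       * @return bool
       */
      public static function onParserBeforeInternalParse( $parser, &$text, $stripState ) {
          // Assumption: only act on previews; a save-time rewrite of the
          // stored wikitext needs a different hook.
          if ( $parser->getOptions()->getIsPreview() ) {
              // Hypothetical marker replacement, analogous to ~~~~ expansion.
              $text = str_replace( '%%NOW%%', wfTimestampNow(), $text );
          }
          return true;
      }
  }
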
This drag-and-drop from a CSV and other mentions of VE neatness would make a great blog post. I would love to see a post about being productive in VisualEditor. Tips and tricks, as it were.
I see that there's an active workboard in Phabricator at
https://phabricator.wikimedia.org/project/board/225/ for CAPTCHA issues.
Returning to a subject that has been discussed several times before: the
last I heard is that our current CAPTCHAs do block some spambots, but they
also present problems for humans and aren't especially difficult for more
sophisticated spambots to solve. Can someone share a general update on
what's happening with regard to improving usability for humans and
increasing the difficulty for bots? I'm particularly concerned about the
former issue, since CAPTCHAs might be filtering out some good-faith human
contributors.
We often have the case of a change to an extension depending on a
pending patch to MediaWiki core. I upgraded our CI scheduler - Zuul - a
couple of weeks ago, and it now supports marking dependencies even across
different repositories.
Why does it matter? To make sure the dependency is fulfilled, one can either:
* CR-2 the patch until the dependent change is merged, or
* write a test that exercises the required patch in MediaWiki.
With the first solution (no test), once both are merged, nothing
prevents one from cherry-picking a patch without its dependent patch,
for example for MediaWiki minor releases or Wikimedia deployment branches.
When a test covers the dependency, it will fail until the dependent one
is merged, which is rather annoying.
Zuul now recognizes the 'Depends-On' header in Git commit messages, similar
to 'Change-Id' and 'Bug'. 'Depends-On' takes a change-id as its parameter,
and multiple ones can be added.
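
As a purely hypothetical illustration (the subject line, bug number and
change-ids below are made up), a commit message using the footer could look
like:

  Make ExtensionFoo cope with the pending core change

  Bug: T123456
  Depends-On: I0123456789abcdef0123456789abcdef01234567
  Depends-On: Ifedcba9876543210fedcba9876543210fedcba98

Gerrit's commit-msg hook still adds the usual Change-Id line; Depends-On
simply sits alongside it in the commit message footer.
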
When a patch is proposed in Gerrit, Zuul looks for Gerrit changes
matching the 'Depends-On' and verifies whether any are still open. If so,
it crafts Git references for the open patches so all the
dependencies can be tested as if they had been merged.
Real world example
The ContentTranslation extension is tested together with the Wikidata one and
was not passing its tests. Wikidata created a patch, and we did not want to
merge it until we had confirmed that ContentTranslation was passing properly.
The Wikidata patch is https://gerrit.wikimedia.org/r/#/c/252227/
On the ContentTranslation patch we indicated the dependency:
+ Depends-On: I0312c23628d706deb507b5534b868480945b6163
which is the change-id of the Wikidata patch. Zuul then:
* received the patch for ContentTranslation
* looked up the change-id and found the Wikidata patch
* created Git references in both repos to point to the proper patches
* had zuul-cloner clone both repos and fetch the references created by
the Zuul service
* ran the tests
That confirmed for us that the Wikidata patch actually fixed the issue for
ContentTranslation. Hence we CR+2'd both and everything merged fine.
Please take a moment to read the upstream documentation:
Antoine "hashar" Musso