Hi, just a quick note: as part of general search and discovery work, Yuri
and I are resurrecting the project to bring OpenStreetMap to Wikimedia,
starting in April. The initial part of this work will involve researching
options that will shape our precise goals, so we can't yet commit to a
timeline. As a ballpark estimate, though, I personally want to aim for
serving PNG tiles at a reasonable (though not necessarily "dynamic maps on
every WP page") scale by the end of Q4. Vector/multilingual maps would be
the next stage. We will mostly be using Phabricator for planning;
https://phabricator.wikimedia.org/tag/openstreetmap/ is my first pass on
the outline of things to be done.
Your comments and suggestions would be highly appreciated. Please share
your thoughts, ideas for projects that might use these maps, or just
merciless critique! :D
Max Semenik ([[User:MaxSem]])
My apologies if this is the wrong place to start a discussion on this, but
it's a better place than nowhere. I recently took part in two very
different Wikipedia workshops: one in Uganda for schoolchildren aged
14-17, and one in Bodø, Norway, for GLAM people aged 35-55. One glaringly
obvious barrier to entry common to both groups is that the CAPTCHA we use
is too freaking hard.
The main concern is obviously that it is really hard to read, but there are
also some other issues, namely that all the fields in the user registration
form (except for the username) are wiped if you enter the CAPTCHA
incorrectly. So when you make a mistake, not only do you have to re-type a
whole new CAPTCHA (where you may make another mistake), you also have to
re-type the password twice *and* your e-mail address. This takes a long
time, especially if you're not a fast typist (which was the case for the
first group), or if you are on a tablet or phone (which was the case for
some in the second group).
So I would like to start a discussion about changing to a CAPTCHA that is
more user-friendly, and hopefully one that isn't as
English/Latin-alphabet-centric as the one we currently use. If Ugandan
children and older Norwegians, who all use the Latin alphabet, have
such problems deciphering the CAPTCHA, what about people who speak
languages that don't use the Latin alphabet? I would prefer something
simpler, like a math- or image-based CAPTCHA, instead of the
current one.
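As a sketch of the math-based idea, a language-neutral challenge could be as simple as the following (illustrative only: the function names and the lenient-checking policy are my own assumptions, and a real deployment would still need audio alternatives and rate limiting):

```python
import random

def make_math_captcha(rng: random.Random):
    """Return a simple addition question and its expected answer.

    Digits and '+' keep the challenge independent of the Latin
    alphabet, and the answer is easy to type on a phone keypad.
    """
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"{a} + {b} = ?", a + b

def check_answer(expected: int, submitted: str) -> bool:
    """Check the answer leniently (strip whitespace), so small slips
    don't force the user to redo the whole registration form."""
    try:
        return int(submitted.strip()) == expected
    except ValueError:
        return False

question, answer = make_math_captcha(random.Random(42))
assert check_answer(answer, f" {answer} ")
```

The point of the sketch is less the arithmetic than the error handling: a wrong answer should only regenerate the challenge, never wipe the other form fields.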
Jon Harald Søby <http://meta.wikimedia.org/wiki/User:Jon_Harald_S%C3%B8by>
I wish to express my interest in working on the above-mentioned project. I
have the required technical skill (PHP) and I am willing to tread new
ground. I would love to discuss the project further, and what is really
expected of the student, so as to establish some measurable goals before
immersing myself in the community.
Jacob Appelbaum made another remark about editing Wikipedia via Tor this
morning. Since it's been a couple of months since the last Tor-bashing
thread, I wanted to throw out a slightly more modest proposal to see what
people think. This is getting some interest from a few people:
It lays out a way for Twitter to use an external, trusted identity
provider to verify identities / throttle requests, and then create an
account in a way that neither Twitter nor the identity provider can link
the account to the request (as long as you mitigate timing attacks).
What if we turn this around a bit and let the wiki establish identity and
throttle, and open up an editing proxy, accessible via Tor, that
consumes the identities?
* An established wiki user who wants to use Tor makes a blinded request
(maybe public, maybe to a private queue for some group with appropriate
rights) for a Tor-based account-creation token.
* The user gets that blinded token signed if they're in good standing, and
is limited to some number of tokens (3 total, no less than 6 months since
the last request, or something like that).
* The user creates an account on the editing proxy via Tor, and gives
their unblinded token to the proxy. The proxy creates an account for them,
and allows edits via an OAuth token using that new account.
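The blinded-token flow in the steps above can be sketched with textbook RSA blind signatures. This is purely illustrative: the toy key, token value, and function names are my own, and a real deployment would use full-size keys, proper token encoding, and a vetted cryptographic library.

```python
# Toy sketch of blind signing: the wiki signs a token without learning
# it, so the resulting account can't be linked to the signing request.

# Signer's (wiki's) textbook RSA key: n = 61 * 53, e = 17, d = e^-1 mod phi(n).
n, e, d = 3233, 17, 2753

def blind(token: int, r: int) -> int:
    """User blinds the token with random factor r before sending it."""
    return (token * pow(r, e, n)) % n

def sign(blinded: int) -> int:
    """Wiki signs the blinded value without seeing the token itself."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User strips the blinding factor, leaving a plain RSA signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Proxy checks the signature when the account is created via Tor."""
    return pow(sig, e, n) == token

token, r = 65, 2001        # r must be coprime to n
sig = unblind(sign(blind(token, r)), r)
assert verify(token, sig)
```

The key property: the wiki only ever sees `blind(token, r)`, which is uniformly random from its perspective, yet the unblinded signature verifies against the plain token at the proxy.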
If the user turns out to be a spammer:
* The anonymous account can be blocked like a normal account. The user is
throttled on how many requests for accounts they can make.
* If the proxy generates too much spam, a steward can revoke the key, and
we all go home to think up the next experiment.
To make this happen, we need:
* a light editing proxy (I already have a couple of those as demo OAuth
apps) which is run by a *non-wmf* entity
* something for normal users to download and run that does the blinding
for them
* work out how to address timing attacks if the volume of requesters is
low enough that a request can be correlated with the proxy's first edit.
Anyone interested in helping?
Is this conservative enough for those worried about the flood of tor spam,
while being simple enough that the average editor would be able to
understand and go through the process?
I fully support and welcome this, but at least for Project:Support_desk you should communicate this on the LQT board itself, too, that it will be converted (if you didn't do that already; I haven't checked now, because LQT is terrible on mobile :P). There are probably very active supporters who haven't subscribed to this list, but they should have the possibility to post their needs and opinions about it.
Sent from my HTC
----- Reply message -----
From: "Nick Wilson (Quiddity)" <nwilson(a)wikimedia.org>
To: "Wikimedia developers" <wikitech-l(a)lists.wikimedia.org>
Subject: [Wikitech-l] Starting conversion of LiquidThreads to Flow at mediawiki.org
Date: Tue, March 17, 2015 01:51
LiquidThreads (LQT) has not been well-supported in a long time. Flow
is in active development, and more real-world use-cases will help
focus attention on the higher-priority features that are needed. To
that end, LQT pages at mediawiki.org will start being converted to
Flow in the next couple of weeks.
There are about 1,600 existing LQT pages at mediawiki.org, and the three
most active pages are VisualEditor/Feedback, Project:Support_desk, and
Help_talk:CirrusSearch. The Collaboration team has been running
test conversions of those three pages, and fixing issues that have
come up. Those fixes are almost complete, and the team will be ready
to start converting LQT threads to Flow topics soon. (If you’re
interested in the progress, check out phab:T90788 and linked
tasks.) The latest set is visible at a labs test server. See an
example topic comparison here: Flow vs LQT.
The VisualEditor/Feedback page will be converted first (per James'
request), around the middle of next week. We’ll pause to assess any
high-priority changes required. After that, we will start converting
more pages. This process may take a couple of weeks to fully run.
The last page to be converted will be Project:Support_desk, as that is
the largest and most active LQT Board.
LQT Threads that are currently on your watchlist will still be
watchlisted as Flow Topics. New Topics created at Flow Boards on your
watchlist will appear in your Echo notifications, and you can choose
whether or not to watchlist them.
The LQT namespaces will continue to exist. Links to posts/topics will
redirect appropriately, and the LQT history will remain available at
the original location, as well as being mirrored in the Flow history.
There’s a queue of new features in Flow that will be shipped over the
next month or so:
* Table of Contents is done
* Category support for Flow Header and Topics is done
* VE with editing toolbar coming last week of March (phab:T90763) 
* Editing other people’s comments coming last week of March (phab:T91086)
* Ability to change the width & side rail is in progress
* Search is in progress (no ETA yet) (phab:T76823)
* The ability to choose which Flow notifications end up in Echo,
watchlist, or both, and other more powerful options, will be coming up
next (no ETA yet)
That being said -- there are some LiquidThreads features that don’t
exist in Flow yet.
We’d like to hear which features you use on the current LQT boards
and are concerned about losing in the Flow conversion. At the
same time, we’d like further suggestions on how we could improve upon
those (or other) features from LQT.
Please give us feedback at
https://www.mediawiki.org/wiki/Topic:Sdoatsbslsafx6lw to keep it
centralized, and test freely at the sandbox.
Much thanks, on behalf of the Collaboration Team,
 https://www.mediawiki.org/wiki/VisualEditor/Feedback and
 http://flow-tests.wmflabs.org/wiki/Testwiki:Support_desk and
 http://flow-tests.wmflabs.org/wiki/Topic:Qmkwqmp0wfcazy9c and
 https://phabricator.wikimedia.org/T90763 ,
Nick Wilson (Quiddity)
I am Dibya Singh and I am applying for FOSS Outreachy round 10. I have
selected a project from the #possible-tech-projects list, named "One stop
translation to improve consistency for translation". I have been in
contact with the mentors Niklas Laxström and Federico Leva to understand
the project in depth. I have drafted a rough proposal based on my
interaction with the product as a user.
Please find the link to my proposal. Please give your valuable feedback.
The link to my MediaWiki user page can be found here.
Just wanted to quickly let you know that MediaWiki will now verify that
extensions register all rights they define in $wgAvailableRights (or
using the "UserGetAllRights" hook).
To make sure your extension complies, just add all the rights
your extension defines to $wgAvailableRights (which is a simple array
of these user rights).
This test will be introduced with https://gerrit.wikimedia.org/r/192087
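As a sketch, either registration approach looks like this (the right names 'myextension-edit' and 'myextension-admin' are hypothetical placeholders for whatever rights your extension defines):

```php
// Option 1: append each right to $wgAvailableRights directly.
$wgAvailableRights[] = 'myextension-edit';
$wgAvailableRights[] = 'myextension-admin';

// Option 2: register them via the UserGetAllRights hook.
$wgHooks['UserGetAllRights'][] = function ( array &$rights ) {
	$rights[] = 'myextension-edit';
	$rights[] = 'myextension-admin';
	return true;
};
```

Either way, every right your extension checks with a permission lookup should appear in the list, or the new test will flag it.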
When ContentHandler support was added to MediaWiki in 2012, the content
model and content format of each revision began to be stored with it.
However, the DB tables for WMF wikis did not have the new columns, so
$wgContentHandlerUseDB was set to false on our wikis.
Eventually the database jobs to add and populate the columns completed,
and $wgContentHandlerUseDB has been true on some wikis, including
mediawiki.org, for months. Several projects are requesting that it be set
to true everywhere (T51193).
However, changing the content model of an existing page is a disruptive
change. We added the `editcontentmodel` right, without which attempts to
change the content model through the API or EditPage.php fail. Currently
no group (user or bot) has this right, so we think it's OK and safe to
enable $wgContentHandlerUseDB on WMF wikis.
https://gerrit.wikimedia.org/r/#/c/170129/ is the patch.
There are issues with granting the editcontentmodel right, see T85847.
The Flow discussion and collaboration software has its own content model.
Currently the Flow team changes a talk page to a Flow board by editing a
PHP config variable (!), which doesn't scale. (FYI, plans for enabling Flow
are at https://www.mediawiki.org/wiki/Flow/Rollout, and it is happening
slowly.) When we do, we archive the existing talk page content.
The first change to the status quo is allowing a *new* page to be a Flow
board. In particular, the Co-op project wants to provide a Flow board
for each new editor who signs up to collaborate with a mentor. This
doesn't feel like changing the content model of a page, since there was
nothing present before. So Flow has its own right, 'flow-create-board',
which we grant to the flow-bot group; attempting to add a Flow topic or
Flow board header to a non-existent page fails unless the user has this
right. The Co-op team will ask the Bot Approval Group on enwiki to grant
their bot this right.
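For reference, granting a right like this to a group is a one-line configuration change; a minimal sketch, assuming the 'flow-bot' group and the 'flow-create-board' right named above, on a wiki with Flow installed:

```php
// LocalSettings.php (sketch): give members of the flow-bot group the
// right to create Flow boards on previously non-existent pages.
$wgGroupPermissions['flow-bot']['flow-create-board'] = true;
```

Adding a bot account to the group then happens through the usual Special:UserRights workflow, which is what the BAG request would cover.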
Eventually we envision having a Special:Flowify page that will let admins
turn a page into a Flow board. This will run PHP code to archive the
current page, handle redirects, and then create a Flow board revision, etc.
This feels like the 'editcontentmodel' right, but it will probably be a
more restrictive right, 'flow-flowify'.
Daniel Kinzler proposed that we should not grant the editcontentmodel
right, because any change to content model is a special case that requires
smart handling via dedicated PHP code. This is what Flow is doing for both
the Co-op bot and the future Special:Flowify.
So is there anything to discuss? :)