Hi, I'd like to present a new RFC for your consideration:
https://www.mediawiki.org/wiki/Requests_for_comment/Minifier
It is about how we can shave 10-15% off the size of JavaScript
delivered to users.
Your comments are highly welcome! :)
--
Best regards,
Max Semenik ([[User:MaxSem]])
When api.php was basically the only API in MediaWiki, calling it "the API"
worked well. But now we've got a Parsoid API, Gabriel's work on a REST
content API, Gabriel's work on an internal storage API, and more on the
way. So just saying "the API" is getting confusing.
So let's bikeshed a reasonably short name for it that isn't something awful
like "the api.php API". I'm horrible at naming.
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
Thank you for the quick fix!
Best,
--
Sukyoung
On Jan 29, 2014, at 9:55 AM, Nathan wrote:
> FYI in case you aren't subscribed to the list.
>
> ---------- Forwarded message ----------
> From: Yair Rand <yyairrand(a)gmail.com>
> Date: Tue, Jan 28, 2014 at 7:25 PM
> Subject: Re: [Wikitech-l] Bug in the Wikipedia main web page
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
>
>
> Thank you for pointing out this bug. Your suggested change to
> MediaWiki:Gadget-wm-portal.js has been implemented by Meta-Wiki
> administrator User:PiRSquared17.
>
>
> On Tue, Jan 28, 2014 at 6:50 PM, Sukyoung Ryu <sukyoung.ryu(a)gmail.com> wrote:
>
> > Dear all,
> >
> > We are researchers at KAIST in Korea working on finding JavaScript bugs in
> > web pages. While analyzing top websites from Alexa.com, we found an issue,
> > which seems to be a bug, on the Wikipedia main web page (wikipedia.org).
> > We would be grateful if you could either confirm that it is a bug (and,
> > even better, fix it) or let us know what we're missing.
> >
> > Here's the issue. When a user uses the language selection button on the
> > Wikipedia main web page to select the language in which search results are
> > displayed, the following JavaScript function is executed:
> >
> > 1   function setLang(lang) {
> > 2       var uiLang = navigator.language || navigator.userLanguage, date = new Date();
> > 3
> > 4       if (uiLang.match(/^\w+/) === lang) {
> > 5           date.setTime(date.getTime() - 1);
> > 6       } else {
> > 7           date.setFullYear(date.getFullYear() + 1);
> > 8       }
> > 9
> > 10      document.cookie = "searchLang=" + lang + ";expires=" + date.toUTCString() + ";domain=" + location.host + ";";
> > 11  }
> >
> > Depending on the result of the conditional expression on line 4,
> > "uiLang.match(/^\w+/) === lang", the function either leaves or does not
> > leave the selected language information on the user's computer through a
> > cookie. But we found that the expression "uiLang.match(/^\w+/) === lang"
> > always evaluates to false, because match() returns an array (or null),
> > which is never strictly equal to a string; as a result, the function
> > always leaves cookies on users' computers. We think that changing the
> > conditional expression "uiLang.match(/^\w+/) === lang" to
> > "uiLang.match(/^\w+/) == lang" will solve the problem.
> >
> > This problem may occur in the main web pages of all the Wikimedia sites.
> > Could you kindly let us know what you think? Thank you in advance.
> >
> > Best,
> > Changhee Park and Sukyoung Ryu
> >
> >
I've written a simple MediaWiki extension that uses an instance of the
W3C Validator service (via the Services_W3C_HTMLValidator
<http://pear.php.net/package/Services_W3C_HTMLValidator> PEAR package)
to validate SVG images hosted on a wiki. It is meant to replace the
current system on Commons, which relies on individual contributors adding
templates (e.g. InvalidSVG
<https://commons.wikimedia.org/wiki/Template:InvalidSVG>) by hand to
file description pages.
It exposes a simple API (and a Scribunto module as well) to get the
validation status of existing SVG files, can emit warnings when trying
to upload invalid ones, and is well integrated with MediaWiki's native
ObjectCache mechanism.
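Under the hood the check boils down to something like the following (a rough
sketch based on the PEAR package's documented interface; the exact method
names, variables, and cache key are assumptions on my part, not the final code):

  require_once 'Services/W3C/HTMLValidator.php';

  // Validate the raw SVG markup of a file and cache the boolean result in
  // MediaWiki's main object cache ($svgText is illustrative).
  $validator = new Services_W3C_HTMLValidator();
  $result = $validator->validateFragment( $svgText );

  $cache = wfGetMainCache();
  $cache->set( wfMemcKey( 'svg-validation', sha1( $svgText ) ),
      $result->isValid(), 86400 );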
I'm in the process of publishing the code, but have some questions I
think the community could help me answer.
* Given that the W3C Validator can also parse HTML files, would it be
useful to validate wiki pages as well? Even though some validation
errors appear to be caused by MediaWiki itself, they can also stem from
malformed templates.
* Does storing the validation status of old revisions of images
(and/or articles) make sense?
* Do you think the extension should use the extmetadata property of
ApiQueryImageInfo instead of its own module?
* Is it advisable to store validation data permanently in the database?
Hi all,
My apologies if this is the wrong place to start a discussion on this, but
it's a better place than nowhere. I recently took part in two very
different Wikipedia workshops -- one in Uganda for schoolchildren aged
14-17, and one Bodø, Norway, for GLAM people aged 35-55. One glaringly
obvious barrier of entry that was common for both groups is that the
CAPTCHA we use is too freaking hard.
The main concern is obviously that it is really hard to read, but there are
also some other issues, namely that all the fields in the user registration
form (except for the username) are wiped if you enter the CAPTCHA
incorrectly. So when you make a mistake, not only do you have to re-type a
whole new CAPTCHA (where you may make another mistake), you also have to
re-type the password twice *and* your e-mail address. This takes a long
time, especially if you're not a fast typist (which was the case for the
first group), or if you are on a tablet or phone (which was the case for
some in the second group).
So I would like to start a discussion about changing to a CAPTCHA that is
more user-friendly, and hopefully one that isn't as
English/Latin-alphabet-centric as the one we currently use. If Ugandan
children and older Norwegian people, who all use the Latin alphabet, have
such problems deciphering the CAPTCHA, what about people speaking languages
that don't use the Latin alphabet? I would prefer something simpler, like
some sort of math- or image-based CAPTCHA, instead of the one we use now.
--
mvh
Jon Harald Søby <http://meta.wikimedia.org/wiki/User:Jon_Harald_S%C3%B8by>
Hi Everyone,
just wanted to quickly let you know that MediaWiki will verify that
extensions register all rights they define in $wgAvailableRights (or
using the "UserGetAllRights" hook).
To make sure your extension complies with this, just add all the rights
your extension defines to $wgAvailableRights (which is a simple string[]
of these user rights).
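For example, an extension defining a hypothetical 'myextension-manage' right
could register it in its setup file like this (the right name is made up for
illustration):

  // Declare the right so core (and the new structure test) knows about it.
  $wgAvailableRights[] = 'myextension-manage';

  // Or, equivalently, register it through the UserGetAllRights hook.
  $wgHooks['UserGetAllRights'][] = function ( array &$rights ) {
      $rights[] = 'myextension-manage';
      return true;
  };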
This test will be introduced with https://gerrit.wikimedia.org/r/192087
Cheers,
Marius
When ContentHandler support was added to MediaWiki in 2012, the content
type and content model of a revision are stored with it. However, the DB
tables for WMF wikis did not have the new columns, so
$wgContentHandlerUseDB was set to false on our wikis.
Eventually the database jobs to add and populate the columns completed, and
$wgContentHandlerUseDB has been true on some wikis including mediawiki.org
for months. There are several projects requesting that this be set to
true everywhere; see T51193.
However, changing the content model of an existing page is a disruptive
change. We added the `editcontentmodel` right, without which attempts to
change a page's content model through the API or EditPage.php fail. Currently no
group (user or bot) has this right. So we think it's OK and safe to enable
$wgContentHandlerUseDB on WMF wikis.
https://gerrit.wikimedia.org/r/#/c/170129/ is the patch.
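For clarity, here is a minimal LocalSettings-style sketch of the situation
described above (illustrative only, not the actual configuration change):

  // Store and read each revision's content model/format from the new
  // database columns instead of deriving them from title/namespace defaults.
  $wgContentHandlerUseDB = true;

  // No group is granted 'editcontentmodel', so attempts to change an
  // existing page's content model via the API or EditPage.php still fail.
  # $wgGroupPermissions['sysop']['editcontentmodel'] = true;  // deliberately unset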
There are issues with granting the editcontentmodel right, see T85847.
The Flow discussion and collaboration software has its own content model.
Currently the Flow team changes a talk page to a Flow board by editing a
PHP config variable (!), which doesn't scale. (FYI, plans for enabling Flow
are at [1], and it is happening slowly.) When we do convert a page, we
archive the existing talk page content.
The first change to the status quo is allowing a *new* page to be a Flow
board. In particular, the Co-op project[2] wants to provide a Flow board
for each new editor who signs up to collaborate with a mentor. This
doesn't feel like changing the content model of a page, since there was
nothing present before. So Flow has its own right, 'flow-create-board',
which we grant to the flow-bot group; attempting to add a Flow topic or Flow
board header to a non-existent page fails unless the user has this right.
The Co-op team will ask the Bot Approval Group on enwiki to grant their bot
this right.
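Concretely, that grant would look something like this in site configuration
(a sketch using the names from this thread; the real Flow configuration may
differ):

  // Let the Flow bot group create new Flow boards (e.g. for Co-op mentorship
  // pages); ordinary users attempting the same still fail.
  $wgGroupPermissions['flow-bot']['flow-create-board'] = true;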
Eventually we envision having a Special:Flowify page that will let admins
turn a page into a Flow board. This will run PHP code to archive the
current page, handle redirects, and then create a Flow board revision, etc.
This feels like the 'editcontentmodel' right, but it will probably be a
more restrictive right, 'flow-flowify'.
Daniel Kinzler proposed that we should not grant the editcontentmodel right,
because any change to a content model is a special case that requires smart
handling via dedicated PHP code, which is what Flow is doing for both the
Co-op bot and the future Special:Flowify.
So is there anything to discuss? :)
[1] https://www.mediawiki.org/wiki/Flow/Rollout Relax, it is happening slowly.
[2] https://en.wikipedia.org/wiki/Wikipedia:Co-op
Hi,
I think I proposed this once but I forgot the outcome.
I would like to implement a new feature called "tool edit". It would be
pretty much the same as "bot edit", but with the following differences:
-- Every registered user would be able to flag an edit as a tool edit (a
bot edit needs a special user group)
-- The flag wouldn't be intended for use by robots, but for regular users
who used some automated tool to make the edit
-- Users could optionally mark an edit as a tool edit, through the API only
(see the sketch below)
The rationale is pretty clear: there are a number of tools, like AWB
and many others, that produce huge numbers of edits every day. They
flood the recent changes page
(https://en.wikipedia.org/wiki/Special:RecentChanges), they can't be
filtered out, and most regular users are not interested in them. This
flag would make it possible to filter them out, and it would also make
it easier to figure out how many "real" edits a user has made compared
to automated edits made by tools.
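From a tool author's point of view, the proposal would amount to one extra
parameter on action=edit, analogous to the existing 'bot' flag (the
'tooledit' name below is invented for illustration; no such parameter exists
yet):

  // Hypothetical edit request from an automated tool run by a regular user.
  $params = array(
      'action'   => 'edit',
      'title'    => 'Some page',
      'text'     => $newText,
      'summary'  => 'Typo fixes via SomeTool',
      'token'    => $csrfToken,
      'tooledit' => 1,  // proposed flag; RC filters could then hide the edit
  );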
Is it worth implementing? I think so, but I'm not sure.
Thanks