Hi, I'd like to present a new RFC for your consideration:
https://www.mediawiki.org/wiki/Requests_for_comment/Minifier
It is about how we can shave 10-15% off the size of JavaScript
delivered to users.
Your comments are highly welcome! :)
--
Best regards,
Max Semenik ([[User:MaxSem]])
Thank you for the quick fix!
Best,
--
Sukyoung
On Jan 29, 2014, at 9:55 AM, Nathan wrote:
> FYI in case you aren't subscribed to the list.
>
> ---------- Forwarded message ----------
> From: Yair Rand <yyairrand(a)gmail.com>
> Date: Tue, Jan 28, 2014 at 7:25 PM
> Subject: Re: [Wikitech-l] Bug in the Wikipedia main web page
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
>
>
> Thank you for pointing out this bug. Your suggested change to
> MediaWiki:Gadget-wm-portal.js has been implemented by Meta-Wiki
> administrator User:PiRSquared17.
>
>
> On Tue, Jan 28, 2014 at 6:50 PM, Sukyoung Ryu <sukyoung.ryu(a)gmail.com> wrote:
>
> > Dear all,
> >
> > We are researchers at KAIST in Korea working on finding JavaScript bugs in
> > web pages. While analyzing top websites from Alexa.com, we found an issue,
> > which seems to be a bug, on the Wikipedia main web page (wikipedia.org).
> > We would be grateful if you could either confirm that it is a bug (and,
> > even better, fix it) or let us know what we're missing.
> >
> > Here's the issue. When a user selects a language in which search results
> > are displayed via the language selection button from the Wikipedia main web
> > page, the following JavaScript function is executed:
> >
> > 1  function setLang(lang) {
> > 2      var uiLang = navigator.language || navigator.userLanguage,
> >            date = new Date();
> > 3
> > 4      if (uiLang.match(/^\w+/) === lang) {
> > 5          date.setTime(date.getTime() - 1);
> > 6      } else {
> > 7          date.setFullYear(date.getFullYear() + 1);
> > 8      }
> > 9
> > 10     document.cookie = "searchLang=" + lang + ";expires=" +
> >            date.toUTCString() + ";domain=" + location.host + ";";
> > 11 }
> >
> > Depending on the evaluation result of the conditional expression on line
> > 4, "uiLang.match(/^\w+/) === lang", the function either does or does not
> > leave the selected language information on the user's computer through a
> > cookie. But we found that the expression "uiLang.match(/^\w+/) === lang"
> > always evaluates to false (String.prototype.match returns an array, not
> > a string, so strict equality with a string can never hold), which means
> > that the function always leaves cookies on users' computers. We think
> > that changing the conditional expression from
> > "uiLang.match(/^\w+/) === lang" to "uiLang.match(/^\w+/) == lang" will
> > solve the problem.
> >
> > This problem may occur in the main web pages of all the Wikimedia sites.
> > Could you kindly let us know what you think? Thank you in advance.
> >
> > Best,
> > Changhee Park and Sukyoung Ryu
> >
> >
> > _______________________________________________
> > Wikitech-l mailing list
> > Wikitech-l(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
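The report above can be verified with a short standalone snippet (the variable values are hypothetical; this is not part of the portal code itself):

```javascript
// Standalone check of the reported behaviour with sample values.
var uiLang = "en-US";
var lang = "en";

// String.prototype.match returns an array (or null), never a string,
// so strict equality with a string can never hold:
var strict = uiLang.match(/^\w+/) === lang;   // always false

// Loose equality coerces the one-element array to its string form,
// so ["en"] == "en" holds:
var loose = uiLang.match(/^\w+/) == lang;

// An alternative fix that avoids relying on implicit coercion:
var explicit = (uiLang.match(/^\w+/) || [""])[0] === lang;
```

The `==` fix works, but extracting element `[0]` and comparing strictly makes the intent explicit.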
Are you good at swearing? WE NEED YOU
Huggle 3 comes with vandalism prediction: it pre-caches diffs, including
their contents, even before they are enqueued. Each edit gets a so-called
"score", a numerical value; the higher it is, the more likely the edit is
vandalism.
If you want to help us improve this feature, a "score words" list needs to
be defined for every wiki where Huggle is going to be used, for example on
the English wiki.
Each list has the following syntax:
(see https://en.wikipedia.org/w/index.php?title=Wikipedia:Huggle/Config&diff=573…)
score-words(score):
list of words separated by commas; the list can contain newlines, but
each word must be followed by a comma
For example:
score-words(200):
these, are, some, words, which, presence, of, increases, the, score,
each, word, by, 200,
So, if you know English better than me, which you likely do, go ahead
and improve the configuration file there. No worries: Huggle's config
parser is very tolerant of syntax errors.
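Purely as an illustration of the idea, here is a hypothetical JavaScript sketch of how a score-words block could be parsed and applied to an edit's text. The function names and the exact matching rules are mine, not Huggle's actual implementation:

```javascript
// Hypothetical sketch: parse "score-words(N):" blocks from a config
// string into {score, words} lists.
function parseScoreWords(config) {
  var lists = [];
  var re = /score-words\((\d+)\):\s*([^]*?)(?=score-words\(|$)/g;
  var m;
  while ((m = re.exec(config)) !== null) {
    var words = m[2].split(",")
      .map(function (w) { return w.trim(); })
      .filter(function (w) { return w.length > 0; });
    lists.push({ score: parseInt(m[1], 10), words: words });
  }
  return lists;
}

// Sum the scores of all configured words found in the edit's text:
// the higher the total, the more likely the edit is vandalism.
function scoreEdit(text, lists) {
  var total = 0;
  var lower = text.toLowerCase();
  lists.forEach(function (list) {
    list.words.forEach(function (word) {
      if (lower.indexOf(word.toLowerCase()) !== -1) {
        total += list.score;
      }
    });
  });
  return total;
}
```

For instance, with a 200-point list containing "foo" and a 50-point list containing "baz", an edit containing both would score 250 under this sketch.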
If you have any other suggestions for improving Huggle's prediction,
go ahead and tell us!
Hello everyone,
I am Diwanshi Pandey, an OPW intern. I'd like to have your feedback on the
course I have created on Codecademy for the MediaWiki API, with the help
of my mentor Yuri Astrakhan.
A little insight:
The course is about parsing and querying the MediaWiki API.
Initially we created one course with 44 exercises, but according to
Codecademy's guidelines their courses are aimed at beginners and should
have at most 30 exercises each.
So we split it into two courses:
one is Introduction to Wikipedia API
<http://www.codecademy.com/courses/web-beginner-en-vj9nh/0/1> and the
other is Wikipedia:Query API
<http://www.codecademy.com/courses/web-beginner-en-yd3lp/0/1>.
Also, due to API security and restrictions, we couldn't implement a
tutorial on "editing wiki pages through API calls" from a non-wiki site
yet. We are waiting until we find a good and easy way to demo that.
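As a taste of what the course covers, here is a small standalone sketch that builds an action=query request URL against the standard api.php endpoint (the helper function name is mine, for illustration only):

```javascript
// Build a MediaWiki API request URL from a base endpoint and a map of
// query parameters, percent-encoding keys and values.
function buildQueryUrl(base, params) {
  var pairs = Object.keys(params).map(function (key) {
    return encodeURIComponent(key) + "=" + encodeURIComponent(params[key]);
  });
  return base + "?" + pairs.join("&");
}

// A standard action=query request for page info on the English Wikipedia:
var url = buildQueryUrl("https://en.wikipedia.org/w/api.php", {
  action: "query",
  titles: "Main Page",
  prop: "info",
  format: "json"
});
// → https://en.wikipedia.org/w/api.php?action=query&titles=Main%20Page&prop=info&format=json
```

Fetching that URL returns a JSON document whose `query.pages` object the course exercises then teach you to parse.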
Feedback may include:
* Are the exercises easy to understand for novice users/developers?
* Are changes needed in the look of exercises?
* Are there any exercises that are unnecessary or go into too much depth?
* Any other thing?
Thanks,
--
*Regards,*
*Diwanshi Pandey*
Hi everyone,
A big thank you to everyone who participated in the Architecture Summit
this year! We covered a lot of ground, and collectively learned a lot
about how to put these events together.
Much of the work from this summit is only just beginning, and the first
step is processing the notes. MatmaRex and Legoktm pulled together a
concise list of the etherpads here, which I've annotated with the
corresponding agenda pages:
https://www.mediawiki.org/wiki/Architecture_Summit_2014/Retrospective#Notes…
Our next course of action is to get all of these copied to mediawiki.org to
the agenda pages, so that we have a record of each session that will
outlive Etherpad's temporary storage. I've done a little bit (the
Architecture value, process, and guidelines discussion), but I'd love help
from others on both getting it all copied in place, and wikifying it. Any
takers?
For those of you who couldn't make it, I'd encourage you to read through
the notes. We'll be discussing a lot of this over the coming days, so
it'll be useful context to bring you up to speed on these things.
Rob
I have a log of what happens when the commands:
sudo apt-get install mediawiki2latex
mediawiki2latex -u https://en.wikipedia.org/wiki/Adam_Ries -o AdamRies.pdf
are entered on the command line of Ubuntu (13.10). Better than TV...
Happy to send it to anyone.
Fred
I just stumbled across <https://github.com/wikimedia/mediawiki-core/pull/19>,
a small but useful contribution to core from an HHVM developer. It has gone
unnoticed for two months, which is a bit sad.
Is there a way to accept pull-requests from GitHub? According to <
https://github.com/wikimedia/mediawiki-core/settings/hooks> (may not be
visible to non-Wikimedians, sorry), the WebHook receiver <
http://tools.wmflabs.org/suchaserver/cgi-bin/receiver.py> is defunct.
Anyone know the story there?
It'd be good if some additional people were watching (that is, receiving
notifications for) <https://github.com/wikimedia/mediawiki-core/>.
I haven't responded yet, by the way, so feel free to if you know the
answers to these questions. I don't know what effect accepting the
pull-request will have on the code in master, and telling someone who has
already submitted a patch to go sign up for Gerrit seems impolite.
Ori
Micru's Associated namespaces RfC is up for discussion this week.
https://www.mediawiki.org/wiki/Architecture_meetings/RFC_review_2014-04-23
(We also have room for 1 more RfC to discuss.)
Micru said about
https://www.mediawiki.org/wiki/Requests_for_comment/Associated_namespaces :
> The intended outcome would be:
> 1) find out if there is any objection against the "Namespace registry and
> association handlers" that Mark proposed
> 2) discuss possible problems with this approach
> 3) see if there would be any hands available to work on it; it is a
> delicate topic that might need someone with a deep understanding of MediaWiki
Micru also noted that we've had a previous suggestion for a namespace
manager https://www.mediawiki.org/wiki/Namespace_manager .
> AFAIK, it never materialized because back then there was not such a great
> need as there is now - or at least the current use cases didn't exist back
> then.
> I hope this RFC moves forward because it affects important upcoming and
> already deployed projects (Commons migration, templates, Visual editor, WD,
> etc).
Come to #wikimedia-office at 2100 UTC this Wednesday
http://www.worldtimebuddy.com/?qm=1&lid=2950159,2147714,5391959,100&h=29501…
or reply/comment with your comments/suggestions.
11pm Berlin
5pm New York
2pm California
7am Sydney
--
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation
Hi, in response to bug 54607 [1], we've changed the semantics of the
mobileformat parameter to action=parse.
== Summary ==
Previously, it accepted the strings 'html' or 'wml' (later just 'html')
and modified the structure of the output (see below). This was
problematic because you needed to retrieve the HTML from the output in
different ways depending on whether mobileformat was specified. Now,
mobileformat is a boolean parameter: if a 'mobileformat' parameter is
present in the request, it will be treated as "the output should be
mobile-friendly", regardless of its value, and the output structure will
stay the same. For compatibility with older callers,
mobileformat=(html|wml) will be special-cased to return the older
structure for at least 6 months from now. These changes will start
being rolled out to the WMF sites tomorrow, Tuesday,
October 24th, and this process will be complete by October 31st.
== Examples ==
=== Non-mobile parse ===
api.php?action=parse&format=json
{
    "parse": {
        "title": "...",
        "text": {
            "*": "foo"
        }
    }
}
api.php?action=parse&format=xml
<?xml version="1.0"?>
<api>
<parse title="..." displaytitle="...">
<text xml:space="preserve">foo</text>
</parse>
</api>
=== Parse that outputs mobile HTML, old style ===
api.php?action=parse&format=json&mobileformat=html
{
    "parse": {
        "title": "API",
        "text": "foo"
    }
}
api.php?action=parse&format=xml&mobileformat=html
<?xml version="1.0"?>
<api>
<parse title="..." text="foo" displaytitle="...">
</parse>
</api>
=== Parse that outputs mobile HTML, new style ===
api.php?action=parse&format=...&mobileformat
Same as for non-mobile parses.
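For callers migrating during the compatibility window, a tiny hypothetical helper (the function is mine, not part of any client library) can absorb both response shapes:

```javascript
// Extract the parsed HTML from an action=parse JSON response,
// whether it uses the old mobile shape ("text" is a plain string)
// or the standard shape ("text" is an object keyed by "*").
function getParsedHtml(response) {
  var text = response.parse.text;
  return typeof text === "string" ? text : text["*"];
}
```

With a wrapper like this in place, switching from mobileformat=html to the boolean-style parameter requires no other code changes.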
== FAQ ==
Q: I didn't use mobileformat before, does anything change for me?
A: No.
Q: I use mobileformat=html, will my bot/tool be broken now?
A: No, you will have 6 months to switch to the new style.
Q: I'm only planning to use mobileformat, what should I do?
A: Just use the new style.
Q: How did this format discrepancy appear in the first place?
A: To err is human.
-----
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=54607
--
Best regards,
Max Semenik ([[User:MaxSem]])
Should the Config and GlobalConfig classes and the associated
RequestContext methods be reverted from 1.23 as an incomplete feature?
As far as I can tell, it is not yet used anywhere, so reverting it
should be easy.
getConfig() was added to IContextSource in 101a2a160b05[1]. Then
the method was changed to return a new class of object (Config) instead
of a SiteConfiguration object in fbfe789b987b[2]; however, the Config
class faces significant changes in I5a5857fc[3].
[1]: https://gerrit.wikimedia.org/r/#/c/92004/
[2]: https://gerrit.wikimedia.org/r/#/c/109266/
[3]: https://gerrit.wikimedia.org/r/#/c/109850/
--
Kevin Israel - MediaWiki developer, Wikipedia editor
http://en.wikipedia.org/wiki/User:PleaseStand