Event Details
# Date: 2013-06-12
# Time: 1700-1800 UTC, 1000-1100 PDT
# IRC channel: #wikimedia-office on irc.freenode.net
The Wikimedia Language Engineering team [1] invites everyone to join the
team’s monthly office hour on June 12, 2013 (Wednesday) at 1700 UTC/1000
PDT in #wikimedia-office. During this session we will be talking about
some of our recent activities and updates from ongoing projects. The
provisional agenda is outlined below.
See you all at the IRC office hour!
Siebrand Mazeland
Product Manager, Language Engineering
Wikimedia Foundation
Agenda
# Introductions
# Universal Language Selector - Phase 1 deployment was on Tuesday
2013-06-11 [2,3,4]
# Universal Language Selector - Phase 2 and later
# Q/A - We shall be taking questions during the session. Questions can also
be sent to "siebrand at wikimedia dot org" before the event, and will be
addressed during the office hour.
[1] http://wikimediafoundation.org/wiki/Language_Engineering_team
[2] https://www.mediawiki.org/wiki/Universal_Language_Selector
[3] https://www.mediawiki.org/wiki/Universal_Language_Selector/FAQ
[4] https://www.mediawiki.org/wiki/Universal_Language_Selector/Design
Sorry to reply in a thread that will probably not sort nicely on the
mailman web interface or in threading mail clients. Does anybody know of
an easy way to reply to a digest email in Gmail such that mailman will
retain threading?
I'll be working on the conversion of Wikipedia Zero banners to ESI.
There will be good lessons here I think for looking at dynamic loading
in other parts of the interface.
Arthur, to address your questions in the parent to Mark's reply:
"is there any reason you'd need serve different HTML than what is
already being served by MobileFrontend?"
Currently, some languages are not whitelisted at carriers, meaning
that users may get billed when they hit <language>.m.wikipedia.org or
<language>.zero.wikipedia.org. Thus, a number of <a href>s are
rewritten to include interstitials if the link is going to result in a
charge. By the way, we see some shortcomings in the existing rewrites
that need to be corrected (e.g., some URLs don't have interstitials,
but should), but that's a separate bug.
My thinking is that we start intercepting all clicks with JavaScript
when the browser supports it, or otherwise via a default interceptor URL
on the corresponding <language>.(m|zero).wikipedia.org subdomain. In
either case, if the destination link is on a Wikipedia domain that is not
whitelisted for the carrier, or if the link is external to Wikipedia, the
user should land at an interstitial page hosted on the same whitelisted
subdomain they came from.
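As a rough sketch of the server-side decision (names and the interstitial
URL below are illustrative, not the actual ZeroRatedMobileAccess code):

<?php
// Illustrative only: decide whether a link must go through an
// interstitial, given the hostnames the carrier zero-rates.
function zeroNeedsInterstitial( $href, array $freeHosts ) {
	$host = parse_url( $href, PHP_URL_HOST );
	if ( $host === null || $host === false ) {
		// Relative link: stays on the current (free) subdomain.
		return false;
	}
	// Off-Wikipedia links and Wikipedia subdomains the carrier has
	// not zero-rated both get the interstitial.
	return !in_array( $host, $freeHosts, true );
}

function zeroRewriteHref( $href, array $freeHosts ) {
	if ( !zeroNeedsInterstitial( $href, $freeHosts ) ) {
		return $href;
	}
	// Keep the interstitial on the same free subdomain the user is
	// on, so viewing the warning itself costs nothing.
	return '/wiki/Special:ZeroRatedMobileAccess?from=' . rawurlencode( $href );
}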
"Out of curiosity, is there WAP support in Zero? I noticed some
comments like '# WAP' in the varnish acls for Zero, so I presume so.
Is the Zero WAP experience different than the MobileFrontend WAP
experience?"
No special WAP considerations in ZeroRatedMobileAccess above and beyond
MobileFrontend, as I recall. The "# WAP" comments are just there to
remind us in case a support case comes up with that particular carrier.
We'll want to keep in mind impacts for USSD/SMS support. I think
Jeremy had some good conversations at the Wikimedia Hackathon in
Amsterdam that will help him to refine how his middleware receives and
transforms content.
Mark Bergsma mark at wikimedia.org
Fri May 31 09:44:48 UTC 2013
________________________________
>> * feature phones -- HTML only, the banner is inserted by the ESI
>> ** for carriers with free images
>> ** for carriers without free images
>>
>
> What about including ESI tags for banners for smart devices as well as
> feature phones, then either use ESI to insert the banner for both device
> types or, alternatively, for smart devices don't let Varnish populate the
> ESI chunk and instead use JS to replace the ESI tags with the banner? That
> way we can still serve the same HTML for smart phones and feature phones
> with images (one less thing for which to vary the cache).
I think the jury is still out on whether it's better to have Varnish
assemble the banners via ESI or to do it client-side with JS. I guess
we'll have to test and see.
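For reference, what either variant amounts to in the emitted HTML is
roughly this (the banner endpoint is made up, not the actual one):

<?php
// Illustrative only: emit an ESI include for the banner instead of the
// banner HTML itself. With ESI enabled, Varnish expands the tag per
// carrier; alternatively a small JS shim could find the placeholder in
// the page and fill it in client-side.
$carrier = 'example-carrier'; // would come from the Zero carrier config
echo '<esi:include src="/zero/banner/' . rawurlencode( $carrier )
	. '" onerror="continue"/>';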
> Are there carrier-specific things that would result in different HTML for
> devices that do not support JS, or can you get away with providing the same
> non-js experience for Zero as MobileFrontend (aside from the
> banner, presumably handled by ESI)? If not currently, do you think it's
> feasible to do that (e.g. make carrier-variable links get handled via special
> pages so we can always rely on the same URIs)? Again, it would be nice if
> we could just rely on the same HTML to further reduce cache variance. It
> would be cool if MobileFrontend and Zero shared buckets and they were
> limited to:
>
> * HTML + images
> * HTML - images
> * WAP
That would be nice.
> Since we improved MobileFrontend to no longer vary the cache on X-Device,
> I've been surprised to not see a significant increase in our cache hit
> ratio (which warrants further investigation but that's another email). Are
> there ways we can do a deeper analysis of the state of the varnish cache to
> determine just how fragmented it is, why, and how much of a problem it
> actually is? I believe I've asked this before and was met with a response
> of 'not really' - but maybe things have changed now, or others on this list
> have different insight. I think we've mostly approached the issue with a
> lot more assumption than informed analysis, and if possible I think it
> would be good to change that.
Yeah, we should look into that. We've already flagged a few possible
culprits, and we're also working on the migration of the desktop wiki
cluster from Squid to Varnish, which has some of the same issues with
variance (sessions, XVO, cookies, Accept-Language...) as
MobileFrontend does. After we've finished migrating that and confirmed
that it's working well, we want to unify those clusters'
configurations a bit more, and that by itself should give us
additional opportunity to compare some strategies there.
We've since also figured out that the way we calculate cache efficiency
with Varnish is not exactly ideal; unlike Squid, cache purges are done as
HTTP requests to Varnish. Those purge lookups therefore get counted
towards the cache hit rate, which isn't very helpful. To make things
worse, the few hundred purges a second relative to actual client traffic
matter a lot more on the mobile cluster (with much less traffic but a big
content set) than they do for our other clusters. So until we can factor
that out in the Varnish counters (might be possible in Varnish 4.0),
we'll have to look at other metrics.
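To make the distortion concrete (the numbers below are made up, not
measurements):

<?php
// Made-up numbers, purely to show how purge lookups skew the ratio.
$clientReq  = 1500; // client requests per second
$clientHits = 1200; // of those, served from cache
$purges     = 300;  // purge requests per second, counted as lookups too

// Hit rate over client traffic only:
$clientHitRate = $clientHits / $clientReq;                 // 0.80

// Hit rate as reported when purges land in the same counters (assuming
// most purged objects are not in cache, so they register as misses):
$reportedHitRate = $clientHits / ( $clientReq + $purges ); // ~0.67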
More useful therefore is to check the actual backend fetches
("backend_req"), and these appear to have gone down some. Annoyingly,
every time we restart a Varnish instance we get a spike in the Ganglia
graphs, making the long-term graphs pretty much unusable. To fix that
we'll either need to patch Ganglia itself or move to some other stats
engine (statsd?). So we have a bit of work to do there on the Ops
front.
Note that we're about to replace all Varnish caches in eqiad with
(fewer) newer, much bigger boxes, and we've decided to also upgrade the 4
mobile boxes to those same specs. We're doing the same in our new west
coast caching data center as well as in esams. This will increase the
mobile cache size a lot, and will hopefully help by throwing resources at
the problem.
--
Mark Bergsma <mark at wikimedia.org>
Lead Operations Architect
Wikimedia Foundation
"CloudOpen Europe is a conference celebrating and exploring the open
source projects, technologies and companies who make up the cloud."
It's in Edinburgh, UK, October 21-23, 2013.
They want 50-minute presentations, Birds of a Feather sessions, and
2-hour tutorials about Puppet, OpenStack, Hadoop, Chef, Gluster,
filesystems, etc. The CfP closes July 21st; you'll get an
acceptance/rejection notification by August 1.
https://events.linuxfoundation.org/events/cloudopen-europe/program/cfp
Travel costs for this conference are eligible for subsidy -- see
Participation Support
(https://meta.wikimedia.org/wiki/Participation:Support) to put in your
request.
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
So when did we get this new repo browser? It looks pretty nice, but I
should note that all on-wiki gitweb links are now broken.
--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
www.whizkidztech.com | tylerromeo(a)gmail.com
Two quick questions:
1) Was that bug with E:CentralAuth that prevented us from turning on
$wgSecureLogin ever fixed? If so, can we attempt another deployment?
2) I noticed something in the signpost:
> Walsh told the *Signpost* that while moving to an https default is a goal
> the WMF is actively working on, doing so is not "trivial"—it is a delicate
> process that the WMF plans to enable in graduated steps, from logged-in
> users to testing on smaller wikis before making it the default for
> anonymous users and readers on all projects.
Is this at all true? From what I've been told on bug reports, it seems
like turning on HTTPS would indeed be a trivial step, and the operations
team has confirmed we can do it at will. I also question the definition
of "actively working on". ;)
--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
www.whizkidztech.com | tylerromeo(a)gmail.com
Hi -
In the extension development I'm doing, I need a custom file-upload
interface. I'm building it around the existing Special:Upload, and as a
side benefit I've been building a rewritten version of
Extension:MultiUpload.
In order to do this in what seems to me a reasonable, future-compatible
way - particularly by calling Special:Upload's methods rather than
duplicating their code in extension classes - I've needed to split out
some of Special:Upload's code into separate functions that can be
overridden in subclasses. Those changes are in gerrit and bugzilla now:
https://gerrit.wikimedia.org/r/#/c/67173/
https://bugzilla.wikimedia.org/show_bug.cgi?id=48581
I'm posting here in case people want to discuss those changes. Ideally
I'd like to backport that to 1.19 so I can support LTS users.
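Roughly, the kind of reuse the split enables looks like this; the method
shown is just an example of the pattern, not necessarily one of the
functions touched in the change:

<?php
// Hypothetical sketch: an extension special page reusing
// Special:Upload's logic and overriding one factored-out step, while
// permission checks, token handling and the upload itself stay in the
// parent class.
class SpecialMultiUpload extends SpecialUpload {
	protected function getUploadForm( $message = '', $sessionKey = '',
		$hideIgnoreWarning = false
	) {
		$form = parent::getUploadForm( $message, $sessionKey, $hideIgnoreWarning );
		// ... add additional file rows / fields here ...
		return $form;
	}
}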
Also, I'd like to submit my MultiUpload code for review, but I'm not
sure how to do that, because it looks like Extension:MultiUpload hasn't
been brought over from svn to gerrit. I'd either submit it as a commit
that replaces most of the extension's code, or propose it as a separate
extension. Please advise me...
Thanks!
Lee Worden
There isn't much documentation for creating a MediaWiki release, apart
from the scripts that create one.
As a result, there were some failures that could have been avoided --
branching the extensions at the same time core was branched, for example.
To improve the process for any future releases, we need to document the
process. To that end, I've started
https://www.mediawiki.org/wiki/Tarball_release_process to begin to
collect ideas and, ultimately, provide a recipe for making a release
that anyone can follow.
Please help me improve future releases by adding to the page.
--
http://hexmode.com/
Love alone reveals the true shape of the universe.
-- "Everywhere Present", Stephen Freeman
Hello everyone,
I want to modify the default Vector skin that comes with 1.21.1, but before I do that I want to rename it to "nighttime". I created a folder called nighttime and copied all the Vector files into it. Then I made a copy of Vector.php, called it Nighttime.php, and modified the appropriate contents as follows...
---------------
class SkinNighttime extends SkinTemplate {
protected static $bodyClasses = array( 'vector-animateLayout' );
var $skinname = 'nighttime', $stylename = 'nighttime',
$template = 'NighttimeTemplate', $useHeadElement = true;
...
function setupSkinUserCss( OutputPage $out ) {
parent::setupSkinUserCss( $out );
$out->addModuleStyles( 'skins.nighttime' );
...
class NighttimeTemplate extends BaseTemplate {
-----------------
You can see what the site looks like after I renamed everything at http://beta.dropshots.com/j00100/media/75373447. It appears as if there is no formatting applied.
I did some searching on Google, but everything I found dealt with older versions. Does anyone know how to rename Vector and have it work on 1.21?
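For reference, my best guess so far (untested) is that the
'skins.nighttime' ResourceLoader module simply doesn't exist yet: core
only registers 'skins.vector', so the renamed module has to be registered
somewhere that is loaded early enough, for example at the bottom of
LocalSettings.php:

// Guess at the missing piece: register the module that
// setupSkinUserCss() asks for. The file names mirror (from memory) what
// core registers for skins.vector, and assume the Vector CSS was copied
// into skins/nighttime/.
$wgResourceModules['skins.nighttime'] = array(
	'styles' => array(
		'common/commonElements.css' => array( 'media' => 'screen' ),
		'common/commonContent.css' => array( 'media' => 'screen' ),
		'common/commonInterface.css' => array( 'media' => 'screen' ),
		'nighttime/screen.css' => array( 'media' => 'screen' ),
		'nighttime/screen-hd.css' => array(
			'media' => 'screen and (min-width: 982px)'
		),
	),
	'remoteBasePath' => "$wgScriptPath/skins",
	'localBasePath' => "$IP/skins",
);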
Thanks
Not sure who to add, but if somebody familiar with the TablePager class
could review this patchset I'd appreciate it:
https://gerrit.wikimedia.org/r/67627
First time I've used the class. It works, but I want to make sure I'm using
it properly.
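For context, the general pattern I followed is roughly this (table and
field names here are invented for illustration, not the ones in the
patchset):

<?php
// Minimal TablePager subclass: define the query, the columns, and how
// each cell is rendered.
class ExampleTablePager extends TablePager {
	public function getQueryInfo() {
		return array(
			'tables' => array( 'example_table' ),
			'fields' => array( 'ex_name', 'ex_timestamp' ),
			'conds' => array(),
		);
	}

	public function getFieldNames() {
		return array(
			'ex_name' => $this->msg( 'example-name' )->text(),
			'ex_timestamp' => $this->msg( 'example-date' )->text(),
		);
	}

	public function formatValue( $name, $value ) {
		if ( $name === 'ex_timestamp' ) {
			return $this->getLanguage()->userTimeAndDate( $value, $this->getUser() );
		}
		return htmlspecialchars( $value );
	}

	public function getDefaultSort() {
		return 'ex_timestamp';
	}

	public function isFieldSortable( $name ) {
		return $name === 'ex_timestamp';
	}
}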
--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
www.whizkidztech.com | tylerromeo(a)gmail.com