Hi everyone!
How do you test Vector-compatible skins? There are a lot of CSS
classes embedded in Vector and in the extensions, so I guess
some test data should exist somewhere.
-----
Yury Katkov, WikiVote
Hi all,
I'd like to announce a recently created tool that might help the Wikimedia
technical community find stuff more easily. Sometimes relevant information
is buried in IRC chat logs, messages in any of several mailing lists, pages
in mediawiki.org, commit messages, etc. This tool (essentially a custom
Google search engine that filters results to a few relevant URL patterns)
aims to alleviate this problem. Test it here: http://hexm.de/mw-search
The motivation for the tool came from a post by Niklas [1], specifically
the section "Coping with the proliferation of tools within your community".
In the comments section, Nemo announced his initiative to create a custom
google search to fit at least some of the requirements presented in that
section, and I've offered to help him tweak it further. The URL list is
still incomplete and can be customized by editing the page
http://www.mediawiki.org/wiki/Wikimedia_technical_search (syncing with the
actual engine still will have to happen by hand, but should be quick).
Besides feedback on whether the engine works as you'd expect, I would like
to start some discussion about the ability for Google's bots to crawl some
of the resources that are currently included in the URL filters, but return
no results. For example, the IRC logs at bots.wmflabs.org/~wm-bot/logs/.
Some workarounds are used (e.g. using github for code search since gitweb
isn't crawlable) but that isn't possible for all resources. What can we do
to improve the situation?
--Waldir
1.
http://laxstrom.name/blag/2013/02/11/fosdem-talk-reflections-23-docs-code-a…
Sorry to reply on a thread that will probably not sort nicely on the
mailman web interface or threading mail clients. Anybody know of an
easy way to reply to digest email in Gmail such that mailman will
retain threading?
I'll be working on the conversion of Wikipedia Zero banners to ESI.
There will be good lessons here I think for looking at dynamic loading
in other parts of the interface.
Arthur, to address your questions in the parent to Mark's reply:
"is there any reason you'd need serve different HTML than what is
already being served by MobileFrontend?"
Currently, some languages are not whitelisted at carriers, meaning
that users may get billed when they hit <language>.m.wikipedia.org or
<language>.zero.wikipedia.org. Thus, a number of <a href>s are
rewritten to include interstitials if the link is going to result in a
charge. By the way, we see some shortcomings in the existing rewrites
that need to be corrected (e.g., some URLs don't have interstitials,
but should), but that's a separate bug.
My thinking is that we start intercepting all clicks via JavaScript if
the browser supports it, or otherwise via a default interceptor at a
URL on the corresponding <language>.(m|zero).wikipedia.org subdomain.
In either case, if the destination link is on a Wikipedia domain that
is not whitelisted for the carrier, or if the link is external to
Wikipedia, the user should land at an interstitial page hosted on the
same whitelisted subdomain they came from.
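A minimal sketch of that decision logic (the whitelist contents, function
names, and interstitial URL shape are my assumptions, not the actual
ZeroRatedMobileAccess code):

```javascript
// Per-carrier whitelist of zero-rated hosts (hypothetical data).
var whitelist = [ 'en.m.wikipedia.org', 'en.zero.wikipedia.org' ];

// Decide whether following targetUrl should first land on an
// interstitial warning page.
function needsInterstitial( targetUrl, whitelist ) {
    var m = /^https?:\/\/([^\/]+)/.exec( targetUrl );
    if ( !m ) {
        return false; // relative link: stays on the same whitelisted host
    }
    // External links and non-whitelisted subdomains may be billed.
    return whitelist.indexOf( m[1] ) === -1;
}

// Rewrite a billable link to point at an interstitial hosted on the same
// whitelisted subdomain the user came from (hypothetical URL shape).
function interstitialUrl( currentHost, targetUrl ) {
    return 'https://' + currentHost + '/wiki/Special:ZeroInterstitial' +
        '?to=' + encodeURIComponent( targetUrl );
}
```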
"Out of curiosity, is there WAP support in Zero? I noticed some
comments like '# WAP' in the varnish acls for Zero, so I presume so.
Is the Zero WAP experience different than the MobileFrontend WAP
experience?"
No special WAP considerations in ZeroRatedMobileAccess above and
beyond MobileFrontend, as I recall. The "# WAP" comments are just
there to remind us in case a support case comes up with that
particular carrier.
We'll want to keep in mind impacts for USSD/SMS support. I think
Jeremy had some good conversations at the Wikimedia Hackathon in
Amsterdam that will help him to refine how his middleware receives and
transforms content.
Mark Bergsma mark at wikimedia.org
Fri May 31 09:44:48 UTC 2013
>> * feature phones -- HTML only, the banner is inserted by the ESI
>> ** for carriers with free images
>> ** for carriers without free images
>>
>
> What about including ESI tags for banners for smart devices as well as
> feature phones, then either use ESI to insert the banner for both device
> types or, alternatively, for smart devices don't let Varnish populate the
> ESI chunk and instead use JS to replace the ESI tags with the banner? That
> way we can still serve the same HTML for smart phones and feature phones
> with images (one less thing for which to vary the cache).
I think the jury is still out on whether it's better to insert banners
with ESI in Varnish or with JS on the client side. I guess we'll have
to test and see.
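For illustration, a banner slot shared by both approaches could look
something like this (the include URL and parameters are made up, not the
actual Wikipedia Zero setup):

```html
<!-- Illustrative only. For feature phones, Varnish expands the ESI tag;
     for smart devices, Varnish could leave the placeholder untouched and
     JS would fill it in client-side instead. -->
<div id="zero-banner">
  <esi:include src="/esi/zero-banner?carrier=example-carrier&images=1" />
</div>
```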
> Are there carrier-specific things that would result in different HTML for
> devices that do not support JS, or can you get away with providing the same
> non-js experience for Zero as MobileFrontend (aside from the
> banner, presumably handled by ESI)? If not currently, do you think its
> feasible to do that (eg make carrier-variable links get handled via special
> pages so we can always rely on the same URIs)? Again, it would be nice if
> we could just rely on the same HTML to further reduce cache variance. It
> would be cool if MobileFrontend and Zero shared buckets and they were
> limited to:
>
> * HTML + images
> * HTML - images
> * WAP
That would be nice.
> Since we improved MobileFrontend to no longer vary the cache on X-Device,
> I've been surprised to not see a significant increase in our cache hit
> ratio (which warrants further investigation but that's another email). Are
> there ways we can do a deeper analysis of the state of the varnish cache to
> determine just how fragmented it is, why, and how much of a problem it
> actually is? I believe I've asked this before and was met with a response
> of 'not really' - but maybe things have changed now, or others on this list
> have different insight. I think we've mostly approached the issue with a
> lot more assumption than informed analysis, and if possible I think it
> would be good to change that.
Yeah, we should look into that. We've already flagged a few possible
culprits, and we're also working on the migration of the desktop wiki
cluster from Squid to Varnish, which has some of the same issues with
variance (sessions, XVO, cookies, Accept-Language...) as
MobileFrontend does. After we've finished migrating that and confirmed
that it's working well, we want to unify those clusters'
configurations a bit more, and that by itself should give us
additional opportunity to compare some strategies there.
We've since also figured out that the way we calculate cache
efficiency with Varnish is not exactly ideal; unlike Squid, Varnish
receives cache purges as HTTP requests. Those purge lookups are
therefore counted into the cache hit rate, which isn't very helpful.
To make things worse, the few hundred purges a second matter a lot
more relative to actual client traffic on the mobile cluster (with
much less traffic but a big content set) than they do on our other
clusters. So until we can factor that out of the Varnish counters
(which might be possible in Varnish 4.0), we'll have to look at
other metrics.
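To make the distortion concrete, a back-of-the-envelope sketch (all
traffic numbers are invented, and I'm assuming each purge lookup
registers as a hit, which the thread doesn't spell out):

```javascript
// Measured rate when purge lookups are (wrongly) counted as cache hits.
function measuredHitRate( clientHits, clientMisses, purgesPerSec ) {
    return ( clientHits + purgesPerSec ) /
        ( clientHits + clientMisses + purgesPerSec );
}

// The rate we actually care about: client traffic only.
function clientHitRate( clientHits, clientMisses ) {
    return clientHits / ( clientHits + clientMisses );
}

// Hypothetical mobile cluster: 1000 client req/s at a real 60% hit rate,
// plus 300 purges/s. The purges inflate the measured rate noticeably;
// on a cluster doing 50000 client req/s the same 300 purges/s would
// barely move the needle.
var measured = measuredHitRate( 600, 400, 300 );
var real = clientHitRate( 600, 400 );
```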
More useful therefore is to check the actual backend fetches
("backend_req"), and these appear to have gone down some. Annoyingly,
every time we restart a Varnish instance we get a spike in the Ganglia
graphs, making the long-term graphs pretty much unusable. To fix that
we'll either need to patch Ganglia itself or move to some other stats
engine (statsd?). So we have a bit of work to do there on the Ops
front.
Note that we're about to replace all Varnish caches in eqiad with
(fewer) newer, much bigger boxes, and we've decided to upgrade the 4
mobile boxes to those same specs. We're also doing that in our new
west coast caching data center as well as in esams. This will
increase the mobile cache size a lot, and will hopefully help by
throwing resources at the problem.
--
Mark Bergsma <mark at wikimedia.org>
Lead Operations Architect
Wikimedia Foundation
I was going through our code contemplating dropping XHTML 1.1 support and
ran into the RDFa support stuff and realized how out of date and limited
it is.
I've put together an RFC for replacing our code that appears to be based
on the RDFa 1.0 from 2008 with RDFa 1.1 and expanding support for RDFa.
https://www.mediawiki.org/wiki/Requests_for_comment/Update_our_code_to_use_…
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]
When looking for resources to answer Tim's question at
<https://www.mediawiki.org/wiki/Architecture_guidelines#Clear_separation_of_…>,
I found a very nice and concise overview of principles to follow for writing
testable (and extendable, and maintainable) code:
"Writing Testable Code" by Miško Hevery
<http://googletesting.blogspot.de/2008/08/by-miko-hevery-so-you-decided-to.h…>.
It's just 10 short and easy points, not some rambling discussion of code philosophy.
As far as I am concerned, these points can be our architecture guidelines.
Beyond that, all we need is some best practices for dealing with legacy code.
MediaWiki violates at least half of these principles in pretty much every class.
I'm not saying we should rewrite MediaWiki to conform. But I wish it were
recommended for all new code to follow these principles, and that (local) "just
in time" refactoring of old code in accordance with these guidelines were encouraged.
-- daniel
Marc-Andre Pelletier discovered a vulnerability in the MediaWiki OpenID
extension for the case that MediaWiki is used as a “provider” and the wiki
allows renaming of users.
All previous versions of the OpenID extension used user-page URLs as
identity URLs. On wikis that use the OpenID extension as “provider” and
allow user renames, an attacker with rename privileges could rename a user
and then create an account with the same name as the victim. This
would have allowed the attacker to steal the victim’s OpenID identity.
Version 3.00 fixes the vulnerability by using Special:OpenIDIdentifier/<id>
as the user’s identity URL, <id> being the immutable MediaWiki-internal
userid of the user. The user’s old identity URL, based on the user’s
user-page URL, will no longer be valid.
The user’s user page can still be used as OpenID identity URL, but will
delegate to the special page.
This is a breaking change, as it changes all user identity URLs. Providers
are urged to upgrade and notify users, or to disable user renaming.
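A sketch of why the new scheme is rename-safe (the URL shapes follow the
announcement; the helper names are mine):

```javascript
// Old scheme: identity tied to the (renamable) user name, so after a
// rename an attacker could register the freed name and claim the URL.
function oldIdentityUrl( wikiBase, userName ) {
    return wikiBase + '/wiki/User:' + userName;
}

// New scheme (OpenID 3.00): identity tied to the immutable internal
// user id, which a rename does not change.
function newIdentityUrl( wikiBase, userId ) {
    return wikiBase + '/wiki/Special:OpenIDIdentifier/' + userId;
}
```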
Respectfully,
Ryan Lane
https://gerrit.wikimedia.org/r/#/c/52722
Commit: f4abe8649c6c37074b5091748d9e2d6e9ed452f2
How to load up high-resolution imagery on high-density displays has been an
open question for a while; we've wanted this for the mobile web site since
the Nexus One and Droid brought 1.5x, and the iPhone 4 brought 2.0x density
displays to the mobile world a couple years back.
More recently, tablets and a few laptops are bringing 1.5x and 2.0x density
displays too, such as the new Retina iPad and MacBook Pro.
A properly responsive site should be able to detect when it's running on
such a display and load higher-density image assets automatically...
Here's my first stab:
https://bugzilla.wikimedia.org/show_bug.cgi?id=36198#c6
https://gerrit.wikimedia.org/r/#/c/24115/
* adds $wgResponsiveImages setting, defaulting to true, to enable the
feature
* adds jquery.hidpi plugin to check window.devicePixelRatio and replace
images with data-src-1-5 or data-src-2-0 depending on the ratio
* adds mediawiki.hidpi RL script to trigger hidpi loads after main images
load
* renders images from wiki image & thumb links at 1.5x and 2.0x and
includes data-src-1-5 and data-src-2-0 attributes with the targets
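The replacement decision presumably boils down to something like this
simplified stand-in (the attribute names come from the list above; the
function itself is not the actual jquery.hidpi plugin code):

```javascript
// Pick the replacement image source for a given devicePixelRatio.
// Returns null when the standard-resolution src should be kept.
function pickHiDpiSource( devicePixelRatio, src15, src20 ) {
    if ( devicePixelRatio >= 2 && src20 ) {
        return src20;   // use the 2.0x rendering
    }
    if ( devicePixelRatio > 1 && src15 ) {
        return src15;   // use the 1.5x rendering
    }
    return null;        // standard display: no swap needed
}
```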
Note that this is a work in progress. There will be places where this
doesn't yet work because they output their imgs differently. Also, if
you move a window from a low-DPI to a high-DPI screen on a MacBook Pro
with Retina display, you won't see the high-resolution images load
until you reload.
Confirmed basic images and thumbs in wikitext appear to work in Safari 6 on
MacBook Pro Retina display. (Should work in Chrome as well).
The same code loaded in MobileFrontend should also work, but I have not
yet tried that.
Note this does *not* attempt to use native SVGs, which is another potential
tactic for improving display on high-density displays and zoomed windows.
This loads higher-resolution raster images, including rasterized SVGs.
There may be loads of bugs; this is midnight hacking code and I make no
guarantees of suitability for any purpose. ;)
-- brion
Hi all,
After a talk with Brad Jorsch during the Hackathon (thanks again Brad for
your patience), it became clear to me that Lua modules can be localized
either by using system messages or by getting the project language code
(mw.getContentLanguage().getCode()) and then switching the message. This
second option is less integrated with the translation system, but can serve
as an intermediate step to get things running.
For Wikisource it would be nice to have a central repository (sitting on
wikisource.org) of localized Lua modules and associated templates. The
documentation could be translated using Extension:Translate. These modules,
templates and associated documentation would be then synchronized with all
the language wikisources that subscribe to an opt-in list. Users would
then be advised to modify the central module, so that all language versions
would benefit from the improvements. This could be the first experiment of having a
centralized repository of modules.
What do you think of this? Would anyone be available to mentor an Outreach
Program for Women project?
Thanks,
David Cuenca --Micru
For years, I have wept and wailed about people adding complicated maps
and diagrams as 220px thumbnail images to Wikipedia articles. These sorts
of images are virtually useless within an article unless they are
displayed at relatively large sizes. Unfortunately, including them at
large sizes creates a whole new set of problems. Namely, large images
mess up the formatting of the page and cause headers, edit links, and
other images to get jumbled around into strange places (or even
overlapping each other on occasion), especially for people on tablets or
other small screens. The problem is even worse for videos. Who wants to
watch a hi-res video in a tiny 220px inline viewer? If there are
subtitles, you can't even read them. But should we instead include them
as giant 1280px players within the article? That seems like it would be
obnoxious.
What if instead we could mark such complicated images and high-res
videos to be shown in modal viewers when the user clicks on them? For
example: [[File:Highres-video1.webm|thumb|right|modal|A high res
video]]. When you clicked on the thumbnail, instead of going to Commons,
a modal viewer would overlay across the screen and let you view the
video/image at high resolution (complete with a link to Commons and the
attribution information). Believe it or not, this capability already
exists for videos on Wikipedia, but it's basically a hidden feature of
TimedMediaHandler. If you include a video in a page and set the size as
200px or less, it activates the modal behavior. Unfortunately, the
default size for videos is 220px (as of 2010) so you will almost never
see this behavior on a real article. If you want to see it, go to
https://en.wikipedia.org/wiki/American_Sign_Language#Variation and click
on one of the videos. Compare that with the video viewing experience at
https://en.wikipedia.org/wiki/Congenital_insensitivity_to_pain. It's a
world of difference. Now imagine that same modal behavior at
https://en.wikipedia.org/wiki/Cathedral_Peak_Granodiorite#Geological_overvi…
and https://en.wikipedia.org/wiki/Battle_of_Jutland.
Such an idea would be relatively trivial to implement. The steps would be:
1. Add support for a 'modal' param to the [[File:]] handler
(https://gerrit.wikimedia.org/r/#/c/66062/)
2. Add support for the 'modal' param to TimedMediaHandler
(https://gerrit.wikimedia.org/r/#/c/66063/)
3. Add support for the 'modal' param to images via some core JS module
(not done yet)
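Step 3 might boil down to a check like this (the 'modal' param and the
200px TimedMediaHandler threshold come from the proposal above; the
helper itself is hypothetical):

```javascript
// Decide whether clicking a thumbnail should open a modal viewer
// instead of navigating to the file description page.
function shouldOpenModal( params ) {
    // Explicit request via the proposed [[File:...|modal|...]] parameter.
    if ( params.modal ) {
        return true;
    }
    // Existing TimedMediaHandler behavior: small inline video players
    // (200px or less) already pop out into a modal viewer.
    if ( params.isVideo && params.width <= 200 ) {
        return true;
    }
    return false;
}
```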
As you can see, I've already gotten started on adding this feature for
videos via TimedMediaHandler, but I haven't done anything for images
yet. I would like to hear people's thoughts on this potential feature
and how it could be best implemented for images before doing anything
else with it. What are your thoughts, concerns, ideas?
Ryan Kaldari