For some work I'm doing on a project, I'm using the Pager class.
This is really good (and I've learned a lot along the way), but I would
like to use a slider (like http://jqueryui.com/slider/) to allow the
user to jump to different places in the paged results.
I think that to do this right I need to find the total number of results
and then tell Pager to serve the results that are associated with that
selected spot on the slider.
For example, when the user selects a point 25% along the slider, I'll be
able to find out there are 1024 results and jump to the set containing
the 256th result.
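The arithmetic for mapping a slider position to a result set is simple
enough to sketch up front. This is a minimal sketch, not part of any
existing Pager API; the function name and the page-size parameter are my
own illustrative choices:

```javascript
// Map a slider position (0-100 percent) to a result offset and page number.
// sliderToOffset and pageSize are illustrative names, not MediaWiki APIs.
function sliderToOffset(percent, totalResults, pageSize) {
  // Clamp the slider value into a valid percentage range.
  const p = Math.min(100, Math.max(0, percent));
  // 0-based index of the result the slider points at.
  const offset = Math.floor((p / 100) * totalResults);
  // Page (result set) containing that offset, to hand to the pager.
  const page = Math.floor(offset / pageSize);
  return { offset: offset, page: page };
}
```

With 1024 results and a page size of 32, a slider position of 25% lands on
offset 256, i.e. the ninth page of results.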
For other pager tasks, I've been using the IndexPager, but this pager is
obviously the wrong one for this job.
Is there a better one? Is there a pager-with-slider implementation out
there already that I'm just not aware of?
--
http://hexmode.com/
Love alone reveals the true shape of the universe.
-- "Everywhere Present", Stephen Freeman
FYI, for the sake of transparency: WMF engineering is kicking off its
department-level goal-setting process, and now is a good time to follow
along. Some commitments have already been made for the purpose of the
Annual Plan 2013-14 through team-level deliberation and are reflected
as such, and some commitments are pre-existing (e.g. we'll need to
continue to support Wikidata development on the WMF side; we're
planning to ramp down the Tampa data-center), but a lot of details are
open to being negotiated now, so feel free to add your thoughts on the
talk page as well.
And in general, as noted below, we're aiming for a more flexible
planning process that reflects a general desire to be able to
continuously adapt our objectives to changing circumstances and
opportunities. So, very little is set in stone and not open to being
revisited through the course of the year.
Cheers,
Erik
---------- Forwarded message ----------
From: Erik Moeller <erik(a)wikimedia.org>
Date: Mon, Jun 3, 2013 at 3:56 PM
Subject: Engineering/Product Goals for 2013-14
To: WMF Engineering/Product
Dear all,
as those of you who’ve worked on individual goals have seen, we’re
only looking for focus areas and individual professional development
goals through that part of the process. We’re aiming to separately
develop a single goals document for all of engineering/product which
breaks down planned team activity. It will live here:
https://www.mediawiki.org/wiki/Wikimedia_Engineering/2013-14_Goals
Important difference from last year: The template is much simpler. The
idea is to provide a quarterly breakdown of planned activities plus a
summary of the team and any dependencies on the rest of the
organization.
The reason for the simplicity is that we want to more explicitly
iterate on this document through the year, so keeping it lightweight
keeps the cost of change low. Your team can do this through quarterly
review/planning cycles if you’re already following that model.
The only area where iteration is harder is the set of specific commitments
that we put in the Annual Plan. These apply to Mobile, Editor
Engagement (E2 & E3) and Visual Editor, and the respective teams are
already aware of the Annual Plan commitments, which are reasonably
high level. If we do end up needing to change any of them, we’ll need
to notify the Board of such changes.
This also means that at this point you’re only making your best guess
as to what you’re going to work on through the year, and you shouldn’t
panic about the level of precision. Obviously some facts are known
well ahead of time (e.g. ramping down the Tampa DC) while others are
much harder to pin down (e.g. what exciting mobile feature will we be
working on in April 2014).
It’s the responsibility of product managers (where applicable),
technical leads and engineering managers to organize the development
of this document through June/July. I’d like to have a complete first
version no later than end of July. This should give us plenty of time.
I hope the lightweight approach and the built-in assumption of
continuous iteration will make this feel minimally burdensome and more
like part of your normal day-to-day work.
Let me know if you have any questions :)
Erik
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Marc-Andre Pelletier discovered a vulnerability in the MediaWiki OpenID
extension for the case that MediaWiki is used as a “provider” and the wiki
allows renaming of users.
All previous versions of the OpenID extension used user-page URLs as
identity URLs. On wikis that use the OpenID extension as “provider” and
allow user renames, an attacker with rename privileges could rename a user
and could then create an account with the same name as the victim. This
would have allowed the attacker to steal the victim’s OpenID identity.
Version 3.00 fixes the vulnerability by using Special:OpenIDIdentifier/<id>
as the user’s identity URL, <id> being the immutable MediaWiki-internal
userid of the user. The user’s old identity URL, based on the user’s
user-page URL, will no longer be valid.
The user’s user page can still be used as OpenID identity URL, but will
delegate to the special page.
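The essence of the fix is that the identity URL is now derived from a value
that a rename cannot change. A minimal sketch of that derivation, where the
helper name and base URL are my own illustrative assumptions (the actual
extension is PHP):

```javascript
// Derive the OpenID identity URL from the immutable internal user ID
// rather than the renamable user-page title. buildIdentityUrl and the
// baseUrl argument are illustrative; only the Special:OpenIDIdentifier/<id>
// URL shape comes from the advisory above.
function buildIdentityUrl(baseUrl, userId) {
  // The internal user ID survives renames, so an attacker who renames a
  // victim and re-registers the old name gets a different identity URL.
  return baseUrl + '/Special:OpenIDIdentifier/' + userId;
}
```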
This is a breaking change, as it changes all user identity URLs. Providers
are urged to upgrade and notify users, or to disable user renaming.
Respectfully,
Ryan Lane
https://gerrit.wikimedia.org/r/#/c/52722
Commit: f4abe8649c6c37074b5091748d9e2d6e9ed452f2
Hi everyone,
Many of you already know Brian Wolff, who has been a steady
contributor to MediaWiki over the past several years (User:Bawolff),
having gotten a start during Google Summer of Code 2010[1].
Brian is back for another summer working with us, working generally to
improve our multimedia contribution and review pipeline. In addition
to his normal GMail address, he's also available at
bawolff(a)wikimedia.org, and is on Freenode as bawolff.
Welcome Brian! (again! \o/)
Rob
[1] Signpost article on Brian's contribution for GSoC 2010
http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2010-08-23/Techno…
How to load up high-resolution imagery on high-density displays has been an
open question for a while; we've wanted this for the mobile web site since
the Nexus One and Droid brought 1.5x-density displays, and the iPhone 4
brought 2.0x-density displays, to the mobile world a couple of years back.
More recently, tablets and a few laptops are bringing 1.5x and 2.0x density
displays too, such as the new Retina iPad and MacBook Pro.
A properly responsive site should be able to detect when it's running on
such a display and load higher-density image assets automatically...
Here's my first stab:
https://bugzilla.wikimedia.org/show_bug.cgi?id=36198#c6
https://gerrit.wikimedia.org/r/#/c/24115/
* adds $wgResponsiveImages setting, defaulting to true, to enable the
feature
* adds jquery.hidpi plugin to check window.devicePixelRatio and replace
images with data-src-1-5 or data-src-2-0 depending on the ratio
* adds mediawiki.hidpi RL script to trigger hidpi loads after main images
load
* renders images from wiki image & thumb links at 1.5x and 2.0x and
includes data-src-1-5 and data-src-2-0 attributes with the targets
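The source-selection step above can be sketched as a pure function. The
data-src-1-5 / data-src-2-0 attribute names come from the patch; the exact
ratio thresholds here are my own guess at the plugin's logic, not a reading
of the actual jquery.hidpi code:

```javascript
// Choose the best image source for the current display density.
// Thresholds are assumptions; attribute naming follows the patch.
function pickSource(src, src15, src20, devicePixelRatio) {
  if (devicePixelRatio > 1.5 && src20) {
    return src20; // 2.0x asset for Retina-class displays
  }
  if (devicePixelRatio > 1.0 && src15) {
    return src15; // 1.5x asset for mid-density displays
  }
  return src; // standard-density fallback
}
```

In the real plugin this would run after the main images load, swapping each
img's src with the winner.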
Note that this is a work in progress. There will be places where this
doesn't yet work because they output their images differently. Also, if you
move from a low-DPI to a high-DPI screen on a MacBook Pro Retina display,
you won't see the higher-resolution images load until you reload the page.
I've confirmed that basic images and thumbnails in wikitext appear to work
in Safari 6 on a MacBook Pro Retina display (it should work in Chrome as
well). The same code loaded on a MobileFrontend display should also work,
but I have not yet tested that.
Note this does *not* attempt to use native SVGs, which is another potential
tactic for improving display on high-density displays and zoomed windows.
This loads higher-resolution raster images, including rasterized SVGs.
There may be loads of bugs; this is midnight hacking code and I make no
guarantees of suitability for any purpose. ;)
-- brion
I've been seeing a lot of WMUK messages ending up in GMail spam of late.
Anyone else seeing this? (Assuming this message doesn't ahahahaha do
the same.) Any idea what's causing this?
- d.
Hi all,
After a talk with Brad Jorsch during the Hackathon (thanks again Brad for
your patience), it became clear to me that Lua modules can be localized
either by using system messages or by getting the project language code
(mw.getContentLanguage().getCode()) and then switching the message. This
second option is less integrated with the translation system, but it can
serve as an intermediate step to get things running.
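The second option amounts to a lookup table keyed by the language code,
with a fallback. A minimal sketch in JavaScript (a real module would be
Lua and get the code from mw.getContentLanguage(); the message table and
fallback chain here are illustrative):

```javascript
// Switch a module's messages on the content-language code, falling back
// to English. Table contents are illustrative.
const messages = {
  en: { greeting: 'Hello' },
  fr: { greeting: 'Bonjour' },
};

function getMessage(langCode, key) {
  const table = messages[langCode] || messages.en; // unknown language -> English
  return table[key] || messages.en[key]; // missing key -> English value
}
```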
For Wikisource it would be nice to have a central repository (sitting on
wikisource.org) of localized Lua modules and associated templates. The
documentation could be translated using Extension:Translate. These modules,
templates and associated documentation would be then synchronized with all
the language Wikisources that subscribe to an opt-in list. Users would
then be advised to modify the central module, so that all language versions
would benefit from the improvements. This could be the first experiment in
having a centralized repository of modules.
What do you think of this? Would anyone be available to mentor an Outreach
Program for Women project?
Thanks,
David Cuenca --Micru
For years, I have wept and wailed about people adding complicated maps
and diagrams as 220px thumbnail images to Wikipedia articles. These sorts
of images are virtually useless within an article unless they are
displayed at relatively large sizes. Unfortunately, including them at
large sizes creates a whole new set of problems. Namely, large images
mess up the formatting of the page and cause headers, edit links, and
other images to get jumbled around into strange places (or even
overlapping each other on occasion), especially for people on tablets or
other small screens. The problem is even worse for videos. Who wants to
watch a hi-res video in a tiny 220px inline viewer? If there are
subtitles, you can't even read them. But should we instead include them
as giant 1280px players within the article? That seems like it would be
obnoxious.
What if instead we could mark such complicated images and high-res
videos to be shown in modal viewers when the user clicks on them? For
example: [[File:Highres-video1.webm|thumb|right|modal|A high res
video]]. When you click on the thumbnail, instead of going to Commons,
a modal viewer would overlay across the screen and let you view the
video/image at high resolution (complete with a link to Commons and the
attribution information). Believe it or not, this capability already
exists for videos on Wikipedia, but it's basically a hidden feature of
TimedMediaHandler. If you include a video in a page and set the size as
200px or less, it activates the modal behavior. Unfortunately, the
default size for videos is 220px (as of 2010) so you will almost never
see this behavior on a real article. If you want to see it, go to
https://en.wikipedia.org/wiki/American_Sign_Language#Variation and click
on one of the videos. Compare that with the video viewing experience at
https://en.wikipedia.org/wiki/Congenital_insensitivity_to_pain. It's a
world of difference. Now imagine that same modal behavior at
https://en.wikipedia.org/wiki/Cathedral_Peak_Granodiorite#Geological_overvi…
and https://en.wikipedia.org/wiki/Battle_of_Jutland.
Such an idea would be relatively trivial to implement. The steps would be:
1. Add support for a 'modal' param to the [[File:]] handler
(https://gerrit.wikimedia.org/r/#/c/66062/)
2. Add support for the 'modal' param to TimedMediaHandler
(https://gerrit.wikimedia.org/r/#/c/66063/)
3. Add support for the 'modal' param to images via some core JS module
(not done yet)
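The decision logic the steps above would implement can be sketched as a
single predicate. The helper name is hypothetical; only the "200px or less"
TimedMediaHandler behavior is taken from the description above:

```javascript
// Decide whether a click on a file thumbnail should open a modal viewer.
// shouldOpenModal is an illustrative name for the check steps 1-3 would add.
function shouldOpenModal(hasModalParam, isVideo, widthPx) {
  // Proposed behavior: an explicit |modal| param always opts in.
  if (hasModalParam) {
    return true;
  }
  // Existing TimedMediaHandler behavior: videos sized 200px or less
  // already get the modal player.
  return isVideo && widthPx <= 200;
}
```

Step 3's core JS module would run this on click and either overlay the
viewer or fall through to the normal link to Commons.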
As you can see, I've already gotten started on adding this feature for
videos via TimedMediaHandler, but I haven't done anything for images
yet. I would like to hear people's thoughts on this potential feature
and how it could be best implemented for images before doing anything
else with it. What are your thoughts, concerns, ideas?
Ryan Kaldari
Hi everyone,
I'm working on a prototype for the Wikidata Entity Suggester (Bug #46555
<https://bugzilla.wikimedia.org/show_bug.cgi?id=46555>).
As of now, it is a command-line client, completely written in Java, that
fetches recommendations from a Myrrix server layer.
Please take a look at the GitHub repository here:
https://github.com/nilesh-c/wikidata-entity-suggester/
I would really appreciate it if you can take the time to go through the
README and provide me with some much-needed feedback. Any questions or
suggestions are welcome. If you're curious, you can set up the whole thing
on your own machine.
Check out a few examples too:
https://github.com/nilesh-c/wikidata-entity-suggester/wiki/Examples
It can suggest properties and values for new, not-yet-created items (and
also for existing items), if it's given a few properties/values as input
data.
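To make the idea concrete: a toy sketch of property suggestion by
co-occurrence, where properties that frequently appear alongside the
input properties in existing items are ranked highest. Myrrix does real
collaborative filtering; this counting approach and all names in it are
only illustrative:

```javascript
// Rank candidate properties by how often they co-occur with the known
// properties across a training set of items (each item = array of
// property IDs). Purely illustrative; not the Myrrix algorithm.
function suggestProperties(trainingItems, knownProps, topN) {
  const scores = {};
  for (const item of trainingItems) {
    // Only learn from items that share at least one known property.
    if (!knownProps.some(function (p) { return item.includes(p); })) {
      continue;
    }
    for (const p of item) {
      if (!knownProps.includes(p)) {
        scores[p] = (scores[p] || 0) + 1;
      }
    }
  }
  return Object.entries(scores)
    .sort(function (a, b) { return b[1] - a[1]; }) // highest count first
    .slice(0, topN)
    .map(function (entry) { return entry[0]; });
}
```

For example, if P31 usually appears with P569, an item that already has P31
would get P569 suggested near the top.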
I intend to write a REST API and/or a simple PHP frontend for it before I
set it up on a remote VPS, so that everyone can test it out. Some
experimentation and quality optimization are also still needed.
Cheers,
Nilesh
(User Page - https://www.mediawiki.org/wiki/User:Nilesh.c)
--
A quest eternal, a life so small! So don't just play the guitar, build one.
You can also email me at contact(a)nileshc.com or visit my
website<http://www.nileshc.com/>