Editors want to be able to patrol edits on pages that they care about.
Currently they use:
1) Watchlist
2) Article History
3) Recent Changes
The mechanism for "fixing" a problem with an edit is doing a revert.
Reverting allows you to go back to a revision of the article that existed
before the change that you want to "fix". Specifically, it populates an
edit with the content of that previous revision; you can then make any
additional changes on top of that old content and save.
There are two special cases of reverting that are especially useful to
users:
1) Undo - this is when you want to "fix" an edit but there have been
productive edits since the problematic one. Undo tries to undo just the
specific edit in question instead of reverting all the way to the
revision before that edit. Specifically, undo tries to revert to the
revision before the problematic edit and then computationally add back in
the edits made since then. Sometimes this isn't possible; sometimes it
is. When automatic undo is possible, the user gets the revision plus the
"productive edits" that occurred since the edit being undone; all of that
content is populated into an edit interface, and the user can make any
additional edits (sometimes necessary to make the article make sense
after the undo) and then save.
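For what it's worth, the "computationally add back in" step can be sketched as a line-based merge. This is a toy illustration of the idea, not MediaWiki's actual implementation:

```python
import difflib

def try_undo(before, bad, current):
    """Line-based sketch of undo: apply the reverse of the edit
    (before -> bad) on top of current. Returns the merged text, or None
    when later edits touched the same lines (the "sometimes this isn't
    possible" case)."""
    b, d, c = before.split("\n"), bad.split("\n"), current.split("\n")

    # Where each unchanged run of `bad` ended up inside `current`.
    blocks = difflib.SequenceMatcher(a=d, b=c).get_matching_blocks()

    def locate(i1, i2):
        # Map the range [i1, i2) of `bad` into `current`; None unless the
        # range sits inside a single region left untouched since `bad`.
        for m in blocks:
            if m.a <= i1 and i2 <= m.a + m.size:
                return m.b + (i1 - m.a), m.b + (i2 - m.a)
        return None

    out, last = [], 0
    # Opcodes from `bad` back to `before` are exactly the reversal we want.
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=d, b=b).get_opcodes():
        if tag == "equal":
            continue
        span = locate(i1, i2)
        if span is None or span[0] < last:
            return None              # conflict: automatic undo fails
        out.extend(c[last:span[0]])  # keep productive edits made since
        out.extend(b[j1:j2])         # restore the pre-edit lines
        last = span[1]
    out.extend(c[last:])
    return "\n".join(out)
```

When the later edit ("d" appended) doesn't overlap the bad one ("b" changed to "X"), the undo merges cleanly; when it does overlap, the function reports a conflict.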
2) Rollback - this is when you take all of the edits of the last user and
revert to the revision before those edits. The purpose of this is that
when there is a user who has been committing vandalism, you can quickly
roll back those edits. This is a one-step process because it just does
the revert and saves automatically.
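The one-step behavior maps onto MediaWiki's action=rollback API module, roughly like this (a sketch; the exact parameter set is worth double-checking against the API docs):

```python
def rollback_params(title, user, token, summary=None):
    """Build request parameters for MediaWiki's one-step rollback
    (action=rollback), which reverts all consecutive top edits by `user`
    on `title` and saves in a single request."""
    params = {
        "action": "rollback",
        "format": "json",
        "title": title,
        "user": user,
        "token": token,  # a rollback token, distinct from the edit token
    }
    if summary is not None:
        params["summary"] = summary
    return params
```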
note 1: generally speaking, vandalism gets caught quickly and is often the
most recent edit or most recent set of edits by a single user, i.e. the
situation that rollback is designed for.
note 2: undo operates on an edit; revert and rollback operate on
revisions. On desktop the list of edits and the list of revisions are the
same interface, but it may make sense to divide these on mobile. Thus on
mobile we may have separately a list of revisions (possibly grouped by
user) that may allow reverts and rollbacks, and a list of edits that
allows for undo. We will likely prioritize revert/rollback because that
covers the biggest use case (vandalism on articles that are changing at a
moderate velocity). Also, this may have implications for how we display
watchlist items: consider grouping edits by user, and only displaying
most recent edits (i.e. only rollback-eligible edits).
There is a detailed view of revisions in addition to a list view of
revisions, and we need to understand what goes into a detailed view of a
revision. Maybe we show the diff to the current version, because this is
what would be affected by a revert, plus actions, username, time, and
other details. We may want to consider changing the interaction of
reverts a bit (maybe it should be a two-click action instead of putting
the user into an edit).
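The "diff to current version" idea could lean on MediaWiki's action=compare module, along these lines (a sketch; `torelative=cur` support should be verified against the API docs):

```python
def diff_to_current_params(revid):
    """Parameters to fetch the diff from a given revision to the current
    one via action=compare -- roughly what a detailed revision view
    would display."""
    return {
        "action": "compare",
        "format": "json",
        "fromrev": revid,
        "torelative": "cur",
    }
```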
Let me know if I missed anything.
--
Kenan Wang
Product Manager, Mobile
Wikimedia Foundation
(Replying on list, I assume it was just my reply-to's fault.)
Quim Gil, 05/04/2014 07:41:
> On Tue, Apr 1, 2014 at 1:24 PM, Federico Leva (Nemo) <nemowiki(a)gmail.com> wrote:
>> Once again, please just use core's style. There was a huge discussion about
>> it back then, there is no need to reinvent the wheel and reopen flames.
>
> I'll say it once, and then I'll go back to minding my own business. :)
>
> Not using the red-green convention actually sounds like trying to
> reinvent the wheel.
Possible. However, if it was such a big problem in core, it should be
addressed there and not just circumvented.
> No, I wasn't here when that long discussion
> happen, no I haven't read the archives. I just look what everybody
> else is doing nowadays in diff land. Could you run that blind
> simulator against the conventional red-green combination (see links
> below) to see how it performs against the current colors used in
> MediaWiki core? Just to know the results.
>
> In fact my only complaint against the red-green approach taken by the
> Mobile team is that they chose their own red-green contrast, a lot
> stronger than what is usually found elsewhere.
>
> The colors used in core are difficult to read compared to the average
> red-green schema found elsewhere. I'm not even talking about users
> with special needs. I just need my own eyes with not the best laptop
> or desktop monitors, not in the perfect angle of view, or not with the
> best light conditions. This happens already with regular text, when it
> comes to white spaces it is so bad that I filed
> https://bugzilla.wikimedia.org/show_bug.cgi?id=59093
That's really unrelated to the colours change. In fact, small diffs
were completely invisible with the old style. See also the quotation at
https://www.mediawiki.org/wiki/MediaWiki_1.20#Diff
Nemo
>
> Samples:
>
> GitHub, Gerrit, and Phabricator use the same colors more or less:
>
> https://github.com/wikimedia/mediawiki-core/commit/811b2e613861a2cc9c6c09eb…
>
> https://gerrit.wikimedia.org/r/#/c/118761/4/README
>
> http://fab.wmflabs.org/T49
>
>
> Meanwhile, in MediaWiki desktop and mobile...
>
> https://www.mediawiki.org/w/index.php?title=User%3AQgil%2FSandbox&diff=9498…
>
> https://m.mediawiki.org/wiki/Special:MobileDiff/949852...949853
>
>
> Samples from other services are welcome.
>
+mobile-l
mobile-l recipients: if replying, please reply-all in case any people on
the CC: line aren't on mobile-l. It would be appreciated.
Nik,
Thanks for the update. Glad to hear there's even faster performance coming
and also that there's no need to structure too much fallback stuff
depending on whether the reflex time is okay. With any luck, it would be
just fast enough. I don't think there'd be too much hammering on the
suggest term; only if the resultset is insufficient does it seem like it
would make sense to orchestrate the client side (or server side, for that
matter) call. The apps do have a key tap timer thing on them to help avoid
spurious searching, so that should help. I think I understand the
ellipsis-related stuff - parsing the snippet text is no problem, but if
there's an even simpler way to get text condensed to the point where
there's no work needed to avoid wrapping on most form factors,
cool! ...and if I misunderstood,
well, we'll get to the bottom of that on Friday.
-Adam
On Tue, Apr 1, 2014 at 3:53 PM, Dan Garry <dgarry(a)wikimedia.org> wrote:
> I don't want to bloat the meeting into something massive, but I did invite
> Kenan and Howie since we're going to be talking about product consistency
> and that's something that should involve them.
>
> Thanks for setting this up, Jared!
>
> Dan
>
>
> On 1 April 2014 15:05, Jared Zimmerman <jared.zimmerman(a)wikimedia.org>wrote:
>
>> *...but how about setting up a google hangout or something?*
>>
>> done.
>>
>>
>>
>> *Jared Zimmerman * \\ Director of User Experience \\ Wikimedia
>> Foundation
>> M: +1 415 609 4043 | @JaredZimmerman <https://twitter.com/JaredZimmerman>
>>
>>
>>
>> On Tue, Apr 1, 2014 at 3:00 PM, Nikolas Everett <neverett(a)wikimedia.org>wrote:
>>
>>>
>>>
>>>
>>> On Tue, Apr 1, 2014 at 5:13 PM, Adam Baso <abaso(a)wikimedia.org> wrote:
>>>
>>>> My email got a little buried in the thread.
>>>>
>>>> You guys on mobile-l? It would be nice to bring the conversation there
>>>> if possible. Understood if not; maybe we can get mobile-tech and any other
>>>> necessary lists here in that case?
>>>>
>>>
>>> I imagine the right thing is to add them to the email chain and we'll
>>> all keep reply-alling.
>>>
>>>>
>>>> During the mobile quarterly planning kickoff this morning, I mentioned
>>>> that I had started on a patchset for iOS and that I think it would be cool
>>>> if we could try this first in apps, then hopefully roll to mobile web (to
>>>> ease into load, but also to learn on any other fronts we haven't
>>>> considered). Here's the WIP patchset.
>>>>
>>>> https://gerrit.wikimedia.org/r/#/c/121562/
>>>>
>>>> See in particular the comments in the first code file in that patchset
>>>> for some of my thought process.
>>>>
>>>> Chad, that patchset is the thing I was talking about the other day for
>>>> list=search.
>>>>
>>>> It queries like the following:
>>>>
>>>>
>>>> https://en.m.wikipedia.org/w/api.php?action=query&list=search&srsearch=popu…
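[The (truncated) URL above corresponds to query parameters along these lines; the limit value here is an assumption, not taken from the patchset:]

```python
def search_params(term, limit=15):
    """Full-text search via the MediaWiki API's list=search module, as
    in the truncated query URL above."""
    return {
        "action": "query",
        "format": "json",
        "list": "search",
        "srsearch": term,
        "srlimit": limit,
    }
```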
>>>>
>>>> It would be really neat to make the app the first place in a user's
>>>> mind where s/he's going to search for factual information even when doing
>>>> so via unstructured search terms. I think for people without the app, they
>>>> will of course always go through more conventional channels to enter
>>>> queries that aren't perfectly structured for title-starts-with; my hope is
>>>> that if we give them this goodie early on they'll be pleasantly surprised
>>>> and see it as a good reason to use the official apps.
>>>>
>>>>
>>> Sounds good to me.
>>>
>>>
>>>> Sounds like we may need to reconcile caching and general load
>>>> performance items when using CirrusSearch for the backend...although if
>>>> it's possible to do this fulltext magic by default on *just* the apps
>>>> to start, without making CirrusSearch come to a halt (!), that would be
>>>> totally sweet.
>>>>
>>>
>>> So prefix search is on the order of 4-5 ms for Elasticsearch to service,
>>> and it is cached. Full text search varies from 30ms to 500ms for
>>> "acceptable" performance. Not great, but ok.
>>>
>>> Some queries take even longer but we're working on speeding it up. On
>>> Thursday I'm pushing something that'll save about 25% off of particularly
>>> slow queries. We'll get another 20%ish on top of that when we upgrade to
>>> Elasticsearch 1.1.0 next week because that'll bring to bear some work I did
>>> upstream a few weeks ago. We'll also be able to start using some work I
>>> did back in January that can cut really nasty queries by orders of
>>> magnitude, but we'll need to make some cirrus changes for that so it'll
>>> probably hit enwiki in a few weeks. So, yeah, we're working on it.
>>>
>>> But the upshot is we'll have to be real careful if we want actual full
>>> text search to be fast enough for find as you type. We can save a bunch of
>>> time by not running the "did you mean" if you don't use them. Beyond that
>>> we'd have to look at things like the phrase match boost and highlighting
>>> the results text. You may want to be even more careful about the number of
>>> requests you search for because once you start cutting to the bone
>>> highlighting the results starts to show up (20ms for 50 results normally,
>>> can get higher).
>>>
>>> One idea, although less than ideal just from a coding perspective
>>>> (especially if perf is not an issue), would be to make the client-side do
>>>> lag detection or to observe a server-issued feature flag (there will be
>>>> several of these for the app already), or both, such that if lag is
>>>> unacceptable client side it would fallback to opensearch.
>>>>
>>>
>>> Probably not worth it initially. Maybe a good idea to keep in our back
>>> pocket if we find it is just too slow.
>>>
>>>
>>>
>>>> I don't have it in there yet for the GUI rendering (I was just working
>>>> in the confines of the existing iOS code to see how it would play), but I
>>>> was thinking to put the snippet text in a smaller font below the title text
>>>> in this iOS POC to help the user have a little bit more context about
>>>> *why* a result came back...that's helpful particularly if the page
>>>> title in the result set isn't obviously related to the search stem or
>>>> expansion; as you know! So instead of just
>>>>
>>>> San Francisco
>>>>
>>>> it would instead look like, say
>>>>
>>>> San Francisco
>>>> ...San Francisco City and county City and County...
>>>>
>>>> The client-side code could even try to opportunistically slice the
>>>> snippet text in some sensible fashion to try to provide reasonable context
>>>> without wrapping text, and if that fails, just start from the beginning and
>>>> add the ellipsis as appropriate to not wrap the result item's snippet text
>>>> to the next line.
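[The slicing idea might look something like this hypothetical helper, not part of the patchset:]

```python
def trim_snippet(snippet, max_chars=60):
    """Cut a search-result snippet at a word boundary so it fits on one
    line, appending an ellipsis when text was dropped."""
    if len(snippet) <= max_chars:
        return snippet
    cut = snippet.rfind(" ", 0, max_chars)  # last word break that fits
    if cut <= 0:
        cut = max_chars                     # no break found: hard cut
    return snippet[:cut].rstrip() + "..."
```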
>>>>
>>>
>>> We should talk more about this. I've spent a bunch of time over the
>>> past two weeks working on a better highlighter than the one we are using
>>> now. It'll be faster and require less disk space. I wonder how stupid an
>>> idea it'd be to try to highlight within a pixel size with a certain
>>> font.....
>>>
>>>
>>>>
>>>> Any ideas if this is achievable? Fulltext search feels so much more
>>>> natural. I guess there's maybe also this notion of search within title (it
>>>> does look like srwhat=title is currently disabled for the CirrusSearch
>>>> provider, at least for API), with a suggest term backing (ideally, the API
>>>> would just magically augment results with suggest term autolookup, but the
>>>> orchestration is obviously possible client side, too) to help deal with
>>>> misspelling, which is even more likely on the mobile app.
>>>>
>>>> Okay, hope that helps a bit.
>>>>
>>>> Any ideas for short, medium, and longer term approach?
>>>>
>>>
>>> Got to go, but how about setting up a google hangout or something?
>>>
>>>
>>>>
>>>> -Adam
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Apr 1, 2014 at 1:21 PM, Adam Baso <abaso(a)wikimedia.org> wrote:
>>>>
>>>>> Let me reply-all in a couple minutes.
>>>>>
>>>>>
>>>>> On Tue, Apr 1, 2014 at 1:15 PM, Chad Horohoe <chorohoe(a)wikimedia.org>wrote:
>>>>>
>>>>>> On Tue, Apr 1, 2014 at 1:10 PM, Jared Zimmerman <
>>>>>> jared.zimmerman(a)wikimedia.org> wrote:
>>>>>>
>>>>>>> I think that the mobile app is returning results that match user
>>>>>>> expectations (in this case RESULTS rather than NO RESULTS) so I'd urge the
>>>>>>> team to figure out how to resolve this issue even if there are
>>>>>>> technological or performance issues to overcome.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> That is not consistent with how the search box has ever worked. It's
>>>>>> meant to
>>>>>> be a suggestion for page titles, not a list of full-text results
>>>>>> (that may contain
>>>>>> nothing in common between their title and what you typed). Once you
>>>>>> complete
>>>>>> the search (if you don't end on a direct title match), you'll get the
>>>>>> full-text results.
>>>>>>
>>>>>> If the mobile app is presenting full-text results as suggestions I'd
>>>>>> say that's the
>>>>>> wrong way to go. I'll also note our behavior is consistent with how
>>>>>> Google works as well.
>>>>>>
>>>>>> -Chad
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> Dan Garry
> Associate Product Manager for Platform
> Wikimedia Foundation
>
I would like to have my
function JidanniLoginFormMessage( &$template ) {
	global $wgSitename;
	// The message reads: "(Those who need an account to edit, please
	// contact the $wgSitename administrator.)"
	$template->set( 'header',
		"(需帳號來編輯者,請聯絡<a href=\"/index.php?title=User:WikiSysop\"><strong>${wgSitename}</strong>管理員</a>。)" );
	return true;
}
$wgHooks['UserLoginForm'][] = 'JidanniLoginFormMessage';
also work on mobile.
One notices there is
$wgHooks['UserLoginForm'][] = 'MobileFrontendHooks::onUserLoginForm';
but I am afraid I need someone to tell me how to proceed.
How does one put e.g., Special:SpecialPages into the MobileFrontend menu?
Can I get away with doing it in my central LocalSettings.php, or must I
create another copy of
http://radioscanningtw.jidanni.org/index.php?title=MediaWiki:Sidebar
on each wiki (which it would be nice if MobileFrontend consulted by
default for hints), and if so, what should it be named? Thanks.
I'm a bit concerned that with all the work we've been doing on the
mobile site for the purposes of Wikimedia projects we seem to have
been neglecting the 3rd party use case for MobileFrontend. I was
curious if any volunteers would like to step up and become a
MobileFrontend 3rd party champion and take ownership of these bugs and
help us improve the software for these types of users.
Essentially this would mean ensuring the 3rd party user's voice is
heard and that we keep our code as generic as possible. It would also
make sure important things like anonymous editing get thought about
before we have the capacity to do so (which in turn will speed up a
lot of that development)
We have a ton of bugs around this - these 3 for example:
https://bugzilla.wikimedia.org/show_bug.cgi?id=63328
https://bugzilla.wikimedia.org/show_bug.cgi?id=63459
https://bugzilla.wikimedia.org/show_bug.cgi?id=63458
If anyone is wanting a reason to code, is interested in mobile, and is
getting sick of waiting for their code to be reviewed in Gerrit, I urge
you to step forward.
I'm happy to mentor, make sure your code gets reviewed, and, importantly,
help get all these bugs fixed.
Message me privately if you are interested or come visit us in
#wikimedia-mobile on irc.freenode.org
Hello -
I work on the mobile partner engineering team, probably known better as the
Wikipedia Zero people.
If our servers see that an IP address matches with a given partner mobile
network operator (MNO), the pertinent Wikipedia Zero text banner is shown
to the mobile web user; additionally, offsite links are rewritten to warn a
user when s/he may be entering into a zone that requires data access
charges.
This is all fine and well, but partner operators sometimes change IP
addresses and our systems get out of sync. Our partners are busy and our
partner management team does of course strive to work with partners to
proactively manage IP and other technical updates, yet inevitably
information can fall through the cracks. Consequently, when IPs drift, the
system doesn't show banners and do URL rewriting as well as it could.
In order to in part more proactively remediate the drift of operator exit
IP addresses, we're interested in logging two pieces of information
server-side via the forthcoming rewrites of the Wikipedia for Android and
Wikipedia for iOS apps, and in some future state a rewritten Firefox OS app:
(1) MCC-MNC identification code of operator if present (e.g., 123-45 if the
connection is on a particular operator - MCC-MNC is in the format ###-##)
(2) Exit IP address (typically, gateway/proxy in MNO infrastructure)
The MCC-MNC identification code is embedded on SIM cards and accessible by
routine app APIs.
We would not want to log this information alongside the other Apache
webserver-style elements, but instead have just these two columns in a
separate nonpublic file location, purging records older than 90 days.
The thought is to just have the app code add the MCC-MNC value to an HTTP
header once in a given app session using an MCC-MNC bearing connection
(cellular data), and let the server detect the IP as per normal server
operation.
In a nutshell, after normalizing the MCC/MNC codes (some are likely to be
malformed) and cross-checking against our own MCC/MNC database, we'd be
able to see if the IPs are askew, and then reach out to operators to ask if
they have any updated IPs since the last time we received an official
update.
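The normalization step could be as simple as this sketch; it assumes the ###-## shape mentioned above and discards anything it cannot repair:

```python
import re

def normalize_mcc_mnc(raw):
    """Normalize a reported MCC-MNC code to the ###-## form (MCC is
    3 digits, MNC here 2); returns None for malformed values."""
    digits = re.sub(r"\D", "", raw or "")  # strip separators and noise
    if len(digits) != 5:
        return None
    return digits[:3] + "-" + digits[3:]
```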
We think this is a simple and fairly easy way to observe updated IP
addresses for operator partners, and prompt the partner management team to
reach out to operators for updated official source IP addresses.
The information could incidentally be useful in gauging rough demand for
Wikipedia in markets germane to Wikipedia Zero (e.g., higher data costs,
lower disposable income), although that is secondary to keeping the IPs
accurate.
Internal review suggests this is in alignment with privacy policy, and we
wanted to see if there were other thoughts on this approach. We plan to
move the discussion over to wikitech-l and then later a broader list, but
to avoid cross-posting problems with people having one membership in one
list but not another, wanted to start on mobile-l first.
One last thing - the set of IPs for a given MNO are relatively small. For
example, an MNO may have 100 IP addresses representing 1 million cellular
subscribers. This has two practical consequences: (1) troublingly, even
just a handful of missing IPs has outsize impact, and (2) highly targeted
geography and behavioral inferences are unlikely for a data set composed of
just two types of data elements submitted.
Thanks.
-Adam
My wiki is read-only. Is there a simple line or two that I can put in
LocalSettings.php to stop the MobileFrontend page-actions (ca-edit,
ca-upload, and watch-this-article) icons from appearing on each page?