Hi everyone,
*tl;dr: We'll be stripping all content contained inside brackets from the
first sentence of articles in the Wikipedia app.*
The Mobile Apps Team is focused on making the app a beautiful and engaging
reading experience, and on supporting use cases like quickly looking
something up to find out what it is. Unfortunately, several aspects of
Wikipedia at present are actively detrimental to that goal. One example is
the lead sentences.
As mentioned in the other thread on this matter
<https://lists.wikimedia.org/pipermail/mobile-l/2015-March/008715.html>,
lead sentences are poorly formatted and contain information that is
detrimental to quickly looking up a topic. The team did a quick audit
<https://docs.google.com/a/wikimedia.org/spreadsheets/d/1BJ7uDgzO8IJT0M3UM2q…>
of
the information available inside brackets in the first sentences;
typically it is pronunciation information, which is probably better placed
in the infobox than breaking up the first sentence. The other
problem is that this information was typically inserted and previewed on a
platform where space is not at a premium, and that calculation is different
on mobile devices.
In order to better serve the quick lookup use case, the team has reached
the decision to strip anything inside brackets in the first sentence of
articles in the Wikipedia app.
Stripping content is not a decision to be made lightly. People took the
time to write it, and that should be respected. We realise this is
controversial. That said, it's the opinion of the team that the problem is
pretty clear: this content is not optimised for users quickly looking
things up on mobile devices at all, and will take a long time to solve
through alternative means. A quicker solution is required.
The screenshots below are mockups of the before and after of the change.
These are not final, I just put them together quickly to illustrate what
I'm talking about.
- Before: http://i.imgur.com/VwKerbv.jpg
- After: http://i.imgur.com/2A5PLmy.jpg
If you have any questions, let me know.
Thanks,
Dan
--
Dan Garry
Associate Product Manager, Mobile Apps
Wikimedia Foundation
Fwd: forgot to press reply all :(
Sent from my HTC
----- Forwarded message -----
From: "Florian Schmidt" <florian.schmidt.welzow(a)t-online.de>
To: "Aaron Halfaker" <ahalfaker(a)wikimedia.org>
Subject: RE: [WikimediaMobile] [Wikitech-l] Anonymous editing impact on mobile
Date: Thu, Apr 30, 2015 16:26
Great to read this, thanks Aaron :)
Sent from my HTC
----- Reply message -----
From: "Aaron Halfaker" <ahalfaker(a)wikimedia.org>
To: "Wikimedia developers" <wikitech-l(a)lists.wikimedia.org>, "mobile-l" <mobile-l(a)lists.wikimedia.org>
Subject: [WikimediaMobile] [Wikitech-l] Anonymous editing impact on mobile
Date: Thu, Apr 30, 2015 01:09
Hey folks,
As requested, I started a research project page to do some analysis around
this. See
https://meta.wikimedia.org/wiki/Research:Mobile_anonymous_apocalypse
It's just a stub now. I'll have to clear a few other projects off my plate
in order to pick this one up. You should expect to see updates there in
2-3 weeks.
-Aaron
On Wed, Apr 29, 2015 at 12:10 PM, Jon Robson <jdlrobson(a)gmail.com> wrote:
> On Wed, Apr 29, 2015 at 8:19 AM, Robert Rohde <rarohde(a)gmail.com> wrote:
> >
> > On Tue, Apr 28, 2015 at 10:31 PM, Jon Robson <jrobson(a)wikimedia.org>
> wrote:
> >
> > > <snip>
> >
> > Any community members interested in helping out here? I'm very sad the
> > > increase in errors wasn't picked up sooner... :-/
> > >
> >
> > What does event_action = 'error' actually mean?
> >
> > If the action is stopped by the AbuseFilter is that counted as an
> "error"?
>
> It means at some point during the editing workflow the user hit an
> error that stopped them from finishing their edit.
> We do capture AbuseFilter hits in this process (but with some of these
> errors you can recover and complete the edit).
>
> We also store the error associated, although a quick scan shows this
> is currently not very helpful.
>
> In theory 'http' error should only happen when a user cannot get an
> edit token - I've updated the bug for those interested.
>
> >
> > -Robert
> > _______________________________________________
> > Wikitech-l mailing list
> > Wikitech-l(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
>
>
> --
> Jon Robson
> * http://jonrobson.me.uk
> * https://www.facebook.com/jonrobson
> * @rakugojona
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
Due to the labs outage, the Android alpha build has not been run since
Wednesday.
If you want to grab the latest and greatest newly merged alpha apk, you
can do so from Gerrit > Jenkins. Since the apks built there do not use the
same signing certificate yet, you will have to uninstall the old alpha apk.
Here is a direct link to the alpha apk that's merged right now:
wikipedia-alpha-debug.apk
<https://integration.wikimedia.org/ci/job/apps-android-wikipedia-gradlew/409…>
If you want the very latest you can follow these steps instead:
1. Pick the latest (top result) from Gerrit:
https://gerrit.wikimedia.org/r/#/q/status:merged+project:apps/android/wikip…
2. From the patch details, e.g. https://gerrit.wikimedia.org/r/#/c/219295/,
scroll all the way down to the bottom to the last link that says
"apps-android-wikipedia-gradlew". Follow that link.
3. On the Jenkins page that shows the console output, click on the Status
link.
4. On the build status page download the file linked as
wikipedia-alpha-debug.apk.
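The steps above can also be scripted against Jenkins' standard JSON API. This is just a hypothetical sketch, not something wired into our tooling; the host and job name are taken from the links in this email, and the artifact layout is an assumption about how our Jenkins publishes the apk:

```python
import json
import urllib.request

# Host and job name taken from the links above; may change.
JENKINS = "https://integration.wikimedia.org/ci"
JOB = "apps-android-wikipedia-gradlew"

def artifact_url(base, job, relative_path, build="lastSuccessfulBuild"):
    """Build the standard Jenkins artifact download URL."""
    return "%s/job/%s/%s/artifact/%s" % (base, job, build, relative_path)

def latest_apk_url(base=JENKINS, job=JOB):
    """Ask the Jenkins JSON API which artifacts the newest successful
    build produced, and return the download URL of the first apk."""
    api = "%s/job/%s/lastSuccessfulBuild/api/json" % (base, job)
    with urllib.request.urlopen(api) as resp:
        build = json.load(resp)
    for artifact in build.get("artifacts", []):
        if artifact["fileName"].endswith(".apk"):
            return artifact_url(base, job, artifact["relativePath"])
    return None
```

You'd still have to uninstall the previously installed alpha before sideloading the downloaded apk, for the signing-certificate reason mentioned above.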
Sorry for the inconvenience. I plan to respond to this thread when the
regular alpha builds are working again.
Bernd
Some of you may have noticed a bot [1] providing reviews for the
MobileFrontend and Gather extensions.
This is a grassroots experiment [2] to see if we can reduce
regressions by running browser tests against every single commit. It's
very crude and we're going to have to maintain it ourselves, but we see
it as a stopgap solution until we get gerrit-bot taking care of this
for us.
Obviously we want to do this for all extensions, but we wanted to get
something good enough, even if not scalable, to start exploring this.
So far it has caught various bugs for us, and our browser test builds
are finally starting to become consistently green, a few beta labs
flakes aside [3].
Running tests on beta labs is still useful, but now we can use it to
identify failures caused by other extensions. Too often we were finding
that our tests were failing because we had neglected them.
In case others are interested in how this is working and want to set
one up themselves I've documented this here:
https://www.mediawiki.org/wiki/Reading/Setting_up_a_browser_test_bot
Please let me know if you have any questions and feel free to edit and
improve this page. If you want to jump into the code that's doing this
and know Python check out:
https://github.com/jdlrobson/Barry-the-Browser-Test-Bot
(Patches welcomed and apologies in advance for the code)
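For the curious, the core of such a bot is simple: poll Gerrit's REST API for a project's changes and act on each one. Here's a minimal, hypothetical Python sketch of the polling half (Barry's actual code lives in the repo above; posting reviews would additionally need authentication):

```python
import json
import urllib.request

GERRIT = "https://gerrit.wikimedia.org/r"

def parse_gerrit_json(raw):
    """Gerrit prefixes its JSON responses with )]}' to defeat XSSI
    attacks; strip the prefix before parsing."""
    text = raw.decode("utf-8")
    prefix = ")]}'"
    if text.startswith(prefix):
        text = text[len(prefix):]
    return json.loads(text)

def open_changes(project):
    """List a project's open changes via Gerrit's REST API."""
    url = "%s/changes/?q=status:open+project:%s" % (GERRIT, project)
    with urllib.request.urlopen(url) as resp:
        return parse_gerrit_json(resp.read())
```

A bot would loop over `open_changes("mediawiki/extensions/MobileFrontend")`, check out each change, run the browser tests against it, and post a review with the result.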
[1] https://gerrit.wikimedia.org/r/#/q/reviewer:jdlrobson%252Bbarry%2540gmail.c…
[2] https://phabricator.wikimedia.org/T100293
[3] https://integration.wikimedia.org/ci/view/Mobile/job/browsertests-MobileFro…
Hey y'all,
I watch a lot of talks in my downtime. I even post the ones I like to a
Tumblr… sometimes [0]. I felt like sharing Derek Prior's "Implementing a
Strong Code Review Culture" from RailsConf 2015 in particular because it's
relevant to the conversations that the Reading Web team are having around
process and quality. You can watch the talk on YouTube [1] and, if you're
keen, you can read the paper that's referenced over at Microsoft Research
[2].
I particularly like the challenge of providing two paragraphs of context in
a commit message – to introduce the problem and your solution – and trying
to overcome negativity bias in written communication* by offering
compliments whenever possible and asking, not telling, while providing
critical feedback.
I hope you enjoy the talk as much as I did.
–Sam
[0] http://sometalks.tumblr.com/
[1] https://www.youtube.com/watch?v=PJjmw9TRB7s
[2] http://research.microsoft.com/apps/pubs/default.aspx?id=180283
* The speaker said "research has shown" but I didn't see a citation
*Notes (with added emphasis)*
- Code review isn't for catching bugs
- "Expectations, Outcomes, and Challenges of Modern Code Review"
- Chief benefits of code review:
- Knowledge transfer
- Increased team awareness
- Finding alternative solutions
- Code review is "the discipline of explaining your code to your peers"
- Process is more important than the result
- Goes on to define code review as "the discipline of discussing your
code with your peers"
- If we get better at code review, then we'll get better at
communicating technically as a team
Rules of Engagement
- As an author, provide context
- "If content is king, then context is God"
- *In a pull request (patch set) the code is the content and the
commit message is the context*
- Provide sufficient context - bring the reviewer up to speed with
what you've been doing in the past X hours
- *Challenge: provide at least two paragraphs of context in your
commit message*
- This additional context lives on in the commit history whereas
links to issue trackers might not
- As a reviewer, ask questions rather than making demands
- Research has shown that there's a negativity bias in written
communication. *Offer compliments whenever you can*
- *When you need to provide critical feedback, ask, don't tell*, e.g.
"extract a service to reduce some of this duplication" could be
formulated
as "what do you think about extracting a service to reduce some of this
duplication?"
- "Did you consider?", "can you clarify?"
- "Why didn't you just..." is framed negatively and includes the
word just
- Use the Socratic method: asking and answering questions to
stimulate critical thinking and to illuminate ideas
Insist on high quality reviews, but agree to disagree
- Conflict is good. *Conflict drives a higher standard of coding
provided there's healthy debate*
- Everyone has a minimum bar to entry for quality. Once that bar is met,
then everything else is a trade-off
- Reasonable people disagree all the time
- Review what's important to you
- SRP (Single Responsibility Principle) (the S from SOLID)
- Naming
- Complexity
- Test Coverage
- ... (whatever else you're comfortable in giving feedback on)
- What about style?
- Style is important
- "People who received style comments on their code perceived that
review negatively"
- Adopt a styleguide
Benefits of a Strong Code Review Culture
- Better code
- Better developers through constant knowledge transfer
- Team ownership of code, which leads to fewer silos
- Healthy debate
[Long email, sorry]
Hi friends,
There's been some talk about how we currently phabricate, and there
doesn't seem to be much satisfaction with our current workflows. As part
of the new team we are getting more software artifacts, so we are going to
have to adapt to this new situation.
For that, I've mapped how we do things now, and proposed how we could move
forward to effectively avoid more conphusion (sorry 'bout that).
## Current workflows
Reading web currently uses 3 permanent boards + current sprint board.
Reading-web:
* Type: Team project.
* Contains: Epics, bugs, features.
* Board: Columns with should have, could have, etc. Mixed columns.
Triaging column.
MobileFrontend:
* Type: Software project.
* Contains: Bugs, tech tasks, features, discussions.
* Board: Backlog.
Gather:
* Type: Team project + Software project (mix between the two above).
* Contains: Epics, bugs, tech tasks, features.
* Board: Columns with should have, could have, etc. Mixed columns.
Triaging column.
### Bugs
MobileFrontend bugs are added to MobileFrontend and Reading-web. They get
categorized and triaged in Reading-web by team/standup and brought into the
sprint if appropriate.
Gather bugs are added to Gather & the current sprint. Gather is triaged by
JonR --
high-priority bugs are brought straight into the sprint.
### Features
MobileFrontend/Gather features are added to the Reading-Web **Needs
triage** column. They have the MobileFrontend or Gather project added by
TechPro as needed, and are moved to **Must Have**, **Should Have**, or
**Could Have** as necessary.
### Problems
* Workflow is different for MobileFrontend and Gather.
* Boards have mixed responsibilities.
* No one place to triage bugs and features.
* Reading-Web is *noisy*.
* Tasks with several tags -- leads to noise across all projects.
* No clear high level overview of Reading-Web workload/workflow.
On top of how we are currently working, we are getting a bunch of software
artifacts that we are going to have to triage & maintain really soon. We
need
to adapt to deal with multiple projects/boards and maintain team vision of
what
we are doing. (Current process probably won't scale well with more than a
few
software projects).
## Proposal
Taking into account that we want to:
* Be able to work across multiple phabricator projects for different
software
artifacts.
* Maintain a high level overview of team focus through time (so that we all
know what we are focusing on mainly).
* Have a clear place to triage for the projects we are responsible for.
### Solution
#### phStructure
Reading web will use N+2 boards. N+1 permanent boards + current sprint
board.
* *N being the number of software projects we maintain*.
Reading-web:
* Type: Team project.
* Contains: Epics.
* Board: Time based columns, left is closer, right is further on time. (Can
be
sprint based + Quarter based, or something else)
(Ex: In progress | Next sprint | ...).
MobileFrontend:
* Type: Software project.
* Contains: Bugs, tech tasks, small features, discussions.
* Board: Backlog | Discussion.
Gather:
* Type: Software project.
* Contains: Bugs, tech tasks, small features, discussions.
* Board: Backlog | Discussion.
PageImages:
* Type: Software project.
* Contains: Bugs, tech tasks, small features, discussions.
* Board: Backlog | Discussion.
TextExtracts:
* Type: Software project.
* Contains: Bugs, tech tasks, small features, discussions.
* Board: Backlog | Discussion.
ETC. (same for the rest of the software projects we're getting).
The suggestion is that all software projects get the same layout, but
that's not required.
#### Phrocess
##### Bugs and pheatures
Such tasks will get submitted against the corresponding software project.
If there is a bug on the mobile web, it'll get submitted to
MobileFrontend; if there is a bug with collections, it'll get submitted
against Gather. The same goes for feature requests. If a bug involves both
MobileFrontend & PageImages, it'll get submitted to both.
The default priority, *Needs triage*, means the task hasn't been triaged
yet.
Each project holds the tasks that involve it. Reading-web remains a
high-level view of the team's focus over time, available to everybody.
##### Triaging
We'll have a **saved query** available to everybody for triaging (at
standups or elsewhere). Example:
https://phabricator.wikimedia.org/maniphest/query/c_ZQlOwVk9I8/
(note this looks like shit because we don't use priorities at all right
now).
When triaging a bug, we'll set its priority to something that makes sense
given its severity, and add it to the current sprint project if the
priority is high.
When triaging a feature, we'll set its priority to something that makes
sense given its perceived importance and ping the product owner. The PO
should add it as a subtask of an epic if that makes sense.
##### Sprint preparation. Task creation.
From the epics on Reading-web we'll spin off subtasks of specific work.
These get tagged with the concrete software project where they need to be
acted upon, along with a priority.
Sprint grooming and prioritisation will move subtasks of epics in *Next
sprint* onto the next sprint board. We'll also do a pass over the software
projects' boards for work that needs to be added to the sprint. Then we'll
analyse and estimate.
------
I've mapped the proposed workflow: https://i.imgur.com/Wu7crcB.png
(I've tried with the current workflow but I don't even know how to
accurately draw it. Sorry).
---
What do you guys think? Beginning of next quarter is approaching so it
would be great if we discussed this and arrived somewhere better than our
current workflow.
Thanks!
Joaquin
I noticed a banner on the mobile site that renders the site unusable:
http://imgur.com/qVGz3mZ
I'm not sure who is responsible for "Freedom of Panorama in Europe in
2015" but can someone disable this on mobile asap or make it work on
mobile?
Please also reach out to us on the mobile-l mailing list ahead of
running these campaigns if you are unsure how to test campaigns, we're
happy to help.
Jon
Hi,
I think it was Joaquin, Sam, and Bryan with whom I discussed the relative
usage of some popular skins, in the context of which skins Reading is on
the hook for, in the sense of full maintenance or high-priority fixes
(even if only coordinating) when issues arise. Here's that data, based on
some queries Timo ran recently (thanks, Timo!).
* en.wikipedia.org, users active since 2015-01-01, skin=cologneblue: 7k
users.
* en.wikipedia.org, users active since 2015-01-01, skin=monobook: 193k
users.
* en.wikipedia.org, users active since 2015-01-01: 3.1m users.
* en.wikipedia.org, users active since 2015-03-01, skin=cologneblue: 6k
users.
* en.wikipedia.org, users active since 2015-03-01, skin=monobook: 179k
users.
* en.wikipedia.org, users active since 2015-03-01: 2.3m users.
* commons.wikimedia.org, users active since 2015-03-01, skin=cologneblue:
1k users.
* commons.wikimedia.org, users active since 2015-03-01, skin=monobook: 59k
users.
* commons.wikimedia.org, users active since 2015-03-01: 687k users.
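For a quick sense of scale, here's the arithmetic on the 2015-03-01 figures as percentages of active users (a throwaway sketch of the numbers above, nothing more):

```python
# Skin usage figures from the queries above (users active since 2015-03-01).
enwiki = {"total": 2300000, "monobook": 179000, "cologneblue": 6000}
commons = {"total": 687000, "monobook": 59000, "cologneblue": 1000}

def share(counts, skin):
    """Percentage of active users on a given skin."""
    return 100.0 * counts[skin] / counts["total"]

for name, counts in (("en.wikipedia.org", enwiki),
                     ("commons.wikimedia.org", commons)):
    for skin in ("monobook", "cologneblue"):
        print("%s %s: %.1f%%" % (name, skin, share(counts, skin)))
```

So MonoBook sits at roughly 8% of recently active users on both wikis, while CologneBlue is well under 1% on either.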
-Adam