Hi all,
The Discovery team is currently assisting the Services team in establishing
offline use case support and performance enhancements for Wikimedia. Our
role is quite small at this time: we are helping others use the
Wikipedia.org [1] portal page as a test page for offline use.
More information can be found in this ticket [2].
Cheers,
The Discovery Portal Team
[1] https://www.wikipedia.org
[2] https://phabricator.wikimedia.org/T150200
--
deb tankersley
Product Manager, Discovery
irc: debt
Wikimedia Foundation
TL;DR: How fast should new content (maps) features be rolled out, and how
ready should they be? Constant but smaller improvements seem better.
As we gradually roll out maps to a wider and wider audience, I would like
to get some feedback on how we approach new feature roll-outs.
Frankly, WMF has botched a number of releases in the past. In a way, this
is great, because it means both WMF and volunteers are still eager to
improve Wikipedia; we are still trying to make things better. On the other
hand, it is never a good thing to irritate the most important group - our
community. So there is always a compromise: when and what to roll out, how
to make it least disruptive vs how to improve the usability and the content
quality.
For wikis, there are two types of improvements: user interface and content.
User interface features change how one views and edits a site's content,
so any change immediately affects everyone. We try to mitigate this with
"Beta features" -- logged-in users may enable new functionality before it
is enabled by default for everyone -- but the vast majority of readers are
not logged in, so when a feature is enabled by default, it is still a
serious and instant change.
Maps are not really a part of interface because they only appear as part of
the page content, which is fully controlled by the community.
Content features allow editors to add new types of content: maps, graphs,
sheet music, text formatting, or new types of templates. There is no way to
hide a new content feature behind a "beta flag" because everyone sees the
same content, but content features are not disruptive because they depend
on editors to add them to pages. The community has full control: if it does
not like a feature, or feels the feature is not ready yet, the feature
simply will not be used. The only time content changes are disruptive is
when support for a widely used feature changes or gets disabled. This is
clearly not the case with the maps roll-out on Wikipedia.
The WOW effect, and marketing in general, have both positive and negative
effects on a feature roll-out. If a feature is quietly enabled, only the
more engaged community members will experiment with it, discuss the
feature's best usage, give feedback on how to improve it, and eventually
adopt it at their own pace. Massive marketing of a feature would attract a
lot of attention and expedite adoption, but may also create some amount of
negativity if the community feels the feature is not yet ready.
I also feel it is very dangerous to delay releasing new features until they
are perfect. We (developers/PMs/...) may think we know what feature is
needed, but most likely we are wrong. If we delay, we may spend a lot of
resources polishing something that is not needed. Instead, by releasing
early, we let the community's feedback put us back on the right track. Yes,
an early release may not be as polished, but at least we will quickly
change direction, producing a truly needed feature that can be refined
later. Of course this is much easier to do with content features than with
user interface changes (hence the UI's "feature flag").
In light of this, I feel it is better to continuously roll out small
content-related features without much publicity (e.g. the Village pump is
OK, a blog post might be less so), and to continuously improve based on
community feedback. Once a feature has been out for some time and there is
general consensus that it is good, we can start the marketing push. This
approach creates less stress on the community liaisons, developers, and
servers. Feedback is received in smaller portions and can be properly acted
on.
Hello,
The search-as-you-type completion suggester, which powers the search
function at the top right of every page (or in the sidebar in Monobook),
can now be configured at Special:Preferences. The default setting includes
our most recent improvements to search while the new options make it easy
to restrict the completion suggester. This is useful when searching for
specific text. A description of the preferences can be
found on MediaWiki.org [0] or inline at Special:Preferences. Feedback and
questions are welcome. [1]
[0]
https://www.mediawiki.org/wiki/Extension:CirrusSearch/CompletionSuggester
[1] https://www.mediawiki.org/wiki/Help:CirrusSearch/CompletionSuggester
Yours,
Chris Koerner
Community Liaison - Discovery
Wikimedia Foundation
We noticed this morning in our weekly relevance meeting that the Q2 goal
said it was about enabling cross-wiki search on all Wikimedia projects. To
keep this in line with what we have been talking about, I've updated the
goal to say 'Enable the backend for cross-project searching from
Wikipedias to their sister projects'. As far as I'm aware this is our
primary goal, and it fits with the overall intention of bringing more
exposure to sister projects. Feel free to adjust if I was wrong.
https://www.mediawiki.org/w/index.php?title=Wikimedia_Engineering/2016-17_Q…
Howdy! The dashboards are down. The labs instance that's hosting
http://discovery.wmflabs.org is not responding (not even to reboot calls
from Horizon) so I might have to get someone from Ops/Labs to terminate it
and then I'd have to recreate it, which would take a while.
The back-up instance (http://discovery-beta.wmflabs.org) IS responding but
there's a totally different error that I'm troubleshooting right now.
Looking at labs mailing list and #wikimedia-labs, this looks like it could
be related to current labs maintenance work. In any case, I created a task
to track progress: https://phabricator.wikimedia.org/T149735
I apologize for any inconvenience.
– Mikhail on behalf of Discovery/Analysis
For a little backstory: in the Discernatron, multiple judges provide scores
from 0 to 3 for search results. Typically we request that each query be
reviewed by only two judges. We would like to measure the level of
disagreement between these two judges and, if it crosses some threshold,
get two more scores, so we can then measure disagreement in the group of 4.
Somehow, though, we need to define how to measure that level of
disagreement and what the threshold for needing more scores is.
Some specialized concerns:
* It is probably important to include not just that the users gave
different values, but also how far apart they are. The difference between a
3 and a 2 is much smaller than between a 2 and a 0.
* If the users agree that 80% of the results are all 0, but disagree on the
last 20%, the average disagreement is low, yet that disagreement is
probably still important. It might be worthwhile to remove all agreed-upon
irrelevant results before calculating disagreement? Not sure...
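One simple starting point, purely as a sketch (the function name, the
drop-agreed-zeros option, and the example threshold are my own assumptions,
not anything the Discernatron does today), would be a mean absolute score
difference that can optionally ignore results both judges already agreed
were irrelevant:

```python
# Hypothetical sketch of a disagreement measure for two judges who each
# scored the same results on a 0-3 relevance scale. Not the actual
# Discernatron implementation.

def disagreement(scores_a, scores_b, drop_agreed_zeros=True):
    """Mean absolute score difference between two judges.

    With drop_agreed_zeros=True, results that both judges scored 0
    (agreed-upon irrelevant results) are excluded first, so widespread
    agreement on zeros does not mask disagreement on the rest.
    """
    pairs = list(zip(scores_a, scores_b))
    if drop_agreed_zeros:
        pairs = [(a, b) for a, b in pairs if not (a == 0 and b == 0)]
    if not pairs:
        return 0.0  # judges fully agreed that everything was irrelevant
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Example: judges agree on four zeros but split on the last two results.
a = [0, 0, 0, 0, 3, 2]
b = [0, 0, 0, 0, 1, 0]
print(disagreement(a, b, drop_agreed_zeros=False))  # 4/6 ~ 0.67
print(disagreement(a, b))                           # 4/2 = 2.0

NEEDS_MORE_JUDGES = 1.0  # illustrative threshold, to be tuned
print(disagreement(a, b) > NEEDS_MORE_JUDGES)       # True
```

This naturally captures the "how far apart" point, since |3 - 2| counts
less than |2 - 0|. A more principled alternative would be a
chance-corrected statistic such as linearly or quadratically weighted
Cohen's kappa, which also weights disagreements by their distance.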
I know we have a few math nerds here on the list, so hoping someone has a
few ideas.