Hi Jan,
[cc discovery mailing list]
I'm glad you reached out to this list, because I'm very interested to
learn more about this session.
The closest report we have on usage of special search syntax is an
analysis that classified the zero result rate by query feature [1].
Unfortunately that analysis is not focused on special search syntax and
addresses only a few of the keywords supported by CirrusSearch.
I created a ticket [2] to learn more about this. Once it is resolved we
will just have to wait for some data to accumulate, and then we will be
able to provide this information.
PS: if you have more info about what happened during this session, it
would be much appreciated.
Thanks!
David.
[1]
https://upload.wikimedia.org/wikipedia/commons/2/28/From_Zero_to_Hero_-_Ant…
[2] https://phabricator.wikimedia.org/T147045
On 30/09/2016 at 09:25, Jan Dittrich wrote:
> Hello Analytics,
>
> Wikipedia’s search function exposes several modifiers
> (https://www.mediawiki.org/wiki/Help:CirrusSearch)
> On the recent German Wikicon there was a workshop on search and
> several community members seemed to be enthusiastic about these functions.
>
> I wonder if there is existing information about the current use of
> such queries. I did some research, but I could not find out much.
> Such information could help to improve the search function, since
> sometimes a few modifiers are heavily used (despite them being hard to
> access) and could e.g. be exposed via the user interface.
>
> Jan
>
> --
> Jan Dittrich
> UX Design/ User Research
>
> Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> Phone: +49 (0)30 219 158 26-0
> http://wikimedia.de
>
> Imagine a world in which every single human being can freely share in
> the sum of all knowledge. That's our commitment.
>
> Wikimedia Deutschland - Society for the Promotion of Free Knowledge
> e.V. Registered in the register of associations of the Amtsgericht
> Berlin-Charlottenburg under number 23855 B. Recognized as charitable
> by the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
>
>
> _______________________________________________
> Analytics mailing list
> Analytics@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/analytics
Hi everyone,
I've created a first draft of a small glossary of terms we use in search,
including internal-only vocab (PaulScore, Discernatron, RelForge, etc.) and
some general vocab (recall, precision, F1, DCG, etc.).
The glossary lives on mediawiki.org:
https://www.mediawiki.org/wiki/Wikimedia_Discovery/Search/Glossary
This isn't an overly formal glossary, so some of my opinions may have made
it into the definitions.
Feel free to edit, expand, editorialize, or even suggest new items to be
defined.
Thanks,
—Trey
Trey Jones
Software Engineer, Discovery
Wikimedia Foundation
Hello,
The Discovery team has been making good progress enabling cross-language
search results on several wikis, and now we need help translating a
phrase: "*showing results from*".
We recently deployed <https://phabricator.wikimedia.org/T142413> [1] a
language detection algorithm on the Portuguese and Japanese wikis that
detects when a search query is typed in a language other than the main
language of the wiki.
For instance, we can now detect the following languages when they are
typed into a query on these primary-language wikis:
Portuguese: PT, EN, RU, HE, AR, ZH, KO, EL
Japanese: JA, EN, RU, KO, AR, HE
However, we need the system message translated - the message that
notifies the user that the results displayed are from a
different-language wiki. Here are working links from PT
<https://pt.wikipedia.org/w/index.php?search=Washington+Township%2C+Licking+…>
[2] and JA
<https://ja.wikipedia.org/w/index.php?search=Washington+Township%2C+Licking+…>
[3] that show a search example with the results displayed.
*Image
<https://commons.wikimedia.org/wiki/File:Showing_results_from-russian.png>
[4] showing the sample results from an English search typed into the
Russian Wikipedia search box.*
It would be great if we could get these translations into translatewiki so
that the Discovery team can use them via these message keys (message group
link:
https://translatewiki.net/wiki/Special:Translate?group=ext-wikimediainterwikisearchresults):
Portuguese
<https://translatewiki.net/w/i.php?title=Special:Translate&group=ext-wikimed…>
[5]:
search-interwiki-results-enwiki
search-interwiki-results-ruwiki
search-interwiki-results-hewiki
search-interwiki-results-arwiki
search-interwiki-results-zhwiki
search-interwiki-results-kowiki
search-interwiki-results-elwiki
Japanese
<https://translatewiki.net/w/i.php?title=Special:Translate&group=ext-wikimed…>
[6]:
search-interwiki-results-enwiki
search-interwiki-results-ruwiki
search-interwiki-results-kowiki
search-interwiki-results-arwiki
search-interwiki-results-hewiki
Cheers from the Discovery Search Team!
[1] https://phabricator.wikimedia.org/T142413
[2] https://pt.wikipedia.org/w/index.php?search=Washington+Township%2C+Licking+County%2C+Ohio&title=%D0%A1%D0%BB%D1%83%D0%B6%D0%B5%D0%B1%D0%BD%D0%B0%D1%8F:%D0%9F%D0%BE%D0%B8%D1%81%D0%BA&go=%D0%9F%D0%B5%D1%80%D0%B5%D0%B9%D1%82%D0%B8&searchToken=34w86qi6kx0l5ax7jm0ewuuii
[3] https://ja.wikipedia.org/w/index.php?search=Washington+Township%2C+Licking+County%2C+Ohio&title=%D0%A1%D0%BB%D1%83%D0%B6%D0%B5%D0%B1%D0%BD%D0%B0%D1%8F:%D0%9F%D0%BE%D0%B8%D1%81%D0%BA&go=%D0%9F%D0%B5%D1%80%D0%B5%D0%B9%D1%82%D0%B8&searchToken=cbgdevpo338175t32wggwbhqh
[4] https://commons.wikimedia.org/wiki/File:Showing_results_from-russian.png
[5] https://translatewiki.net/w/i.php?title=Special:Translate&group=ext-wikimediainterwikisearchresults&language=pt&filter=&action=translate
[6] https://translatewiki.net/w/i.php?title=Special:Translate&group=ext-wikimediainterwikisearchresults&language=ja&filter=&action=translate
--
Deb Tankersley
Product Manager, Discovery
IRC: debt
Wikimedia Foundation
Hello all!
We had an interesting discussion yesterday with David about the way we
do sharding of our indices on elasticsearch. Here are a few notes for
whoever finds the subject interesting and wants to jump in the
discussion:
Context:
We recently activated row-aware shard allocation on our elasticsearch
search clusters. This means that we now have one additional constraint
on shard allocation: copies of a shard must be spread across multiple
datacenter rows, so that if we lose a full row, we still have a copy
of all the data. During an upgrade of elasticsearch, another
constraint comes into play: a shard can move from a node with an older
version of elasticsearch to a node with a newer version, but not the
other way around. This led to elasticsearch struggling to allocate
all shards during the recent codfw upgrade to elasticsearch 2.3.5.
While it is not the end of the world (we can still serve traffic if
some indices don't have all shards allocated), this is something we
need to improve.
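For anyone unfamiliar with how row awareness is expressed, it comes down to elasticsearch's allocation-awareness cluster setting: each node advertises a custom attribute (here, its datacenter row) and the cluster is told to spread shard copies across values of that attribute. A minimal sketch of the settings body (attribute name `row` as used above; our actual puppet-managed config may differ):

```python
import json

# Sketch only: the cluster-level setting behind row-aware allocation.
# Each node is started with a row attribute, and the cluster spreads
# copies of each shard across the distinct values of that attribute.
settings = {
    "persistent": {
        "cluster.routing.allocation.awareness.attributes": "row"
    }
}
body = json.dumps(settings, indent=2)
print(body)  # this body would be PUT to /_cluster/settings
```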
Number of shards / number of replicas:
An elasticsearch index is split at creation into a number of shards,
and a number of replicas per shard is configured [1]. The total number
of shards for an index is "number_of_shards * (number_of_replicas + 1)".
Increasing the number of shards per index allows a read operation to
execute in parallel over the different shards, with the results
aggregated at the end, improving response time. Increasing the number
of replicas allows the read load to be distributed over more nodes
(and provides some redundancy in case we lose one server). Note that
term frequency [2] is calculated per shard and not over the full
index, so the sharding also affects scoring. There is some black magic
involved in how we shard our indices, but most of it is documented [3].
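The formula is trivial, but spelling it out helps when juggling the numbers later (the 6-shard / 3-replica values are the enwiki_content configuration):

```python
def total_shards(number_of_shards, number_of_replicas):
    # Each primary shard exists once plus `number_of_replicas` copies.
    return number_of_shards * (number_of_replicas + 1)

print(total_shards(6, 3))  # enwiki_content today -> 24
```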
The enwiki_content example:
The enwiki_content index is configured with 6 shards and 3 replicas,
for a total of 24 shards. It also has the additional constraint that
there is at most 1 enwiki_content shard per node. This ensures a
maximum spread of enwiki_content shards over the cluster. Since
enwiki_content is one of the indices with the most traffic, this
ensures that the load is well distributed over the cluster.
Now the bad news: for codfw, which is a 24-node cluster, reaching this
perfect equilibrium of 1 shard per node is a serious challenge once
you take the other constraints into account. Even after relaxing the
constraint to 2 enwiki shards per node, we have seen unassigned shards
during the elasticsearch upgrade.
Potential improvements:
While ensuring that a large index has a number of shards close to the
number of nodes in the cluster allows the load to be spread optimally
over the cluster, it degrades fast if all the stars are not aligned
perfectly. There are two opposite solutions:
1) decrease the number of shards to leave some room to move them around
2) increase the number of shards and allow multiple shards of the same
index to be allocated on the same node
1) is probably impractical for our large indices: enwiki_content
shards are already ~30 GB, which makes them impractical to move around
during relocation and recovery.
2) is probably our best bet. More, smaller shards mean that a single
query's load will be spread over more nodes, potentially improving
response time. Increasing the number of shards for enwiki_content from
6 to 20 (total shards = 80) means we have 80 / 24 ≈ 3.3 shards per
node. Removing the 1-shard-per-node constraint and letting
elasticsearch spread the shards as best it can means that if one node
is missing, or during an upgrade, we still have the ability to move
shards around. Increasing this number even further might help keep the
load evenly spread across the cluster (the difference between 8 and 9
shards per node is smaller than the difference between 3 and 4 shards
per node).
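The per-node arithmetic above, as a quick sketch (cluster size and replica count taken from the numbers in this email, and assuming elasticsearch spreads shards evenly):

```python
def shards_per_node(n_shards, n_replicas, n_nodes):
    # Average number of shards of one index landing on each node,
    # assuming an even spread across the cluster.
    return n_shards * (n_replicas + 1) / n_nodes

print(round(shards_per_node(6, 3, 24), 1))   # current enwiki_content config -> 1.0
print(round(shards_per_node(20, 3, 24), 1))  # proposed config -> 3.3
```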
David is going to run some tests to validate that these smaller shards
don't impact scoring (smaller shards mean worse term frequency
statistics).
I probably forgot a few points, but this email is more than long
enough already...
Thanks to all of you who kept reading until the end!
MrG
[1] https://www.elastic.co/guide/en/elasticsearch/reference/current/_basic_conc…
[2] https://www.elastic.co/guide/en/elasticsearch/guide/current/scoring-theory.…
[3] https://wikitech.wikimedia.org/wiki/Search#Estimating_the_number_of_shards_…
--
Guillaume Lederrey
Operations Engineer, Discovery
Wikimedia Foundation
UTC+2 / CEST
The Search Team in Discovery needs your help! Discernatron [1] is a search
relevance tool developed by the Discovery department. Its goal is to help
improve search relevance - showing articles that are most relevant to
search queries - with human assistance. We need your help grading search
results!
Join us for lunch at 12pm (SF time) in the 5th floor lounge on Tuesday,
September 13th! At the Discernatron lunch, we'll give a brief overview of
what the Discernatron is, then ask people to start rating queries, so bring
your laptops! We're hoping a limited amount of food will be provided for
the event, but you can only eat it if you agree to rate queries for us. ;-)
We'll also be set up for remote participation on Hangouts and IRC, and the
session will be recorded.
Hangout: https://hangouts.google.com/hangouts/_/7pcv3gtfcbczzhaxbezyqhedfee
YouTube stream: https://www.youtube.com/watch?v=q4W9t6IcjWk
Thanks! If there are any questions, let me know!
Dan
[1]: https://www.mediawiki.org/wiki/Discernatron
--
Dan Garry
Lead Product Manager, Discovery
Wikimedia Foundation
Hi Discovery,
I'm wondering if there are any significant improvements coming in the next
+/- 18 months for multimedia search on Commons. Finding images can be a
very time-consuming job for Wikimedians and other users of the site.
For example, it would be nice to be able to do a "join" search with
categories, so that only media files that appear in 2+ selected categories
are shown in search results.
As an example, suppose that I want an image of a blue tile roof in China.
The search "blue tile roof China" doesn't show any results that interest me
on the first page. However,
https://commons.wikimedia.org/wiki/File:Wuhan_University_-_roof_tiles.JPG
is an image that would interest me. That file is several layers deep in
subcategories under the China category. It would be nice to be able to find
that image by searching a join of the China category and its subcategories,
with the "blue roof" category.
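For what it's worth, CirrusSearch's existing `incategory:` keyword already expresses a flat category intersection through the MediaWiki search API, though it matches direct category membership only and does not walk subcategory trees the way this example would need. A sketch (the category names here are made up for illustration):

```python
from urllib.parse import urlencode

# Sketch of a category-intersection search using CirrusSearch's
# `incategory:` keyword via the MediaWiki API. Caveat: `incategory:`
# matches direct membership only, not subcategories, so it does not
# solve the deep-subcategory problem described above.
def commons_search_url(*categories):
    query = " ".join('incategory:"%s"' % c for c in categories)
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "format": "json",
    }
    return "https://commons.wikimedia.org/w/api.php?" + urlencode(params)

# Hypothetical category names, for illustration only:
print(commons_search_url("Roofs in China", "Blue roofs"))
```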
Thanks,
Pine