It measures the zero results rate for 1 in 10 search requests via the CirrusSearchUserTesting log that we used last quarter.
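For anyone unfamiliar with the metric, a minimal sketch of the computation over sampled log events (the field name "hits" is a made-up stand-in, not the actual CirrusSearchUserTesting schema):

```python
# Sketch: compute the zero results rate from a sample of search log
# events. The "hits" field is hypothetical, not the real log schema.

def zero_results_rate(events):
    """Fraction of sampled search requests that returned no results."""
    searches = [e for e in events if "hits" in e]
    if not searches:
        return 0.0
    zero = sum(1 for e in searches if e["hits"] == 0)
    return zero / len(searches)

sample = [{"hits": 0}, {"hits": 12}, {"hits": 0}, {"hits": 3}]
print(zero_results_rate(sample))  # 0.5
```

The 1-in-10 sampling means the rate is an estimate, but with search volumes this large the sampling error should be negligible.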

On Mon, Nov 2, 2015 at 6:01 PM, Oliver Keyes <okeyes@wikimedia.org> wrote:
Define this "does it do anything?" test?

On 2 November 2015 at 19:58, Erik Bernhardson
<ebernhardson@wikimedia.org> wrote:
> Now that we have the feature deployed (behind a feature flag), an
> initial "does it do anything?" test going out today, and an upcoming
> integration with our satisfaction metrics, we need to come up with how
> we will try to move the needle further.
>
> For reference these are our Q2 goals:
>
> Run A/B test for a feature that:
>
> Uses a library to detect the language of a user's search query.
> Adjusts results to match that language.
>
> Determine from A/B test results whether this feature is fit to push to
> production, with the aim to:
>
> Improve search user satisfaction by 10% (from 15% to 16.5%).
> Reduce zero results rate for non-automata search queries by 10%.
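As a rough illustration of the detect-and-retry idea behind the feature (all function names here are hypothetical stubs, not the actual CirrusSearch implementation or a real detection library):

```python
# Sketch of the detect-then-retry idea: if a query on the current wiki
# returns nothing, detect its language and retry against the matching
# wiki's index. detect_language and search are stand-in stubs.

def detect_language(query):
    # Stand-in for a real language-detection library: crude check for
    # Cyrillic characters, purely for illustration.
    return "ru" if any("\u0400" <= ch <= "\u04ff" for ch in query) else "en"

def search(query, lang):
    # Stand-in backend: pretend each index only matches its own language.
    if detect_language(query) == lang:
        return ["result from %swiki" % lang]
    return []

def search_with_language_fallback(query, wiki_lang="en"):
    results = search(query, wiki_lang)
    if not results:
        detected = detect_language(query)
        if detected != wiki_lang:
            results = search(query, detected)
    return results

print(search_with_language_fallback("Москва"))  # ['result from ruwiki']
```

The key design point is that detection only fires on the zero-results path, so queries that already succeed on the local wiki are untouched.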
>
> We brainstormed a number of possibilities here:
>
> https://etherpad.wikimedia.org/p/LanguageSupportBrainstorming
>
>
> We now need to decide which of these ideas to prioritize. We might
> want to consider which of them can be pre-tested with our relevancy
> lab work, so that we can prefer the ones we think will move the
> needle the most. I'm really not sure which of these to push forward
> on, so let us know which you think could have the most impact, or
> where the expected impact could be measured in the relevancy lab
> with minimal work.
>
>
>
> _______________________________________________
> discovery mailing list
> discovery@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/discovery
>



--
Oliver Keyes
Count Logula
Wikimedia Foundation
