Hi, we'll have to deprecate both usages due to how the software is
configured.
If migrating the clients turns out to be a pain for some reason, we can
negotiate the deprecation date, of course…
-Adam
On Mon, Aug 27, 2018 at 1:57 PM Ryan Kaldari <rkaldari(a)wikimedia.org> wrote:
> Thanks for the update (and the switch to a less confusing score name)! Just
> to clarify, when you say "we might pull the plug after four weeks after
> this announcement," does that refer to wp10 just in the generic scores
> request (e.g. https://ores.wikimedia.org/v3/scores/enwiki/855137823) or
> also in the specific scores request (e.g.
> https://ores.wikimedia.org/v3/scores/enwiki/855137823/wp10)? In other
> words, will https://ores.wikimedia.org/v3/scores/enwiki/855137823/wp10
> still work a month from now, or does that also need to be migrated to
> "articlequality"?
>
> On Mon, Aug 27, 2018 at 1:50 PM Amir Sarabadani <
> amir.sarabadani(a)wikimedia.de> wrote:
>
> > Hello,
> > If you don't use ORES API, please ignore this email.
> >
> > If you are using the wp10 models in your tool, gadget, or research, please
> > note that these models have been renamed to "articlequality" to better
> > reflect what they are (in comparison to "editquality"). articlequality
> > models are deployed on the English, Russian, French, Persian, Turkish, and
> > Basque Wikipedias.
> >
> > So URLs like this:
> > https://ores.wikimedia.org/v3/scores/enwiki/855137823/wp10
> > need to be changed to this:
> > https://ores.wikimedia.org/v3/scores/enwiki/855137823/articlequality
> >
> > Same goes with parsing the results.
> >
> > The "wp10" still exists as an alias and if you don't determine models
> > (meaning you want scores for all models) we respond with wp10 and
> > articlequality data duplicated [1] but we might pull the plug after four
> > weeks after this announcement.
> >
> > [1]: For example see:
> > https://ores.wikimedia.org/v3/scores/enwiki/855137823
> >
> > For more information see: https://phabricator.wikimedia.org/T196240
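[Editor's note] For tool authors, the parsing change described above might look like the following minimal sketch. The helper name and the abbreviated response dict are ours, not part of ORES; the shape follows the v3 example URL above, and the fallback handles the transition window in which both model names may appear.

```python
# Extract an article quality prediction from an ORES v3 response.
# During the transition, responses to requests that don't specify a
# model may contain both "wp10" and "articlequality" with duplicated
# data, so prefer the new name and fall back to the deprecated alias.

def extract_quality(response, wiki, revid):
    models = response[wiki]["scores"][revid]
    for name in ("articlequality", "wp10"):
        if name in models:
            return models[name]["score"]["prediction"]
    raise KeyError("no article quality model in response")

# Abbreviated example response (shape as in the v3 scores API):
sample = {
    "enwiki": {
        "scores": {
            "855137823": {
                "articlequality": {"score": {"prediction": "B"}},
                "wp10": {"score": {"prediction": "B"}},
            }
        }
    }
}

print(extract_quality(sample, "enwiki", "855137823"))
```

Once the alias is removed, responses will contain only "articlequality", and the same helper keeps working.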
> >
> > Best
> > --
> > Amir Sarabadani
> > Software Engineer
> >
> > Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> > Tel. (030) 219 158 26-0
> > http://wikimedia.de
> >
> > Imagine a world in which every single human being can freely share in
> > the sum of all knowledge. Help us make it happen!
> > http://spenden.wikimedia.de/
> >
> > Wikimedia Deutschland – Gesellschaft zur Förderung Freien Wissens e. V.
> > Registered in the register of associations of the Amtsgericht
> > Berlin-Charlottenburg under number 23855 B. Recognized as a charitable
> > organization by the Finanzamt für Körperschaften I Berlin, tax number
> > 27/029/42207.
> > _______________________________________________
> > Wikitech-l mailing list
> > Wikitech-l(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Good morning/afternoon/evening everyone,
If you are an editor of the French, Italian or English Wikipedia, and you
are curious about how to contribute to technologies for improving
verifiability of Wikipedia articles, please read on—we need your help!
In the context of the Knowledge integrity
<https://meta.wikimedia.org/wiki/Knowledge_Integrity> program, we (the
WMF Research
team <http://research.wikimedia.org>) are studying ways to flag unsourced
statements needing a citation
<https://meta.wikimedia.org/wiki/Research:Identification_of_Unsourced_Statem…>
using machine learning, with the aim of identifying areas where adding
high-quality citations is particularly urgent or important. Following the
success of the first labeling campaign
<https://meta.wikimedia.org/wiki/Research:Identification_of_Unsourced_Statem…>,
we now need to collect additional, high-quality labeled data regarding
why sentences
need citations.
You are invited to participate in a second annotation task
<https://meta.wikimedia.org/wiki/Research:Identification_of_Unsourced_Statem…>.
We used your input from the last experiment to generate a taxonomy of
reasons
<https://meta.wikimedia.org/wiki/Research:Identification_of_Unsourced_Statem…>
why editors add citations. With this taxonomy now embedded in the
interface, the annotation experience will be much faster and more fun.
If you are interested in participating, please go to
http://labels.wmflabs.org/ui/enwiki/ (replace enwiki with itwiki or frwiki
if you speak Italian or French), log in, and from 'Labeling Unsourced
Statements II', request one (or more) worksets. For each task in a
workset, the tool will show you an unsourced sentence in an article and ask
you to annotate it. You can then label the sentence as needing an inline
citation or not, and specify a reason for your choice from a drop-down
menu. If you can't respond, please select 'skip'. You can also sign up by
(optionally) adding your name on this page
<https://meta.wikimedia.org/wiki/Research:Identification_of_Unsourced_Statem…>
to receive updates about future campaigns and results from this research.
If you have any questions or comments about this project, please let us
know by contacting miriam(a)wikimedia.org or leaving a message on the talk
page of the project
<https://meta.wikimedia.org/wiki/Research_talk:Identification_of_Unsourced_S…>.
Thank you for your time!
Miriam, Jonathan, and Dario
--
Jonathan T. Morgan
Senior Design Researcher
Wikimedia Foundation
User:Jmorgan (WMF) <https://meta.wikimedia.org/wiki/User:Jmorgan_(WMF)>
Forwarding.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Sarah R <srodlund(a)wikimedia.org>
Date: Fri, Aug 10, 2018 at 10:46 PM
Subject: [Analytics] Wikimedia Research Showcase August 13 2018 at 11:30 AM
(PDT) 18:30 UTC
To: <wikimedia-l(a)lists.wikimedia.org>, <wiki-research-l(a)lists.wikimedia.org>,
<analytics(a)lists.wikimedia.org>
Hi Everyone,
The next Wikimedia Research Showcase will be live-streamed on August 13,
2018, at 11:30 AM PDT (18:30 UTC).
YouTube stream: https://www.youtube.com/watch?v=OGPMS4YGDMk
As usual, you can join the conversation on IRC at #wikimedia-research, and
you can watch our past research showcases here.
<https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase#Upcoming_Showcase>
Hope to see you there!
This month's presentation is:
*Quicksilver: Training an ML system to generate draft Wikipedia articles
and Wikidata entries simultaneously*
John Bohannon and Vedant Dharnidharka, Primer
The automatic generation and updating of Wikipedia articles is usually
approached as a multi-document summarization task: Given a set of source
documents containing information about an entity, summarize the entity.
Purely sequence-to-sequence neural models can pull that off, but getting
enough data to train them is a challenge. Wikipedia articles and their
reference documents can be used for training, as was recently done
<https://arxiv.org/abs/1801.10198> by a team at Google AI. But how do you
find new source documents for new entities? And besides having humans read
all of the source documents, how do you fact-check the output? What is
needed is a self-updating knowledge base that learns jointly with a
summarization model, keeping track of data provenance. Lucky for us, the
world’s most comprehensive public encyclopedia is tightly coupled with
Wikidata, the world’s most comprehensive public knowledge base. We have
built a system called Quicksilver that uses them both.
_______________________________________________
Analytics mailing list
Analytics(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/analytics