Hi Tom, and thanks Lydia for the clarification.
The request for comments (RFC) aims at gathering feedback on both
the primary sources tool and the available datasets (especially
StrepHit), which are closely intertwined: the dataset is loaded into
the tool, so people can play with both in a single interaction and
leave their thoughts in the RFC.
Sorry if the title is misleading: the pipeline is indeed semi-automatic,
as the StrepHit dataset is generated automatically, while its validation
requires human attention.
Since I'm trying to centralize the discussion, it would be great if you
could expand on the three fundamental questions you raised in the RFC.
On Tue, Jun 14, 2016 at 1:03 AM Tom Morris wrote:
I'm confused by this from today's Wikidata weekly summary:
* New request for comments: Semi-automatic Addition of References
to Wikidata Statements - feedback on the Primary Sources Tool
First of all, the title makes no sense because "semi-automatic
addition of references to Wikidata statements" is one of the main
things that the tool can't currently do. You'll almost always end up
with duplicate statements if there's an existing statement, rather
than the desired behavior of just adding the statement.
Second, I'm not sure who "Hjfocs" is (why does everyone have to make
up fake wikinames?), but why are they asking for more feedback when
there's been *ample* feedback already? There hasn't been an issue
with getting people to test the tool or provide feedback based on
the testing. The issue has been with getting anyone to *act* on the
feedback. Everything is a) "too hard," or b) "beyond our scope,"
or depends on something in category a or b, or is incompatible with
the arbitrary implementation scheme chosen, or some other excuse.
We're 12-18+ months into the project, depending on how you measure,
and not only is the tool not usable yet, but it's no longer
improving, so I think it's time to take a step back and ask some
fundamental questions:
- Are the current data pipeline and front-end gadget the right
approach and the right technology for this task? Can they be fixed
to be suitable for users?
- If so, should Google continue to have sole responsibility for it
or should it be transferred to the Wikidata team or someone else
who'll actually work on it?
- If not, what should the data pipeline and tooling look like to
make maximum use of the Freebase data?
The whole project needs a reboot.
I realize you are upset, but you are really barking up the wrong tree.
Marco is trying to give the whole thing more structure and sort through
all the requests to find a way forward. He is actually doing something
constructive about the issues you are raising.
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/029/42207.
Wikidata mailing list