Hoi,
Two things. First, not everybody has the capacity to run an instance of the toolkit. And when there are reasons for needing the toolkit beyond "the query service does not cope", it makes sense to have instances of the toolkit on Labs where queries like this can be run.
Second, your response is technical, and seriously: the query service is a tool and it should work for people. When the tool is not good enough, fix it. You cannot expect people to engage with the toolkit, because most queries are community incidentals and not part of a scientific endeavour.
Thanks,
GerardM
On 11 February 2016 at 01:34, Stas Malyshev smalyshev@wikimedia.org wrote:
Hi!
I am trying to extract all mappings from Wikidata to the GND authority file, along with the corresponding Wikipedia pages, expecting roughly 500,000 to 1M triples as a result.
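(For context, a query of this kind would presumably look roughly like the minimal Python sketch below, run against the public WDQS SPARQL endpoint. P227 is the GND ID property; the restriction to German Wikipedia sitelinks and the LIMIT are only illustrative assumptions.)

import requests

QUERY = """
SELECT ?item ?gnd ?article WHERE {
  ?item wdt:P227 ?gnd .
  OPTIONAL {
    ?article schema:about ?item ;
             schema:isPartOf <https://de.wikipedia.org/> .
  }
}
LIMIT 100000
"""

# Ask the endpoint for JSON results and print one mapping per line.
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "gnd-mapping-example/0.1 (example)"},
    timeout=300,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"],
          row["gnd"]["value"],
          row.get("article", {}).get("value", ""))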
As a starting note, I don't think extracting 1M triples is the best way to use the query service. If you need to do processing that returns such big result sets - in the millions - maybe processing the dump - e.g. with the Wikidata Toolkit at https://github.com/Wikidata/Wikidata-Toolkit - would be a better idea?
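(The toolkit itself is a Java library; as a rough Python illustration of the same dump-processing idea - a single streaming pass over the JSON dump instead of a live query - something like the sketch below would do. The dump filename, the P227 property, and the choice of the dewiki sitelink are assumptions for the example.)

import gzip
import json

# Minimal sketch: the standard Wikidata JSON dump is one large JSON array
# with one entity object per line, so it can be streamed line by line.
with gzip.open("latest-all.json.gz", "rt", encoding="utf-8") as dump:
    for line in dump:
        line = line.strip().rstrip(",")
        if not line or line in ("[", "]"):
            continue
        entity = json.loads(line)
        for claim in entity.get("claims", {}).get("P227", []):
            snak = claim.get("mainsnak", {})
            if snak.get("snaktype") != "value":
                continue
            gnd = snak["datavalue"]["value"]  # external-id values are plain strings
            title = entity.get("sitelinks", {}).get("dewiki", {}).get("title", "")
            print(entity["id"], gnd, title)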
However, with various calls, I get far fewer triples (about 2,000 to 10,000). The output seems to be truncated in the middle of a statement,
e.g.
It may be some kind of timeout because of the quantity of data being sent. How long does such a request take?
--
Stas Malyshev
smalyshev@wikimedia.org