Hi,
This is the kind of (technical) feedback that makes sense, as it is centred
on need. It acknowledges that more needs to be done: we are not yet ready
for what we expect of ourselves in the first place.
In this day and age of big data, we are a very public place that a lot of
initiatives gravitate to. If the WMF wants to retain its relevance, it has
to face these challenges. Maybe the WDQS can steal a page from the
architecture of what Magnus built: it is very much replicable, and multiple
instances have been running. All of this makes it more and more relevant to
have the Wikidata toolkit available from Labs, with as many instances as
needed.
Thanks,
GerardM
On 12 February 2016 at 00:04, Stas Malyshev <smalyshev(a)wikimedia.org> wrote:
Hi!
We basically have two choices: either we offer a limited interface that
only allows for a narrow range of queries to be run at all, or we offer a
very general interface that can run arbitrary queries, but we impose
limits on time and memory consumption. I would actually prefer the first
option, because it's more predictable, and doesn't get people's hopes up
too far. What do you think?
That would require implementing a pretty smart SPARQL parser... I don't
think it's worth the investment of time. I'd rather put caps on runtime,
and maybe also on parallel queries per IP, to ensure fair access. We may
also have a way to run longer queries - in fact, we'll need it anyway if
we want to automate lists - but that is longer term; we'll need to figure
out the infrastructure for that and how we allocate access.
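The per-IP cap on parallel queries could be sketched roughly like this. This is only an illustration of the idea, not what WDQS actually implements; the class name, the limit of 2, and the reject-rather-than-queue behaviour are all assumptions:

```python
import threading
from collections import defaultdict


class PerIpLimiter:
    """Allow each client IP at most `max_parallel` concurrent queries.

    Requests beyond the cap are rejected immediately rather than queued,
    so one heavy user cannot monopolise the service.
    """

    def __init__(self, max_parallel=2):
        self.max_parallel = max_parallel
        self.lock = threading.Lock()
        self.active = defaultdict(int)  # ip -> number of running queries

    def acquire(self, ip):
        # Try to claim a query slot for this IP; False means "try later".
        with self.lock:
            if self.active[ip] >= self.max_parallel:
                return False
            self.active[ip] += 1
            return True

    def release(self, ip):
        # Free the slot once the query finishes (or times out).
        with self.lock:
            if self.active[ip] > 0:
                self.active[ip] -= 1
```

A runtime cap would sit alongside this: the query executor would kill any query exceeding a wall-clock budget and then call release() for that IP.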
--
Stas Malyshev
smalyshev(a)wikimedia.org
_______________________________________________
Wikidata mailing list
Wikidata(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata