Hello Martynas,
Interesting to read about ESI at http://en.wikipedia.org/wiki/Edge_Side_Includes.
I recall that a query facility is intended for Phase 3, but I have no idea what kind of store is planned. I'd think a quad-store is appropriate, with provenance data in mind for each triple. It'd be interesting to know whether existing quad-stores can handle the PROV namespace; I see some interesting references at the end of http://www.w3.org/TR/2012/WD-prov-aq-20120619/ to explore!
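Just to sketch the kind of arrangement I have in mind -- the URIs and the single prov:wasDerivedFrom statement below are purely illustrative, not anything taken from the Wikidata design:

    # Each statement's triple lives in its own named graph;
    # provenance about that graph is then expressed with PROV terms.
    PREFIX prov: <http://www.w3.org/ns/prov#>

    INSERT DATA {
      GRAPH <http://example.org/statement/42> {
        <http://example.org/Berlin> <http://example.org/population> 3500000 .
      }
      <http://example.org/statement/42>
          prov:wasDerivedFrom <http://example.org/source/census2011> .
    }

Any quad-store with SPARQL 1.1 Update support should be able to take something along these lines, which is why I'd like to know how the existing ones fare with the PROV vocabulary.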
Best - john
On 21.06.2012 14:27, Martynas Jusevičius wrote:
John,
I pretty much second your concerns.
Do you know Edge Side Includes (ESI)? I was thinking about using them with XSLT and Varnish to compose pages from remote XHTML fragments.
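Roughly the kind of markup that would involve -- the URL here is invented, and Varnish only processes a subset of ESI, but esi:include is part of that subset:

    <!-- Page skeleton produced by XSLT on the origin; Varnish resolves
         the include at the edge and splices in the remote XHTML fragment. -->
    <div id="statements">
      <esi:include src="http://wikidata.example.org/fragment/Q42" />
    </div>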
Regarding scalability -- I can only see these possible cases: either Wikidata will not have any query language, or its query language will be SQL with never-ending JOINs too complicated to be useful, or it's going to be another query language translated to SQL -- for example SPARQL, which is doable, but attempts have shown it doesn't scale. A native RDF store is much more performant.
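To illustrate the JOIN problem with a made-up triples(subject, property, object) table -- not any actual Wikidata schema -- even a two-pattern SPARQL query already needs a self-join, and every additional pattern adds another:

    # SPARQL: countries in Europe together with their capitals
    SELECT ?country ?capital WHERE {
      ?country <http://example.org/partOf>  <http://example.org/Europe> .
      ?country <http://example.org/capital> ?capital .
    }

    -- Rough SQL equivalent over a generic triples table:
    SELECT t1.subject AS country, t2.object AS capital
    FROM triples t1
    JOIN triples t2 ON t2.subject = t1.subject
    WHERE t1.property = 'http://example.org/partOf'
      AND t1.object   = 'http://example.org/Europe'
      AND t2.property = 'http://example.org/capital';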
Martynas
graphity.org
On Thu, Jun 21, 2012 at 11:34 PM, jmcclure@hypergrove.com wrote:
Hello,

For the current demo system, how many triple store retrievals are being performed per Statement per page? Is this more or less or the same as expected under the final design? Under the suggested pure-transclusion approach, I believe the answer is "zero", since all retrievals are performed asynchronously with respect to client wiki transclusion requests.

Are additional triple store retrievals (or updates) occurring? Such as ones to inform a client wikipedia about the currency of Statements previously retrieved from wikidata? In a pure-transclusion approach, such info is "easy" to get at: clients query the [[modification date::]] of each transclusion. Can you point me to a (transaction-level) design for keeping client wikis in sync with Statement-level wikidata content?

I'm also concerned about stability & scalability. What happens to the performance of client wikis should the wikidata host be hobbled by DOS attacks, or inadvertent long-running queries, or command line maintenance scripts, or poorly designed wikibots or, as expected, by possibly tens of thousands of wikis accessing the central wikidata host? Under the pure-transclusion approach, my concerns are not nearly the same, since all transcludable content is cached on squid servers....

Thanks - john
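By way of illustration, a client wiki could check the currency of a transcluded page with a Semantic MediaWiki query along these lines -- the page name is a placeholder, and this assumes the built-in "Modification date" special property is available:

    {{#show: Some transcluded statement page | ?Modification date }}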