On 12/21/16 2:52 PM, Ruben Verborgh wrote:
For this, I'd like to point to the overall aim of the LDF project, as documented on our website and papers. Summarizing: the SemWeb community has almost exclusively cared about speed so far concerning query execution. This has resulted in super-fast, but super-expensive services, which simply don't work on the public Web. More than half of all public SPARQL endpoints are down for more than 1.5 days each month [1].
Ruben,
The Semantic Web community hasn't focused exclusively on query execution speed.
Anyone who uses a service (Web or Semantic Web) expects results within an acceptable timeframe (typically <= 250 ms); that's a function of user behavior on the Web or anywhere else. Thus, a less overarching characterization would be: the Linked Open Data community, a subsegment of the Semantic Web community, has focused on providing solutions that work. A prominent example (one that I know well) is DBpedia, along with the many bubbles around it in the LOD Cloud.
You will find that Wikidata is doing the very same thing, but with much more hardware at its disposal, since it has more funding than DBpedia at this point in time.
Those basic response-time expectations of users drive everything, all the time.
The key issue here is what method a given service provider chooses to meet the user expectations I've outlined above. Fundamentally, each service provider will use some variety of deployment techniques that boil down to:
1. Massive Server Clusters (sharded) and Proxies
2. Fast multi-threaded instances (no sharding, but replicated across a topology) behind proxies that function as traffic cops, so to speak (a rough sketch of this pattern follows below).
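To make the second pattern concrete, here is a minimal sketch in Python (standard library only) of a proxy acting as a "cop" in front of a single SPARQL endpoint: it forwards queries to a backend instance and rejects clients that exceed a per-minute quota. The backend URL, port, and quota are illustrative assumptions, not a description of any actual DBpedia or Wikidata setup.

# throttle_proxy.py -- illustrative sketch, not a production deployment.
import time
import urllib.parse
import urllib.request
from collections import defaultdict, deque
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BACKEND = "http://localhost:8890/sparql"   # assumed local SPARQL endpoint
MAX_REQUESTS_PER_MINUTE = 60               # assumed per-client quota

recent = defaultdict(deque)                # client IP -> timestamps of recent requests

class ThrottlingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        now = time.time()
        window = recent[ip]
        # Drop timestamps older than 60 seconds, then check the quota.
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= MAX_REQUESTS_PER_MINUTE:
            self.send_error(429, "Too Many Requests")
            return
        window.append(now)
        # Forward the query string unchanged to the backend endpoint.
        query = urllib.parse.urlparse(self.path).query
        with urllib.request.urlopen(f"{BACKEND}?{query}") as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/sparql-results+json"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), ThrottlingProxy).serve_forever()

The point of the sketch is only that the policing happens at the proxy layer, in front of a fast backend instance; the backend itself stays focused on answering queries quickly.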
Your "Simply doesn't work on the public Web" claim is subjective, I've told you that repeatedly. I am sure others will ultimately tell you the very same thing :)