Thanks for your response. The systems it's running on are actually well equipped, with multiple cores and plenty of memory.
I believe the problem arose because the rccontinue parameter was not being carried forward from previous calls to the Wikibase API. Refactoring the code to do so seems to have fixed the problem.
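For anyone hitting the same issue, the fix amounts to threading the API's "continue" block into the next request. A minimal sketch (the fetch function is a placeholder for whatever performs the HTTP request; all names here are illustrative, not the actual updater code):

```python
# Minimal sketch of MediaWiki API continuation handling.
# "fetch" stands in for the real HTTP call; names are illustrative.

def poll_recent_changes(fetch):
    """Collect recent changes, threading the continuation parameters
    (rccontinue et al.) from each response into the next request."""
    params = {"action": "query", "list": "recentchanges", "format": "json"}
    changes = []
    while True:
        response = fetch(params)
        changes.extend(response["query"]["recentchanges"])
        cont = response.get("continue")
        if cont is None:                  # no "continue" block -> caught up
            break
        params = {**params, **cont}       # carry rccontinue forward
    return changes
```

The bug we had was equivalent to resetting params on every call instead of merging the "continue" block in, so the updater kept re-fetching the same window of changes and never caught up.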
Cheers,
On 11/02/2016 11:57 AM, Stas Malyshev wrote:
> Hi!
>
>> We've been using a locally installed wikidata stand-alone service (https://www.mediawiki.org/wiki/Wikidata_query_service/User_Manual#Standalone...) for several months now. Recently the service went down for a significant amount of time, and when we ran runUpdate.sh -n wdq, instead of catching up to real time as it usually does, the update process lagged, failing even to keep parity with real time.
>
> Hmm... This usually means that the Blazegraph install is underpowered and the queries for update can't run in time. Try increasing the batch size, maybe, but usually that doesn't change much if the host is not performant enough to keep up with the data.
>
>> INFO org.wikidata.query.rdf.tool.RdfRepository - HTTP request failed: org.apache.http.NoHttpResponseException: wikidata.cb.ntent.com:9999 failed to respond, retrying in 2175 ms.
>
> Do you have any other exceptions surrounding it, or any accompanying exceptions on the Blazegraph side?
>
>> This problem started about 3 days ago, and we're now polling up to a point in time 18 hours earlier than real time.
>
> It also can happen if the edit volume spikes, and then it should catch up when the spike passes. But if that's not the case, I'd try to run Blazegraph on a stronger machine.
>
>> Also: is this an appropriate list to write to with such problems? Are there more appropriate places?
>
> The Blazegraph list could also help, for BG-specific questions: Bigdata-developers@lists.sourceforge.net. It is a good platform to discuss performance and optimization questions.