Hello,

I am currently downloading the latest TTL file to a machine with 250 GB of RAM. I will see if that is sufficient to run the conversion; otherwise we have another, busier machine with around 310 GB.
For querying I use the Jena query engine. I have created a module called HDTQuery, available at http://download.systemsbiology.nl/sapp/, which is a simple program, still under development, that should be able to use the full power of SPARQL and be more advanced than grep… ;)
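
In case it is useful to others, here is a minimal sketch of the approach (not the actual HDTQuery code, and the file name and query are just placeholders): it memory-maps an HDT file and exposes it to Jena's ARQ engine through the hdt-jena bindings, so any SPARQL query can be run against it.

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QueryFactory;
    import org.apache.jena.query.ResultSetFormatter;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.rdfhdt.hdt.hdt.HDT;
    import org.rdfhdt.hdt.hdt.HDTManager;
    import org.rdfhdt.hdtjena.HDTGraph;

    public class HDTQueryExample {
        public static void main(String[] args) throws Exception {
            // Memory-map the (indexed) HDT file instead of loading it fully into RAM.
            HDT hdt = HDTManager.mapIndexedHDT("wikidata.hdt", null);

            // Wrap the HDT as a read-only Jena graph so ARQ can run SPARQL over it.
            Model model = ModelFactory.createModelForGraph(new HDTGraph(hdt));

            String sparql = "SELECT ?p (COUNT(*) AS ?n) WHERE { ?s ?p ?o } GROUP BY ?p LIMIT 10";
            try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
                ResultSetFormatter.out(qe.execSelect());
            }
            hdt.close();
        }
    }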

If this all works out, I will check with our department whether we can set up a weekly cron job to convert the TTL file, if that is still needed. But as the dump is growing rapidly, we might run into memory issues later on.
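
For reference, the conversion step that such a job would run could look roughly like the sketch below, using hdt-java (file names and base URI are placeholders). As far as I understand, this is the part that needs the large-RAM machine, since the HDT dictionary is built in memory.

    import org.rdfhdt.hdt.enums.RDFNotation;
    import org.rdfhdt.hdt.hdt.HDT;
    import org.rdfhdt.hdt.hdt.HDTManager;
    import org.rdfhdt.hdt.options.HDTSpecification;

    public class TtlToHdt {
        public static void main(String[] args) throws Exception {
            // Parse the Turtle dump and build the HDT structures in memory
            // (this is the memory-hungry step).
            HDT hdt = HDTManager.generateHDT(
                    "latest-all.ttl",            // input dump (placeholder name)
                    "http://www.wikidata.org/",  // base URI (placeholder)
                    RDFNotation.TURTLE,
                    new HDTSpecification(),      // default HDT options
                    null);                       // no progress listener
            // Write the compressed, queryable HDT file to disk.
            hdt.saveToHDT("latest-all.hdt", null);
            hdt.close();
        }
    }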

On 1 Nov 2017, at 00:32, Stas Malyshev <smalyshev@wikimedia.org> wrote:

Hi!

OK. I wonder though, if it would be possible to setup a regular HDT
dump alongside the already regular dumps. Looking at the dumps page,
https://dumps.wikimedia.org/wikidatawiki/entities/, it looks like a
new dump is generated once a week more or less. So if a HDT dump
could

True, the dumps run weekly. The "more or less" situation can arise only
if one of the dumps fails (either due to a bug or some sort of external
force majeure).
--
Stas Malyshev
smalyshev@wikimedia.org
