Yes, if you just run it you should get sufficient help, and if not… I am more than happy to polish the code…
java -jar /Users/jasperkoehorst/Downloads/HDTQuery.jar
The following option is required: -query
Usage: <main class> [options]
  Options:
    --help
    -debug
      Debug mode
      Default: false
    -e
      SPARQL endpoint
    -f
      Output format, csv / tsv
      Default: csv
    -i
      HDT input file(s) for querying (comma separated)
    -o
      Query result file
  * -query
      SPARQL Query or FILE containing the query to execute
* required parameter
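For example, based on the options above, a run against a local HDT file could look something like this (the file names are just placeholders):

    java -jar HDTQuery.jar -i dataset.hdt -query query.sparql -o results.csv -f csv

where query.sparql contains the SPARQL query to execute.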
On 1 Nov 2017, at 07:59, Laura Morales lauretas@mail.com wrote:
I am currently downloading the latest TTL file on a 250 GB RAM machine. I will see if that is sufficient to run the conversion; otherwise we have another one, currently busy, with around 310 GB.
Thank you!
For querying I use the Jena query engine. I have created a module called HDTQuery, located at http://download.systemsbiology.nl/sapp/, which is a simple program, still under development, that should be able to use the full power of SPARQL and be more advanced than grep… ;)
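For a rough idea of the general approach, here is a minimal sketch of querying an HDT file through Jena using the hdt-java/hdt-jena bindings; this is not necessarily how HDTQuery is implemented, and the file name and query are placeholders:

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.ResultSetFormatter;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.rdfhdt.hdt.hdt.HDT;
    import org.rdfhdt.hdt.hdt.HDTManager;
    import org.rdfhdt.hdtjena.HDTGraph;

    public class HDTSparqlExample {
        public static void main(String[] args) throws Exception {
            // Map the HDT file (and its index) instead of loading it fully into memory
            HDT hdt = HDTManager.mapIndexedHDT("dataset.hdt", null);
            try {
                // Expose the HDT file as a regular Jena model
                Model model = ModelFactory.createModelForGraph(new HDTGraph(hdt));
                String sparql = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
                try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
                    // Print the result table to stdout
                    ResultSetFormatter.out(System.out, qe.execSelect());
                }
            } finally {
                hdt.close();
            }
        }
    }

Once the HDT graph is wrapped in a Jena model, the full SPARQL engine is available without first loading everything into a triple store.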
Does this tool allow querying HDT files from the command line, with SPARQL, and without the need to set up a Fuseki endpoint?
If this all works out, I will check with our department whether we can set up a weekly cron job to convert the TTL file, if that is still needed. But as the file is growing rapidly, we might run into memory issues later?
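As a rough sketch of what such a job could look like (assuming the rdf2hdt converter from the HDT tools; the schedule and paths are placeholders):

    # hypothetical crontab entry: convert the weekly dump on Sundays at 03:00
    0 3 * * 0 /opt/hdt/bin/rdf2hdt /data/dump.ttl /data/dump.hdt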
Thank you!