Hey :)
We are currently discussing if we should also offer .nt dumps: https://phabricator.wikimedia.org/T144103 Since we'd need to set that up and maintain it I want to make sure there is actually demand for it. So if you'd like to have it and use it please let me know.
Cheers Lydia
Hi, great idea since I think the .nt format is much easier to process and read. In fact .nt is the format I use in my work, so I support this initiative.
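For example, since every N-Triples line is a self-contained triple, a dump can be streamed line by line with very simple code. A minimal, deliberately naive sketch (it assumes no escaped spaces in IRIs; a real pipeline should use a proper RDF library such as rdflib):

```python
def parse_nt_line(line):
    """Naively split one N-Triples line into (subject, predicate, object).

    Returns None for blank lines and comments. The object is kept raw
    (it may be an IRI or a literal with a language tag or datatype).
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    # Subject and predicate never contain spaces; the remainder is the
    # object followed by the terminating " ."
    subject, predicate, rest = line.split(" ", 2)
    return subject, predicate, rest.rstrip(" .")

triple = parse_nt_line(
    "<http://www.wikidata.org/entity/Q42> "
    "<http://www.wikidata.org/prop/direct/P31> "
    "<http://www.wikidata.org/entity/Q5> ."
)
print(triple)
```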
-violeta
Violeta Ilik
On Fri, Nov 4, 2016 at 4:43 AM, Lydia Pintscher <lydia.pintscher@wikimedia.de> wrote:
-- Lydia Pintscher - http://about.me/lydia.pintscher Product Manager for Wikidata
Wikimedia Deutschland e.V. Tempelhofer Ufer 23-24 10963 Berlin www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations at the Amtsgericht Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable by the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
Wikidata mailing list Wikidata@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikidata
+1 for dumps in NT, if possible also for the direct-statement fragment (the wdt: triples) :)
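Since each N-Triples line carries exactly one triple, the direct-statement fragment could in principle be pulled out of a full dump with a plain predicate-prefix filter. A rough sketch (the sample data is illustrative; a real dump would be gzipped and streamed from disk):

```python
# Predicate namespace of Wikidata's direct (wdt:) statements.
WDT_PREFIX = "<http://www.wikidata.org/prop/direct/"

def is_direct_statement(line):
    """True if the line's predicate is in the wdt: namespace."""
    parts = line.split(" ", 2)
    return len(parts) == 3 and parts[1].startswith(WDT_PREFIX)

def filter_wdt(lines):
    """Keep only the direct-statement triples from an N-Triples stream."""
    return [line for line in lines if is_direct_statement(line)]

sample = [
    "<http://www.wikidata.org/entity/Q42> <http://www.wikidata.org/prop/direct/P31> <http://www.wikidata.org/entity/Q5> .",
    '<http://www.wikidata.org/entity/Q42> <http://schema.org/name> "Douglas Adams"@en .',
]
print(len(filter_wdt(sample)))  # -> 1: only the wdt:P31 triple survives
```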
Regards, Fariz
On Nov 4, 2016 1:02 PM, "violeta ilik" <ilik.violeta@gmail.com> wrote:
On 11/4/16 5:43 AM, Lydia Pintscher wrote:
Lydia,
It is an important "best practice" to release RDF dumps as, e.g., N-Triples, Turtle, and RDF/XML file collections.
BTW -- does the Wikidata SPARQL endpoint currently support CONSTRUCT and DESCRIBE queries? I ask because you can use that as one of the many options for dump production.
Hi!
Yes, such queries are supported, but using them for dump production is both impossible (the dumps are what the data in the SPARQL service is loaded from) and very inefficient; we don't really need a SPARQL server for that, as it can be done with much simpler code.
On 11/4/16 12:55 PM, Stas Malyshev wrote:
I was trying to explain that you can use CONSTRUCT to produce a one-off dump, per whatever you have in the dataset modifier (or body) part of the SPARQL query.
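For illustration, a one-off CONSTRUCT of that kind might look like the sketch below. The item and the LIMIT are arbitrary examples; as noted above, this is a way to extract a small, scoped slice, not a viable path to producing a full dump:

```sparql
# Hedged sketch: emit the direct (wdt:) triples for one example item
# as a tiny one-off N-Triples-style extract.
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

CONSTRUCT { ?item ?p ?o }
WHERE {
  VALUES ?item { wd:Q42 }
  ?item ?p ?o .
  # Keep only predicates in the direct-statement namespace.
  FILTER(STRSTARTS(STR(?p), STR(wdt:)))
}
LIMIT 100
```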