Hi all,
The new version of the "Linked Open Data Cloud" graph (http://lod-cloud.net/) is out ... and still no Wikidata in it. According to this Twitter discussion (https://twitter.com/AmrapaliZ/status/990927835400474626), this would be due to a lack of metadata on Wikidata. Is there no way to fix that easily? The LOD cloud is cited in many scientific papers; it is not a mere gadget.
Cheers,
Ettore Rizza
When I last talked to them about getting Wikidata included, it wasn't possible because the website handling the datasets had changed and no longer worked for it. It seems they've changed that now. Lucas is in touch to figure out what's needed. Let's hope we can finally get this solved and see where Wikidata ends up in the cloud. The suspense is killing me :D
Cheers, Lydia
The suspense is killing me :D
Me too! :D
Thanks, Lydia, and Lucas of course. Looking forward to seeing a big Wikidata bubble in the middle of this cloud.
Cheers,
Ettore Rizza
Yes, it would be nice to have Wikidata there, provided that Wikidata satisfies the requirements. There are already several mentions of Wikidata in the data behind the diagram.
I don't think that Freebase satisfies the stated requirement because its URIs no longer "resolve, with or without content negotiation, to /RDF data/ in one of the popular RDF formats (RDFa, RDF/XML, Turtle, N-Triples)". I wonder why Freebase is still in the diagram.
Does the way that Wikidata serves RDF (https://www.wikidata.org/wiki/Special:EntityData/Q52000000.rdf) satisfy this requirement? (If it doesn't, it might be easy to change.)
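For what it's worth, a quick check from the command line (a sketch; the Content-Type shown in the comment is my assumption, not a verified value):

curl -sI https://www.wikidata.org/wiki/Special:EntityData/Q52000000.rdf | grep -i '^content-type'
# assumed output: content-type: application/rdf+xml; charset=UTF-8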
peter
Peter F. Patel-Schneider, 30/04/2018 23:32:
Does the way that Wikidata serves RDF (https://www.wikidata.org/wiki/Special:EntityData/Q52000000.rdf) satisfy this requirement?
I think that part was already settled with: https://lists.wikimedia.org/pipermail/wikidata/2017-October/011314.html
More information: https://phabricator.wikimedia.org/T85444
Federico
Does it? The point is not just that Wikidata has real pointers to external resources.
Wikidata needs to serve RDF (e.g., in Turtle) in an accepted fashion. Does having https://www.wikidata.org/wiki/Special:EntityData/Q52000000.ttl available, and linked to with an alternate link, count when the "real" URI is https://www.wikidata.org/wiki/Q52000000? I don't know enough about this corner of web standards to say.
peter
The real URI (without scare quotes :) ) is not https://www.wikidata.org/wiki/Q52000000 but http://www.wikidata.org/entity/Q52000000 – and depending on your Accept header, that will redirect you to the wiki page, the JSON dump, or RDF data (in XML or Turtle format). Since the LOD Cloud criteria explicitly mention content negotiation, I think we're good :)
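For example, the redirect chain can be observed with curl (a sketch; the exact hops and the final .ttl target are my assumption based on the behaviour described above):

# Ask for Turtle at the entity URI and watch where we are sent:
curl -sIL -H 'Accept: text/turtle' http://www.wikidata.org/entity/Q52000000 | grep -iE '^(HTTP|location)'
# assumed chain: a 301 to the https entity URI, then a 303 ending at
# https://www.wikidata.org/wiki/Special:EntityData/Q52000000.ttl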
Cheers, Lucas
Really looking forward to finally seeing WD in the LOD Cloud!
-fariz
As far as I can tell, the real IRIs for Wikidata are https IRIs. The http IRIs redirect to https IRIs. As far as I can tell, no content negotiation is done.
peter
idefix merging> curl -I http://www.wikidata.org/wiki/Q52000000
HTTP/1.1 301 TLS Redirect
Date: Tue, 01 May 2018 01:13:09 GMT
Server: Varnish
X-Varnish: 227838359
X-Cache: cp1068 int
X-Cache-Status: int-front
Set-Cookie: WMF-Last-Access=01-May-2018;Path=/;HttpOnly;secure;Expires=Sat, 02 Jun 2018 00:00:00 GMT
Set-Cookie: WMF-Last-Access-Global=01-May-2018;Path=/;Domain=.wikidata.org;HttpOnly;secure;Expires=Sat, 02 Jun 2018 00:00:00 GMT
X-Client-IP: 199.4.160.88
Location: https://www.wikidata.org/wiki/Q52000000
Content-Length: 0
Connection: keep-alive

idefix merging> curl -I https://www.wikidata.org/wiki/Q52000000
HTTP/2 200
date: Tue, 01 May 2018 01:14:58 GMT
content-type: text/html; charset=UTF-8
server: mw1252.eqiad.wmnet
x-content-type-options: nosniff
p3p: CP="This is not a P3P policy! See https://www.wikidata.org/wiki/Special:CentralAutoLogin/P3P for more info."
x-powered-by: HHVM/3.18.6-dev
content-language: en
link: </static/images/project-logos/wikidatawiki.png>;rel=preload;as=image
vary: Accept-Encoding,Cookie,Authorization
x-ua-compatible: IE=Edge
backend-timing: D=75094 t=1525107829593021
x-varnish: 754403290 624210434, 194797954 924438274
via: 1.1 varnish (Varnish/5.1), 1.1 varnish (Varnish/5.1)
age: 29467
x-cache: cp1067 hit/8, cp1068 hit/9
x-cache-status: hit-front
set-cookie: CP=H2; Path=/; secure
strict-transport-security: max-age=106384710; includeSubDomains; preload
set-cookie: WMF-Last-Access=01-May-2018;Path=/;HttpOnly;secure;Expires=Sat, 02 Jun 2018 00:00:00 GMT
set-cookie: WMF-Last-Access-Global=01-May-2018;Path=/;Domain=.wikidata.org;HttpOnly;secure;Expires=Sat, 02 Jun 2018 00:00:00 GMT
x-analytics: ns=0;page_id=52899665;https=1;nocookies=1
x-client-ip: 199.4.160.88
cache-control: private, s-maxage=0, max-age=0, must-revalidate
set-cookie: GeoIP=US:MA:Woburn:42.49:-71.16:v4; Path=/; secure; Domain=.wikidata.org
accept-ranges: bytes

idefix merging> curl -I -H "Accept: text/turtle" https://www.wikidata.org/wiki/Q52000000
HTTP/2 200
date: Tue, 01 May 2018 01:15:52 GMT
content-type: text/html; charset=UTF-8
server: mw1252.eqiad.wmnet
x-content-type-options: nosniff
p3p: CP="This is not a P3P policy! See https://www.wikidata.org/wiki/Special:CentralAutoLogin/P3P for more info."
x-powered-by: HHVM/3.18.6-dev
content-language: en
link: </static/images/project-logos/wikidatawiki.png>;rel=preload;as=image
vary: Accept-Encoding,Cookie,Authorization
x-ua-compatible: IE=Edge
backend-timing: D=75094 t=1525107829593021
x-varnish: 754403290 624210434, 160015159 924438274
via: 1.1 varnish (Varnish/5.1), 1.1 varnish (Varnish/5.1)
age: 29522
x-cache: cp1067 hit/8, cp1068 hit/10
x-cache-status: hit-front
set-cookie: CP=H2; Path=/; secure
strict-transport-security: max-age=106384710; includeSubDomains; preload
set-cookie: WMF-Last-Access=01-May-2018;Path=/;HttpOnly;secure;Expires=Sat, 02 Jun 2018 00:00:00 GMT
set-cookie: WMF-Last-Access-Global=01-May-2018;Path=/;Domain=.wikidata.org;HttpOnly;secure;Expires=Sat, 02 Jun 2018 00:00:00 GMT
x-analytics: ns=0;page_id=52899665;https=1;nocookies=1
x-client-ip: 199.4.160.88
cache-control: private, s-maxage=0, max-age=0, must-revalidate
set-cookie: GeoIP=US:MA:Woburn:42.49:-71.16:v4; Path=/; secure; Domain=.wikidata.org
accept-ranges: bytes
On 01/05/2018 03:25, Peter F. Patel-Schneider wrote:
As far as I can tell real IRIs for Wikidata are https URIs. The http IRIs redirect to https IRIs.
That's right.
As far as I can tell no content negotiation is done.
No, you're mistaken. You tried the URL of a wikipage in your curl command. Those are for human consumption, thus not available in Turtle.
The "real IRIs" of Wikidata entities are like this: https://www.wikidata.org/entity/Q%7BNUMBER%7D
However, they 303 redirect to https://www.wikidata.org/wiki/Special:EntityData/Q%7BNUMBER%7D
which is the identifier of a schema:Dataset. Then, if you HTTP GET these URIs, you can content negotiate them to JSON (https://www.wikidata.org/wiki/Special:EntityData/Q%7BNUMBER%7D.json) or to turtle (https://www.wikidata.org/wiki/Special:EntityData/Q%7BNUMBER%7D.ttl).
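To make that chain concrete, one can follow it end to end (a sketch of what I would expect; the number of hops is an assumption):

# -L follows the 303 redirects described above; with this Accept header
# the body that finally comes back should be Turtle:
curl -sL -H 'Accept: text/turtle' https://www.wikidata.org/entity/Q42 | head -n 5
# assumed output: @prefix declarations from the Q42 Turtle document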
Surprisingly, there is no connection between the entity IRIs and the wikipage URLs. If one was given the IRI of an entity from Wikidata, and had no further information about how Wikidata works, they would not be able to retrieve HTML content about the entity.
BTW, I'm not sure the implementation of content negotiation in Wikidata is correct because the server does not tell me the format of the resource to which it redirects (as opposed to what DBpedia does, for instance).
--AZ
Surprisingly, there is no connection between the entity IRIs and the wikipage URLs. If one was given the IRI of an entity from Wikidata, and had no further information about how Wikidata works, they would not be able to retrieve HTML content about the entity.
If you request the "text/html" MIME type, you will be redirected to the HTML content. For example, you could try to go to https://www.wikidata.org/entity/Q42 using your favorite browser, or execute:
curl --header 'Accept: text/html' https://www.wikidata.org/wiki/Special:EntityData/Q42 -v
Cheers,
Thomas
On 01/05/2018 10:21, Thomas Pellissier Tanon wrote:
You're right. I thought I tried it, but apparently I made a mistake in my command.
--AZ
On 01.05.2018 10:03, Antoine Zimmermann wrote:
Surprisingly, there is no connection between the entity IRIs and the wikipage URLs. If one was given the IRI of an entity from Wikidata, and had no further information about how Wikidata works, they would not be able to retrieve HTML content about the entity.
There is a “concept URI” link in the sidebar on the left (between “Page information” and “Cite this page”), which is a hyperlink to the entity URI. I also seem to recall a Phabricator task for adding JSON-LD data to the wiki page, but I can’t find that right now – however, there is a task to “make export formats more visible”: https://phabricator.wikimedia.org/T109420
BTW, I'm not sure the implementation of content negotiation in Wikidata is correct because the server does not tell me the format of the resource to which it redirects (as opposed to what DBpedia does, for instance).
It sends a Content-Type header?
Cheers, Lucas
Thanks for the corrections.
So https://www.wikidata.org/entity/Q42 is *the* Wikidata IRI for Douglas Adams. Retrieving from this IRI results in a 303 See Other to https://www.wikidata.org/wiki/Special:EntityData/Q42, which (I guess) is the main IRI for representations of Douglas Adams and other pages with information about him.
From https://www.wikidata.org/wiki/Special:EntityData/Q42, content negotiation can be used to get the JSON representation (the default), other representations including Turtle, and human-readable information. (Well, actually, I'm not sure that this is really correct. It appears that instead of directly using content negotiation, another 303 See Other is used to provide an IRI for a document in the requested format.)
https://www.wikidata.org/wiki/Special:EntityData/Q42.json and https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl are the useful machine-readable documents containing the Wikidata information about Douglas Adams. Content negotiation is not possible on these pages.
https://www.wikidata.org/wiki/Q42 is the IRI that produces a human-readable version of the information about Douglas Adams. Content negotiation is not possible on this page, but it does have link rel="alternate" to the machine-readable pages.
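Those alternate links are easy to list (a sketch; the grep pattern assumes the <link> tags sit on single lines with rel as the first attribute):

curl -s https://www.wikidata.org/wiki/Q42 | grep -o '<link rel="alternate"[^>]*>'
# assumed output: one <link> per export format, e.g. pointing at
# https://www.wikidata.org/wiki/Special:EntityData/Q42.json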
Strangely this page has a link rel="canonical" to itself. Shouldn't that link be to https://www.wikidata.org/entity/Q42? There is a human-visible link to this IRI, but there doesn't appear to be any machine-readable link.
RDF links to other IRIs for Douglas Adams are given in the RDF pages by properties in the wdtn namespace. Many, but not all, identifiers are handled this way. (Strangely, ISNI (P213) isn't, even though it is linked on the human-readable page.)
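The wdtn triples can be spot-checked in the Turtle export (a sketch; it assumes the wdtn: prefix appears literally in the serialization):

curl -s https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl | grep 'wdtn:' | head -n 5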
So it looks as if Wikidata can be considered Linked Open Data, but maybe some improvements can be made.
peter
I'm pretty sure that Wikidata is doing better than 90% of the current bubbles in the diagram.
If they wanted Wikidata in the diagram, it would have been there back before the diagram became too small to read. :)
It almost feels like someone doesn't want Wikidata in there? Maybe that website is maintained by DBpedia fans? Just thinking out loud here, because DBpedia is very popular in the academic world and Wikidata is a huge threat to that popularity.
Maarten
On 4 May 2018 at 17:20, Denny Vrandečić wrote:
I'm pretty sure that Wikidata is doing better than 90% of the current bubbles in the diagram.
If they wanted to have Wikidata in the diagram it would have been there before it was too small to read it. :)
Since the subject has come up, here are some general impressions, which aren't necessarily applicable to the people in charge of generating the LOD cloud.
Many DBpedia-centered researchers are truly reluctant to mention Wikidata. Some of them don't want people to know that Wikidata exists, so they continue introducing DBpedia in their talks and papers as the largest knowledge base that is available out there — which is, indeed, no longer true. This isn't hate but an attempt to survive, an attempt to ignore change, to continue working on the same lines of research and "enjoying" the corresponding, sometimes poor, funding.
It's not a matter of triples. The very ideas of both projects are different, and this point is what makes DBpedia potentially obsolete. DBpedia is a non-collaborative project — as we understand collaboration in the Wikimedia movement — that emerged from academia with the aim of *extracting* information from Wikipedia. Similarly to Wikipedia, it can be confusing to talk about DBpedia in the singular because there are several DBpedias, each one mainly oriented, and limited, to a language, and not very well interlinked. There's, however, a single multilingual Wikidata that makes the idea of extracting information from Wikipedia less meaningful. Most relevant structured data are already centralized here, in Wikidata, which *provides* them to Wikipedia. Moreover, the data in Wikidata are referenced... sometimes :), and they are more fine-grained and better structured than those in DBpedia.
Researchers should have nothing to fear from Wikidata, and some of them, mainly the young ones, do start to work on our project. In my humble opinion, we need the help of universities and research centers to fill some gaps and to produce and apply theory. I think these needs should be better communicated to researchers and fears should be mitigated. Our project isn't "that new" today.
Hopefully, Wikidata will appear soon in the LOD cloud... O:)
--
David Abián
Wikimedia España
https://wikimedia.es/
Hi,
Given that DBpedia includes Wikidata and has its own processes to get data from the Wikipedias, it must be bigger than Wikidata.
PS: There is room enough for both Wikidata and DBpedia, and if anything DBpedia is quite happy to collaborate; Wikidata is not collaborating, everything is Wikidata-centred. I get the impression that it is like the Borg, and its charm is sometimes equivalent. Thanks, GerardM
Both Wikidata and DBpedia surely can, and should, coexist, because we'll never be able to host in Wikidata the entirety of the Wikipedias. However, it's clear that the more data is in Wikidata, and therefore retrieved by the Wikipedias, the less data DBpedia has to extract from those Wikipedias.
I wouldn't say that Wikidata, or DBpedia, is happy or sad to collaborate with the other — they're complex abstractions with many different people involved, each with their own considerations, and some of these people work on both projects. However, I know that Wikidata, as a platform, is collaborative to the highest possible degree, so not only DBpedia but absolutely anyone can collaborate and build Wikidata as they see fit.
On 05/05/18 at 15:09, Gerard Meijssen wrote:
PS There is room enough for both Wikidata and DBpedia, and if anything DBpedia is quite happy to collaborate while Wikidata is not; everything is Wikidata-centred. I get the impression that it is like the Borg, and its charm is sometimes equivalent.
On 5 May 2018 at 14:39, David Abián davidabian@wikimedia.es wrote:
Both Wikidata and DBpedia surely can, and should, coexist because we'll never be able to host in Wikidata the entirety of the Wikipedias.
Can you give an example of something that can be represented in DBpedia, but not Wikidata?
I don't mean a technical lack of expressiveness, but the impossibility, and lack of intention, for Wikipedia to become a read-only interface of Wikidata someday.
On 05/05/18 at 16:33, Andy Mabbett wrote:
On 5 May 2018 at 14:39, David Abián davidabian@wikimedia.es wrote:
Both Wikidata and DBpedia surely can, and should, coexist because we'll never be able to host in Wikidata the entirety of the Wikipedias.
Can you give an example of something that can be represented in DBpedia, but not Wikidata?
The semantics of Wikidata qualifiers have not been defined and won't be enforced. It's left up to users to invent their own meanings. (In this way, Wikidata is still a lot like the prose in Wikipedia.) We need more "curated" projects like DBpedia
Mmh, I would have rather thought that the system of qualifiers, even imperfect, was a great enhancement compared to the DBpedia model, which is a bit of a mess.
Let's take the Winston Churchill item https://www.wikidata.org/wiki/Q8016: Wikidata tells us, for example, that he served as British Prime Minister from 1951 to 1955, replacing Clement Attlee, and that he was replaced in this position by Anthony Eden. In DBpedia http://dbpedia.org/page/Winston_Churchill, which does not use reification, we have just a list of offices, a list of successors, a list of predecessors, a list of dates, and no way to figure out who replaced whom, in which office, and when.
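To make the contrast concrete, here is a minimal sketch against the Wikidata Query Service that reads exactly this qualified statement back. It assumes the usual identifiers (P39 "position held", Q14211 "Prime Minister of the United Kingdom", P580/P582 start/end time, P1365/P1366 replaces/replaced by):

    import requests

    # Read Churchill's (Q8016) terms as UK Prime Minister (Q14211) from the
    # statement nodes, including the qualifiers that DBpedia's flat lists lose.
    QUERY = """
    SELECT ?start ?end ?predecessorLabel ?successorLabel WHERE {
      wd:Q8016 p:P39 ?statement .
      ?statement ps:P39 wd:Q14211 .
      OPTIONAL { ?statement pq:P580 ?start . }
      OPTIONAL { ?statement pq:P582 ?end . }
      OPTIONAL { ?statement pq:P1365 ?predecessor . }
      OPTIONAL { ?statement pq:P1366 ?successor . }
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    """
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"})
    for row in r.json()["results"]["bindings"]:
        print({k: v["value"] for k, v in row.items()})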
The handcrafted ontology of DBpedia is certainly more consistent, but it's also much poorer. Rather than impoverishing Wikidata's class system, would it not be better to find a way to avoid horrors like "actor is a subclass of person" https://datalanguage.com/news/wikidata-q41483? I would be interested to know if there are researchers working on the subject.
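As a quick probe of the linked "horror", one can ask the query service whether actor (Q33999) is currently a transitive subclass of person. Q215627 ("person") is my guess at the relevant target class, and the answer changes as the ontology is edited, so treat this as a sketch rather than a fixed fact:

    import requests

    # ASK whether actor (Q33999) reaches person (Q215627) via subclass of (P279).
    ASK = "ASK { wd:Q33999 wdt:P279* wd:Q215627 }"
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": ASK, "format": "json"})
    print(r.json()["boolean"])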
Regarding the comparative size of DBpedia and Wikidata, I thought that Wikidata is by nature much larger. DBpedia cannot contain more entities than there are in the English Wikipedia (about 5 million), with its very strict notability criteria, while Wikidata allows many more things. Am I wrong? (I consider, of course, that DBpedia and its other-language versions are different knowledge bases, as is the case in the LOD cloud.)
On 5 May 2018 at 15:52, David Abián davidabian@wikimedia.es wrote:
I don't mean a technical lack of expressiveness, but the impossibility, and lack of intention, for Wikipedia to become a read-only interface of Wikidata someday.
Well, neither is DBpedia, so I don't see how that substantiates your claim.
Wikipedia isn't a read-only interface but an editable project, so there will always be contents in Wikipedia that aren't in Wikidata, so DBpedia will always have the opportunity to offer contents from Wikipedia that aren't in Wikidata. That's all.
Hoi, And given that DBpedia finds its changes from the RSS feeds of an increasing number of Wikipedias, and given that we at Wikidata do not even regularly harvest data based on "category contains" (just as an example), there is no justifiable room for any sense of superiority at Wikidata. Thanks, GerardM
Andy Mabbett, 05/05/2018 17:33:
Both Wikidata and DBpedia surely can, and should, coexist because we'll never be able to host in Wikidata the entirety of the Wikipedias.
Can you give an example of something that can be represented in DBpedia, but not Wikidata?
More simply, there's still a long way to go until Wikidata imports all the data contained in Wikipedia infoboxes (or equivalent data from other sources), let alone the rest.
So, as Gerard mentions, DBpedia has something more/different to offer. (The same is true for the various extractions of structured data from Wiktionary vs. Wiktionary's own unstructured data.)
That said, the LOD cloud is about links, as far as I understand. Wikidata would be a very interesting addition to it.
Federico
On 5 May 2018 at 18:04, Federico Leva (Nemo) nemowiki@gmail.com wrote:
More simply, there's still a long way to go until Wikidata imports all the data contained in Wikipedia infoboxes (or equivalent data from other sources), let alone the rest.
The statement I questioned was "never able"; that's not a matter of "a long way to go".
More simply, there's still a long way to go until Wikidata imports all the data contained in Wikipedia infoboxes (or equivalent data from other sources), let alone the rest.
Why would you want to import all the data contained in Wikipedia infoboxes? I would rather aim for the opposite, i.e. infoboxes being built from data in Wikidata, simply because with Wikidata it is easier to capture where the data is coming from (through the references and qualifiers) than with Wikipedia infoboxes.
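To illustrate the provenance point, a small sketch (my own, with Q42 "Douglas Adams" and P69 "educated at" as an arbitrary example pick) that reads one claim from the entity JSON and lists the qualifiers and reference properties an infobox built on Wikidata would inherit for free:

    import requests

    # Fetch the entity document for Q42 and inspect its "educated at" claims.
    data = requests.get(
        "https://www.wikidata.org/wiki/Special:EntityData/Q42.json").json()
    for claim in data["entities"]["Q42"]["claims"].get("P69", []):
        school = claim["mainsnak"]["datavalue"]["value"]["id"]
        qualifier_props = list(claim.get("qualifiers", {}))
        reference_props = [list(ref["snaks"]) for ref in claim.get("references", [])]
        print(school, "qualifiers:", qualifier_props, "references:", reference_props)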
Hoi, Yes, but as it is, the data in Wikipedia, particularly in lists, is off by 6 to 8%. For Wikidata to host the data, the data first has to be curated. Errors can be found on either side, so we have to be able to compare in order to know what to curate. That takes an effort if we are to be effective in the curation.
I have moved a lot of data from Wikipedia to Wikidata, and I refuse to edit Wikipedia because my experience at that end is one of hostility. What I am looking for is collaboration, not to be on a lonely track fighting windmills. Thanks, GerardM
More simply, there's still a long way to go until Wikidata imports all the data contained in Wikipedia infoboxes (or equivalent data from other sources), let alone the rest.
This surprises me. Are there any statistics somewhere on the proportion of Wikipedia's infoboxes that have been fully parsed?
On 06/05/2018 10:37, Ettore RIZZA wrote:
More simply, there's still a long way to go until Wikidata imports all the data contained in Wikipedia infoboxes (or equivalent data from other sources), let alone the rest.
This surprises me. Are there any statistics somewhere on the proportion of Wikipedia's infoboxes that have been fully parsed?
That was more or less the goal of the CrossWikiFact project, which was unfortunately not very widely supported: https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/CrossWikiFact
It's still not clear to me why this got so little support - it looked like a good opportunity to collaborate with DBpedia.
Antonin
@Antonin : You're right, I now remember Magnus Knuth's message on this list about GlobalFactSync https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync, a lite version of CrossWikiFact, if I understood correctly. I also remember that his message did not trigger many reactions...
Hi all,
the discussion about Wikidata and LOD has gone into this specific detail, and I was hoping that we could pick up on a few topics.
We are still hoping to get some support for our GlobalFactSync proposal: https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync
We created a new prototype here (just the Eiffel Tower for now):
http://88.99.242.78:9000/?s=http%3A%2F%2Fid.dbpedia.org%2Fglobal%2F12HpzV&am...
You can see there that the floor count property is different in the French Wikipedia (properties can be switched with the dropdown at the top).
The English Wikipedia has the same value as Wikidata plus a reference. One of the goals of GlobalFactSync is to extract these references and import them into Wikidata.
We will also build a redirection service around it, so you can use Wikidata Q's and P's as arguments for ?s= and ?p= and get resolved to the right entry for quick comparison between WD and WP.
All the best,
Sebastian
On Sat, 5 May 2018 at 16:35, Andy Mabbett andy@pigsonthewing.org.uk wrote:
On 5 May 2018 at 14:39, David Abián davidabian@wikimedia.es wrote:
Both Wikidata and DBpedia surely can, and should, coexist because we'll never be able to host in Wikidata the entirety of the Wikipedias.
Can you give an example of something that can be represented in DBpedia, but not Wikidata?
Sure: DBpedia knows the specific values different versions of Wikipedia choose to display in the infobox. For example, the size or population of countries with disputed borders. This data is useful for researchers working on cultural bias in Wikipedia, but it makes little sense to store it in Wikidata.
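A sketch of what such a per-wiki comparison might look like; the resource names and the dbo:populationTotal property are assumptions about the two DBpedia editions, not verified here:

    import requests

    QUERY = ("PREFIX dbo: <http://dbpedia.org/ontology/> "
             "SELECT ?pop WHERE {{ <{res}> dbo:populationTotal ?pop }}")

    # Ask two DBpedia editions which population they extracted for the same
    # disputed territory (hypothetical resource names).
    targets = [
        ("https://dbpedia.org/sparql",
         "http://dbpedia.org/resource/Western_Sahara"),
        ("https://fr.dbpedia.org/sparql",
         "http://fr.dbpedia.org/resource/Sahara_occidental"),
    ]
    for endpoint, resource in targets:
        r = requests.get(endpoint, params={
            "query": QUERY.format(res=resource),
            "format": "application/sparql-results+json"})
        print(endpoint, [b["pop"]["value"]
                         for b in r.json()["results"]["bindings"]])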
On 7 May 2018 at 00:15, Sylvain Boissel sylvainboissel@gmail.com wrote:
Sure: DBpedia knows the specific values different versions of Wikipedia choose to display in the infobox. For example, the size or population of countries with disputed borders. This data is useful for researchers working on cultural bias in Wikipedia, but it makes little sense to store it in Wikidata.
Except that it does; and Wikidata is more than capable of holding values from conflicting sources. So again, this does not substantiate the claim that "Both Wikidata and DBpedia surely can, and should, coexist because we'll never be able to host in Wikidata the entirety of the Wikipedias".
Wikidata should hold the data of all Wikipedias; that is its main purpose. However, it doesn't yet, and there are many problems, e.g. missing references, population counts moved to Commons, and an open discussion about throwing Wikidata out of the infoboxes entirely: https://en.wikipedia.org/wiki/Wikipedia:Wikidata/2018_Infobox_RfC
DBpedia is more about technology than data, so we are trying to help out and push Wikidata so that it has all the values of all Wikipedias plus their references: https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync
All the best,
Sebastian
To add to Andy's reply: on Wikidata, the combination of ranking (https://m.wikidata.org/wiki/Help:Ranking), qualifiers (https://m.wikidata.org/wiki/Special:MyLanguage/Help:Qualifiers) and references (https://m.wikidata.org/wiki/Special:MyLanguage/Help:Sources) enables storing disputed property values. So it does make sense.
-fariz
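A sketch of how such disputed values could be read back, assuming an item like Crimea (Q7835) carries several population (P1082) statements with ranks and stated-in (P248) references:

    import requests

    # List every population statement on Q7835 with its rank and, where
    # present, the source it was derived from.
    QUERY = """
    SELECT ?population ?rank ?sourceLabel WHERE {
      wd:Q7835 p:P1082 ?st .
      ?st ps:P1082 ?population ;
          wikibase:rank ?rank .
      OPTIONAL { ?st prov:wasDerivedFrom/pr:P248 ?source . }
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    """
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"})
    for b in r.json()["results"]["bindings"]:
        print(b["population"]["value"], b["rank"]["value"],
              b.get("sourceLabel", {}).get("value"))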
In Wikidata, the subclass hierarchy and the way that properties are used are unmanaged and contradictory. Furthermore, Wikidata added statement qualifiers, which put the meaning of any statement in doubt. For example, there is a property "use" https://www.wikidata.org/wiki/Property:P366. If someone qualifies a statement with "use X", what does that mean? Is the statement no longer generally true? Should it be omitted? The semantics of Wikidata qualifiers have not been defined and won't be enforced. It's left up to users to invent their own meanings. (In this way, Wikidata is still a lot like the prose in Wikipedia.)
We need more "curated" projects like DBpedia to do the work of maintaining a coherent subclass hierarchy and to take a conservative approach to statements with qualifiers (omitting most such statements unless the qualifier is unambiguous).
- Jeff
Hi Denny, Maarten,
you should read your own emails. In fact it is quite easy to join the LOD cloud diagram.
The most important step is to follow the instructions on the page: http://lod-cloud.net under how to contribute and then add the metadata.
Some years ago I made a WordPress site with Linked Data enabled: http://www.klappstuhlclub.de/wp/ Even this is included, as I simply added the metadata entry.
Do you really think John McCrae added a line in the code that says "if (dataset==wikidata) skip; " ?
You just need to add it like everybody else in LOD; DBpedia also created its entry and updates it now and then. The same applies to http://lov.okfn.org. Somebody from Wikidata needs to upload the Wikidata properties as OWL. If nobody does it, it will not be in there.
All the best,
Sebastian
On Tue, May 1, 2018 at 7:47 AM Peter F. Patel-Schneider <pfpschneider@gmail.com> wrote:
Thanks for the corrections.
So https://www.wikidata.org/entity/Q42 is *the* Wikidata IRI for Douglas Adams. Retrieving from this IRI results in a 303 See Other to https://www.wikidata.org/wiki/Special:EntityData/Q42, which (I guess) is the main IRI for representations of Douglas Adams and other pages with information about him.
From https://www.wikidata.org/wiki/Special:EntityData/Q42, content negotiation can be used to get the JSON representation (the default), other representations including Turtle, and human-readable information. (Well, actually I'm not sure that this is really correct. It appears that instead of directly using content negotiation, another 303 See Other is used to provide an IRI for a document in the requested format.)
https://www.wikidata.org/wiki/Special:EntityData/Q42.json and https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl are the useful machine-readable documents containing the Wikidata information about Douglas Adams. Content negotiation is not possible on these pages.
https://www.wikidata.org/wiki/Q42 is the IRI that produces a human-readable version of the information about Douglas Adams. Content negotiation is not possible on this page, but it does have link rel="alternate" to the machine-readable pages.
Strangely, this page has a link rel="canonical" to itself. Shouldn't that link be to https://www.wikidata.org/entity/Q42? There is a human-visible link to this IRI, but there doesn't appear to be any machine-readable link.
RDF links to other IRIs for Douglas Adams are given in RDF pages by properties in the wdtn namespace. Many, but not all, identifiers are handled this way. (Strangely, ISNI (P213) isn't, even though it is linked on the human-readable page.)
So it looks as if Wikidata can be considered Linked Open Data, but maybe some improvements can be made.
peter
On 05/01/2018 01:03 AM, Antoine Zimmermann wrote:
On 01/05/2018 03:25, Peter F. Patel-Schneider wrote:
As far as I can tell, real IRIs for Wikidata are https URIs. The http IRIs redirect to https IRIs.
That's right.
As far as I can tell, no content negotiation is done.
No, you're mistaken. You tried the URL of a wikipage in your curl command. Those are for human consumption, thus not available in Turtle.
The "real IRIs" of Wikidata entities are like this: https://www.wikidata.org/entity/Q{NUMBER}
However, they 303 redirect to https://www.wikidata.org/wiki/Special:EntityData/Q{NUMBER}
which is the identifier of a schema:Dataset. Then, if you HTTP GET these URIs, you can content-negotiate them to JSON (https://www.wikidata.org/wiki/Special:EntityData/Q{NUMBER}.json) or to Turtle (https://www.wikidata.org/wiki/Special:EntityData/Q{NUMBER}.ttl).
Surprisingly, there is no connection between the entity IRIs and the wikipage URLs. If one was given the IRI of an entity from Wikidata, and had no further information about how Wikidata works, they would not be able to retrieve HTML content about the entity.
BTW, I'm not sure the implementation of content negotiation in Wikidata is correct, because the server does not tell me the format of the resource to which it redirects (as opposed to what DBpedia does, for instance).
--AZ
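The redirect chain Peter describes can be watched with two HTTP requests; a minimal sketch (the exact status codes and hop order are as reported above, not independently re-verified here):

    import requests

    # First hop only: the entity IRI should answer 303 See Other, pointing
    # at the Special:EntityData document.
    first = requests.get("https://www.wikidata.org/entity/Q42",
                         headers={"Accept": "text/turtle"},
                         allow_redirects=False)
    print(first.status_code, first.headers.get("Location"))

    # Following all redirects should land on the Turtle document itself.
    final = requests.get("https://www.wikidata.org/entity/Q42",
                         headers={"Accept": "text/turtle"})
    print(final.url)         # expected: .../Special:EntityData/Q42.ttl
    print(final.text[:300])  # first lines of RDF about Douglas Adams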
Hi!
you should read your own emails. In fact it is quite easy to join the LOD cloud diagram.
The most important step is to follow the instructions on the page: http://lod-cloud.net under how to contribute and then add the metadata.
I may not be reading it right or may be misunderstanding something, but I have tried several times to locate up-to-date, working instructions for doing this, and it always went nowhere: the instructions turned out to be out of date, the new process wasn't working yet, or something else. It would be very nice and very helpful if you could point out specifically where on that page there are step-by-step instructions that can be followed to resolve this issue.
Do you really think John McCrae added a line in the code that says "if (dataset==wikidata) skip; " ?
I don't think anybody thinks that. And I think most people here think it would be nice to have Wikidata added to the LOD cloud. It sounds like you know how to do it; could you please share more specific information about it?
You just need to add it like everybody else in LOD; DBpedia also created its entry and updates it now and then. The same applies to http://lov.okfn.org. Somebody from Wikidata needs to upload the Wikidata properties as OWL. If nobody does it, it will not be in there.
Could you share more information about lov.okfn.org? Going there produces a 502, and it's not mentioned anywhere on lod-cloud.net. Where is it documented, what exactly is the process, and what do you mean by "upload the Wikidata properties as OWL"? More detailed information would be hugely helpful.
Thanks in advance,
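For what "upload the Wikidata properties as OWL" might involve in practice, here is a rough sketch: pull a few property declarations from the query service and serialise them as an OWL vocabulary. The mapping from Wikibase datatypes to owl:ObjectProperty/owl:DatatypeProperty is a simplifying assumption on my part, not a documented LOV requirement:

    import requests
    from rdflib import Graph, Literal, URIRef, RDF, RDFS
    from rdflib.namespace import OWL

    QUERY = """
    SELECT ?p ?pLabel ?type WHERE {
      ?p a wikibase:Property ; wikibase:propertyType ?type .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    } LIMIT 20
    """
    rows = requests.get("https://query.wikidata.org/sparql",
                        params={"query": QUERY, "format": "json"}
                        ).json()["results"]["bindings"]

    g = Graph()
    for row in rows:
        prop = URIRef(row["p"]["value"])
        # Crude mapping: item-valued properties become object properties,
        # everything else a datatype property.
        is_item = row["type"]["value"].endswith("WikibaseItem")
        g.add((prop, RDF.type,
               OWL.ObjectProperty if is_item else OWL.DatatypeProperty))
        g.add((prop, RDFS.label, Literal(row["pLabel"]["value"], lang="en")))
    print(g.serialize(format="turtle"))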
Well, then, we have tried several times to get into that diagram, and it never worked out.
So, given the page you linked, it says:
Contributing to the Diagram
First, make sure that you publish data according to the Linked Data principles http://www.w3.org/DesignIssues/LinkedData.html. We interpret this as:
- There must be *resolvable http:// (or https://) URIs*.
- They must resolve, with or without content negotiation, to *RDF data* in one of the popular RDF formats (RDFa, RDF/XML, Turtle, N-Triples).
- The dataset must contain *at least 1000 triples*. (Hence, your FOAF file most likely does not qualify.)
- The dataset must be connected via *RDF links* to a dataset that is already in the diagram. This means either your dataset must use URIs from the other dataset, or vice versa. We arbitrarily require at least 50 links.
- Access of the *entire* dataset must be possible via *RDF crawling*, via an *RDF dump*, or via a *SPARQL endpoint*.
The process for adding datasets is still under development, please contact John P. McCrae john@mcc.ae to add a new dataset
Wikidata fulfills all the conditions easily. So, here we go: I am adding John to this thread - although I know he already knows about this request - and I am officially asking for Wikidata to be entered into the LOD diagram.
Let's keep it all open, and see where it goes from here.
Cheers, Denny
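An informal way to sanity-check the first two bullets against a single entity (a sketch, not an official validator; the dataset-wide triple and link counts are of course far beyond what one document shows):

    import requests
    from rdflib import Graph

    # Resolvable URI + popular RDF format: fetch one entity IRI as Turtle
    # and parse it with rdflib.
    resp = requests.get("https://www.wikidata.org/entity/Q42",
                        headers={"Accept": "text/turtle"})
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    print(len(g), "triples in the Q42 document alone")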
Folks, I’m already in contact with John, there’s no need to contact him again :)
Cheers, Lucas
Thanks!
On Mon, May 7, 2018 at 1:36 PM Lucas Werkmeister < lucas.werkmeister@wikimedia.de> wrote:
Folks, I’m already in contact with John, there’s no need to contact him again :)
Cheers, Lucas
Hi Lucas, Denny,
all you need to do is update your entry on old.datahub.io:
https://old.datahub.io/dataset/wikidata
It was edited by Lucie-Aimée Kaffee two years ago. You need to contact her, as she created the Wikimedia org in Datahub. I might be able to have someone switch ownership of the org to a new account.
But a lot of essential metadata is missing.
Compare with the DBpedia entry: https://old.datahub.io/dataset/dbpedia
Especially the links and the triple counts at the bottom. You need to keep this entry updated in order to appear in the LOD cloud.
Please tell me if you can't edit it. I know a former admin from the time datahub.io was first created ten years ago in the LOD2 and LATC EU projects; he might be able to do something in case nobody answers due to datahub.io switching to a new system.
All the best,
Sebastian
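old.datahub.io is a CKAN instance, so (assuming the standard CKAN action API is still exposed there) the current entry and the metadata fields the LOD cloud reads can be inspected programmatically; a sketch:

    import requests

    # Standard CKAN action API; that old.datahub.io still exposes it is an
    # assumption on my part.
    r = requests.get("https://old.datahub.io/api/3/action/package_show",
                     params={"id": "wikidata"})
    result = r.json()["result"]
    print(result["title"])
    # The LOD cloud draws link and triple metadata from the "extras" pairs.
    print({e["key"]: e["value"] for e in result.get("extras", [])})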
On 07.05.2018 22:35, Lucas Werkmeister wrote:
Folks, I’m already in contact with John, there’s no need to contact him again :)
Cheers, Lucas
Am Mo., 7. Mai 2018 um 19:32 Uhr schrieb Denny Vrandečić <vrandecic@gmail.com mailto:vrandecic@gmail.com>:
Well, then, we have tried several times to get into that diagram, and it never worked out. So, given the page you linke, it says: Contributing to the Diagram First, make sure that you publish data according to the Linked Data principles <http://www.w3.org/DesignIssues/LinkedData.html>. We interpret this as: * There must be /resolvable http:// (or https://) URIs/. * They must resolve, with or without content negotiation, to /RDF data/ in one of the popular RDF formats (RDFa, RDF/XML, Turtle, N-Triples). * The dataset must contain /at least 1000 triples/. (Hence, your FOAF file most likely does not qualify.) * The dataset must be connected via /RDF links/ to a dataset that is already in the diagram. This means, either your dataset must use URIs from the other dataset, or vice versa. We arbitrarily require at least 50 links. * Access of the /entire/ dataset must be possible via /RDF crawling/, via an /RDF dump/, or via a /SPARQL endpoint/. The process for adding datasets is still under development, please contact John P. McCrae <mailto:john@mcc.ae> to add a new dataset Wikidata fulfills all the conditions easily. So, here we go, I am adding John to this thread - although I know he already knows about this request - and I am asking officially to enter Wikidata into the LOD diagram. Let's keep it all open, and see where it goes from here. Cheers, Denny On Mon, May 7, 2018 at 4:15 AM Sebastian Hellmann <hellmann@informatik.uni-leipzig.de <mailto:hellmann@informatik.uni-leipzig.de>> wrote: Hi Denny, Maarten, you should read your own emails. In fact it is quite easy to join the LOD cloud diagram. The most important step is to follow the instructions on the page: http://lod-cloud.net under how to contribute and then add the metadata. Some years ago I made a Wordpress with enabled Linked Data: http://www.klappstuhlclub.de/wp/ Even this is included as I simply added the metadata entry. Do you really think John McCrae added a line in the code that says "if (dataset==wikidata) skip; " ? You just need to add it like everybody else in LOD, DBpedia also created its entry and updates it now and then. The same accounts for http://lov.okfn.org Somebody from Wikidata needs to upload the Wikidata properties as OWL. If nobody does it, it will not be in there. All the best, Sebastian On 04.05.2018 18:33, Maarten Dammers wrote:
It almost feels like someone doesn’t want Wikidata in there? Maybe that website is maintained by DBpedia fans? Just thinking out loud here because DBpedia is very popular in the academic world and Wikidata a huge threat for that popularity. Maarten Op 4 mei 2018 om 17:20 heeft Denny Vrandečić <vrandecic@gmail.com <mailto:vrandecic@gmail.com>> het volgende geschreven:
I'm pretty sure that Wikidata is doing better than 90% of the current bubbles in the diagram. If they wanted to have Wikidata in the diagram it would have been there before it was too small to read it. :) On Tue, May 1, 2018 at 7:47 AM Peter F. Patel-Schneider <pfpschneider@gmail.com <mailto:pfpschneider@gmail.com>> wrote: Thanks for the corrections. So https://www.wikidata.org/entity/Q42 is *the* Wikidata IRI for Douglas Adams. Retrieving from this IRI results in a 303 See Other to https://www.wikidata.org/wiki/Special:EntityData/Q42, which (I guess) is the main IRI for representations of Douglas Adams and other pages with information about him. From https://www.wikidata.org/wiki/Special:EntityData/Q42 content negotiation can be used to get the JSON representation (the default), other representations including Turtle, and human-readable information. (Well actually I'm not sure that this is really correct. It appears that instead of directly using content negotiation, another 303 See Other is used to provide an IRI for a document in the requested format.) https://www.wikidata.org/wiki/Special:EntityData/Q42.json and https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl are the useful machine-readable documents containing the Wikidata information about Douglas Adams. Content negotiation is not possible on these pages. https://www.wikidata.org/wiki/Q42 is the IRI that produces a human-readable version of the information about Douglas Adams. Content negotiation is not possible on this page, but it does have link rel="alternate" to the machine-readable pages. Strangely this page has a link rel="canonical" to itself. Shouldn't that link be to https://www.wikidata.org/entity/Q42? There is a human-visible link to this IRI, but there doesn't appear to be any machine-readable link. RDF links to other IRIs for Douglas Adams are given in RDF pages by properties in the wdtn namespace. Many, but not all, identifiers are handled this way. (Strangely ISNI (P213) isn't even though it is linked on the human-readable page.) So it looks as if Wikidata can be considered as Linked Open Data but maybe some improvements can be made. peter On 05/01/2018 01:03 AM, Antoine Zimmermann wrote: > On 01/05/2018 03:25, Peter F. Patel-Schneider wrote: >> As far as I can tell real IRIs for Wikidata are https URIs. The http IRIs >> redirect to https IRIs. > > That's right. > >> As far as I can tell no content negotiation is >> done. > > No, you're mistaken. Your tried the URL of a wikipage in your curl command. > Those are for human consumption, thus not available in turtle. > > The "real IRIs" of Wikidata entities are like this: > https://www.wikidata.org/entity/Q{NUMBER} <https://www.wikidata.org/entity/Q%7BNUMBER%7D> > > However, they 303 redirect to > https://www.wikidata.org/wiki/Special:EntityData/Q{NUMBER} <https://www.wikidata.org/wiki/Special:EntityData/Q%7BNUMBER%7D> > > which is the identifier of a schema:Dataset. Then, if you HTTP GET these > URIs, you can content negotiate them to JSON > (https://www.wikidata.org/wiki/Special:EntityData/Q{NUMBER}.json <https://www.wikidata.org/wiki/Special:EntityData/Q%7BNUMBER%7D.json>) or to > turtle (https://www.wikidata.org/wiki/Special:EntityData/Q{NUMBER}.ttl <https://www.wikidata.org/wiki/Special:EntityData/Q%7BNUMBER%7D.ttl>). > > > Suprisingly, there is no connection between the entity IRIs and the wikipage > URLs. 
> If one were given the IRI of an entity from Wikidata, and had no further information about how Wikidata works, they would not be able to retrieve HTML content about the entity.
>
> BTW, I'm not sure the implementation of content negotiation in Wikidata is correct, because the server does not tell me the format of the resource to which it redirects (as opposed to what DBpedia does, for instance).
>
> --AZ
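A minimal sketch of the resolution chain discussed above (Python with the requests library; Q42 and the entity/Special:EntityData URLs are the examples from this thread, everything else is plain HTTP):

import requests

# Ask for Turtle at the entity IRI; requests follows the 303 redirect(s)
# automatically, keeping the intermediate responses in resp.history.
resp = requests.get("https://www.wikidata.org/entity/Q42",
                    headers={"Accept": "text/turtle"})

# The history should show the 303 See Other hops described above:
# /entity/Q42 -> /wiki/Special:EntityData/Q42 -> a .ttl document.
for hop in resp.history:
    print(hop.status_code, hop.url)
print("final:", resp.status_code, resp.url)
print(resp.text[:300])  # start of the Turtle document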
--
All the best,
Sebastian Hellmann

Director of Knowledge Integration and Linked Data Technologies (KILT) Competence Center at the Institute for Applied Informatics (InfAI) at Leipzig University
Executive Director of the DBpedia Association
Projects: http://dbpedia.org, http://nlp2rdf.org, http://linguistics.okfn.org, https://www.w3.org/community/ld4lt
Homepage: http://aksw.org/SebastianHellmann
Research Group: http://aksw.org
--
Lucas Werkmeister
Software Developer (Intern)
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin Phone: +49 (0)30 219 158 26-0 https://wikimedia.de
Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment.
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V. Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für Körperschaften I Berlin, Steuernummer 27/029/42207.
Is there an easy way to navigate this? I was wondering if there was a way to zoom in on a certain area and then see the connections from that image. When I clicked on something, I got a JSON view. I don't know how much coding it would take to have something like the Visual Thesaurus, where clicking on links brings that circle into focus with its first-degree connections. Maybe I need a magnifier on my 4K monitor.
Bruce
On Mon, Apr 30, 2018 at 3:17 PM, Ettore RIZZA ettorerizza@gmail.com wrote:
Hi all,
The new version of the "Linked Open data Cloud" graph http://lod-cloud.net/ is out ... and still no Wikidata in it. According to this Twitter discussion https://twitter.com/AmrapaliZ/status/990927835400474626, this would be due to a lack of metadata on Wikidata. No way to fix that easily? The LOD cloud is cited in many scientific papers, it is not a simple gadget.
Cheers,
Ettore Rizza
Yeah, that would be nice.
You can zoom in on the image, and search for the labels in it. Unfortunately many of the labels are truncated, e.g., WordNe....
Clicking on a node gets the raw data backing the image, but I don't see how to get the processed data. For some of the nodes, the data either doesn't have enough information to determine whether the source actually satisfies the requirements to be in the LOD Cloud (WordNet, universal-dependencies-treebank-hebrew), or something about the source no longer works (Freebase).
peter
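A rough sketch of pulling that raw data programmatically (Python with requests; the dump URL https://lod-cloud.net/lod-data.json and the entry shape — title, triples, links — are assumptions based on the JSON view the site returns when you click a node):

import requests

# Assumed location of the JSON data behind the diagram.
data = requests.get("https://lod-cloud.net/lod-data.json").json()
print(len(data), "datasets in the cloud")

# Inspect one entry and its first-degree connections,
# Visual-Thesaurus style, assuming a "links" list per entry.
entry = data.get("dbpedia", {})
print(entry.get("title"), "-", entry.get("triples"), "triples")
for link in entry.get("links", []):
    print("  ->", link.get("target"))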
On 05/04/2018 09:52 AM, Bruce Whealton wrote:
Is there an easy way to navigate this? I was wondering if there was a way to zoom-in on a certain area and then see connections from that image. When I clicked on something I got a JSON view. I don't know how much coding it would take to have something like the Visual Thesaurus where clicking on links brings that circle into focus with its first degree connections. Maybe I need a magnifier on my 4k monitor.
Bruce
On Mon, Apr 30, 2018 at 3:17 PM, Ettore RIZZA ettorerizza@gmail.com wrote:
Hi all,

The new version of the "Linked Open Data Cloud" graph http://lod-cloud.net/ is out ... and still no Wikidata in it. According to this Twitter discussion https://twitter.com/AmrapaliZ/status/990927835400474626, this would be due to a lack of metadata on Wikidata. No way to fix that easily? The LOD cloud is cited in many scientific papers; it is not a simple gadget.

Cheers,
Ettore Rizza
--
Bruce M Whealton Jr.
My Online Resume: http://fwwebdev.com/myresume/bruce-whealton-resume-view
I do business as Future Wave Web Development http://futurewavewebdevelopment.com
Providing Web Development & Design, as well as Programming/Software Engineering
Since DBpedia was mentioned: I'm wondering which is bigger in the LOD Cloud, Wikidata or DBpedia?

Wikidata currently contains at least 2,700 external-ID properties, which reaffirms Wikidata as a hub of data (a quick way to check that count is sketched below).
-fariz
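A minimal sketch of that check (Python against the public Wikidata Query Service; the query uses the standard Wikibase RDF vocabulary, and the count will drift as properties are added):

import requests

QUERY = """
SELECT (COUNT(?p) AS ?count) WHERE {
  ?p wikibase:propertyType wikibase:ExternalId .
}
"""

# Query the SPARQL endpoint and pull the count out of the JSON results.
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "lod-cloud-thread-example/0.1"})
count = resp.json()["results"]["bindings"][0]["count"]["value"]
print(count, "external-ID properties")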
On Sat, May 5, 2018, 00:10 Peter F. Patel-Schneider pfpschneider@gmail.com wrote:
Yeah, that would be nice.
You can zoom in on the image, and search for the labels in it. Unfortunately many of the labels are truncated, e.g., WordNe....
Clicking on a node gets the raw data backing up the image, but I don't see how to get the processed data. The data for some of the nodes either doesn't have enough information to determine whether the source actually satisfies the requirements to be in the LOD Cloud (Wordnet, universal-dependencies-treebank-hebrew) or something about the source doesn't work anymore (Freebase).
peter
On 05/04/2018 09:52 AM, Bruce Whealton wrote:
Is there an easy way to navigate this? I was wondering if there was a way to zoom-in on a certain area and then see connections from that image. When I clicked on something I got a JSON view. I don't know how much coding it would take to have something like the Visual Thesaurus where clicking on links brings that circle into focus with its first degree connections. Maybe I need a magnifier on my 4k monitor.
Bruce
On Mon, Apr 30, 2018 at 3:17 PM, Ettore RIZZA ettorerizza@gmail.com wrote:
Hi all,

The new version of the "Linked Open data Cloud" graph http://lod-cloud.net/ is out ... and still no Wikidata in it. According to this Twitter discussion https://twitter.com/AmrapaliZ/status/990927835400474626, this would be due to a lack of metadata on Wikidata. No way to fix that easily? The LOD cloud is cited in many scientific papers, it is not a simple gadget.

Cheers,
Ettore Rizza
--
Bruce M Whealton Jr.
My Online Resume: http://fwwebdev.com/myresume/bruce-whealton-resume-view
I do business as Future Wave Web Development http://futurewavewebdevelopment.com
Providing Web Development & Design, as well as Programming/Software Engineering
They go by the number of triples, of which Wikidata has ca. 5 billion vs. 9.5 billion in DBpedia; see https://lod-cloud.net/dataset/dbpedia.
On Sat, May 5, 2018 at 1:45 AM, Fariz Darari fadirra@gmail.com wrote:
Since DBpedia was mentioned, wondering which is bigger in the LOD Cloud, Wikidata or DBpedia?
Wikidata at the current state contains at least 2700 external ID properties, which reaffirms Wikidata as a hub of data.
-fariz
On Sat, May 5, 2018, 00:10 Peter F. Patel-Schneider pfpschneider@gmail.com wrote:
Yeah, that would be nice.
You can zoom in on the image, and search for the labels in it. Unfortunately many of the labels are truncated, e.g., WordNe....
Clicking on a node gets the raw data backing up the image, but I don't see how to get the processed data. The data for some of the nodes either doesn't have enough information to determine whether the source actually satisfies the requirements to be in the LOD Cloud (Wordnet, universal-dependencies-treebank-hebrew) or something about the source doesn't work anymore (Freebase).
peter
On 05/04/2018 09:52 AM, Bruce Whealton wrote:
Is there an easy way to navigate this? I was wondering if there was a way to zoom-in on a certain area and then see connections from that image. When I clicked on something I got a JSON view. I don't know how much coding it would take to have something like the Visual Thesaurus where clicking on links brings that circle into focus with its first degree connections. Maybe I need a magnifier on my 4k monitor.
Bruce
On Mon, Apr 30, 2018 at 3:17 PM, Ettore RIZZA ettorerizza@gmail.com wrote:
Hi all, The new version of the "Linked Open data Cloud" graph http://lod-cloud.net/ is out ... and still no Wikidata in it. According to this Twitter discussion https://twitter.com/AmrapaliZ/status/990927835400474626, this would be due to a lack of metadata on Wikidata. No way to fix that easily? The LOD cloud is cited in many scientific papers, it is not a simple gadget.
Cheers,
Ettore Rizza
--
Bruce M Whealton Jr.
My Online Resume: http://fwwebdev.com/myresume/bruce-whealton-resume-view
I do business as Future Wave Web Development http://futurewavewebdevelopment.com
Providing Web Development & Design, as well as Programming/Software Engineering
Wikidata mailing list Wikidata@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikidata