Dear all,
I wanted to join in and give my birthday present to Wikidata (I am a little bit late, though!) (also, honestly, I didn't recall it was Wikidata's birthday, but it is a nice occasion :P)
Here it is: http://wikidataldf.com
What is LDF? LDF stands for Linked Data Fragments, a new system for querying RDF datasets that sits midway between having a SPARQL endpoint and downloading the whole thing.
More formally LDF is «a publishing method [for RDF datasets] that allows efficient offloading of query execution from servers to clients through a lightweight partitioning strategy. It enables servers to maintain availability rates as high as any regular HTTP server, allowing querying to scale reliably to much larger numbers of clients»[1].
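To make the idea concrete, a single triple pattern fragment is just an HTTP resource; here is a minimal sketch with curl, in which the dataset path and the query parameter names are placeholders (the real ones are advertised by the server's own hypermedia controls):

# Ask for all triples matching one pattern; empty values act as wildcards.
# The response contains the matching triples plus count metadata and paging links.
curl -H "Accept: text/turtle" \
  "http://data.wikidataldf.com/wikidata?subject=&predicate=http%3A%2F%2Fschema.org%2Fabout&object="

Clients answer SPARQL queries by stitching together many such cheap requests, which is what keeps the load on the server low.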
This system was devised by Ruben Verborgh, Miel Vander Sande and Pieter Colpaert at Multimedia Lab (Ghent University) in Ghent, Belgium. You can read more about it here: http://linkeddatafragments.org/
What is Wikidata LDF? Using the software by Verborgh et al. I have set up the website http://wikidataldf.com, which contains:
- an interface to navigate the RDF data and query them using the Triple Pattern Fragments client
- a web client where you can compose and execute SPARQL queries
This is not, strictly speaking, a SPARQL endpoint: not all of the SPARQL standard is implemented, and it is slower, but it should be more reliable. If you are interested in the details, please do read more at the link above.
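If you prefer the command line to the web client, the same software stack also ships a Node.js client that can be run locally; a sketch, assuming it is published as the ldf-client npm package and that the fragments are exposed at http://data.wikidataldf.com/wikidata (a placeholder; check the start page for the actual fragment URL):

npm install -g ldf-client
# The client decomposes the SPARQL query into triple pattern fragment requests behind the scenes.
ldf-client http://data.wikidataldf.com/wikidata 'SELECT * WHERE { ?s ?p ?o } LIMIT 10'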
The data are, for the moment, limited to the sitelinks dump, but I am working towards adding the other dumps. I have taken the Wikidata RDF dumps as of Oct 13th, 2014[2].
To use them I had to convert them to HDT format[3a][3b], using the hdt-cpp library[3c] (devel); the conversion is taking quite a lot of resources and computing time for the full dumps, which is why I haven't published the rest yet ^_^.
DBpedia also has this[4]: http://fragments.dbpedia.org/
All the software used is available under the MIT license in the LDF repos on GitHub[5a], and the (two-page) website is also available here[5b].
I would like to thank Ruben for his feedback and for his presentation about LDF at SpazioDati in Trento, Italy (here are the slides[6]).
All this said, happy birthday Wikidata.
Cristian
[1] http://linkeddatafragments.org/publications/ldow2014.pdf
[2] https://tools.wmflabs.org/wikidata-exports/rdf/exports/
[3a] http://www.rdfhdt.org/
[3b] http://www.w3.org/Submission/HDT-Implementation/
[3c] https://github.com/rdfhdt/hdt-cpp
[4] http://sourceforge.net/p/dbpedia/mailman/message/32982329/
[5a] see the Browser.js, Server.js and Client.js repos in https://github.com/LinkedDataFragments
[5b] https://github.com/CristianCantoro/wikidataldf
[6] http://www.slideshare.net/RubenVerborgh/querying-datasets-on-the-web-with-hi...
Yep! And here is a way to publish some of the information above into the Linked Open Data Cloud, straight from this thread, via nanotation:
<http://wikidataldf.com> a schema:WebPage ;
    rdfs:label "Wikidata LDF" ;
    skos:altLabel "Wikidata Linked Data Fragments" ;
    dcterms:hasPart <http://data.wikidataldf.com/>, <http://client.wikidataldf.com/> ;
    xhv:related <http://linkeddatafragments.org/>, <http://fragments.dbpedia.org/> ;
    rdfs:comment """I wanted to join in and give my birthday present to Wikidata (I am a little bit late, though!) (also, honestly, I didn't recall it was Wikidata's birthday, but it is a nice occasion :P)""" ;
    rdfs:comment """What is Wikidata LDF? Using the software by Verborgh et al. I have setup the website http://wikidataldf.com that contains: * an interface to navigate in the RDF data and query them using the Triple Pattern Fragments client * a web client where you can compose and execute SPARQL queries

This is not, strictly speaking, a SPARQL endpoint (not all the SPARQL standard is implemented and it is slower, but it should be more reliable, if you are interested in details, please do read more at the link above).

The data are, for the moment, limited to the sitelinks dump but I am working towards adding the other dump. I have taken the Wikidata RDF dumps as of Oct, 13th 2014[2].

To use them I had to convert them in HDT format[3a][3b], using the hdt-cpp library[3c] (devel) (which is taking quite a lot of resources and computing time for the whole dumps, that's the reason why I haven't published the rest yet ^_^).""" ;
    dcterms:references <http://linkeddatafragments.org/publications/ldow2014.pdf>,
        <https://tools.wmflabs.org/wikidata-exports/rdf/exports/>,
        <http://www.rdfhdt.org/>,
        <http://www.w3.org/Submission/HDT-Implementation/>,
        <https://github.com/rdfhdt/hdt-cpp>,
        <http://sourceforge.net/p/dbpedia/mailman/message/32982329/>,
        <https://github.com/LinkedDataFragments>,
        <https://github.com/CristianCantoro/wikidataldf>,
        <http://www.slideshare.net/RubenVerborgh/querying-datasets-on-the-web-with-high-availability> .
Links:
[1] http://linkeddata.uriburner.com/about/%7Burl-of-this-post%7D -- will resolve to a page comprised of the nanotated content above
[2] http://linkeddata.uriburner.com/about/html/https/lists.wikimedia.org/piperma... -- example based on an earlier reply
[3] http://linkeddata.uriburner.com/c/9CB36OOJ -- alternative view that offers deeper Linked Open Data follow-your-nose exploration
[4] http://linkeddata.uriburner.com/c/9V56JQC -- a reified statement example
A little correction (which is why #1 above wouldn't have produced the expected result): adding the missing Nanotation markers, as shown here:
## Nanotation Start ##
<http://wikidataldf.com> a schema:WebPage ;
    rdfs:label "Wikidata LDF" ;
    skos:altLabel "Wikidata Linked Data Fragments" ;
    dcterms:hasPart <http://data.wikidataldf.com/>, <http://client.wikidataldf.com/> ;
    xhv:related <http://linkeddatafragments.org/>, <http://fragments.dbpedia.org/> ;
    rdfs:comment """I wanted to join in and give my birthday present to Wikidata (I am a little bit late, though!) (also, honestly, I didn't recall it was Wikidata's birthday, but it is a nice occasion :P)""" ;
    rdfs:comment """What is Wikidata LDF? Using the software by Verborgh et al. I have setup the website http://wikidataldf.com that contains: * an interface to navigate in the RDF data and query them using the Triple Pattern Fragments client * a web client where you can compose and execute SPARQL queries

This is not, strictly speaking, a SPARQL endpoint (not all the SPARQL standard is implemented and it is slower, but it should be more reliable, if you are interested in details, please do read more at the link above).

The data are, for the moment, limited to the sitelinks dump but I am working towards adding the other dump. I have taken the Wikidata RDF dumps as of Oct, 13th 2014[2].

To use them I had to convert them in HDT format[3a][3b], using the hdt-cpp library[3c] (devel) (which is taking quite a lot of resources and computing time for the whole dumps, that's the reason why I haven't published the rest yet ^_^).""" ;
    dcterms:references <http://linkeddatafragments.org/publications/ldow2014.pdf>,
        <https://tools.wmflabs.org/wikidata-exports/rdf/exports/>,
        <http://www.rdfhdt.org/>,
        <http://www.w3.org/Submission/HDT-Implementation/>,
        <https://github.com/rdfhdt/hdt-cpp>,
        <http://sourceforge.net/p/dbpedia/mailman/message/32982329/>,
        <https://github.com/LinkedDataFragments>,
        <https://github.com/CristianCantoro/wikidataldf>,
        <http://www.slideshare.net/RubenVerborgh/querying-datasets-on-the-web-with-high-availability> .
## Nanotation End ##
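Since what sits between the markers is plain Turtle, it can also be checked mechanically before posting; a sketch, assuming the usual namespace bindings for the prefixes used above and that the raptor2 utilities (rapper) are installed — only a couple of representative triples are included here:

cat > nanotation.ttl <<'EOF'
@prefix schema:  <http://schema.org/> .
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xhv:     <http://www.w3.org/1999/xhtml/vocab#> .

<http://wikidataldf.com> a schema:WebPage ;
    rdfs:label "Wikidata LDF" ;
    dcterms:hasPart <http://data.wikidataldf.com/>, <http://client.wikidataldf.com/> .
EOF
rapper -i turtle -c nanotation.ttl   # parses the file and reports the number of triples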
Hi Cristian,
Awesome :-) Small note: I just got a "Bad Gateway" when trying http://data.wikidataldf.com/ but it now seems to work.
It also seems that some of your post answers the question from my previous email. That sounds as if it is pretty hard to create HDT exports (not much surprise there). Maybe it would be nice to at least reuse the work: could we re-publish your HDT dumps after you created them? I thought about creating HDT right away but this is quite hard since the order is based on URL strings and must thus be different from any order one could establish "naturally" on the Wikidata data.
Cheers,
Markus
2014-10-30 18:05 GMT+01:00 Markus Krötzsch markus@semantic-mediawiki.org:
Hi Cristian,
Awesome :-) Small note: I just got a "Bad Gateway" when trying http://data.wikidataldf.com/ but it now seems to work.
I was restarting the server; in fact, I have now also uploaded the wikidata-terms dump (the wikidata-statements file is not cooperating, though :( ).
It also seems that some of your post answers the question from my previous email. That sounds as if it is pretty hard to create HDT exports (not much surprise there). Maybe it would be nice to at least reuse the work: could we re-publish your HDT dumps after you created them?
yes, sure, here they are: http://wikidataldf.com/download/
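For anyone grabbing those files, the command-line tools that come with hdt-cpp can inspect and query them directly; a quick sketch, with a placeholder file name (whatever the download page actually calls it) and assuming a build of the tools that accepts -q for one-shot queries:

hdtInfo wikidata-sitelinks.hdt                        # print the header: size, number of triples, ...
hdtSearch -q "? ? ?" wikidata-sitelinks.hdt | head    # enumerate a few triples; ? is a wildcard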
C
2014-10-30 19:41 GMT+01:00 Cristian Consonni kikkocristian@gmail.com:
2014-10-30 18:05 GMT+01:00 Markus Krötzsch markus@semantic-mediawiki.org:
Awesome :-) Small note: I just got a "Bad Gateway" when trying http://data.wikidataldf.com/ but it now seems to work.
I was restarting the server; in fact, I have now also uploaded the wikidata-terms dump (the wikidata-statements file is not cooperating, though :( ).
Ok, now I have managed to add the Wikidata statements dump too.
If somebody would like to add some example SPARQL queries, that would be awesome.
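To get the ball rolling, here is one possible starting point; it is only a sketch, since it assumes the sitelinks dump links article URLs to items via schema:about (as in the Wikidata RDF exports) and uses a placeholder fragment URL:

# List Wikipedia articles about Douglas Adams (Q42), if the modelling matches
ldf-client http://data.wikidataldf.com/wikidata \
  'PREFIX schema: <http://schema.org/>
   SELECT ?article WHERE { ?article schema:about <http://www.wikidata.org/entity/Q42> . }
   LIMIT 20'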
It also seems that some of your post answers the question from my previous email. That sounds as if it is pretty hard to create HDT exports (not much surprise there). Maybe it would be nice to at least reuse the work: could we re-publish your HDT dumps after you created them?
yes, sure, here they are: http://wikidataldf.com/download/
I should add that, yes, it is pretty hard to create the HDT files, since the process requires an awful lot of RAM, and I don't know whether I will be able to produce them in the future.
C
Dear all,
It also seems that some of your post answers the question from my previous email. That sounds as if it is pretty hard to create HDT exports (not much surprise there). Maybe it would be nice to at least reuse the work: could we re-publish your HDT dumps after you created them?
yes, sure, here they are: http://wikidataldf.com/download/
I should add that, yes, it is pretty hard to create the HDT files, since the process requires an awful lot of RAM, and I don't know whether I will be able to produce them in the future.
Maybe some nuance: creating HDT exports is not *that* hard.
First, on a technical level, it's simply:

rdf2hdt -f turtle triples.ttl triples.hdt

so that's not really difficult ;-)
Second, concerning machine resources: for datasets with millions of triples, you can easily do it on any machine. It doesn't take that much RAM, and certainly not that much disk space. When you have hundreds of millions of triples, as is the case with Wikidata/DBpedia/…, having a significant amount of RAM does indeed help a lot. The people working on HDT will surely improve that requirement in the future.
We should really see HDT generation as a one-time server effort that serves to reduce future server efforts significantly.
Best,
Ruben
PS If anybody has trouble generating an HDT file, feel free to send me a link to your dump and I'll do it for you.
2014-10-30 22:40 GMT+01:00 Cristian Consonni kikkocristian@gmail.com:
Ok, now I have managed to add the Wikidata statements dump too.
And I have added a wikidata.hdt combined dump of all of the above.
2014-10-31 10:25 GMT+01:00 Ruben Verborgh ruben.verborgh@ugent.be:
Maybe some nuance: creating HDT exports is not *that* hard.
First, on a technical level, it's simply:

rdf2hdt -f turtle triples.ttl triples.hdt

so that's not really difficult ;-)
Yes, I agree. I mean, I am not an expert in the field - this should be clear by now :P - and I was able to do that. (by "not an expert in the field" I mean that I never heard about HDT or LDF before 6 days ago)
It should be noted that in the conversion of the statements and terms dumps I got some "Unicode range" errors, which resulted in ignored triples (i.e. triples not inserted into the HDT files). I am unable to say whether this is a problem with the dumps or with hdt-lib.
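One way to narrow that down is to compare what an independent parser and the HDT file agree on; a rough sketch, assuming the dump has been decompressed to N-Triples (hypothetical file names) and that iconv and the raptor2 utilities are available:

# 1. Does the dump contain byte sequences that are not valid UTF-8?
iconv -f UTF-8 -t UTF-8 wikidata-statements.nt > /dev/null || echo "invalid UTF-8 in the dump"
# 2. How many triples does an independent parser accept?
rapper -i ntriples -c wikidata-statements.nt
# 3. How many triples ended up in the HDT file? (the header normally records the count)
hdtInfo wikidata-statements.hdt

If step 1 is clean but the counts from steps 2 and 3 differ, the problem is more likely on the hdt-lib side.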
C
On 31.10.2014 14:51, Cristian Consonni wrote:
2014-10-30 22:40 GMT+01:00 Cristian Consonni kikkocristian@gmail.com:
Ok, now I have managed to add the Wikidata statements dump too.
And I have added a wikidata.hdt combined dump of all of the above.
Nice. We are running the RDF generation on a shared cloud environment and I am not sure we can really use a lot of RAM there. Do you have any guess how much RAM you needed to get this done?
2014-10-31 10:25 GMT+01:00 Ruben Verborgh ruben.verborgh@ugent.be:
Maybe some nuance: creating HDT exports is not *that* hard.
First, on a technical level, it's simply:

rdf2hdt -f turtle triples.ttl triples.hdt

so that's not really difficult ;-)
Yes, I agree. I mean, I am not an expert in the field - this should be clear by now :P - and I was able to do that. (by "not an expert in the field" I mean that I never heard about HDT or LDF before 6 days ago)
It should be noted that in the conversion of the statements and terms dumps I got some "Unicode range" errors, which resulted in ignored triples (i.e. triples not inserted into the HDT files). I am unable to say whether this is a problem with the dumps or with hdt-lib.
The OpenRDF library we use for creating the dumps has some fairly thorough range checks for every single character it exports (from the code I have seen), so my default assumption would be that it does the right thing. However, it is also true that Wikidata contains some very exotic Unicode characters in its data. ;-)
Markus
Hi Markus,
2014-11-01 0:29 GMT+01:00 Markus Krötzsch markus@semantic-mediawiki.org:
Nice. We are running the RDF generation on a shared cloud environment and I am not sure we can really use a lot of RAM there. Do you have any guess how much RAM you needed to get this done?
I didn't take any stats (my bad), but I would say that for the combined dump, starting from the compressed (gz) file, it took around 50 GB. I don't have time to re-run this experiment right now, but next time I will take some measurements.
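For the next run, GNU time is a cheap way to record the peak; a sketch with placeholder file names, assuming rdf2hdt accepts N-Triples input through the same -f switch Ruben mentioned:

zcat wikidata-combined.nt.gz > wikidata-combined.nt
/usr/bin/time -v -o rdf2hdt.stats rdf2hdt -f ntriples wikidata-combined.nt wikidata.hdt
grep "Maximum resident set size" rdf2hdt.stats   # peak RAM used, in kilobytes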
Cristian
On 04.11.2014 18:18, Cristian Consonni wrote:
Hi Markus,
2014-11-01 0:29 GMT+01:00 Markus Krötzsch markus@semantic-mediawiki.org:
Nice. We are running the RDF generation on a shared cloud environment and I am not sure we can really use a lot of RAM there. Do you have any guess how much RAM you needed to get this done?
I didn't take any stats (my bad), but I would say that for the combined dump, starting from the compressed (gz) file, it took around 50 GB. I don't have time to re-run this experiment right now, but next time I will take some measurements.
Ok, thanks, this is already a good indicator for us. I don't think we could use up that much memory on Wikimedia Labs ...
Btw, there was a bug in our RDF exports that made them bigger than they should have been (no wrong triples, but many duplicates). I have corrected the issue now and uploaded new versions. Maybe this will also make processing faster next time.
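As a small aside on the duplicates: assuming the exports are N-Triples (a line-based format), byte-identical triples can also be stripped on the consumer side before conversion; a one-liner sketch with a hypothetical file name — note it will not catch triples that differ only in whitespace:

zcat wikidata-statements.nt.gz | sort -u | gzip > wikidata-statements-dedup.nt.gz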
Markus
On Thu, Oct 30, 2014 at 11:32 AM, Cristian Consonni kikkocristian@gmail.com wrote:
Dear all,
I wanted to join in and give my birthday present to Wikidata (I am a little bit late, though!) (also, honestly, I didn't recall it was Wikidata's birthday, but it is a nice occasion :P)
Here it is: http://wikidataldf.com
That's super cool. Thanks Cristian. I added it to the presents list.
Cheers Lydia