Hello Nadja,
Topic maps do not populate a class model for whatever the wiki page represents --- good luck getting agreement on a class model for anything in WP; that will not happen, imho
TM itself has a small ontology, as seen at [[meta:wikitopics]] --- topic map, topic, type, name, (name-) variant, association, context, etc. Note there is no class, property, datatype, etc., because no class model is used to describe page content
RDF describes class extents; WP has no need for these descriptions. Topic maps describe subject indexes; subject indexes are what WP sorely lacks, imho.
From [[meta:wikitopics]] you can see that the topic-map class model is stored in SMW/RDF triples (in subobjects). That said, [[wikidata]] is a fully conforming RDF application - it outputs the topic maps as triples, queries triples, etc. A key point is that others can download WP topic maps & convert them into class model(s) appropriate to their application. You're oriented perhaps towards RDF->TM transforms... hard to do... I'm all about TM->RDF.
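To make the TM->RDF direction concrete, here is a minimal sketch. The predicate names (tm:name, tm:locatedIn) and the sample topic are illustrative assumptions, not the actual SMW or wikitopics vocabulary:

```python
# Sketch of TM->RDF: flatten a topic-map topic into (subject, predicate,
# object) triples, one triple per characteristic of the topic.
# All identifiers below are illustrative, not a real vocabulary.

def topic_to_triples(topic):
    """Emit one triple per topic characteristic (type, names, associations)."""
    s = topic["id"]
    triples = [(s, "rdf:type", topic["type"])]
    for name in topic.get("names", []):
        triples.append((s, "tm:name", name))
    for role, other in topic.get("associations", []):
        triples.append((s, role, other))
    return triples

berlin = {
    "id": "wp:Berlin",
    "type": "tm:Topic",
    "names": ["Berlin"],
    "associations": [("tm:locatedIn", "wp:Germany")],
}

for t in topic_to_triples(berlin):
    print(t)
```

Others could then load such triples and rebuild whatever class model suits their application.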
regards - john
On 14.06.2012 02:21, Nadja Kutz wrote:
John McClure wrote:
"Of course, thanks for the pointer. Yes, I'd agree that 19788's ontology should be closely reviewed for inclusion. 19788:2 standardizes the Dublin Core properties, the same I recommend for [[wikidata]] provenance data, the same slated for the [[wikidata]] ontology. But more to your point: the entire ISO corpus would fit really well if it were viewed as a topic map whose topics and sub-topics can be referenced from [[wikidata]] artifacts such as property definitions."
Hello John
Frankly speaking, I don't see why one would want to use topic maps.
That is, RDF triples, after an identification (canonically: elements with the same URI are identified), form a labeled graph, here to be called "the" RDF graph. (I know that some people call the triples themselves "the" RDF graph, but why use a second word, namely "graph", for triples? Triples by themselves form only a trivial, highly disconnected graph.)
If I want to connect certain nodes of that graph to a topic, I only need to supply these nodes with an extra triple which says "this node belongs to this topic", i.e. something like (node, belongsTo, thisTopic), or modify the canonical identification map, and the RDF graph will be a "topic map". Or one has the case that the triples are already organized into "topics": for example, if I have a set of triples with the same resource URI, then upon canonical identification these form a kind of "topic map" (with all "legs" pointing in one direction). Or am I missing something crucial?
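The point above can be sketched in a few lines; the graph, the nodes, and the ex:belongsTo predicate are illustrative, not a standard vocabulary:

```python
# Sketch: attaching nodes of an RDF graph to a "topic" needs only one
# extra (node, belongsTo, topic) triple per node. Names are illustrative.

graph = {
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Bob", "ex:livesIn", "ex:Berlin"),
}

def tag_with_topic(graph, nodes, topic):
    """Return the graph plus one (node, belongsTo, topic) triple per node."""
    return graph | {(n, "ex:belongsTo", topic) for n in nodes}

tagged = tag_with_topic(graph, ["ex:Alice", "ex:Bob"], "ex:SocialTopic")

# All subjects tied to the topic give the "topic map" view of the graph:
members = {s for (s, p, o) in tagged
           if p == "ex:belongsTo" and o == "ex:SocialTopic"}
print(sorted(members))
```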
However, if you start with topics, you have no canonical information about the "internal structure" of a topic, and in some cases you would need to impose this artificially, in retrospect, onto the data structure. For example, if your topic is "members of a society" and you have all the members but need some internal structure, like a hierarchy, then you would need to supply each member with a hierarchy classification (i.e. with extra data, which is usually different for each member). In the RDF case, the person who gave you the triples could have made a choice of ordering, which would be given upon canonical identification. I.e., in principle the internal structure depends on your identification map, but there is a canonical one.
You can of course mimic an RDF triple with a topic map by choosing the topic to be the resource, one "leg" of your topic as the property, and the topic connected via this "leg" as the object, but the choice of a leg is not canonical if there is more than one leg. Only if you made all "legs" of a topic into triples would you have something like a canonical assignment. I find these differences important. But maybe I have overlooked or misunderstood something about topic maps (I read what I could find scattered around the internet on this issue, so that is not so improbable).
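The "all legs into triples" mapping described above can be sketched like this; the topic structure and all identifiers are illustrative assumptions:

```python
# Sketch: a topic with several "legs" has no canonical *single* triple,
# but turning *every* leg into a triple is a canonical assignment.
# Topic structure and names here are illustrative.

topic = ("ex:Society", [("ex:hasMember", "ex:Alice"),
                        ("ex:hasMember", "ex:Bob"),
                        ("ex:foundedIn", "ex:Berlin")])

def all_legs_to_triples(topic):
    """Map each (role, target) leg of a topic to one (s, p, o) triple."""
    subject, legs = topic
    return [(subject, role, target) for role, target in legs]

for t in all_legs_to_triples(topic):
    print(t)
```

Picking only one of the three legs as "the" triple would be an arbitrary choice; emitting all of them is not.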
I had this kind of discussion with people from DeepaMehta (http://www.deepamehta.de/), because they use topic maps, but so far nobody there could convince me of the distinguishing advantages of topic maps. The discussion was, however, rather brief. It came up because we discussed to what extent it would be possible to merge a student project we had at HTW Berlin (a collaboration platform for visualizing RDF data called Mimirix, http://www.daytar.de/art/MIMIRIX/) with DeepaMehta; for example, one could at least use the backend, which already has a layout for access control (the DeepaMehta people told me that they haven't yet really attacked the issue of access control), or one could at least use the carefully designed client.
Maybe you have other arguments for topic maps; as said, I might have missed something. I understand that there are other issues, like the speed of addressability or direct-access issues, but I find these are rather an issue of the serialization.
So I didn't understand, for example, why the predefined JSON structure of a JSONArray
http://www.json.org/javadoc/org/json/JSONArray.html
is not used in JSON-LD
http://json-ld.org/spec/latest/json-ld-syntax/#sets-and-lists
but that's another topic.
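For comparison, the distinction the JSON-LD spec draws is that a plain JSON array is treated as an unordered set by default, and order must be requested explicitly with the @list keyword. A small sketch (the ex:members key is an illustrative assumption):

```python
import json

# A plain JSON array: JSON-LD treats this as an unordered set by default.
as_set = {"ex:members": ["Alice", "Bob", "Carol"]}

# The same values wrapped in @list, which JSON-LD defines as ordered.
as_list = {"ex:members": {"@list": ["Alice", "Bob", "Carol"]}}

print(json.dumps(as_set))
print(json.dumps(as_list))
```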
In the context of applications of ISO metadata, you may want to read:
http://www.azimuthproject.org/azimuth/show/Examples+of+semantic+web+applicat...
_______________________________________________
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org