In the meantime I found a publicly accessible link to your publication, http://www.mpi-inf.mpg.de/yago-naga/yago/publications/aij.pdf, in which you write:
"However, in contrast to the original YAGO, the methodology for building YAGO2 (and also maintaining it) is systematically designed top-down with the goal of integrating entity-relationship-oriented facts with the spatial and temporal dimensions. To this end, we have developed an extensible approach to fact extraction from Wikipedia and other sources, and we have tapped on specific inputs that contribute to the goal of enhancing facts with spatio-temporal scope. Moreover, we have developed a new representation model, coined SPOTL tuples (SPO + Time + Location), which can co-exist with SPO triples, but provide a much more convenient way of browsing and querying the YAGO2 knowledge base." (p. 3)
So it seems one special feature is the explicit treatment of space and time, which sounds interesting. I would therefore like to make some of my questions more precise:
"YAGO has about 100 manually defined relations, such as wasBornOnDate, locatedIn and hasPopulation. Categories and infoboxes can be exploited to deliver instances of these relations. (p.3) "
...
"The new YAGO2 architecture is based on declarative rules that are stored in text files."
- Is there a wiki or some other publicly accessible place for those rules, like the mapping wiki of DBpedia?
- What do you do if infoboxes change?
- How do you treat microformats?
"Instead of seeing only SPO triples and thus having to perform an explicit de-reification join for associated meta-facts, the user should see extended 5-tuples where each fact already includes its associated temporal and spatial information. We refer to this view of the data as the SPOTL view: SPO triples augmented by Time and Location. We also discuss a further optional extension into SPOTLX 6-tuples where the last component offers keywords or key phrases from the conteXt of sources where the original SPO fact occurs.(p. 20)"
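To make the quoted SPOTL model concrete, here is a minimal sketch in Python (the class and field names are my own illustration, not the YAGO2 API) of how SPO triples extend to SPOTL 5-tuples, and why the temporal and spatial components can be queried directly, without the explicit de-reification join the quote mentions:

```python
from typing import NamedTuple, Optional

class SPOTL(NamedTuple):
    """A SPOTL 5-tuple: an SPO triple plus Time and Location."""
    subject: str
    predicate: str
    obj: str
    time: Optional[str] = None      # e.g. an ISO date or interval
    location: Optional[str] = None  # e.g. a geo-entity identifier

facts = [
    SPOTL("Albert_Einstein", "wasBornIn", "Ulm",
          time="1879-03-14", location="Ulm"),
    SPOTL("Albert_Einstein", "hasWonPrize", "Nobel_Prize_in_Physics",
          time="1921"),
]

# A plain SPO view is just a projection onto the first three components.
spo_view = [(f.subject, f.predicate, f.obj) for f in facts]

# Querying by temporal scope needs no join over reified meta-facts.
born_1879 = [f for f in facts if f.time and f.time.startswith("1879")]
```

The SPOTLX 6-tuple mentioned in the quote would simply add a sixth conteXt field holding keywords or key phrases from the fact's sources.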
It is not yet fully clear to me how your concept fits together with other approaches to including context, such as named graphs or context ontologies in formats like JSON-LD; eventually that would need a longer discussion. Are you planning to set up a wiki page on the data wiki, as there is for example one for JSON-LD: http://meta.wikimedia.org/wiki/Talk:Wikidata/Data_model/JSON ?
Is it possible to extract the YAGO data in some RDF serialization format?
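For contrast with the SPOTL view quoted above, here is a hedged sketch (the fact identifier and relation names are invented for illustration) of the reified representation that plain SPO triples would require, i.e. the form over which a user would otherwise have to perform the de-reification join:

```python
def spotl_to_triples(subject, predicate, obj,
                     time=None, location=None, fact_id="f1"):
    """Expand one SPOTL 5-tuple into a base SPO triple plus
    meta-triples attached to a fact identifier (the reified form
    that the SPOTL view spares the user from joining over)."""
    triples = [(subject, predicate, obj)]
    if time is not None:
        triples.append((fact_id, "occursOnDate", time))
    if location is not None:
        triples.append((fact_id, "occursIn", location))
    return triples

triples = spotl_to_triples("Albert_Einstein", "wasBornIn", "Ulm",
                           time="1879-03-14", location="Ulm")
```

One SPOTL tuple thus unfolds into a base triple plus two meta-triples; recovering the 5-tuple from triples alone requires joining on the fact identifier, which is exactly the step the SPOTL view makes unnecessary.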
2012/4/14 JFC Morfin <jefsey(a)jefsey.com>
> Lydia,
> Maybe you could create and maintain
> http://meta.wikimedia.org/wiki/Wikidata/Status_updates/ as a menu page
> for all the monthly reports? This way we could quote and use
> it as a single permanent URL for the Status Reports.
> Thank you and best
JFC, http://meta.wikimedia.org/wiki/Wikidata/Status_updates already exists.
Regards,
Sylvain.
--
Sylvain Boissel
Community and Technology Officer, Wikimédia France
tel 07.62.93.42.02 - email sylvain.boissel(a)wikimedia.fr - twitter
@sboissel<https://twitter.com/#!/sboissel>
*Imagine a world in which every person on the planet has free access to
the sum of all human knowledge. That is our commitment. Help Wikimedia
France make it a reality <https://dons.wikimedia.fr>.*
www.wikimedia.fr
(apologies for multiple posts; please forward; please email
i-challenge2012_a_t_easychair.org for questions)
********************
NEWS:
- Deadline extension until April 25th, 2012
- The total amount of 2,000 EUR (sponsored by Wolters Kluwer) will be
awarded in prizes and split among the most promising applications.
- Linked Data Cup Board updated:
http://i-challenge.blogs.aksw.org/chairs-committee
********************
Linked Data Cup 2012
http://i-challenge.blogs.aksw.org/
co-located with the I-Semantics 2012
Graz, Austria, 5 - 7 September 2012
http://www.i-semantics.at
********************
The yearly organised Linked Data Cup (formerly Triplification Challenge)
awards prizes to the most promising innovation involving linked data.
Four different technological topics are addressed: triplification,
interlinking, cleansing, and application mash-ups. The Linked Data Cup
invites scientists and practitioners to submit novel and innovative (5
star) linked data sets and applications built on linked data technology.
Although more and more data is triplified and published as RDF and
linked data, the question arises of how to evaluate the usefulness of
such approaches. The Linked Data Cup therefore requires all submissions
to include a concrete use case and problem statement alongside a
solution (triplified data set, interlinking/cleansing approach, linked
data application) that showcases the usefulness of linked data.
Submissions that can demonstrate measurable benefits of employing
linked data over traditional methods are preferred.
Note that the call is not limited to any domain or target group. We
accept submissions ranging from value-added business intelligence use
cases to scientific networks to the longest tail [1] of information
domains. The only strict requirement is that the use of linked data be
very well motivated and justified (i.e. we rank higher those approaches
that provide solutions which could not have been realised without
linked data, even if they lack technical or scientific brilliance). The
total amount of 2,000 EUR (sponsored by Wolters Kluwer) will be awarded
in prizes and split among the most promising applications.
Evaluation Criteria
===================
The submissions will be initially evaluated with a well-known five star
ranking system [2]. Furthermore, entries will be assessed according to
the extent to which they
1. motivate the relevancy of their use case for their respective domain;
2. justify the adequacy of linked data technologies for their solution;
3. demonstrate that all alternatives to linked data would have resulted
in an inferior solution;
4. provide an evaluation that can measure the benefits of linked data.
Topics
======
Ideas for topics include (but are not limited to):
* Improving traditional approaches with help of linked data
* Linked data use in science and education
* Linked data supported multimedia applications
* Linked data in the open source context
* Web annotation
* Generic applications
* Internationalization of linked data
* Visualization of linked data
* Linked government data
* Business models based on linked data
* Recommender systems supported by linked data
* Integrating microposts with linked data
* Distributed social web based on linked data
* Linked data sensor networks
Submission and Reviewing
========================
Submissions to the Linked Data Cup will be reviewed by members of the
Linked Data Cup Board and invited experts from the Linked Data community.
Submissions should consist of 4 pages, must be original, and must not
have been submitted for publication elsewhere. Papers should follow the
ACM ICPS formatting guidelines, as accepted submissions will be
published in the I-SEMANTICS 2012 proceedings in the digital library of
the ACM ICPS. Please read the submission page for detailed information
on how to submit.
Important Dates (Linked Data Cup)
=================================
1. Paper Submission Deadline: April 25, 2012
2. Notification of Acceptance: May 21, 2012
3. Camera-Ready Paper: June 11, 2012
Links
=====
[1]
http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=uleiauersemmedmainz20…
[2] http://www.w3.org/DesignIssues/LinkedData.html
--
Dipl. Inf. Sebastian Hellmann
Department of Computer Science, University of Leipzig
Projects: http://nlp2rdf.org , http://dbpedia.org
Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
Research Group: http://aksw.org
Dear Wikidata team,
I am writing on behalf of the YAGO team at the Max Planck Institute
for Informatics in Saarbruecken [1]. We have heard about the Wikidata
project, and we are very excited to learn that you aim to launch a
free knowledge base in the spirit of Wikipedia.
We would like to get in touch with you -- also to see whether or how
we could help on the long run. Let me briefly tell you what we have on
our side: As you might know, YAGO is a knowledge graph that has been
extracted automatically from the infoboxes and categories of
Wikipedia. We have evaluated YAGO manually and achieved a precision of
95%, meaning that, statistically speaking, only 5 out of 100 statements
in the knowledge graph are extracted wrongly. We also have a mapping of
the Wikipedia categories to the WordNet taxonomy (again with 95%
precision), and type-checking methods for the extracted statements.
Should these things ever be useful to you, we would be happy to help.
I will be at the WWW conference next week. In case some of you are
there, too, I'd be happy to get in touch to learn more about your
current work.
Thanks
Fabian
[1] http://yago-knowledge.org
--
Fabian online: http://suchanek.name
At 11:36 13/04/2012, Jeroen De Dauw wrote:
>Hey,
>I've been following the usage of WikiData on twitter, and for the
>last week or so, more than half the tweets have been pointing to
>this article. Apparently people like to criticize :)
To discuss something fundamental is not criticizing. This article
raises a problem we know very well at the lower (linguistic, telecom,
network) strata (diversity, orthotypography, normalization,
informatics, canonization, globalization, internationalization, etc.),
which now touches the semantic and intellectual strata.
Engineering issues, as discussed here, are about making the technical
processes work better. The problem is to determine what "better" means
societally in an "anthropobotical" society (persons/robots) like
ours - broadly influenced by its daily experience of Wikipedia. This
leads to fundamental questions on the Freedom of Knowledge. The
difficulty is the resulting ethical/technical loop and its impact on
engineering orientations.
jfc