At 02:28 19/03/2012, John R. Frank wrote:
Data Wikipedians, The talk submission below explains my interest in structuring wikipedia for interaction with machines, so consider it my introduction to this list. I hope WikiData will provide a foundation for KBA-type algorithms (see below), and am looking forward to learning more about APIs from WikiData.
Dear John,
I certainly share an interest in that area: providing people and machines with the genuine sources in a form they can actually use (indexing is only a first step; facilitated reading is the next) so they can form their own opinions, rather than providing opinion-laden ("doxative") value-added summaries or pre-digested opinions - even high-quality ones like Wikipedia's (which I consider the "default" ontology).
Because, if information is power, shared data are information no more (cf. the general theory of information). My interests would therefore be:
1) possible non-English source projects? I am interested in what could be considered in French, as I lead an ".fra" project in which the .fra name space could serve as a taxonomy (semantic addressing) of an open ontology (in that case a diktyology: a network-structured ontology). http://a-fra.org (an old site we have to revamp).
2) formatted data containers (however they are fed) that could be used to present knowledge, support interaction to explore understandings, and be combined to build comprehension.
Thank you for the links. jfc
-jrf
http://wikimania2012.wikimedia.org/wiki/Submissions/TREC-KBA-Mining-Content-...
TREC KBA - Mining Content Streams to Recommend Page Updates to Editors
Abstract: We have organized a new session in NIST's Text Retrieval Conference (TREC) called Knowledge Base Acceleration (KBA). TREC KBA challenges computer science researchers to develop algorithms that mine content streams, such as news and blogs, to recommend edits to knowledge bases (KBs), such as Wikipedia. We consider a KB to be "large" if the number of entities described by the KB is larger than the number of humans maintaining the KB. As entities change and evolve in the real world, large KBs often lag behind by months or years. Such large KBs are an increasingly important tool in several industries, including biomedical research, law enforcement, and financial services. TREC KBA aims to develop algorithms for helping KB editors stay abreast of changes to the organizations, people, proteins, and other entities described by their KBs. In this technical presentation, we will give an overview of the TREC KBA data sets and tasks for 2012 and future years. In addition to developing text analytics, we are also working on a Wikipedia bot for connecting KBA-type systems to users' talk pages in MediaWiki. After presenting the current state of our bot development, we hope to engage the audience in an open discussion about how such algorithms might be most fruitfully employed in the Wikipedia community.
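To make the KBA task concrete, here is a minimal sketch of a stream filter in the spirit described by the abstract: given a set of KB entities (with aliases), flag incoming documents that mention one, as candidate page updates for an editor to review. All names and data here are illustrative assumptions, not the actual TREC KBA data or code; the real tasks use large annotated corpora and learned relevance scoring rather than exact string matching.

```python
from typing import Dict, Iterable, List, Tuple

# Hypothetical KB: each entity maps to the surface forms (aliases)
# it may appear under in the content stream.
KB_ENTITIES: Dict[str, List[str]] = {
    "Wikimedia Foundation": ["Wikimedia Foundation", "WMF"],
    "TREC": ["Text Retrieval Conference", "TREC"],
}

def recommend_updates(stream: Iterable[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Return (doc_id, entity) pairs for documents mentioning a KB entity."""
    hits = []
    for doc_id, text in stream:
        lowered = text.lower()
        for entity, aliases in KB_ENTITIES.items():
            # Flag the document if any alias appears in its text.
            if any(alias.lower() in lowered for alias in aliases):
                hits.append((doc_id, entity))
    return hits

# Toy "content stream" of (doc_id, text) pairs.
docs = [
    ("d1", "The WMF announced a new data project today."),
    ("d2", "Unrelated sports news."),
]
# recommend_updates(docs) flags d1 as a candidate update for
# "Wikimedia Foundation" and leaves d2 alone.
```

In a real KBA-type system, the matching step would be replaced by entity disambiguation and a learned relevance score, and the output would feed a bot posting suggestions to editors' talk pages.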
(Consider putting your name on the "interested" list in the page linked above.)
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l