Dear users, developers and all people interested in semantic wikis,
We are happy to announce SMWCon Fall 2013, the 8th Semantic MediaWiki Conference:
* Dates: October 28th to October 30th 2013 (Monday to Wednesday)
* Location: A&O Berlin Hauptbahnhof, Lehrter Str. 12, 10557 Berlin, Germany
* Conference wikipage: https://semantic-mediawiki.org/wiki/SMWCon_Fall_2013
* Participants: Everybody interested in semantic wikis, especially in
Semantic MediaWiki, e.g., users, developers, consultants, business people
SMWCon Fall 2013 will be supported by the Open Semantic Data
Association e.V. Our platinum sponsor will be WikiVote Ltd.
Following the success of recent SMWCons, we will have one tutorial day
and two conference days.
Participating in the conference: To help us plan, you can already
informally register on the conference wikipage; a firm registration will
be required later.
Contributing to the conference: If you want to present your work at
the conference, please go to the conference wikipage and add your talk
there. To create an attractive program, we will later ask you for
further information about your proposal.
Tutorials and presentations will be video and audio recorded and will
be made available for others after the conference.
==Among others, we encourage contributions on the following topics==
===Applications of semantic wikis===
* Semantic wikis for enterprise workflows and business intelligence
* Semantic wikis for corporate or personal knowledge management
* Exchange on business models with semantic wikis
* Lessons learned (best/worst practices) from using semantic wikis
* Semantic wikis in e-science, e-learning, e-health, e-government
* Semantic wikis for finding a common vocabulary among a group of people
* Semantic wikis for teaching students about the Semantic Web
* Offering incentives for users of semantic wikis
===Development of semantic wikis===
* Semantic wikis as knowledge base backends / data integration platforms
* Comparisons of semantic wiki concepts and technologies
* Community building, feature wishlists, roadmapping of Semantic MediaWiki
* Improving user experience in a semantic wiki
* Speeding up semantic wikis
* Integrations and interoperability of semantic wikis with other
applications and mashups
* Modeling of complex domains in semantic wikis, using rules, formulas etc.
* Access control and security aspects in semantic wikis
* Multilingual semantic wikis
If you have questions, you can contact me (Yury Katkov, Program Chair),
Benedikt Kämpgen (General Chair) or Karsten Hoffmeyer (Local Chair)
by e-mail (Cc).
Hope to see you in Berlin!
Yury Katkov, Program Chair
I am reading the Wikidata documentation, where I learned that new
properties can be suggested for discussion. But this means adding new
properties to Wikidata. Is it possible, instead, to use existing RDF
vocabularies like the RDF implementation <http://www.rdaregistry.info/>
of RDA <http://www.loc.gov/aba/rda/>, a cataloging standard based on the
FRBR conceptual model (Functional Requirements for Bibliographic
Records; see also "What is FRBR?" <http://www.loc.gov/cds/downloads/FRBR.PDF>)?
Even if this might be considered librarian stuff, the FRBR conceptual
model is an interesting way of expressing the relations between any work
(book, music, movie...) and its authors, because it makes a distinction
between the work ("20.000 lieues sous les mers", the novel written by
Jules Verne) and its manifestations (the publication of this novel by
Hetzel in Paris in 1871). FRBR suggests two more levels: "expression"
(which I don't fully understand yet) and "item" (a single copy of the
book). This model was used by the Bibliothèque Nationale de France (BNF)
for its web site data.bnf.fr, the open data portal of the BNF.
What I mean is that I could ask for a new Wikidata property like
p:writerOf, but why not use rdaw:author instead (rdaw:
http://rdaregistry.info/Elements/w/)? Or, rather than having a Wikidata
property p:workTitle, why not use rdaw:titleOfTheWork?
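To illustrate, reusing an existing vocabulary could be as simple as keeping a lookup table from local property IDs to RDA element IRIs when exporting statements as RDF. This is only a sketch: the property names below (P_author, P_workTitle) and the example IRIs are invented for the example; rdaw: is the RDA "work" element namespace mentioned above.

```python
# Hypothetical mapping from local (Wikidata-style) property IDs to RDA
# element IRIs. The property names here are invented for illustration.
RDAW = "http://rdaregistry.info/Elements/w/"

PROPERTY_MAP = {
    "P_author": RDAW + "author",             # instead of a new p:writerOf
    "P_workTitle": RDAW + "titleOfTheWork",  # instead of a new p:workTitle
}

def to_ntriple(work_iri, local_property, value_iri):
    """Render a statement as one N-Triples line, translating the local
    property ID to its RDA equivalent when one exists."""
    predicate = PROPERTY_MAP.get(local_property, local_property)
    return f"<{work_iri}> <{predicate}> <{value_iri}> ."

print(to_ntriple("http://example.org/work/20000-lieues",
                 "P_author",
                 "http://example.org/person/jules-verne"))
```

This way the community could mint local properties for editing convenience while still exposing them under well-known vocabulary terms in RDF exports.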
Database processing and analysis (Traitement et analyse de bases de données)
Centre de Recherche Bretonne et Celtique
20 rue Duquesne
29238 Brest cedex 3
tel : +33 (0)2 98 01 68 95
fax : +33 (0)2 98 01 63 93
Lydia is focusing on some Outreach tasks at the moment, so I have
volunteered to make this announcement. The development team is currently
planning to enable Phase 2 for all language editions of Wikiquote on June
10th. For those who don't know, Phase 2 enables data access from
Wikiquote to Wikidata and vice versa. Lydia also wants to thank all users
who helped make the Phase 1 deployment a successful launch on all Wikiquote
wikis.
Thanks, John Lewis
Since the very beginning I have kept myself busy with properties:
thinking about which ones fit, which ones are missing to better describe
reality, and how to integrate new ones into those we already have. The
thing is, the more I work with them, the less difference I see between
them and normal items... and if statements are soon allowed on property
pages, the difference will blur even more.
I can understand that from the software development point of view it might
make sense to have a clear difference. Or for the community to get a deeper
understanding of the underlying concepts represented by words.
But semantically I see no difference between:
cement (Q45190) <emissivity (P1295)> 0.54
cement (Q45190) <emissivity (Q899670)> 0.54
Am I missing something here? Are properties really needed or are we adding
unnecessary artificial constraints?
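One concrete difference does show up at the data-model level: in a Wikidata-style statement, the predicate slot is declared to take a P-id, while items live in the subject (and value) slots, so the two are not interchangeable in software even if they feel semantically alike. A simplified sketch (the field names loosely follow the public JSON serialization, but this is an illustration, not the real schema):

```python
# Simplified sketch of a Wikidata-style statement. Field names loosely
# follow the public JSON serialization; this is not the full schema.
def make_statement(item_id, property_id, amount):
    if not item_id.startswith("Q"):
        raise ValueError("subject must be an item (Q-id)")
    if not property_id.startswith("P"):
        raise ValueError("predicate must be a property (P-id)")
    return {
        "subject": item_id,
        "mainsnak": {
            "snaktype": "value",
            "property": property_id,
            "datavalue": {"type": "quantity", "value": {"amount": str(amount)}},
        },
    }

make_statement("Q45190", "P1295", 0.54)       # cement / emissivity: accepted
# make_statement("Q45190", "Q899670", 0.54)   # rejected: Q899670 is an item
```

Whether that software-level distinction is semantically justified, or just an artifact of the implementation, is exactly the question raised here.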
I share your dissatisfaction with "part of", because that language construct
hides many different conceptual relationships that should be teased apart; I
think we'll have some community discussion work to do in that regard. One
of the uses is: what is the relationship between a human and his behavior?
I would say that the "human" <has been defined as having> "human behavior"
(or the reverse). But if you have a better suggestion to express this
concept I would be really glad to hear it.
Now that you mention it, yes, I agree that only a property called
"corresponds with item" makes sense in this context, but not the inverse.
I would like to make a further distinction regarding constraints. The
nature of constraints is not to set arbitrary limits but to reflect
patterns that naturally appear in concepts. In that regard, I hate the
word "constraint", because it implies that we are placing a
"straitjacket" on reality, when it is the other way round: recurring
patterns in the real world make us "expect" that a value will fall
within certain bounds.
I think that we should seriously consider using the term "expectation"
from now on, because we don't "constrain" the values per se; we "expect"
them to have a certain value, and when a value departs from the expected
one, an alarm is raised that may or may not reflect an error.
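A minimal sketch of this "expectation, not constraint" idea: the check never rejects a value, it only flags departures for review. The property names and ranges below are invented for illustration.

```python
# Expectations flag unusual values instead of rejecting them outright.
# Property names and ranges here are invented for illustration.
EXPECTATIONS = {
    "emissivity": lambda v: 0.0 <= v <= 1.0,  # physical bounds
    "human age": lambda v: 0 <= v <= 130,     # observed pattern
}

def check(prop, value):
    """Return a list of warnings; never raise, never block the edit."""
    expected = EXPECTATIONS.get(prop)
    if expected is None or expected(value):
        return []
    return [f"{prop}={value} departs from the expected pattern; possible error"]

check("emissivity", 0.54)  # no warnings
check("emissivity", 1.7)   # flagged for review, but still storable
```

The key design choice is that `check` returns warnings rather than raising: the unusual value is stored regardless, and a human decides whether it is an error or a genuine exception.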
Having made that distinction, yes, you are right: given that we
separate properties and items, our expectations do not belong to the
data itself; they belong to the property.
However, I would like to bring the conversation to a deeper level.
What is it that makes the concept of "addition (Q32043)" what it is? What
is in "physical object (Q223557)" that we, sentient beings, can perceive
and agree to treat as a concept? I mention those two because one is purely
abstract, and the other one is purely physical. And I would say that
"addition (Q32043)" <has been defined as having> "associativity (Q177251)"
and "physical object (Q223557)" <has been repeatedly observed to have>
"density (Q29539)". We can argue whether the second is an expectation or
not, but the first definitely is not: someone defined "addition" that way,
and this information can be sourced. Even more, we could also say that
"physical object (Q223557)" <has been defined as having> "density
(Q29539)", and I guess we could find sources for that statement too.
With all this I want to make the point that there are two sources of
expectations:
- from our experience of seeing repetitions and patterns in the values
(male/female/etc., "between 10 and 50"), which belong to the property
- from the agreed definition of the concept itself, which belongs to the
data
PS: this is a re-post because my previous message was bounced back "for
being too long" :)
** Apologies for cross-posting **
2nd International Workshop on
(Document) Changes: Modeling, Detection, Storage and Visualization
Part of ACM DocEng 2014
September 16th, 2014, Fort Collins, near Denver, Colorado
/*** DEADLINE EXTENDED TO: JUNE 13th (Short Abstracts are due: June 6th) ***/
CALL FOR PAPERS
DChanges 2014 is the second edition of the International Workshop on (Document) Changes: Modeling, Detection, Storage and Visualization in conjunction with the ACM Symposium on Document Engineering. This year, the workshop will be held in Fort Collins, near Denver, Colorado in September 2014.
The goal of this series of events is to share ideas, common issues and principles about models and algorithms for change tracking and detection, versioning and collaborative editing. We want to look at these topics from different perspectives and want to identify the most common issues and the peculiarities of each domain and each approach. The workshop aims at bringing together researchers and practitioners from industry and academia, to discuss these issues in an informal setting and to foster collaboration among them.
The 2014 edition will be focused on the interpretation, visualisation and exploitation of changes. One of last edition's outcomes was that we identified the need for novel interfaces to better understand and exploit detected changes. Several issues were pointed out as still unsolved: interfaces do not scale when dealing with many changes, changes at different levels of abstraction are often not sufficiently taken into account, detection and visualization are often inter-mixed, logs are often detailed but underexploited, and versioning techniques are not very well suited for non-technical people.
We seek contributions on, but not necessarily limited to, the following
topics; submissions on other related topics are also welcome:
* Diffing and change tracking algorithms
* Detecting changes on complex data structures
* High-level differences
* Change modeling and representation
* Novel approaches to tree-based diff
* Detecting changes on trees, graphs, diagrams and any kind of document
* Edit-distance measures
* Quality of deltas and patches
* Editing patterns
* Semantic diff
* Management of update conflicts
* N-way merge algorithms
* Propagation of changes
* Applications of diff techniques from and to other domains
** software engineering, ontology management, humanities, law, medicine
* Versioning systems
* Collaborative editors
The workshop will run a full day and will be divided into two parts, in order to emphasize both theoretical/algorithmic aspects and practical applications. Ample space will be given to peer discussions and brainstorming about the results of the presentations and the ideas brought forth by participants.
A detailed schedule will be announced in July.
We are honoured to announce that the keynote will be given by Jean-Yves Vion-Dury.
We will publish workshop post-proceedings via the ACM International Conference Proceedings Series.
Authors are required to submit an extended abstract (2-4 pages long) that will undergo a single blind review process. Accepted extended abstracts will be available during the workshop. The best extended abstracts will also be included in the DocEng proceedings.
Full papers (4-8 pages long) are due after the workshop and will be included in the post-proceedings.
Authors are required to submit an extended abstract before the workshop and a full paper after the workshop:
* Short Abstracts are due: June 6th
* Extended Abstracts (2-4 pages) are due: June 13th /*** extended deadline ***/
* Acceptance notice: July 4th
* Camera ready: July 20th
* Workshop: September 16th
* Full papers are due (4-8 pages): October 3rd
* Acceptance notice: October 24th
* Camera ready: November 14th
Papers must be submitted to the EasyChair site (available soon).
Two types of submissions are possible:
* Application/demo notes: showcasing systems or tools
** Extended abstract: 2 pages long
** Full paper: 4 pages long
* Research papers: describing original and unpublished research
** Extended abstract: 4 pages long
** Full paper: 8 pages long
All papers must conform to the ACM SIG Proceedings format. All submissions will undergo a rigorous single blind review process.
Organizers:
* Gioele Barabucci, Universität zu Köln
* Uwe M. Borghoff, Universität der Bundeswehr München
* Angelo Di Iorio, Università di Bologna
* Sonja Maier, Universität der Bundeswehr München
* Ethan Munson, University of Wisconsin-Milwaukee
For any question, please contact <dchanges(a)lists.cs.unibo.it>.
Program committee:
* Serge Autexier, DFKI Bremen
* Boris Konev, University of Liverpool
* John Lumley, PhD
* Pascal Molli, Université de Nantes - LINA
* Sebastian Rönnau, Zalando AG
* Wolfgang Stürzlinger, York University
* Yannis Tzitzikas, University of Crete and FORTH-ICS
* Fabio Vitali, Università di Bologna
* Jean-Yves Vion-Dury, Xerox Research Centre Europe
Hey folks :)
I have a watchlist with a bunch of articles/items on Wikidata and
German and English Wikipedia and Wikivoyage. There isn't much of an
overlap though and I'd especially like to watch all articles for the
items on my Wikidata watchlist also on Wikipedia.
Does a tool already exist that syncs watchlists across our wikis? And
if not is anyone here up for doing that? Maybe using OAuth?
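In case someone picks this up: the core of such a tool is just a set difference over the titles watched on each wiki. The actual fetching and adding would need OAuth-authenticated calls to each wiki's api.php (e.g. action=query&list=watchlistraw), and Wikidata items would first have to be mapped to their sitelinked article titles; both steps are left out of this sketch of the planning logic.

```python
# Core of a hypothetical watchlist-sync tool: given the titles currently
# watched on each wiki (after mapping Wikidata items to their sitelinked
# article titles), compute which titles each wiki is missing.
# Fetching and adding would use authenticated MediaWiki API calls,
# omitted here.
def sync_plan(watchlists):
    """watchlists: dict of wiki name -> set of watched titles.
    Returns: dict of wiki name -> titles to add so all watchlists match."""
    union = set().union(*watchlists.values())
    return {wiki: union - titles for wiki, titles in watchlists.items()}

plan = sync_plan({
    "enwiki": {"Berlin", "Hamburg"},
    "dewiki": {"Berlin"},
})
# plan["dewiki"] == {"Hamburg"}; plan["enwiki"] is empty
```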
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as a non-profit
by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.