Dear users, developers and all people interested in semantic wikis,
We are happy to announce SMWCon Fall 2013, the 8th Semantic MediaWiki Conference:
* Dates: October 28th to October 30th 2013 (Monday to Wednesday)
* Location: A&O Berlin Hauptbahnhof, Lehrter Str. 12, 10557 Berlin, Germany
* Conference wikipage: https://semantic-mediawiki.org/wiki/SMWCon_Fall_2013
* Participants: Everybody interested in semantic wikis, especially in
Semantic MediaWiki, e.g., users, developers, consultants, and business people
SMWCon Fall 2013 will be supported by the Open Semantic Data
Association e.V. Our platinum sponsor will be WikiVote Ltd.
Following the success of recent SMWCons, we will have one tutorial day
and two conference days.
Participating in the conference: To help us plan, you can already
informally register on the wikipage, although a firm registration will
be needed later.
Contributing to the conference: If you want to present your work in
the conference please go to the conference wikipage and add your talk
there. To create an attractive program for the conference, we will
later ask you to give further information about your proposals.
Tutorials and presentations will be video and audio recorded and will
be made available for others after the conference.
==Among others, we encourage contributions on the following topics==
===Applications of semantic wikis===
* Semantic wikis for enterprise workflows and business intelligence
* Semantic wikis for corporate or personal knowledge management
* Exchange on business models with semantic wikis
* Lessons learned (best/worst practices) from using semantic wikis
* Semantic wikis in e-science, e-learning, e-health, e-government
* Semantic wikis for finding a common vocabulary among a group of people
* Semantic wikis for teaching students about the Semantic Web
* Offering incentives for users of semantic wikis
===Development of semantic wikis===
* Semantic wikis as knowledge base backends / data integration platforms
* Comparisons of semantic wiki concepts and technologies
* Community building, feature wishlists, roadmapping of Semantic MediaWiki
* Improving user experience in a semantic wiki
* Speeding up semantic wikis
* Integrations and interoperability of semantic wikis with other
applications and mashups
* Modeling of complex domains in semantic wikis, using rules, formulas etc.
* Access control and security aspects in semantic wikis
* Multilingual semantic wikis
If you have questions, you can contact me (Yury Katkov, Program Chair),
Benedikt Kämpgen (General Chair) or Karsten Hoffmeyer (Local Chair)
by e-mail (Cc).
Hope to see you in Berlin!
Yury Katkov, Program Chair
Heya folks :)
Denny and I will be doing another office hour for all things Wikidata
after Wikimania. Everyone is welcome to ask questions about Wikidata.
We'll be doing this on IRC in #wikimedia-office, starting with a quick
update on the current state of Wikidata and its development. It'll be
on the 26th of August at 16:00 UTC. For your timezone see
Hope to see many of you there.
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
(Society for the Promotion of Free Knowledge). Registered in the register
of associations of the Amtsgericht Berlin-Charlottenburg under number
23855 Nz. Recognized as charitable by the Finanzamt für Körperschaften I
Berlin, tax number 27/681/51985.
Can you help me understand the scope of a Wikidata entry please?
What is this Wikidata entry for?
Is it for the person Norman Cook and all of his aliases?
Should that title be Fatboy Slim or Norman Cook?
Is it ok that it has different titles in different languages?
Do there have to be separate Wikipedia pages before we can create separate
Wikidata entities for the separate concepts?
In MusicBrainz there are three artists that point to the 'Norman Cook' Wikipedia page.
Should they all be pointing at the same Wikidata entry too?
Is it ok that there is only a single MusicBrainz identifier in Wikidata?
How is that identifier chosen?
The problem that we are experiencing is that our Triplestore is merging
all these concepts together into a single entity and I am trying to work
out where to break the equivalence, or if it is even a problem.
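The merging behavior described above is the standard consequence of treating identity links (e.g., owl:sameAs) as symmetric and transitive: any chain of links collapses all connected identifiers into one equivalence class. A minimal sketch of that mechanism, using hypothetical MusicBrainz and Wikidata identifiers (the IDs below are made up for illustration):

```python
# Sketch: why sameAs-style links merge several artists into one entity.
# A triplestore treats identity links as symmetric and transitive, so
# the equivalence classes are the connected components of the link graph.
from collections import defaultdict

def equivalence_classes(same_as_pairs):
    """Group identifiers connected by identity links (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in same_as_pairs:
        union(a, b)

    classes = defaultdict(set)
    for node in parent:
        classes[find(node)].add(node)
    return list(classes.values())

# Hypothetical links: three MusicBrainz artists each point (directly or
# via a Wikipedia page) at the same Wikidata item.
links = [
    ("mb:artist/fatboy-slim", "wd:Q-example"),
    ("mb:artist/norman-cook", "wd:Q-example"),
    ("mb:artist/beats-international", "wd:Q-example"),
]
print(equivalence_classes(links))
# All four identifiers fall into a single equivalence class.
```

Breaking the merge means removing (or weakening to a non-identity relation) at least one link in each unwanted chain.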
Greetings to the Wikidata team and community from the Semantic MediaWiki team!
It seems that a lot of things are already possible with Wikidata.
What about including some Wikidata tutorials in the tutorial day of
the SMWCon conference?
I can already think of the following exciting topics:
* adding information and querying Wikidata
* using Wikidata extensions in enterprise
* using Wikidata API
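As a taste of what a "using the Wikidata API" tutorial might cover, here is a minimal sketch that builds a `wbgetentities` request against the public API endpoint (the action and parameter names are the standard Wikibase ones; Q42 is just an example item):

```python
# Sketch: constructing a Wikidata API request URL for one item.
# Fetching it with any HTTP client returns the item's labels,
# descriptions, and claims as JSON.
from urllib.parse import urlencode

API = "https://www.wikidata.org/w/api.php"

def wbgetentities_url(item_id, languages="en"):
    """Return the URL that fetches data for a single item."""
    params = {
        "action": "wbgetentities",
        "ids": item_id,
        "languages": languages,
        "format": "json",
    }
    return API + "?" + urlencode(params)

print(wbgetentities_url("Q42"))
```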
Surely, there can be a lot more interesting topics than that!
Of course, all the tutorials will be video recorded and can then be
used as learning materials.
If you're interested in giving a tutorial, please read our Call for
Tutorials, write a short proposal, and contact me.
Yury Katkov, WikiVote
On 06/21/2013 08:00 AM, Aubrey wrote:
> Another dream of mine is an annotator that could save "facts" in Wikidata
> We could read a newspaper online, or a book, or an article on a scientific
> blog, and highlight a short sentence, and this sentence would be a
> statement (Item has a Property Value), with a source (the original
> I bet this is not *so* difficult.
At first I thought you meant that it would be good to implement this in
Zotero https://www.zotero.org/ , Annotator
https://github.com/okfn/annotator , or a similar tool, to help a user
keep track of their own favorite Wikidata facts. Now I understand :)
that you'd like, perhaps, a client-side browser plugin or script that
takes some highlighted text, offers the user a GUI to fix up the
statement and source, and then feeds it into Wikidata. Am I right?
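The last step of such a plugin, turning a user-confirmed (item, property, value) triple into an API request, could look roughly like this. This is only a sketch: `wbcreateclaim` is the standard Wikibase API action, but a real edit also needs an authenticated session and edit token, and the item/property/value below are hypothetical:

```python
# Sketch: building the form data for a Wikidata wbcreateclaim request
# from a statement the user confirmed in the plugin's GUI.
import json

def claim_payload(item_id, property_id, value, token="+\\"):
    """Build form data for wbcreateclaim; "+\\" is the anonymous token."""
    return {
        "action": "wbcreateclaim",
        "entity": item_id,
        "property": property_id,
        "snaktype": "value",
        # String values are passed as a JSON-encoded literal.
        "value": json.dumps(value),
        "token": token,
        "format": "json",
    }

# Hypothetical statement extracted from a highlighted sentence:
payload = claim_payload("Q42", "P69", "some school")
print(payload["value"])
```

Attaching the source would be a follow-up `wbsetreference` call on the created claim.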
Engineering Community Manager
I am happy to announce that all interwikis from all articles, templates, and project pages (except some archive pages) have been moved to Wikidata. This includes the removal of all local interwikis. Along the way, I roughly checked all pages to see if they are connected to the right article on Wikidata.
I solved a lot of interwiki conflicts, often involving disambiguation pages. I also made sure that every article has an item on Wikidata.
The Dutch Wikivoyage is the first Wikivoyage to fully switch to Wikidata.
-------- Original Message --------
Subject: [Wikitech-l] Announcing: the Miga Data Viewer
Date: Wed, 24 Jul 2013 12:33:24 -0400
From: Yaron Koren <yaron(a)wikiworks.com>
Reply-To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
To: Wikimedia developers <Wikitech-l(a)lists.wikimedia.org>
A project I've been working on for the last three months, via a Wikimedia
Individual Engagement Grant, finally had its first release today. It's the
Miga Data Viewer, and it provides a lightweight framework for browsing and
navigating through structured data in CSV files, which allows for easy
browsing through, among other things, Wikipedia and Wikidata data. You can
read more about it here:
...and on the Miga homepage, where the software can also be downloaded:
WikiWorks · MediaWiki Consulting · http://wikiworks.com
http://tabula.nerdpower.org/ Tabula: Turn tables within PDFs into CSVs.
More information at http://source.mozillaopennews.org/en-US/code/tabula/ .
I imagine there are some people on this list who have access to PDFs of
openly licensed data they'd like to get into Wikidata (from corporate or
government sources who don't provide easy-to-work-with dumps or APIs).
I heard about Tabula last night and thought the following flow sounded promising:
1) get PDFs
2) run them through Tabula to get CSVs
3) use a pywikipediabot script to upload rows to Wikidata
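Step 3 might start out something like the sketch below, which parses a Tabula-style CSV into candidate statements. The column names and row values are assumptions for illustration (real columns depend on the source PDF), and the actual upload via a bot script or the API is not shown:

```python
# Sketch: turning Tabula CSV output into (item, property, value)
# triples ready to hand to an upload script.
import csv
import io

# Example rows: item and property IDs are real-looking Wikidata IDs,
# but the values are made up for illustration.
SAMPLE_CSV = """item,property,value
Q64,P1082,3500000
Q1055,P1082,1800000
"""

def rows_to_statements(csv_text):
    """Parse CSV text into a list of (item, property, value) tuples."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(r["item"], r["property"], r["value"]) for r in reader]

for stmt in rows_to_statements(SAMPLE_CSV):
    print(stmt)
```

Each tuple would then become one claim-creation call in the bot script.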
Engineering Community Manager