Dear users, developers and all people interested in semantic wikis,
We are happy to announce SMWCon Fall 2013 - the 8th Semantic MediaWiki
Conference:
* Dates: October 28th to October 30th 2013 (Monday to Wednesday)
* Location: A&O Berlin Hauptbahnhof, Lehrter Str. 12, 10557 Berlin, Germany
* Conference wikipage: https://semantic-mediawiki.org/wiki/SMWCon_Fall_2013
* Participants: Everybody interested in semantic wikis, especially in
Semantic MediaWiki, e.g., users, developers, consultants, business
representatives, researchers.
SMWCon Fall 2013 will be supported by the Open Semantic Data
Association e. V. [1]. Our platinum sponsor will be WikiVote ltd,
Russia [2].
Following the success of recent SMWCons, we will have one tutorial day
and two conference days.
Participating in the conference: To help us plan, you can already register
informally on the wikipage; a firm registration will be required later.
Contributing to the conference: If you want to present your work at the
conference, please go to the conference wikipage and add your talk there.
To create an attractive program for the conference, we will later ask you
for further information about your proposals.
Tutorials and presentations will be video and audio recorded and made
available after the conference.
==Among others, we encourage contributions on the following topics==
===Applications of semantic wikis===
* Semantic wikis for enterprise workflows and business intelligence
* Semantic wikis for corporate or personal knowledge management
* Exchange on business models with semantic wikis
* Lessons learned (best/worst practices) from using semantic wikis or
their extensions
* Semantic wikis in e-science, e-learning, e-health, e-government
* Semantic wikis for finding a common vocabulary among a group of people
* Semantic wikis for teaching students about the Semantic Web
* Offering incentives for users of semantic wikis
===Development of semantic wikis===
* Semantic wikis as knowledge base backends / data integration platforms
* Comparisons of semantic wiki concepts and technologies
* Community building, feature wishlists, roadmapping of Semantic MediaWiki
* Improving user experience in a semantic wiki
* Speeding up semantic wikis
* Integration and interoperability of semantic wikis with other
applications and mashups
* Modeling of complex domains in semantic wikis, using rules, formulas etc.
* Access control and security aspects in semantic wikis
* Multilingual semantic wikis
If you have questions, you can contact me (Yury Katkov, Program Chair),
Benedikt Kämpgen (General Chair) or Karsten Hoffmeyer (Local Chair)
by e-mail (Cc).
Hope to see you in Berlin!
Yury Katkov, Program Chair
[1] http://www.opensemanticdata.org/
[2] http://wikivote.ru
Heya folks :)
Denny and I will be doing another office hour for all things Wikidata
after Wikimania. Everyone is welcome to ask questions about Wikidata.
We'll be doing this on IRC in #wikimedia-office, starting with a quick
update on the current state of Wikidata and its development. It'll be
on the 26th of August at 16:00 UTC. For your timezone see
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Wikidata+office+ho…
Hope to see many of you there.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Hello,
Can you help me understand the scope of a Wikidata entry please?
What is this Wikidata entry for?
http://www.wikidata.org/wiki/Q272619
Is it for the person Norman Cook and all of his aliases?
Should that title be Fatboy Slim or Norman Cook?
Is it ok that it has different titles in different languages?
Do there have to be separate Wikipedia pages before we can create separate
Wikidata entities for the separate concepts?
In MusicBrainz there are three artists that point to the 'Norman Cook'
Wikipedia page:
http://musicbrainz.org/artist/3150be04-f42f-43e0-ab5c-77965a4f7a7d
http://musicbrainz.org/artist/34c63966-445c-4613-afe1-4f0e1e53ae9a
http://musicbrainz.org/artist/ba81eb4a-0c89-489f-9982-0154b8083a28
Should they all be pointing at the same Wikidata entry too?
Is it ok that there is only a single MusicBrainz identifier in Wikidata?
How is that identifier chosen?
The problem that we are experiencing is that our triplestore is merging
all of these concepts into a single entity, and I am trying to work out
where to break the equivalence, or whether it is even a problem.
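For illustration (using rdflib and owlrl as a stand-in, not our actual
triplestore), here is a minimal sketch of how owl:sameAs links from each
artist to the same Wikidata item end up merging the artists with each other:

# Illustration only: why three MusicBrainz artists collapse into one
# entity when each is declared equivalent to the same Wikidata item.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL
import owlrl

g = Graph()
wd = URIRef("http://www.wikidata.org/entity/Q272619")
artists = [
    URIRef("http://musicbrainz.org/artist/3150be04-f42f-43e0-ab5c-77965a4f7a7d"),
    URIRef("http://musicbrainz.org/artist/34c63966-445c-4613-afe1-4f0e1e53ae9a"),
    URIRef("http://musicbrainz.org/artist/ba81eb4a-0c89-489f-9982-0154b8083a28"),
]
for artist in artists:
    g.add((artist, OWL.sameAs, wd))

# owl:sameAs is symmetric and transitive, so after OWL-RL reasoning
# all three artists are equivalent to each other: one merged entity.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
print((artists[0], OWL.sameAs, artists[1]) in g)  # True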
Thanks!
nick.
Greetings to Wikidata team and community from Semantic MediaWiki team
and community!
It seems that there is already a lot one can do with Wikidata. What
about including some Wikidata tutorials in the tutorial day of the
SMWCon conference?
I can already think of the following exciting topics:
Basic tutorials:
* adding information and querying Wikidata
* using Wikidata extensions in enterprise
Advanced topics:
* using Wikidata API
Surely, there can be a lot more interesting topics than that!
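To give a taste of the Wikidata API topic, here is a minimal sketch
(standard library only; Q42 is just an example item) that fetches an
item's English label and description via wbgetentities:

# Minimal sketch: fetch an item's English label and description from
# the public Wikidata API (wbgetentities module).
import json
import urllib.request

url = ("https://www.wikidata.org/w/api.php"
       "?action=wbgetentities&ids=Q42&props=labels|descriptions"
       "&languages=en&format=json")
with urllib.request.urlopen(url) as response:
    data = json.loads(response.read().decode("utf-8"))

entity = data["entities"]["Q42"]
print(entity["labels"]["en"]["value"])
print(entity["descriptions"]["en"]["value"])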
Of course, all the tutorials will be video recorded and can then be
used as learning materials.
If you're interested in giving a tutorial, please read our Call for
Tutorials [1], write a short proposal, and contact me.
-----
Cheers,
Yury Katkov, WikiVote
[1] http://semantic-mediawiki.org/wiki/SMWCon_Fall_2013/Call_for_Tutorials
On 06/21/2013 08:00 AM, Aubrey wrote:
> Another dream of mine is an annotator that could save "facts" in Wikidata
> statements.
> We could read a newspaper online, or a book, or an article on a scientific
> blog, and highlight a short sentence, and this sentence would be a
> statement (Item has a Property Value), with a source (the original
> document).
> I bet this is not *so* difficult.
At first I thought you meant that it would be good to implement this in
Zotero https://www.zotero.org/ , Annotator
https://github.com/okfn/annotator , or a similar tool, to help a user
keep track of their own favorite Wikidata facts. Now I understand :)
that you'd like, perhaps, a client-side browser plugin or script that
takes some highlighted text, offers the user a GUI to fix up the
statement and source, and then feeds it into Wikidata. Am I right?
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
Hi Jacobo,
I hope you don't mind that I share the answer with the list. I think the
answer to this question might be of general interest.
The JavaScript creating the visualization in the browser is here:
<https://dl.dropboxusercontent.com/u/172199972/map/map.js>
As you can see, it is just a simple usage of the HTML5 canvas.
It requires two data files like these (careful, they are large):
<https://dl.dropboxusercontent.com/u/172199972/map/wdlabel.js>
<https://dl.dropboxusercontent.com/u/172199972/map/graph.js>
The first contains all items, their latitude/longitude, and their labels.
The second contains the graph, the way items are connected to each other.
The latter two files are created by the following Python scripts, in two
steps. First, you need to create the knowledge base. This can be done with
the following scripts:
<https://github.com/mkroetzsch/wda>
From that repository, use the script
<https://github.com/mkroetzsch/wda/blob/master/wda-analyze-edits-and-write-k…>
Be careful when you run it: it will download all Wikidata dumps. This
might need a few gigabytes of free space and a decent internet connection.
Now, you should have the file kb.txt.gz, containing the knowledge base.
By the way, you can also download the knowledge base as it is created
nightly by us here:
<https://dl.dropboxusercontent.com/u/172199972/kb.txt.gz>
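For a first look at that file, a small sketch (Python 3, standard
library only; the exact line format is whatever wda emits, so inspect
it before parsing in earnest):

# Sketch: download the nightly knowledge base and print its first lines.
import gzip
import urllib.request

URL = "https://dl.dropboxusercontent.com/u/172199972/kb.txt.gz"
urllib.request.urlretrieve(URL, "kb.txt.gz")

with gzip.open("kb.txt.gz", "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        print(line.rstrip())
        if i >= 9:  # the first ten lines are enough for a first look
            break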
Finally, you will need a few scripts from here:
<https://github.com/vrandezo/wikidata-analytics>
Run them in the following order:
geolabel.py - extracts a list of all locations and their labels from the
knowledge base
<https://github.com/vrandezo/wikidata-analytics/blob/master/geolabel.py>
geolabel2wdlabel.py - transforms the list to JavaScript for ready
consumption by the Wikidata Map Interface
<https://github.com/vrandezo/wikidata-analytics/blob/master/geolabel2wdlabel…>
geo.py - extracts a list of all locations from the knowledge base
<https://github.com/vrandezo/wikidata-analytics/blob/master/geo.py>
graph.py - extracts the simple knowledge graph from the knowledge base
<https://github.com/vrandezo/wikidata-analytics/blob/master/graph.py>
geograph.py - extracts the part of the simple knowledge graph that connects
geographical items with each other (needs geo and graph)
<https://github.com/vrandezo/wikidata-analytics/blob/master/geograph.py>
geograph2geojs.py - transforms the geograph to JavaScript for ready
consumption by the Wikidata Map Interface
<https://github.com/vrandezo/wikidata-analytics/blob/master/geograph2geojs.py>
This should give you the two files wdlabel.js and graph.js, which will be
called by the Wikidata Map Interface (see its HTML source in order to see
how).
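If you want to run all six steps in one go, here is a minimal driver
sketch (it assumes each script runs from the current directory without
arguments; verify that against the repository before relying on it):

# Sketch of a driver for the pipeline above. Order matters:
# geograph.py needs the output of geo.py and graph.py.
import subprocess
import sys

PIPELINE = [
    "geolabel.py",          # locations + labels from the knowledge base
    "geolabel2wdlabel.py",  # -> wdlabel.js for the map interface
    "geo.py",               # locations only
    "graph.py",             # simple knowledge graph
    "geograph.py",          # geographic part of the graph
    "geograph2geojs.py",    # -> graph.js for the map interface
]

for script in PIPELINE:
    print("Running %s ..." % script)
    if subprocess.call([sys.executable, script]) != 0:
        sys.exit("Step %s failed" % script)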
This process is run nightly on a machine we have standing here in the
office. I am planning to set this up on Labs, but haven't found the time
yet.
I hope this helps,
Denny
2013/7/29 Jacobo Nájera <jacobo(a)metahumano.org>
> Hi Denny,
>
> I am interested in the Wikidata Map Interface. Where can I see and
> download the code? I want to experiment with it and document it.
>
> Thanks,
> Jacobo
>
> --
> Wikimedia México
>
>
--
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de
These ideas are also relevant to Wikidata, so I'm forwarding to the
general and tech lists. Besides the people listed in the forwarded
email, I was also in the discussion.
Matt Flaschen
Hello all,
I am happy to announce that all interwiki links from all articles, templates, and project pages (except some archive pages) have been moved to Wikidata. This includes the removal of all local interwikis. Along the way, I roughly checked all pages to verify that they are connected to the right article on Wikidata.
I resolved a lot of interwiki conflicts, often involving disambiguation pages. I also made sure that every article has an item on Wikidata.
The Dutch Wikivoyage is the first Wikivoyage to have fully switched to Wikidata.
Greetings,
Romaine
-------- Original Message --------
Subject: [Wikitech-l] Announcing: the Miga Data Viewer
Date: Wed, 24 Jul 2013 12:33:24 -0400
From: Yaron Koren <yaron(a)wikiworks.com>
Reply-To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
To: Wikimedia developers <Wikitech-l(a)lists.wikimedia.org>
Hi,
A project I've been working on for the last three months, via a Wikimedia
Individual Engagement Grant, finally had its first release today. It's the
Miga Data Viewer, and it provides a lightweight framework for browsing and
navigating through structured data in CSV files, which allows for easy
browsing through, among other things, Wikipedia and Wikidata data. You can
read more about it here:
http://wikiworks.com/blog/2013/07/23/announcing-miga/
...and on the Miga homepage, where the software can also be downloaded:
http://migadv.com
Thanks,
Yaron
--
WikiWorks · MediaWiki Consulting · http://wikiworks.com
http://tabula.nerdpower.org/ Tabula: Turn tables within PDFs into CSVs.
More information at http://source.mozillaopennews.org/en-US/code/tabula/ .
I imagine there are some people on this list who have access to PDFs of
openly licensed data they'd like to get into Wikidata (from corporate or
government sources who don't provide easy-to-work-with dumps or APIs).
I heard about Tabula last night and thought the following flow sounded
plausible:
1) get PDFs
2) run them through Tabula to get CSVs
3) use a pywikipediabot script to upload rows to Wikidata (rough sketch below)
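Here is a minimal sketch of step 3 using pywikibot (the current
incarnation of pywikipediabot). "P000" and the CSV column names are
placeholders, and a string-valued property is assumed; adapt both to
the actual data:

# Sketch: add one string-valued claim per CSV row to Wikidata.
# Expects columns "qid" and "value"; "P000" is a placeholder property.
import csv
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

with open("data.csv") as f:
    for row in csv.DictReader(f):
        item = pywikibot.ItemPage(repo, row["qid"])
        claim = pywikibot.Claim(repo, "P000")  # placeholder property ID
        claim.setTarget(row["value"])          # works for string datatypes
        item.addClaim(claim, summary="Importing from PDF-derived CSV")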
Happy adding!
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation