Hi all,
In response to several requests, we have extended the submission deadline
for the Wiki Workshop @ WWW 2017 by one week. It is now on *January 31,
2017* (end of day anywhere on Earth).
Note that this deadline applies only to authors who want their contribution
to appear in the conference proceedings. Authors who do not want their
contribution to appear in the proceedings may submit by the later
deadline (February 26, 2017).
*We emphasize that we explicitly encourage the submission of preliminary
work in the form of extended abstracts (1 or 2 pages).*
We look forward to your submissions! If you have any further questions,
don't hesitate to contact us at wikiworkshop(a)googlegroups.com.
Robert West, EPFL
Leila Zia, Wikimedia Foundation
Dario Taraborelli, Wikimedia Foundation
Jure Leskovec, Stanford University
------------------
Wiki Workshop 2017
Held at *WWW 2017* (International World Wide Web Conference), Perth,
Australia, April 4, 2017
Workshop webpage:
http://www.wikiworkshop.org
CALL FOR CONTRIBUTIONS
The goal of this workshop is to bring together researchers exploring all
aspects of Wikimedia websites such as Wikipedia, Wikidata, and Commons.
With members of the Wikimedia Foundation's Research team on the organizing
committee and with the experience of successful workshops in 2015
<http://snap.stanford.edu/wiki-icwsm15/> and 2016
<http://snap.stanford.edu/wikiworkshop2016/>, we aim to continue
facilitating a direct pathway for exchanging ideas between the organization
that operates Wikimedia websites and the researchers interested in studying
them.
Topics of interest include, but are not limited to:
- new technologies and initiatives to grow content, quality, diversity,
and participation across Wikimedia projects
- use of bots, algorithms, and crowdsourcing strategies to curate,
source, or verify content and structured data
- bias in content and gaps of knowledge
- diversity of Wikimedia editors and users
- understanding editor motivations, engagement models, and incentives
- motivations and needs of Wikimedia consumers: readers, researchers,
tool/API developers
- innovative uses of Wikipedia and other Wikimedia projects for AI and
NLP applications
- consensus-finding and conflict resolution on editorial issues
- participation in discussions and their dynamics
- dynamics of content reuse across projects and the impact of policies
and community norms on reuse
- privacy
- collaborative content creation (unstructured, semi-structured, or
structured)
- collaborative task management
- innovative uses of Wikimedia projects' content and consumption
patterns as sensors for real-world events, culture, etc.
Papers should be 1 to 8 pages long and will be published on the workshop
webpage and optionally (depending on the authors' choice) in the workshop
proceedings. Authors whose papers are accepted to the workshop will have
the opportunity to participate in a poster session.
We explicitly encourage the submission of preliminary work in the form of
extended abstracts (1 or 2 pages).
KEY DATES
If authors want their paper to appear in the proceedings:
- Submission deadline: *January 31, 2017* (end of day anywhere on Earth)
- Author feedback: February 7, 2017
- Camera-ready version due: February 14, 2017
If authors *do not* want their paper to appear in the proceedings:
- Submission deadline: *February 26, 2017*
- Author feedback: March 7, 2017
Please see workshop webpage for formatting and submission instructions.
ORGANIZATION
Robert West, EPFL
Leila Zia, Wikimedia Foundation
Dario Taraborelli, Wikimedia Foundation
Jure Leskovec, Stanford University
CONTACT
Please direct your questions to wikiworkshop(a)googlegroups.com
Hello World
I'm Satya. I recently learned about the contributions one can make to
the wiki idea. I am fascinated by the concept of open source, but the
problem is that I don't know how to start contributing (I am new to open
source).
I created accounts on MediaWiki and Phabricator.
I have no clue how it all works. I'd be glad if someone could help me
out.
Regards
S.Satya Pramod
Forwarding from the OSM list for info:
---------- Forwarded message ----------
From: Yuri Astrakhan <yuriastrakhan(a)gmail.com>
Date: 21 January 2017 at 01:40
Subject: [OSM-talk] RU Wikipedia now uses OSM by Wikidata ID
To: "talk(a)openstreetmap.org" <talk(a)openstreetmap.org>
Russian Wikipedia just replaced all of their map links in the upper
right corner (geohack) with the <maplink> Kartographer extension!
Moreover, when clicking the link, it also shows the location outline,
if that object exists in OpenStreetMap with a corresponding Wikidata
ID (ways and relations only, no nodes). My deepest respect to my
former Interactive Team colleagues and volunteers who have made it
possible! (This was community wishlist #21)
Example - city of Salzburg (click coordinates in the upper right
corner, or in the infobox on the side):
https://ru.wikipedia.org/wiki/%D0%97%D0%B0%D0%BB%D1%8C%D1%86%D0%B1%D1%83%D1…
P.S. I am still working on improving Wikidata linking, and will be
very happy to collaborate with anyone on improving OSM data quality.
_______________________________________________
talk mailing list
talk(a)openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk
--
Andy Mabbett
@pigsonthewing
http://pigsonthewing.org.uk
Hi,
I didn't see it mentioned around here, but in a paper from Nov 2016 a
group of researchers from IBM used Wikidata to select which entities to
feed Watson for automatic QA generation:
"Training IBM Watson using Automatically Generated Question-Answer Pairs"
https://arxiv.org/abs/1611.03932
Cheers,
Micru
Hi everyone,
First of all, I want to introduce myself, because I am new to this
mailing list. I am Ivanhercaz on the Wikimedia projects; my name is easy
to decipher if you look at my username ―*Ivan Her*nández *Caz*orla― and I am
a member of Wikimedia España. One of the projects I love is Wikidata,
and I am interested in the system behind it, namely Wikibase, because I am
planning a cultural project and am thinking of using Wikibase to gather
data.
My main question is about how Wikibase works. I know that I need to
download the extensions, install them (easy), and then configure them. But
I would like to know whether I can install the Repository and the Client
extensions in the same MediaWiki installation ―everything under
/wiki.example.org/― or whether I have to install the Repository on one
wiki and the Client on another ―something like /data.example.org/ and
/wiki.example.org/―.
If both setups are possible, which do you recommend: one installation
with both extensions, or one for each extension?
Thanks in advance! I await your answer.
Regards from Canary Islands, Iván
--
Iván Hernández Cazorla.
History undergraduate at the *Universidad de Las Palmas de
Gran Canaria*.
Member of *Wikimedia España*.
Personal website <http://distriker.com>.
Hi Adrian,
I am not involved in BigData/BlazeGraph etc., but I can provide some
insight.
On 2017-01-11 15:15, Adrian Bielefeldt wrote:
> I'm writing on behalf of a Wikidata Research Project [1] into
> SPARQL queries. We are writing a Java application to parse
> SPARQL queries from the query.wikidata.org logs and analyse them for
> different features. At the moment we are parsing the queries with
> org.openrdf.query.parser.QueryParserUtil into
> org.openrdf.query.parser.ParsedQuery objects. Unfortunately, we cannot
> parse queries like this one:
>
> SELECT DISTINCT ?item
> WHERE
> { ?tree0 wdt:P31 ?item . BIND (wd:Q146 AS ?tree0) }
>
> because of: "BIND clause alias '{}' was previously used".
>
> If we parse them as org.openrdf.query.TupleQuery objects using
> org.openrdf.repository.RepositoryConnection.prepareTupleQuery, they are
> parsed just fine.
Strictly speaking, this is an invalid SPARQL query (different systems are
more or less permissive): BIND must not assign to a variable, here ?tree0,
that is already in scope earlier in the same group. The query parser
reports exactly that, while the TupleQuery path follows the AST, which
does not "judge" a query on its merits.
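For illustration, the same intent can be expressed validly by introducing
?tree0 with BIND before its first use (a sketch, using the wd:/wdt:
prefixes from the original query):

```sparql
# Valid variant: ?tree0 is assigned by BIND before it is used,
# so it is not already in scope when the BIND is evaluated.
SELECT DISTINCT ?item
WHERE
{ BIND (wd:Q146 AS ?tree0) ?tree0 wdt:P31 ?item . }
```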
It has been a while since I did any query analysis for
sparql.uniprot.org. What I did back then, for more detailed analysis, was
to use the SPIN API [2] to convert each SPARQL query string into an RDF
representation, load that RDF into a triple store, and then use SPARQL to
analyze those SPARQL queries, i.e. use
SPARQL to analyze analytical SPARQL queries ;)
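As a concrete sketch of that approach (assuming the queries were stored
using the standard SPIN sp: vocabulary; the exact shape of the SPIN
output may differ from this):

```sparql
PREFIX sp:  <http://spinrdf.org/sp#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

# Rank predicates by how often they appear in the WHERE clauses
# of the stored, SPIN-encoded queries.
SELECT ?predicate (COUNT(?tp) AS ?uses)
WHERE {
  ?query a sp:Select ;
         sp:where/rdf:rest*/rdf:first ?tp .
  ?tp sp:predicate ?predicate .
}
GROUP BY ?predicate
ORDER BY DESC(?uses)
```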
Regarding the openrdf parser questions you can ask them at eclipse-rdf4j
https://groups.google.com/forum/#!forum/rdf4j-users
rdf4j at eclipse is what was formerly named openrdf/sesame.
Regards,
Jerven
[2] https://github.com/spinrdf/spinrdf/
>
>
> Unfortunately, the TupleQuery objects do not support the
> analysis features of ParsedQuery.
>
> All of which leads to 2-3 questions:
>
> 1. Why is this kind of query accepted by one parser but not the other?
> 2. Is it possible to obtain a ParsedQuery-Object from a
> TupleQuery-Object?
> (3. Is this the right mailing list to post these questions to?)
>
> Greetings,
>
> Adrian Bielefeldt
>
>
> Links:
> ------
> [1]
> https://meta.wikimedia.org/wiki/Research:Understanding_Wikidata_Queries
> _______________________________________________
> Bigdata-developers mailing list
> Bigdata-developers(a)lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
--
Jerven Tjalling Bolleman
SIB | Swiss Institute of Bioinformatics
CMU - 1, rue Michel Servet - 1211 Geneva 4
t: +41 22 379 58 85 - f: +41 22 379 58 58
Jerven.Bolleman(a)sib.swiss - http://www.sib.swiss
Is there a way to download the Wikidata ontology?
Beste Grüße / kind regards
Rüdiger Klein
______________________________________________________
Dr. Rüdiger Klein
Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) Dept. Adaptive Reflective Teams (ART)
Schloss Birlinghoven, 53754 Sankt Augustin, Germany
Tel: +49 2241 14 2608
Fax: +49 2241 14 2342
E-Mail: ruediger.klein(a)iais.fraunhofer.de
Hey,
I recently wanted to set up an endpoint and realised there is no truthy
triple dump so far. I opened a ticket on Phabricator:
https://phabricator.wikimedia.org/T155103
Is there general interest in having such a dump? Is there anyone else who
could use it and/or would feel responsible for setting it up?
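Until an official dump exists, one rough workaround is to filter the full
RDF dump down to direct (wdt:) statements. A minimal Python sketch,
assuming an N-Triples serialization as input; note this keeps only direct
claims and drops labels, sitelinks, and reified statement nodes:

```python
# Rough sketch: from an N-Triples dump, keep only "truthy" triples,
# i.e. those whose predicate is a Wikidata direct (wdt:) property in
# the http://www.wikidata.org/prop/direct/ namespace.
TRUTHY_PREFIX = "<http://www.wikidata.org/prop/direct/"

def truthy_lines(lines):
    """Yield only the N-Triples lines whose predicate is a direct property."""
    for line in lines:
        parts = line.split(None, 2)  # subject, predicate, rest of the triple
        if len(parts) == 3 and parts[1].startswith(TRUTHY_PREFIX):
            yield line

# Tiny illustration: one direct (truthy) statement and one reified one.
sample = [
    '<http://www.wikidata.org/entity/Q42> '
    '<http://www.wikidata.org/prop/direct/P31> '
    '<http://www.wikidata.org/entity/Q5> .',
    '<http://www.wikidata.org/entity/Q42> '
    '<http://www.wikidata.org/prop/P31> _:stmt .',
]
print(len(list(truthy_lines(sample))))  # prints 1
```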
Cheers,
Lucie
--
Lucie-Aimée Kaffee