Hello all,
This is an announcement for a breaking change to the output format of the
WikibaseQualityConstraints constraint checking API, to go live on 6 May 2019
(most likely around 12:00 UTC). It affects all clients that use the
*wbcheckconstraints* API action.
We are adding a new status for constraints
<https://lists.wikimedia.org/pipermail/wikidata/2019-April/012910.html>, in
addition to regular constraints and mandatory constraints: suggestion
constraints indicate possible improvements to a statement, but are not
inherently problematic like other constraint violations. This implies a new
status for constraint results as well: in addition to 'violation' for
violations of mandatory constraints and 'warning' for violations of regular
constraints, as well as several statuses that are not violations, there is
now 'suggestion' for violations of suggestion constraints. The default
value of the status API parameter is changed to include this status as well
(from violation|warning|bad-parameters to
violation|warning|suggestion|bad-parameters), and it can appear as the
"status" of a result in the response.
API consumers that are not interested in suggestion constraints can specify
a non-default value for the status API parameter, e.g. the old default
violation|warning|bad-parameters, to avoid receiving responses that include
this status. Other consumers should decide how to handle it and update their
code accordingly if necessary.
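For illustration, here is a minimal sketch in Python of pinning the old
default so that 'suggestion' results never appear (this assumes the requests
library; Q42 is only an example item):

    import requests

    # Keep the pre-change behaviour by passing the old default explicitly,
    # so results with the new 'suggestion' status are filtered out server-side.
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbcheckconstraints",
            "format": "json",
            "id": "Q42",  # example item
            "status": "violation|warning|bad-parameters",  # the old default
        },
        headers={"User-Agent": "constraint-status-example/0.1"},
    )
    data = response.json()
    # Consumers that keep the new default should instead be prepared to see
    # "status": "suggestion" on individual results and handle it explicitly.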
According to our stable interface policy
<https://www.wikidata.org/wiki/Wikidata:Stable_Interface_Policy>, the
change will be enabled 4 weeks after this announcement, on May 6th, and a
test system will be set up on test.wikidata.org by April 17th at the latest.
If you have any questions or issues, let us know in the related ticket
<https://phabricator.wikimedia.org/T204439>.
Cheers,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
(Society for the Promotion of Free Knowledge). Registered in the register of
associations of the Amtsgericht Berlin-Charlottenburg under number 23855 Nz.
Recognized as charitable by the Finanzamt für Körperschaften I Berlin,
tax number 27/029/42207.
*(apologies for cross-posting - feel free to share this announcement in
your communities)*
Hello all,
Having a service providing short links exclusively for the Wikimedia
projects is a community request that came up regularly on Phabricator
<https://phabricator.wikimedia.org/T44085> and in community discussions
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey_2019/Reading/Crea…>.
Following joint work by developers from the Wikimedia Foundation and
Wikimedia Deutschland, we are now able to provide such a feature; it will be
enabled on April 11th on Meta.
*What does the URL Shortener do?*
The Wikimedia URL Shortener is a feature that allows you to create short
URLs for any page on projects hosted by the Wikimedia Foundation, in order
to reuse them elsewhere, for example on social networks or on wikis.
The feature can be accessed from Meta-Wiki on the special page
m:Special:URLShortener
<https://meta.wikimedia.org/wiki/Special:URLShortener> (it will be enabled on
April 11th). On this page, you will be able to enter any web address from a
service hosted by the Wikimedia Foundation, generate a short URL, and
copy it for reuse anywhere.
The format of the URL is w.wiki/ followed by a string of letters and
numbers. You can already test an example: w.wiki/3 redirects to
wikimedia.org.
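Because the short links are plain HTTP redirects, they can also be resolved
programmatically; a small sketch in Python (assuming the requests library),
using the w.wiki/3 example above:

    import requests

    # Ask the short-link service for the redirect target without following it.
    response = requests.head("https://w.wiki/3", allow_redirects=False)
    print(response.status_code)          # a 3xx redirect status
    print(response.headers["Location"])  # the long URL, here wikimedia.org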
*What are the limitations and security measures?*
In order to ensure the security of the links, and to avoid short links
pointing to external or dangerous websites, the URL Shortener is restricted
to services hosted by the Wikimedia Foundation. This includes, for example,
all Wikimedia projects, Meta, MediaWiki, the Wikidata Query Service, and
Phabricator (see the full list here
<https://meta.wikimedia.org/wiki/Wikimedia_URL_Shortener>).
In order to avoid abuse of the tool, there is a rate limit: logged-in users
can create up to 50 links every 2 minutes, and anonymous (IP) users are
limited to 10 creations per 2 minutes.
*Where will this feature be available?*
In order to enforce the rate limit described above, the page
Special:URLShortener will only be enabled on Meta. You can of course create
links or redirects to this page from your home wiki.
The next step we’re working on is to integrate the feature directly into the
interface of the Wikidata Query Service, where bit.ly is currently used to
generate short links for query results. For now, you will have
to copy and paste the link of your query into the Meta page.
*Documentation and requests*
- If you have any questions or requests, feel free to leave a comment
under this Phabricator task <https://phabricator.wikimedia.org/T44085>
- The user documentation is available here
<https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_URL_Shortener>:
please help us translate it into your language!
- See also the technical documentation of the extension
<https://www.mediawiki.org/wiki/Extension:UrlShortener>
Thanks a lot to all the developers and volunteers who helped move this
feature forward and make it available today for everyone on the
Wikimedia projects!
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
(Society for the Promotion of Free Knowledge). Registered in the register of
associations of the Amtsgericht Berlin-Charlottenburg under number 23855 Nz.
Recognized as charitable by the Finanzamt für Körperschaften I Berlin,
tax number 27/029/42207.
18th International Semantic Web Conference (ISWC 2019)
“Knowledge Graphs, Linked Data, Linked Schemas and AI on the Web”
Auckland, New Zealand, 26-30 October, 2019
https://iswc2019.semanticweb.org/
The International Semantic Web Conference (ISWC) is the premier venue for
presenting fundamental research, innovative technology, and applications
concerning semantics, data, and the Web. It is the most important
international venue for discussing and presenting the latest advances and
applications of the Semantic Web, knowledge graphs, linked data, ontologies
and artificial intelligence (AI) on the Web.
ISWC attracts a large number of high-quality submissions every year and
participants from both industry and academia. ISWC brings together
researchers from different areas, such as artificial intelligence,
databases, natural language processing, information systems, human-computer
interaction, information retrieval, web science, etc., who investigate,
develop and use novel methods and technologies for accessing, interpreting
and using information on the Web more effectively.
Follow us:
Twitter: @iswc_conf , #iswc_conf ( https://twitter.com/iswc_conf )
LinkedIn: https://www.linkedin.com/groups/13612370
Facebook: https://www.facebook.com/ISWConf/
Become part of ISWC 2019 by submitting to the following tracks & activities,
or just attend them!
In this announcement:
* Highlights
1. Call for Doctoral Consortium Papers

* Highlights
*******************************************
* Doctoral Consortium: full papers are due April 17, 2019 (**today**), at
23:59:59 Hawaii Time. Papers submitted to the Doctoral Consortium will
be subject to **double-blind** peer review.

1. Call for Doctoral Consortium Papers
**********************************************
The ISWC 2019 Doctoral Consortium will take place as part of the 18th
International Semantic Web Conference in Auckland, New Zealand. This forum
will provide PhD students an opportunity to share and develop their
research ideas in a critical but supportive environment, to get feedback
from mentors who are senior members of the Semantic Web research community,
to explore issues related to academic and research careers, and to build
relationships with other Semantic Web PhD students from around the world.
The Consortium aims to broaden the perspectives and to improve the research
and communication skills of these students.
The Doctoral Consortium is intended for students who have a specific
research proposal and some preliminary results, but who have sufficient
time prior to completing their dissertation to benefit from the consortium
experience. Generally, students in the second or third year of their PhD will
benefit the most from the Doctoral Consortium. In the Consortium, the
students will present their proposals and get specific feedback and advice
on how to improve their research plan.
All proposals submitted to the Doctoral Consortium will undergo a thorough
reviewing process with a view to providing detailed and constructive
feedback. The international program committee will select submissions for
presentation at the Doctoral Consortium.
Students with accepted submissions at the Doctoral Consortium will be
eligible to apply for travel fellowships to offset some of the travel
costs, but they will be asked to attend the whole day of the Doctoral
Consortium.
We ask PhD students to submit a 12-page description of their PhD
research proposal. All proposals have to be submitted electronically via the
EasyChair conference submission system. The proposal text must have
sections (some can be very short) addressing each of the following nine
questions:
1. Problem statement: What is the problem that you are addressing?
2. Relevancy: Why is the problem important? Who will benefit if you
succeed? Who should care?
3. Related work: How have others attempted to address this problem? Why is
the problem difficult?
4. Research question(s): What are the research questions that you plan to
address?
5. Hypotheses: What hypotheses are related to your research questions? See
Is This Really Science? The Semantic Webber’s Guide to Evaluating Research
Contributions.
6. Preliminary results: Do you have any preliminary results that
demonstrate that your approach is promising?
7. Approach: How are you planning to address your research questions and
test your hypotheses? What is the main idea behind your approach? The key
innovation?
8. Evaluation plan: How will you measure your success: faster, more
accurate, fewer failures, etc.? How do you plan to test your hypotheses?
What will you measure? What will you compare against?
9. Reflections: Why do you think you will succeed where others failed?
Provide an argument, based either on common knowledge or on evidence that
you have accumulated, that your approach is likely to succeed.
Further info:
https://iswc2019.semanticweb.org/call-for-doctoral-consortium-papers/
== Important Dates ==
Full papers due: April 17, 2019 (**today**), 23:59:59 Hawaii Time
Notifications: May 15, 2019
Camera-ready papers due: June 14, 2019
== Program Chairs ==
Contact: iswc2019-doctoral-consortium(a)inria.fr
Miao Qiao, Computer Science Department, the University of Auckland,
Auckland, New Zealand
Mauro Dragoni, Fondazione Bruno Kessler, Trento, Italy
See you all in Auckland!
The ISWC 2019 Organising Team (
https://iswc2019.semanticweb.org/organizing-committee/ )
Hello all,
As you might know, there are plenty of tools
<https://grafana.wikimedia.org/d/000000154/wikidata?orgId=1> analyzing
Wikidata’s content and its usage across the Wikimedia projects.
The new dashboard we would like to present to you today, the *Wikidata
Identifier Landscape
<https://wmdeanalytics.wmflabs.org/WD_ExternalIdentifiersDashboard/>*,
focuses on external identifiers and their usage on Wikidata. Its
different views allow you to answer questions like: how much do our external
identifiers overlap with each other? With how many statements are they
currently described? Which identifiers represent certain topic areas? On
what types of items are they usually used? How can we map the galaxy of our
external identifiers?
[image: WD EId Network.png]
<https://commons.wikimedia.org/wiki/File:WD_EId_Network.png>
*Map of the external identifiers galaxy*
---
On the dashboard, you can browse through different tabs:
- the Similarity Map presents a global overview of the overlap in the
usage of Wikidata identifiers
- the Overlap Network visualizes all Wikidata external identifiers in a
network of nearest neighbors
- the Tables section allows you to directly check the number of items
using a certain external identifier, and the number of items using two
external identifiers of your choice (see the query sketch below)
- the Identifier Classes tab maps the relationships between the external
identifiers belonging to the same class (e.g. chemistry, cultural heritage,
feminism…)
- the Particular Identifier tab provides for each identifier a map of
its neighbors and some examples of items where this identifier is used
On every tab, the descriptions give you more information about the
calculation method and the results. You can also check the documentation page
<https://meta.wikimedia.org/wiki/Wikidata_Identifier_Landscape>.
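If you would like to reproduce the kind of number the Tables section shows,
here is a hedged sketch of an equivalent count run against the Wikidata Query
Service from Python (assuming the requests library; P214, the VIAF ID, and
P227, the GND ID, are just an example pair, and the dashboard's own
calculation may differ):

    import requests

    # Count items that carry both external identifiers at the same time.
    QUERY = """
    SELECT (COUNT(?item) AS ?overlap) WHERE {
      ?item wdt:P214 [] ;   # has a VIAF ID
            wdt:P227 [] .   # and a GND ID
    }
    """

    response = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "identifier-overlap-example/0.1"},
    )
    print(response.json()["results"]["bindings"][0]["overlap"]["value"])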
[image: Screenshot from ExID dashboard - Mérimée ID.png]
<https://commons.wikimedia.org/wiki/File:Screenshot_from_ExID_dashboard_-_M%…>
*Map of the external identifiers neighbors of Mérimée ID (P380)
<https://www.wikidata.org/wiki/Property:P380>*
---
If you have any questions, if you find a bug, or if you have a request for
future development, feel free to ping me or to comment on this Phabricator
task <https://phabricator.wikimedia.org/T204440>.
And of course, feel free to share the dashboard with people or projects
that could be interested in playing with identifiers :)
Cheers,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
(Society for the Promotion of Free Knowledge). Registered in the register of
associations of the Amtsgericht Berlin-Charlottenburg under number 23855 Nz.
Recognized as charitable by the Finanzamt für Körperschaften I Berlin,
tax number 27/029/42207.
Dear Sir,
I thank you for your answer and for your clarifications. 44,262 lexemes is a
limited output. That is why I asked whether we can merge public domain
ontologies into Wikidata to make the process more efficient. In fact, if we
integrate the public domain ontology for Portuguese into LexData, we can have
more than 40,000 Portuguese lexemes added to Wikidata. Then, Portuguese users
can work to enrich the input.
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
-------- Original message --------
From: Nicolas VIGNERON <vigneron.nicolas(a)gmail.com>
Date: 2019/04/14 22:37 (GMT+01:00)
To: Discussion list for the Wikidata project <wikidata(a)lists.wikimedia.org>
Subject: Re: [Wikidata] Wikidata and Portuguese wordnets
> Currently, there are nearly 8,000 lexemes in Wikidata.
There are, right now, 44,262 lexeme entities (query: http://tinyurl.com/y692kszu), including 42 in Portuguese (query: http://tinyurl.com/y2f3636e), across a total of 321 languages (it is already one of the biggest databases if you count the number of languages).
In total, there are 120,391 forms for these 44k+ lexemes (query: http://tinyurl.com/y5xudhwg); far from the biggest databases, but already an impressive number, as almost everything has been done by hand and in less than a year!
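For anyone who wants to recompute these totals (the tinyurl links above stand
in for queries along these lines; the exact queries may differ), here is a
sketch of a per-language lexeme count, sent to the query service from Python
with the requests library:

    import requests

    # Count lexemes per language; an approximation of the shortened queries.
    QUERY = """
    SELECT ?language ?languageLabel (COUNT(?lexeme) AS ?lexemes) WHERE {
      ?lexeme a ontolex:LexicalEntry ;
              dct:language ?language .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    GROUP BY ?language ?languageLabel
    ORDER BY DESC(?lexemes)
    """

    response = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "lexeme-count-example/0.1"},
    )
    for row in response.json()["results"]["bindings"][:10]:
        print(row["languageLabel"]["value"], row["lexemes"]["value"])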
> If there is not a person who can work to change this situation, it will take years for LexData to represent the top ten languages, including Portuguese.
There are already a lot of people working on Lexemes in Wikidata (I'm currently working on Breton).
But it's true, there is also a lot to do!
You're welcome to join us; the project page is here: https://www.wikidata.org/wiki/Wikidata:Lexicographical_data.
Cheers, ~nicolas
Dear Sir,
I thank you for your answer. I see. However, there are several public domain ontologies that could be added to Wikidata, such as Onto.PT.
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
-------- Original message --------
From: Darren Cook <darren(a)dcook.org>
Date: 2019/04/14 22:06 (GMT+01:00)
To: wikidata(a)lists.wikimedia.org
Subject: Re: [Wikidata] Integrating WordNet in Wikidata Lexicographical Data
> I thank you for your answer. I invite you to read the copyright statement of WordNet:
>
> Permission to use, copy, modify and distribute this software and database and
> its documentation for any purpose and without fee or royalty is hereby granted,
> provided that you agree to comply with the following copyright notice and
> statements, including the disclaimer, and that the same appear on ALL copies of
> the software, database and documentation, including modifications that you make
> for internal use or for distribution.
Everything after "provided that ..." is why you cannot take Wordnet and
import it into Wikidata.
But, because WordNet's license (the English one, and about half of the
global wordnet ones [1]) is still a nice, liberal one, what end-users can
legally do is use WordNet and Wikidata together in their applications.
It is very good to have different sources for the same information, as
it allows that same end user to validate data, and discover mistakes,
whether human error or deliberate vandalism.
Wikidata should (IMHO) prioritize making this as easy as
possible, e.g. by producing client libraries to do it, and by
documenting equivalences between alternative data sources.
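As a rough sketch of what using the two together can look like (assuming
Python with the requests library and nltk with its wordnet corpus downloaded;
"bank" is just an example lemma, and this is an illustration rather than a
polished client library):

    import requests
    from nltk.corpus import wordnet as wn  # needs nltk.download("wordnet") once

    word = "bank"

    # Senses from the locally installed WordNet database.
    for synset in wn.synsets(word):
        print("WordNet:", synset.name(), "-", synset.definition())

    # Lexemes with the same lemma, from the public Wikidata API.
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "format": "json",
            "type": "lexeme",
            "language": "en",
            "search": word,
        },
        headers={"User-Agent": "wordnet-wikidata-example/0.1"},
    )
    for hit in response.json().get("search", []):
        print("Wikidata:", hit["id"], "-", hit.get("label", ""))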
Darren
[1]: http://globalwordnet.org/resources/wordnets-in-the-world/
Dear Madam,
I thank you for your answer. This is absolutely interesting. The choice of which wordnets to integrate into Wikidata Lexicographical Data does not matter; the most important thing is that we begin to enrich LexData. It would be excellent if you adopted this project as your own. Currently, there are nearly 8,000 lexemes in Wikidata. If there is not a person who can work to change this situation, it will take years for LexData to represent the top ten languages, including Portuguese.
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
-------- Original message --------
From: Valeria de Paiva <valeria.depaiva(a)gmail.com>
Date: 2019/04/14 16:21 (GMT+01:00)
To: wikidata(a)lists.wikimedia.org
Subject: [Wikidata] Wikidata and Portuguese wordnets
Hi, I would love to see more lexical mappings between Wikidata and multiple open-source wordnets, but care is required.
In particular, for Portuguese, I think anyone attempting this mapping should check
https://www.researchgate.net/publication/298045179_An_overview_of_Portugues…
beforehand. Not surprisingly, I prefer my own version of the Portuguese wordnet, http://openwordnet-pt.org, but so do
Bond's Open Multilingual Wordnet <http://compling.hss.ntu.edu.sg/omw/summx.html>, BabelNet, and Google Translate <http://translate.google.com/about/intl/en_ALL/license.html>, as we're the representative of the open Portuguese wordnets used by these projects.
best,
Valeria
On 4/13/19 15:06, Houcemeddine A. Turki wrote:
> Dear all,
> I thank you for your efforts. Thanks to my discussion with Dr. David
> Abian, I came across a public domain ontology for Portuguese. This
> ontology is available at http://ontopt.dei.uc.pt. Currently,
> Lexicographical Data does not support Portuguese. I ask if someone can
> build a bot to automatically integrate this ontology into Wikidata.
> Onto.PT is downloadable using
> http://ontopt.dei.uc.pt/index.php?sec=download_ontopt. You can verify
> that it is a public domain ontology using
> https://www.researchgate.net/publication/259935306_OntoPT_Recent_developmen….
> Yours Sincerely,
> Houcemeddine Turki (he/him)
> Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
> Undergraduate Researcher, UR12SP36
> GLAM and Education Coordinator, Wikimedia TN User Group
> Member, WikiResearch Tunisia
> Member, Wiki Project Med
> Member, WikiIndaba Steering Committee
> Member, Wikimedia and Library User Group Steering Committee
> Co-Founder, WikiLingua Maghreb
> Founder, TunSci
> ____________________
> +21629499418
>
--
Valeria de Paiva
http://vcvpaiva.github.io/
http://www.cs.bham.ac.uk/~vdp/
Dear Sir,
I thank you for your answer. I invite you to read the copyright statement of WordNet:
Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
-------- Original message --------
From: David Abián <davidabian(a)wikimedia.es>
Date: 2019/04/13 11:58 (GMT+01:00)
To: wikidata(a)lists.wikimedia.org
Subject: Re: [Wikidata] Integrating WordNet in Wikidata Lexicographical Data
Hi, Houcemeddine,
Concerning the copyright issues, I'm unfortunately not sure that adding
those references is enough. Wikidata's data have a CC0 dedication, which
means that any data set with rights conditions other than the public
domain can't be imported into the project. Although we can and
should meet the requirement of referencing the sources, the statement
that Wikidata's data have the CC0 dedication would no longer be true if
we mass-imported data from WordNets (except for those WordNets in the
public domain, if any).
Best,
David
On 4/13/19 12:37, Houcemeddine A. Turki wrote:
> Dear all,
> I thank you for your efforts. I saw with a lot of interest the work that
> has been done to improve the quantity and the quality of the
> Lexicographical Data. However, I know that most of the added data
> already exists in WordNets. It seems that we are just reinventing the
> wheel. I ask why we cannot have an automated method to integrate
> WordNets into Wikidata. Concerning the copyright issues, they are solved
> by putting the WordNets as references for the added statements.
> Yours Sincerely,
> Houcemeddine Turki (he/him)
> Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
> Undergraduate Researcher, UR12SP36
> GLAM and Education Coordinator, Wikimedia TN User Group
> Member, WikiResearch Tunisia
> Member, Wiki Project Med
> Member, WikiIndaba Steering Committee
> Member, Wikimedia and Library User Group Steering Committee
> Co-Founder, WikiLingua Maghreb
> Founder, TunSci
--
David Abián
Hi, I would love to see more lexical mappings between Wikidata and multiple
open-source wordnets, but care is required.
In particular, for Portuguese, I think anyone attempting this mapping
should check
https://www.researchgate.net/publication/298045179_An_overview_of_Portugues…
beforehand. Not surprisingly, I prefer my own version of the Portuguese
wordnet, http://openwordnet-pt.org, but so do
Bond's Open Multilingual Wordnet
<http://compling.hss.ntu.edu.sg/omw/summx.html>, BabelNet, and Google
Translate <http://translate.google.com/about/intl/en_ALL/license.html>, as
we're the representative of the open Portuguese wordnets used by these
projects.
best,
Valeria
On 4/13/19 15:06, Houcemeddine A. Turki wrote:
> Dear all,
> I thank you for your efforts. Thanks to my discussion with Dr. David
> Abian, I came across a public domain ontology for Portuguese. This
> ontology is available in http://ontopt.dei.uc.pt. Currently,
> Lexicographical Data does not support Portuguese. I ask if someone can
> build a bot to automatically integrate this ontology into Wikidata.
> Onto.PT is downloadable using
> http://ontopt.dei.uc.pt/index.php?sec=download_ontopt. You can verify
> that it is a public domain ontology using
> https://www.researchgate.net/publication/259935306_OntoPT_Recent_developmen….
> Yours Sincerely,
> Houcemeddine Turki (he/him)
> Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
> Undergraduate Researcher, UR12SP36
> GLAM and Education Coordinator, Wikimedia TN User Group
> Member, WikiResearch Tunisia
> Member, Wiki Project Med
> Member, WikiIndaba Steering Committee
> Member, Wikimedia and Library User Group Steering Committee
> Co-Founder, WikiLingua Maghreb
> Founder, TunSci
> ____________________
> +21629499418
>
--
Valeria de Paiva
http://vcvpaiva.github.io/
http://www.cs.bham.ac.uk/~vdp/