Hey folks :)
Charlie has been working on concepts for making it possible to edit
Wikidata from Wikipedia and other wikis. This was her bachelor's thesis.
She has now published it:
https://commons.wikimedia.org/wiki/File:Facilitating_the_use_of_Wikidata_in…
I am very happy she put a lot of thought and work into figuring out all the
complexities of the topic and how to make this understandable for editors.
We still have more work to do on the concepts and then actually have to
implement it. Comments welcome.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/029/42207.
I want to look up concepts and entities in Wikidata by their name, even
if the name contains typos or omissions.
Can I do this using Wikidata Toolkit?
Can I achieve this with a SPARQL query from the web interface?
Thx
Dear users, developers and all people interested in semantic wikis,
We are happy to announce SMWCon Fall 2016 - the 13th Semantic MediaWiki
Conference:
* Dates * September 28th to September 30th 2016 (Wednesday to Friday)
* Location * German Institute for International Educational Research
(DIPF), Schloßstraße 29, Frankfurt am Main, Germany.
* Conference page *
https://www.semantic-mediawiki.org/wiki/SMWCon_Fall_2016
* Participants * Everybody interested in semantic wikis, especially
in Semantic MediaWiki, e.g. users, developers, consultants, business
representatives, researchers.
SMWCon Fall 2016 will be supported by The German Institute for
International Educational Research (DIPF) [0] and Open Semantic Data
Association e. V. [1].
Following the success of this format, SMWCon will again have one tutorial
and workshop day preceding two conference days.
Participating in the conference: To help us with planning, you can already
informally register on the conference page, although a firm registration
will be required later.
Contributing to the conference: If you want to present your work at the
conference, please go to the conference page and add your talk there. To
create an attractive program for the conference, we will later ask you
to give further information about your proposals. Tutorials and
presentations will be video and audio recorded and will be made
available for others after the conference.
Among others, we encourage contributions on the following topics:
Applications of semantic wikis:
* Semantic wikis for enterprise workflows and business intelligence
* Semantic wikis for corporate or personal knowledge management
* Exchange on business models with semantic wikis
* Lessons learned (best/worst practices) from using semantic wikis or
their extensions
* Semantic wikis in e-science, e-humanities, e-learning, e-health,
e-government
* Semantic wikis for finding a common vocabulary among a group of people
* Semantic wikis for teaching students about the Semantic Web
* Offering incentives for users of semantic wikis
* Challenges and obstacles for semantic wikis in business environments
Development of semantic wikis:
* Semantic wikis as knowledge base backends / data integration platforms
* Comparisons of semantic wiki concepts and technologies
* Community building, feature wishlists, roadmapping of Semantic MediaWiki
* Improving user experience in a semantic wiki
* Speeding up semantic wikis
* Integrations and interoperability of semantic wikis with other
applications and mashups
* Modeling of complex domains in semantic wikis, using rules, formulas etc.
* Access control and security aspects in semantic wikis
* Multilingual semantic wikis
For any other questions or sponsorship opportunities, please do not
hesitate to contact Karsten Hoffmeyer <karsten@hoffmeyer.info>.
Hope to see you in Frankfurt am Main!
Lia Veja, Karsten Hoffmeyer
(Members of the Organizing Committee)
[0] http://www.dipf.de/en/about-us/contact
[1] https://opensemanticdata.org/
Hi,
Here is a little fun query to show the relative prominence of several
countries' populations on Wikidata [1]:
http://tinyurl.com/zlq9bfv
Doing this for all countries (not just for EU countries) times out, but
you can get individual numbers for each country using BIND, as for the US:
http://tinyurl.com/huouz39
(576 Wikidata people per million inhabitants) or for China (6 Wikidata
people per million inhabitants). This may serve to show some regional
biases, but also some natural effects.
Interestingly, it seems we already have almost 0.4% of the current
population of Finland on Wikidata.
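The BIND approach mentioned above can be sketched roughly as follows (a
hypothetical variant, not the exact query behind the short links; it uses
the usual properties P31 "instance of", P27 "country of citizenship" and
P1082 "population"):

```sparql
# Wikidata people per million inhabitants for one country (sketch).
# Change the BIND target (here wd:Q30, the United States) to query
# another country instead.
SELECT (COUNT(?person) * 1000000 / ?population AS ?perMillion) WHERE {
  BIND(wd:Q30 AS ?country)
  ?country wdt:P1082 ?population .   # population (P1082)
  ?person wdt:P31 wd:Q5 ;           # instance of (P31) human (Q5)
          wdt:P27 ?country .        # country of citizenship (P27)
}
GROUP BY ?population
```

Since ?country is fixed by BIND, the query only counts people for that one
country and avoids the timeout that the all-countries version runs into.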
Cheers,
Markus
[1]
https://www.mediawiki.org/wiki/Wikibase/Indexing/SPARQL_Query_Examples#Wiki…
--
Markus Kroetzsch
Faculty of Computer Science
Technische Universität Dresden
+49 351 463 38486
http://korrekt.org/
Hi,
Another recent example of a statement that does not seem to have been
updated: one of the WDQS servers seems to think the population of
Denmark is 5, though this was fixed on 1st of June. The other has the
correct value, it seems.
It's hard to reproduce since only one of the servers has it. I got the
error on this query: http://tinyurl.com/hnpxgyh (this is a variant of a
not-so-simple example query I just built: "German states, ordered by the
number of company headquarters per million inhabitants" [1]). For trying
it out, this query might be nicer:
SELECT ?population WHERE {
  wd:Q35 wdt:P1082 ?population .
  FILTER (?population < 200)
}
since you can change the "200" to trick the caching.
Cheers,
Markus
[1]
https://www.mediawiki.org/wiki/Wikibase/Indexing/SPARQL_Query_Examples#Germ…
Side remark: using arithmetic operations on query results is a great way
to get even more misleading statistics out of Wikidata ;-) It does not
seem as if we have that feature yet in many example queries.
Hey all,
while using shortened URLs from WDQS in the WikiCite report draft
<https://meta.wikimedia.org/wiki/WikiCite_2016/Report>, it occurred to me
that tinyurl.com is blacklisted on Meta.
This is a major problem as it prevents concise URLs for gigantic queries
from being linked from other Wikimedia wikis. Has anyone thought of this
issue (Stas, Jonas?), in particular: should we ask Meta to remove the
domain from the blacklist or potentially consider another URL shortening
solution?
Dario
Dear all,
The SQID browser for Wikidata [1] has improved recently. Main new
features include:
* Show incoming statements from other entities
* Built-in SPARQL result view to browse long result sets
* Some improved translation
* Improved code structure
If you use SQID a lot, you may have to (shift+)reload the page to see
the updates in your browser. More details on key features below.
== Incoming statements from other entities ==
The highlight is the display of incoming statements, which I hope you
agree is surprisingly fast. Even challenging items like Q30 [2], with a
large number of incoming statements, complete in rather short time.
Smaller items are even faster. Opening a page for the first time always
takes a bit more time to load the data, but if you stay in one tab, you
will have a very fast browsing experience overall (opening a new tab
forces the browser to reload data you already had in the other tab).
The unavoidable comparison is to our great forebear Reasonator ;-) which
also shows incoming statements. There are some differences, though:
* SQID displays all incoming properties, not just a subset (I was
surprised that Reasonator shows only a subset -- maybe a bug? You can
see it if you open Q30 there [3] and compare to SQID: e.g., the "owned
by" incoming property somehow does not show in Reasonator).
* Reasonator and SQID use slightly different cut-offs for the
default number of inlinks to display. SQID shows at most 100 entities per
property; Reasonator seems to use a different strategy where you
sometimes get more than 100 and sometimes less.
* SQID also displays incoming links from property pages.
* Reasonator shows qualifiers on incoming statements, SQID does not
(planned for the future)
* The visual layout is different in both tools.
* Display speed ;-) Kudos to the WDQS team for making this possible.
SQID completely relies on SPARQL to deliver this performance.
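For reference, an incoming-statement lookup of this kind can be expressed
as a single SPARQL query along these lines (a simplified sketch, not
SQID's actual query):

```sparql
# Count incoming direct ("wdt:") statements per property for Q30 (sketch)
SELECT ?property (COUNT(?subject) AS ?count) WHERE {
  ?subject ?property wd:Q30 .
  # keep only direct-claim properties, dropping reified statement nodes
  FILTER(STRSTARTS(STR(?property),
                   "http://www.wikidata.org/prop/direct/"))
}
GROUP BY ?property
ORDER BY DESC(?count)
```

Grouping by property is what makes the per-property cut-off (at most 100
entities each) cheap to apply on top of the result.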
== Towards better translation ==
Thanks to Dan Michael O. Heggø, we already have a new translation to
Norwegian Bokmål (nb), in addition to the existing en and de.
We are working towards improving I18N, and there is still some way to
go. As an important first step, the current version has split the
messages into separate files. Translators who want to add languages just
need to translate one file [4].
The split into files is also an important first step towards a future
translatewiki integration, which we would like to have at some point.
Help wanted: if you know how to connect our message files to
translatewiki, this would be a great contribution to the project.
== Next steps ==
* Better translation (I18N is not complete for all components yet)
* Improve data view (show site links, better search/display support for
some datatypes)
* Show qualifiers on incoming statements
* Add built-in reasoning and quality control features that highlight
issues while you look at the data.
* Query UI to search for entities if you don't feel like writing SPARQL.
== Contribute ==
To contribute ideas or code, please use our project home at github [5],
where you can discuss issues and report bugs.
Acks: Michael Günther and Georg Wild, who have made important
contributions to this release.
Best regards,
Markus
[1] http://tools.wmflabs.org/sqid/
[2] http://tools.wmflabs.org/sqid/#/view?id=Q30
[3] https://tools.wmflabs.org/reasonator/?q=Q30
[4] https://github.com/Wikidata/WikidataClassBrowser/tree/master/src/lang
[5] https://github.com/Wikidata/WikidataClassBrowser
Hey all,
Wikimedia Deutschland and the Wikimedia Foundation hosted the WikiCite
<https://meta.wikimedia.org/wiki/WikiCite_2016> event in Berlin last week,
bringing together a large group
<https://meta.wikimedia.org/wiki/WikiCite_2016#Participant_list> of
Wikidatans, Wikipedians, librarians, developers and researchers from all
over the world.
The event built a lot of momentum around the definition of data models,
workflows and technology needed to better represent source and citation
data from Wikimedia projects, Wikidata in particular.
While we're still drafting a human-readable report
<https://meta.wikimedia.org/wiki/WikiCite_2016/Report>, I thought I'd share
a preview of the notes from the various workgroups, to give you a sense of
what we worked on and to let everyone join the discussion:
Main workgroups
Modeling bibliographic source metadata
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_1>
Discuss and draft data models to represent different types of sources as
Wikidata items
Reference extraction and metadata lookup tools
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_2>
Design or improve tools to extract identifiers and bibliographic data from
Wikipedia citation templates, look up and retrieve metadata
Representing citations and citation events
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_3>
Discuss how to express the citation of a source in a Wikimedia artifact
(such as a Wikipedia article, a Wikidata statement, etc.) and review
alternative ways to represent them
(Semi-)automated ways to add references to Wikidata statements
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_4>
Improve tools for semi-automated statement and reference creation
(StrepHit, ContentMine)
Use cases for source-related queries
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_5>
Identify use cases for SPARQL queries involving source metadata. Obtain a
small open licensed bibliographic and citation graph dataset to build a
proof-of-concept of the querying and visualization potential of source
metadata in Wikidata.
Additional workgroups
Wikidata as the central hub on license information on databases
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_6>
Add license information to Wikidata to make Wikidata the central hub on
license information on databases
Using citations and bibliographic source metadata
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_7>
Merge groups working on citation structure and source metadata models and
integrate their recommendations
Citoid-Wikidata integration
<https://meta.wikimedia.org/wiki/WikiCite_2016/Group_8>
Extend Citoid to write source metadata into Wikidata
We're opening up the wikicite-discuss@wikimedia.org mailing list to anyone
interested in interacting with the participants in the event (we encouraged
them to use the official wikidata list for anything of interest to the
broader community). Phabricator also has a dedicated tag
<https://phabricator.wikimedia.org/tag/wikicite/> for related initiatives.
The event was generously funded
<https://meta.wikimedia.org/wiki/WikiCite_2016#Funding> by the Alfred P.
Sloan Foundation, the Gordon and Betty Moore Foundation, and Crossref.
We'll be exploring the feasibility of a follow-up event in the next 6-12
months to continue the work we started in Berlin and to bring in more
people than we could host this time due to funding and capacity constraints.
Best,
Dario
on behalf of the organizers