It’s Wikidata’s third birthday! Wohoo \o/
So it’s finally time to show you what I worked on over the past weeks and
months.
I started working on an extension called ArticlePlaceholder as part of my
Bachelor’s thesis at the HTW Berlin, in cooperation with Wikimedia
Deutschland and especially with the Wikidata team.
The idea is to have automatically generated content pages on the different
Wikipedias, displaying data of items that don’t have a corresponding
article in that language.
This will be especially helpful for smaller Wikipedias with a small
contributor base, and it aims to make more knowledge accessible to more
people. In the long run we might even be able to reduce the number of
bot-generated stubs. This will mainly support Wikipedia editors both in
maintaining and in editing content.
I’m very excited to let you know that there is actually something I
can show you! So instead of telling you in a lot of words how cool this
is all going to be, I want you to just see for yourself:
http://articleplaceholder.wmflabs.org/mediawiki/index.php/Special:FancyUnic…
This is, for example, an auto-generated placeholder for Ada Lovelace.
It’s all very much work in progress; I just wanted to give you a first
sneak peek. :)
So, just in case you’re wondering: even though Special:FancyUnicorn is an
awesome name for a special page, I will obviously change it to something
more fitting.
Also, the design of the page isn’t done at all yet. So far I have focused
on loading the data rather than on the layout, and for this presentation I
just chose something that would be easy to read for people familiar with
Wikidata.
But you know what is actually super cool about the layout? It’s completely
written in Lua, so you can override every part of the (upcoming super
beautiful default) design and make it fit exactly what your local
community wants and needs. Only the title and the language links are always
set; the rest is adjustable with on-wiki scripts. Awesome, right?
There are some other bugs I already know of
(https://phabricator.wikimedia.org/tag/articleplaceholder/), and probably
a lot more I have no idea about yet, so if you find any, feel free to file
a bug.
The documentation is not up to date yet either; I will take care of that in
the coming days. But here is the page anyway:
https://www.mediawiki.org/wiki/Extension:ArticlePlaceholder
So have fun discovering the article placeholder! I’m looking forward to
your feedback.
And happy birthday Wikidata! I just want to thank you all for the chance to
be part of all this awesomeness, all the help and support over the time,
and the great work you all do! <3
You rock!
Lucie (Frimelle)
--
Lucie-Aimée Kaffee
Working Student Software Development
Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30 219 158 26-0
http://wikimedia.de
Imagine a world in which every single human being can freely share in
the sum of all knowledge.
That’s our commitment.
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B.
Als gemeinnützig anerkannt durch das Finanzamt für Körperschaften I Berlin,
Steuernummer 27/681/51985.
Hi all,
I think it is time for the next step in Wikidata development: better
integration with Wikipedia and its sister projects.
Every day thousands of articles are created, and many of those are not
added to Wikidata, even though an item about the subject often already
exists. Users forget to add a newly created article to Wikidata, as there
is no prompt at all. The next step in Wikidata development is that after
the creation of an article, users get a message (pop-up, screen, etc.) in
which they are asked to add the article/category to Wikidata. In the first
stage this can be just a pop-up with a message. But it would be better if
it could be a message plus some help to do this, so that users can stay in
Wikipedia (or another project) without having to go to Wikidata.
A further step, which could be developed afterwards, is the suggestion of
missing properties, such as instance of, and, based on that entry, further
properties. This workflow will ensure better integration of Wikipedia and
its sister projects with Wikidata.
For this I created a Phabricator task at:
https://phabricator.wikimedia.org/T117070
Thanks!
Romaine
Hi All,
TLDR: on the mobile web, Wikidata descriptions will appear under article
titles in search results, Nearby, and the watchlist starting tomorrow
afternoon (Thu, Oct 29) across projects. A 'kill switch' has been
implemented so that this feature can be turned off if necessary.
Background:
In Q2 of last year, Wikidata descriptions were added to both our official
apps, iOS and Android, and resulted in wonderful qualitative feedback (as
the discovery team knows, it is hard to define 'success' with search: fewer
searches, more searches?). Though moving Wikidata descriptions to search
on mobile web was planned for Q3 of last year, it has been sitting in beta
for many months, as there was some concern that at scale on the web, it
might prove to be an incentive to vandalize Wikidata (and article editors
would not have an obvious, Wikipedia way to undo such edits).
Ultimately, we think that anything showing up on Wikipedia should be
editable ON Wikipedia. Wikidata description editing is something we are
going to aim for. Given the success of descriptions in search results on
apps, we would rather move forward with the presentation and work towards a
goal of in-line editing than hold up the entire thing based on a fear
that might not be warranted. In consultation with the Wikidata team, we
decided to move forward with pushing the feature to stable as long as we
had the ability to pull the feature back if there were any issues.
We are relying on community feedback to let us know if you have any issues
or hear from anyone that this is causing problems. Our community liaison
team will be posting notices on village pumps shortly. Thanks!
Best,
Jon
WMF Reading Product Lead
Hey folks :)
Here comes birthday present number one from the dev team. Jonas took
some time and reworked the UI of http://query.wikidata.org to make it
prettier and easier to understand. It also has more examples now. You
can expand them on
https://www.mediawiki.org/wiki/Wikibase/Indexing/SPARQL_Query_Examples.
The auto-completion has also been improved.
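For anyone who wants to go beyond the browser UI, the service can also be queried over HTTP. Here is a minimal sketch; the `/sparql` endpoint path and the example query are assumptions for illustration, not part of this announcement:

```python
from urllib.parse import urlencode

# Illustrative query: ten items that are instances of cat (Q146), with
# English labels. The query itself is hypothetical; wdt:/wd:/wikibase:
# are the standard prefixes the query service predefines.
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
"""

def build_query_url(query, endpoint="https://query.wikidata.org/sparql"):
    """Build the GET URL that runs `query` against the public endpoint,
    asking for results as JSON."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})
```

Fetching `build_query_url(QUERY)` with any HTTP client should return the same results you see in the web UI, as JSON.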
Check it out! I hope you like it.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Hi all,
I am happy to announce a new tool [1], written by Serge Stratan, which
allows you to browse the taxonomy (subclass of & instance of relations)
between Wikidata's most important class items. For example, here is the
Wikidata taxonomy for Pizza (discussed recently on this list):
http://sergestratan.bitbucket.org?draw=true&optid=s0&item=177,2095,7802,288…
== What you see there ==
Solid green lines mean "subclass of" relations (subclasses are lower),
while dashed purple lines are "instance of" relations (instances are
lower). Drag and zoom the view as usual. Hover over items for more
information. Click on arrows with numbers to display upper or lower
neighbours. Right-click on classes to get more options.
The sidebar on the left shows statistics and presumed problems in the
data (redundancies and likely errors). You can select a report type to
see the reports, and click on any line to show the error. If you search
for a class in the search field, the errors will be narrowed down to
issues related to the taxonomy of this class.
The toolbar at the top has options to show and hide items based on the
current selection (left click on any box).
Edges in red are the wrong way around (top to bottom). This occurs only
when there are cycles in the "taxonomy".
== Micro tutorial ==
(1) Enter "Unicorn" in the search box, press return.
(2) Zoom out a bit by scrolling your mouse/touchpad
(3) Click on the "Unicorn" item box. It becomes blue (selected).
(4) Click "Expand up" in the toolbar at the top
(5) Zoom out to see the taxonomy of unicorn
(6) Find the class "Fictional Horse" (directly above unicorn) and click
its downwards arrow labelled "3" to see all three children items of
"fictional horse".
(7) Click the share button on the top right to get a link to this view.
You can also create your own share link manually by just changing the
Qids in the URL as you like.
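Building such a share link can be scripted too. A small sketch, with the caveat that the `draw` and `optid` parameters are simply copied from the example link above and their exact semantics are an assumption from context:

```python
def share_url(numeric_ids, base="http://sergestratan.bitbucket.org"):
    """Build a share link for a set of classes, identified by the numeric
    part of their Qids (e.g. 177 for Q177). The draw/optid parameters
    mirror those seen in the example link."""
    items = ",".join(str(i) for i in numeric_ids)
    return "{}?draw=true&optid=s0&item={}".format(base, items)
```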
== Status and limitations ==
This is a prototype and it still has some limits:
* It only shows "proper" classes that have at least one instance or
subclass. This is to reduce the overall data size and load time.
* The data is based on dumps (the date is shown on the right). It is not
a live view.
* The layout is sometimes too dense. You can find a "hidden" option to
make it more spaced out behind the sidebar (click "Sidebar" to see it).
This helps to disentangle larger graphs.
* There are some minor bugs in the UI. You sometimes need to click more
than once until the right thing happens.
* The help page at http://sergestratan.bitbucket.org/howtouse.html does
not explain everything in detail yet.
It is planned to work on some of these limitations in the future.
The hope is that this tool will reveal many errors in Wikidata's
taxonomy that are otherwise hard to detect. For example, you can see
easily that every "Ship" is an "Event" in Wikidata, that every "Hobbit"
is a "Fantasy Race", and that every "Monday" is both a "Mathematical
object" and a "Unit of measurement".
Feedback on the tool is welcome (for feedback on the Wikidata taxonomy
itself, better start new threads ;-).
Markus
[1] http://sergestratan.bitbucket.org
--
Markus Kroetzsch
Faculty of Computer Science
Technische Universität Dresden
+49 351 463 38486
http://korrekt.org/
Hi all,
For this Flemish museums on Wikidata project <https://www.wikidata.org/wiki/Wikidata:Flemish_art_collections,_Wikidata_an…> (… we hope to import some 30,000 Flemish artworks in the upcoming months :-) …), the project team and I are trying to find out if and how we’ll be able to retrieve RDF from Wikidata: one RDF export/file for all concerned items at once.
So this is not RDF for a single item (like this <https://www.wikidata.org/wiki/Special:EntityData/Q21012032.rdf>), and also not an RDF dump of all of Wikidata as mentioned here <https://www.wikidata.org/wiki/Wikidata:Data_access#Access_to_dumps>. It would be an RDF file corresponding to the results of this WDQ query <http://tools.wmflabs.org/autolist/autolist1.html?q=CLAIM%5B195:1471477%5D%2…> (which should produce more than 30,000 items in a few months!).
Any tips on how to achieve this? Wikidata Toolkit? But how/what would we do? We are not programmers/developers, but we do have some budget to hire someone to build us something, so pointers to a (Belgian?) developer who could help would also be very welcome.
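One possible approach, sketched below and by no means a tested pipeline: take the list of Qids from the query results, fetch each item's per-entity RDF export via the Special:EntityData URL pattern shown above, and join the files. The `.nt` (N-Triples) variant is assumed here because N-Triples files can be safely concatenated into one valid file, which is not true of RDF/XML:

```python
from urllib.request import urlopen

# Per-item RDF export; the .nt (N-Triples) format is assumed here because
# N-Triples files can simply be concatenated into one valid file.
ENTITY_DATA = "https://www.wikidata.org/wiki/Special:EntityData/{}.nt"

def export_urls(qids):
    """Per-item export URLs for a list of Qids like ['Q21012032', ...]."""
    return [ENTITY_DATA.format(qid) for qid in qids]

def fetch_combined_rdf(qids):
    """Download every item's triples and join them (network required)."""
    return b"".join(urlopen(url).read() for url in export_urls(qids))
```

For 30,000 items this would mean 30,000 requests, so anyone implementing it for real would want throttling and retries; a developer could also do the same offline from the dumps with Wikidata Toolkit.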
The project raises quite a few questions, by the way, so I might come back with more :-)
Many thanks in advance! Sandra (User:Spinster)
Hey folks :)
As previously announced, we have now added Meta, MediaWiki and
Wikispecies to Wikidata. They got phase 1, meaning sitelinks can now be
added for them. Depending on how this goes, we'll enable phase 2: access
to the actual data. I'll let you know when that will happen, as usual.
Welcome, sisters!
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Extended Deadline: November 20, 2015
CFP: Semantic Web Journal - Special Issue on Quality Management of
Semantic Web Assets (Data, Services and Systems)
http://www.semantic-web-journal.net/blog/call-papers-special-issue-quality-…
Submission guidelines
Deadline: October 31, 2015, extended to November 20, 2015
Submissions shall be made through the Semantic Web journal website at
http://www.semantic-web-journal.net. Prospective authors must take
note of the submission guidelines posted at
http://www.semantic-web-journal.net/authors. Note that you need to
request an account on the website for submitting a paper. Please
indicate in the cover letter that it is for the Special Issue on Quality
Management of Semantic Web Assets (Data, Services and Systems).
Submissions are possible in the following categories: full research
papers, application reports, reports on tools and systems, and case
studies. While there is no upper limit, paper length must be justified
by content.
Guest editors
* Amrapali Zaveri, University of Leipzig, AKSW Group, Germany
* Dimitris Kontokostas, University of Leipzig, AKSW Group, Germany
* Sebastian Hellmann, University of Leipzig, AKSW Group, Germany
* Jürgen Umbrich, Vienna University of Economics and Business, Austria
*Overview and Topics*
The standardization and adoption of Semantic Web technologies has
resulted in a variety of assets, including an unprecedented volume of
semantically enriched data as well as systems and services that consume
or publish this data. Although gathering, processing and publishing data
is a step towards further adoption of the Semantic Web, quality does not
yet play a central role in these assets (e.g., data lifecycle,
system/service development).
Quality management essentially refers to the activities and tasks involved
in guaranteeing a certain level of consistency and in meeting the quality
requirements for the assets. In general, quality management consists of
the following four phases and components: (i) quality planning, (ii)
quality control, (iii) quality assurance and (iv) quality improvement.
The quality planning phase in the Semantic Web typically involves the
design of procedures, strategies and policies to support the management
of the assets. The quality control and assurance components aim
primarily at preventing errors and at meeting quality requirements
pertaining to the Semantic Web standards. A core part of both
components is quality assessment methods, which provide the necessary
input for the control and assurance tasks.
Quality assessment of Semantic Web Assets (data, services and systems),
in particular, presents new challenges that were not handled before in
other research areas. Thus, adopting existing approaches for data
quality assessment is not a straightforward solution. These challenges
are related to the openness of the Semantic Web, the diversity of the
information and the unbounded, dynamic set of autonomous data sources,
publishers and consumers (legal and software agents). Additionally,
detecting the quality of available data sources and making the
information explicit is yet another challenge. Moreover, noise in one
data set, or missing links between different data sets, propagates
throughout the Web of Data, and imposes great challenges on the data
value chain.
In the case of systems and services, different implementations follow the
specifications for RDF and SPARQL to varying extents, or even propose
and offer new, non-standardized extensions. This causes strong
incompatibilities between systems, e.g., between the SPARQL features
used in query engines and the features supported by RDF stores. This
potential heterogeneity and incompatibility poses several challenges for
quality assessment in and for such systems and services.
Eventually, quality improvement methods are used to further enhance the
value of the Semantic Web Assets. One important step to improve the
quality of data is identifying the root cause of the problem and then
designing corresponding data improvement solutions. These solutions
select the most effective and efficient strategies and related set of
techniques and tools to improve quality. Quality improvement metrics for
products and services entail understanding and improving operational
processes and establishing valid and reliable service performance measures.
This Special Issue is addressed to those members of the community
interested in providing novel methodologies or frameworks for managing,
assessing, monitoring, maintaining and improving the quality of
Semantic Web data, services and systems, and in introducing tools and
user interfaces that can effectively assist in this management.
Topics of Interest
We welcome original, high-quality submissions on (but not restricted
to) the following topics:
* Methodologies and frameworks to plan, control, assure or improve the
quality of Semantic Web Assets
* Quality exploration and analysis interfaces
* Quality monitoring
* Developing, deploying and managing quality service ecosystems
* Assessing the quality evolution of Semantic Web Assets
* Large-scale quality assessment of structured datasets
* Crowdsourcing data quality assessment
* Quality assessment leveraging background knowledge
* Use-case driven quality management
* Evaluation of trustworthiness of data
* Web Data and LOD quality benchmarks
* Data Quality improvement methods and frameworks, e.g., linkage,
alignment, cleaning, enrichment, correctness
* Service/system quality improvement methods and frameworks
* Managing sustainability issues in services
* Guarantee of service (availability, performance)
* Systems for transparent management of open data