Hey folks,
we plan to drop the wb_entity_per_page table sometime soon[0], because
it is just not required (we will likely always have a programmatic
mapping from entity ID to page title) and, as it stands, it does not
support non-numeric entity IDs. Because of this, it is a blocker for
the Commons metadata work.
Is anybody using that table for their tools (on Tool Labs)? If so, please
tell us so that we can give you instructions and a longer grace period
to update your scripts.
Cheers,
Marius
[0]: https://phabricator.wikimedia.org/T95685
Hi folks!
My name is Glorian Yapinus, but you can simply call me Glorian ;) . For the
next 6 months, I will assist Lydia in supporting you all.
As for my educational background, I hold a bachelor's degree in
Information Technology, and I am currently working on my Master's in
Software Engineering and Management.
I am a warm and friendly person, so please do not hesitate to reach out
to me with any questions :-)
Last but not least, I am looking forward to working with you.
Cheers,
Glorian
--
Glorian Yapinus
Product Management Intern for Wikidata
Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment.
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under the number 23855 B. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/029/42207.
Hi all,
I think this topic might have been discussed many months ago. For
certain data types in the chemical compound space (P233 canonical
SMILES, P2017 isomeric SMILES and P234 InChI), a character limit
higher than 400 would be really helpful (1500 to 2000 characters,
although I sense that this might cause problems with SPARQL). Are
there any plans to implement this? In general, for quality assurance,
many string property types would benefit from a fixed maximum string
length.
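For context, a query like the following against the Wikidata Query Service
should show how close existing canonical SMILES values already come to the
current 400-character limit. This is just a sketch, assuming the standard
wdt: prefix and the public SPARQL endpoint:

import requests

QUERY = """
SELECT ?compound ?smiles (STRLEN(?smiles) AS ?len) WHERE {
  ?compound wdt:P233 ?smiles .
}
ORDER BY DESC(?len)
LIMIT 10
"""

r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"})
for row in r.json()["results"]["bindings"]:
    print(row["compound"]["value"], row["len"]["value"])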
Best,
Sebastian
Sebastian Burgstaller-Muehlbacher, PhD
Research Associate
Andrew Su Lab
MEM-216, Department of Molecular and Experimental Medicine
The Scripps Research Institute
10550 North Torrey Pines Road
La Jolla, CA 92037
@sebotic
Can someone look into how or why all of those 'creator' statements were
incorrectly added with the Primary Sources Tool to the item 'painting'?
https://www.wikidata.org/wiki/Q11629
WARNING: The page will actually freeze for 1-2 minutes while the 'creator'
statements try to load in the browser!
-Thad
Hello all,
I'm very glad to share some information with you about an event we will
organize in 2017: the WikidataCon
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017> :)
This event will take place to celebrate Wikidata's 5th birthday, but most
of all, to celebrate you, the editors who build Wikidata every day.
We will try to make this event as open and participatory as possible,
especially regarding the content: you will be able to choose what you want
to hear, do, and take part in, and you will be the first providers of the
content.
For any question, feedback, or suggestion, please use the talk page
<https://www.wikidata.org/wiki/Wikidata_talk:WikidataCon_2017>.
I'm looking forward to organizing this event with you!
Cheers,
--
Léa Lacroix
Community Communication Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under the number 23855 Nz. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/029/42207.
Hi,
I’m a PhD student/researcher at the University of Minnesota who (along with Max Klein and another grad student/researcher) has been interested in understanding the extent to which Wikidata is used in (English, for now) Wikipedia.
There seems to be no easy way to determine Wikidata usage in Wikipedia pages, so I'll describe two approaches we've considered as our best attempts at solving this problem. I'll also describe the shortcomings of each approach.
The first approach involves analyzing Wikipedia templates to look for explicit references (i.e. “#property:P<some number>”) across all templates. For a given template containing a certain property reference, we then assume that the statement corresponding to the Wikidata property is used in all Wikipedia pages that transclude that template. However, there are two clear limitations to this approach:
1. If we assume that the statement corresponding to the Wikidata property is used in all Wikipedia pages that transclude that template, this results in a sort of upper bound on the number of actual property usages in Wikipedia. However, we have no sense of what the actual usage looks like, since each template has its own logic, and whether or not a given property gets rendered in Wikipedia depends on that (sometimes quite complicated) logic. A possible way to get a sense of usage would be to sample a small set of random pages (that use templates using Wikidata) and manually check whether the Wikidata statement for the given Wikidata item <https://www.wikidata.org/wiki/Help:Items> is exactly the same as that rendered in the corresponding Wikipedia page. If it is, then we might assume the property is being used. Of course, this is not a perfect approach, since it's possible that a Wikidata statement is used in Wikipedia but is formatted differently in Wikidata versus in Wikipedia (e.g. a date is rendered using a different format).
2. This approach does not account for Lua modules, which can be referenced from within templates. The modules can (and sometimes do) contain code that supplies Wikidata to the Wikipedia pages that transclude the templates containing the module references. Without understanding and accounting for the logic in all Lua modules that use Wikidata, it does not seem possible to actually know which Wikidata properties are being introduced to Wikipedia pages through this method.
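If it helps, here is a rough sketch of the template scan in the first
approach. The regex is an assumption about what counts as an explicit
reference, and it deliberately ignores Lua modules, per limitation 2:

import re

# matches {{#property:P18}} as well as {{ #property : P18 | from=... }}
PROPERTY_RE = re.compile(r"\{\{\s*#property\s*:\s*(P\d+)", re.IGNORECASE)

def properties_referenced(template_wikitext):
    """Return the set of property IDs explicitly invoked in a template's wikitext."""
    return {p.upper() for p in PROPERTY_RE.findall(template_wikitext)}

# e.g. properties_referenced("{{#property:P18|from={{{qid|}}}}}") -> {"P18"}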
The second approach involves expanding (using the MediaWiki API, see https://www.mediawiki.org/wiki/API:Expandtemplates) already transcluded templates into HTML tables in two ways: 1) in the context of the appropriate Wikipedia page and 2) out of context of the appropriate Wikipedia page (e.g. in my own sandbox). It's my understanding that if the Wikipedia page uses Wikidata, then that Wikidata should show up in the expansion when the template is expanded in the context of its page, and not when it is expanded elsewhere (e.g. in my sandbox). We would then check for a difference between the two expansions by HTML diff-ing. The difference between the two expanded templates would presumably be due to Wikidata. Of course, there are limitations to this approach as well:
1. It's possible that a Wikipedia contributor manually entered data (into a transcluded template) that exactly matches data in Wikidata, and thus the expansions would be the same across the diff-ing, in which case Wikidata would not be recognizable.
2. Once we identify (through diff-ing) where Wikidata is being used in expanded templates, it's not obvious which specific Wikidata properties/statements were used. In other words, "linking" Wikidata to the corresponding HTML (table) rows in an expanded template seems challenging.
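A rough illustration of the second approach, using action=expandtemplates
with and without a page title as context. Parameter names follow the
documented API; the diffing here is just a plain string comparison, and the
template/page names are placeholders:

import requests

API = "https://en.wikipedia.org/w/api.php"

def expand(wikitext, title=None):
    params = {"action": "expandtemplates", "text": wikitext,
              "prop": "wikitext", "format": "json"}
    if title:
        params["title"] = title  # provides the page context for {{#property:...}} and Lua
    r = requests.get(API, params=params)
    return r.json()["expandtemplates"]["wikitext"]

in_context = expand("{{Infobox person}}", title="Douglas Adams")
out_of_context = expand("{{Infobox person}}")
print(in_context != out_of_context)  # a difference suggests context-dependent (possibly Wikidata) content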
Any insight about how we can approach this problem would be greatly appreciated!
Thanks,
Andrew Hall
Hi everyone. I have a question about the Wikidata XML dump, but I'm
posting it here because it seems more related to Wikidata.
In short, it seems that "pages-articles.xml" does not include the
datatype property for snaks. For example, the XML dump does not list a
datatype for Q38 (Italy) and P41 (flag image). In contrast, the JSON
dump does list a datatype of "commonsMedia".
Can this datatype property be included in future XML dumps? The
alternative would be to download two large and redundant dumps (XML
and JSON) in order to reconstruct a Wikidata instance.
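One possible workaround, just a sketch rather than an official
recommendation: since the datatype is determined by the property, the
datatypes can be fetched once via wbgetentities (props=datatype) and merged
into the snaks read from the XML dump:

import requests

API = "https://www.wikidata.org/w/api.php"

def property_datatypes(property_ids):
    """Return {property_id: datatype} for up to 50 property IDs per request."""
    r = requests.get(API, params={
        "action": "wbgetentities",
        "ids": "|".join(property_ids),
        "props": "datatype",
        "format": "json",
    })
    return {pid: entity["datatype"] for pid, entity in r.json()["entities"].items()}

# e.g. property_datatypes(["P41"]) -> {"P41": "commonsMedia"}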
More information is provided below the break. Let me know if you need
anything else.
Thanks.
----
Here's an excerpt from the XML data dump for Q38 (Italy) and P41 (flag
image). Notice that there is no "datatype" property:
// https://dumps.wikimedia.org/wikidatawiki/20161120/wikidatawiki-20161120-pag…
"mainsnak": {
"snaktype": "value",
"property": "P41",
"hash": "a3bd1e026c51f5e0bdf30b2323a7a1fb913c9863",
"datavalue": {
"value": "Flag of Italy.svg",
"type": "string"
}
},
Meanwhile, the API and the JSON dump list a datatype property of
"commonsMedia":
// https://www.wikidata.org/w/api.php?action=wbgetentities&ids=q38
// https://dumps.wikimedia.org/wikidatawiki/entities/20161114/wikidata-2016111…
"P41": [{
"mainsnak": {
"snaktype": "value",
"property": "P41",
"datavalue": {
"value": "Flag of Italy.svg",
"type": "string"
},
"datatype": "commonsMedia"
},
As far as I can tell, the Turtle (ttl) dump does not list a datatype
property either, but this may be because I don't understand its
format.
wd:Q38 p:P41 wds:q38-574446A6-FD05-47AE-86E3-AA745993B65D .
wds:q38-574446A6-FD05-47AE-86E3-AA745993B65D a wikibase:Statement,
        wikibase:BestRank ;
    wikibase:rank wikibase:NormalRank ;
    ps:P41 <http://commons.wikimedia.org/wiki/Special:FilePath/Flag%20of%20Italy.svg> ;
    pq:P580 "1946-06-19T00:00:00Z"^^xsd:dateTime ;
    pqv:P580 wdv:204e90b1bce9f96d6d4ff632a8da0ecc .
I would like to connect this article:
https://it.wikipedia.org/wiki/Lista_nera_(economia)
with this item:
https://www.wikidata.org/wiki/Q607466
but I receive an error from the Italian interwiki, and when I try to update
it directly in Wikidata, I get a message saying that I don't have the rights
to do it.
This seems strange to me, because I have usually been able to do this for
other items.
The message itself is not very clear.
Is there something I can do to update this interwiki?
--
Ilario Valdelli
Wikimedia CH
Verein zur Förderung Freien Wissens
Association pour l’avancement des connaissances libre
Associazione per il sostegno alla conoscenza libera
Switzerland - 8008 Zürich
Tel: +41764821371
http://www.wikimedia.ch