Hi,
I have a couple of questions regarding the wiki page ID. Does it always
stay unique to the page, where the page itself is just a placeholder for
any kind of information that might change over time?
Consider the following cases:
1. The first time someone creates page "Moon" it is assigned ID=1. If at
some point the page is renamed to "The_Moon", the ID=1 remains intact. Is
this correct?
2. What if we have page "Moon" with ID=1, and someone creates a second page
"The_Moon" with ID=2. Is it possible for page "Moon" to be turned into a
redirect, so that "Moon" then redirects to page "The_Moon"?
3. Is it possible for page "Moon" to become a category "Category:Moon" with
the same ID=1?
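For reference, the page ID for a given title can be checked directly through the MediaWiki API. Here is a minimal Python sketch, assuming the `requests` library and the English Wikipedia endpoint (both are my own choices for illustration; the same call works on any MediaWiki wiki, including Wikidata):

    import requests

    # Minimal sketch: look up the page ID for a given title via the MediaWiki API.
    # The endpoint and the title are illustrative choices only.
    API = "https://en.wikipedia.org/w/api.php"

    def page_id(title):
        resp = requests.get(API, params={
            "action": "query",
            "prop": "info",
            "titles": title,
            "format": "json",
        }).json()
        # The "pages" object is keyed by page ID; a missing title is keyed "-1".
        return next(iter(resp["query"]["pages"]))

    print(page_id("Moon"))      # page ID of the page currently titled "Moon"
    print(page_id("The_Moon"))  # compare before/after a rename or redirect

Running it before and after a rename shows whether the ID is preserved for a given title.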
Thanks,
Gintas
Hello everyone,
I'd like to ask if Wikidata could please offer an HDT [1] dump alongside the already available Turtle dump [2]. HDT is a binary format for storing RDF data. It is quite useful because it can be queried from the command line, it can be used as a Jena/Fuseki source, and it needs orders of magnitude less space to store the same data.
The problem is that generating an HDT file is very impractical: the current implementation needs a lot of RAM to convert a file, and for Wikidata it will probably require a machine with 100-200 GB of RAM. That is unfeasible for me because I don't have such a machine, but if you have one to share, I can help set up the rdf2hdt software required to convert the Wikidata Turtle dump to HDT.
Thank you.
[1] http://www.rdfhdt.org/
[2] https://dumps.wikimedia.org/wikidatawiki/entities/
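P.S. For anyone unfamiliar with HDT: once the file exists, it can be queried with very little memory. A minimal sketch, assuming the Python `hdt` bindings (pyHDT) and a hypothetical local `wikidata.hdt` file (both assumptions of mine, not something already available):

    from hdt import HDTDocument  # assumption: the pyHDT bindings are installed

    # Open a (hypothetical) local HDT dump; the file is memory-mapped, so
    # querying it needs far less RAM than the Turtle-to-HDT conversion itself.
    doc = HDTDocument("wikidata.hdt")

    # Triple-pattern lookup: all triples with Q2 (Earth) as subject.
    triples, estimated_count = doc.search_triples(
        "http://www.wikidata.org/entity/Q2", "", "")

    print("approximate matches:", estimated_count)
    for s, p, o in triples:
        print(s, p, o)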
Hi all,
Together with a couple of students, I am working on various topics relating
to the dynamics of RDF and Wikidata. The public dumps in RDF cover the past
couple of months:
https://dumps.wikimedia.org/wikidatawiki/entities/
I'm wondering if there is a way to get access to older dumps, or perhaps
to generate them from available data? We've been collecting dumps, but it
seems we have a gap for the dump of 2017/07/04 right in the middle of our
collection. :) (If anyone has a copy of the truthy data for that
particular month, I would be very grateful if they could reach out.)
In general, I think it would be fantastic to have a way to access all
historical dumps. In particular, such datasets might be used in papers, and
for reproducibility purposes it would lift a burden from the authors to be
able to link to (rather than having to host) the data used. I am not sure
whether such an archive is feasible, though.
Thanks,
Aidan
If P31, instance of, is the first statement on any page, and a church or a
monastery in Russia is an instance of:
church = Q16970
Metochion = Q1398776
monastery = Q44613
tourist attraction = Q570116
and editors revert and say that:
Russian Orthodox Church = Q60995
Eastern Orthodox Church = Q35032
do not belong there, even though they are already used on hundreds of pages, then where do those values go, and how can we get consensus?
And can a bot then make the changes?
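On the last question: once there is consensus on which values belong where, edits like this can indeed be scripted, for example with Pywikibot. A minimal sketch that adds one "instance of" (P31) claim to one item; the item ID used below is a placeholder of mine, not a recommendation of what the consensus should be:

    import pywikibot

    # Minimal Pywikibot sketch: add one "instance of" (P31) claim to one item.
    # The Q-ID of the item is a placeholder; run only after consensus is reached.
    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()

    item = pywikibot.ItemPage(repo, "Q4242424")  # placeholder item ID
    item.get()

    claim = pywikibot.Claim(repo, "P31")                  # instance of
    claim.setTarget(pywikibot.ItemPage(repo, "Q44613"))   # monastery
    item.addClaim(claim, summary="Add P31 per community consensus")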
Hey all,
A Master's student of mine (José Moreno, in CC) has been working on a
faceted navigation system for (large-scale) RDF datasets called "GraFa".
The system is available here, loaded with a recent version of Wikidata:
http://grafa.dcc.uchile.cl/
Hopefully it is more or less self-explanatory for the moment. :)
If you have a moment to spare, we would hugely appreciate it if you
could interact with the system for a few minutes and then answer a quick
questionnaire that should only take a couple more minutes:
https://goo.gl/forms/h07qzn0aNGsRB6ny1
Just for the moment, while the questionnaire is open, we would kindly
request that you send feedback to us personally (off-list) so as not to
affect others' responses. We will leave the questionnaire open for a week,
until January 16th, 17:00 GMT. After that time we would of course be happy to
discuss anything you might be interested in on the list. :)
After completing the questionnaire, please also feel free to visit the
issue tracker or list anything you noticed there:
https://github.com/joseignm/GraFa/issues
Many thanks,
Aidan and José
Hi,
*** Before I begin: I've never been a major OpenStreetMap contributor, so
forgive me if I misunderstand something basic about it. ***
Lately, some work has been done on improving the integration of
OpenStreetMap (OSM) and Wikimedia projects, in the Kartographer extension.
In particular, I'm curious about this task:
https://phabricator.wikimedia.org/T112948
It's about showing place names in the wiki language. It may get resolved
soon (yay!!)
But it raises an important question: what happens if the place name has not
been translated into the wiki language? As a not-so-extreme example, what
happens if a place name is only available in the OSM database in Chinese?
Unfortunately, it will not be very useful to readers of the English
Wikipedia.
The desirable solution is to give Wikipedia editors who know the relevant
languages an easy way to translate the labels.
Reading
https://wiki.openstreetmap.org/wiki/Translation#OpenStreetMap_website_inter…
, I see that there is no *easy* way to do it on the OpenStreetMap side. To
add a translation of a place name, you need to:
* find it on the map
* edit it
* type "name:LANGUAGE_CODE" in the properties list (for example "name:ru"
for Russian)
* write the name
* save
* wait for it to get published (I'm not sure how long it takes; maybe
it's instant, but I made a test edit, and I still don't see it.)
This is not super-efficient for several reasons:
* Finding each place on the map may be time-consuming in practice.
* Sending each change manually is also time-consuming.
* Typing the property name manually is slowish and error-prone.
A lot of this data is already available on Wikidata. In fact, OpenStreetMap
already has a Wikidata item property for each place. (It can also have a
property for a Wikipedia link, one for each language. That looks redundant
to me: a link to the Wikidata item page would be enough.)
Did anybody ever suggest importing the place names available on Wikidata
into OSM, or synchronizing them regularly?
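To illustrate how much of this already exists on the Wikidata side, here is a minimal Python sketch that fetches every label of one item (Q90, Paris, chosen only as an example) and prints them in the "name:LANGUAGE_CODE" form that OSM uses:

    import requests

    # Fetch every label of one Wikidata item (Q90, Paris, chosen as an example)
    # and print them in the "name:xx" form that OSM uses for translated names.
    resp = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbgetentities",
        "ids": "Q90",
        "props": "labels",
        "format": "json",
    }).json()

    for lang, label in resp["entities"]["Q90"]["labels"].items():
        print("name:%s = %s" % (lang, label["value"]))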
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
If P31, instance of, is the first statement on any page, and a church or
monastery in Russia is an instance of:
church = Q16970
Metochion = Q1398776
monastery = Q44613
tourist attraction = Q570116
and other editors swoop down, revert, and say that:
Russian Orthodox Church = Q60995
Eastern Orthodox Church = Q35032
do not belong there, even though they are already used on hundreds of pages, then where do those values go, and can we get consensus?
And can a bot make the changes?
Hello,
If you’re regularly using Lua modules, creating or improving some of them,
we need your feedback!
The Wikidata development team would like to provide more Lua functions, in
order to improve the experience of people who write Lua scripts to reuse
Wikidata's data on the Wikimedia projects. Our goals are to help
harmonize the existing modules across the Wikimedia projects, to make
coding in Lua easier for the communities, and to improve the performance of
the modules.
We would like to know more about your habits, your needs, and what could
help you. We have a few questions for you on this page
<https://www.wikidata.org/wiki/Wikidata:New_convenience_functions_for_Lua>.
Note that if you don't feel comfortable writing in English, you can
answer in your preferred language.
Feel free to ping any editors who could be interested, or to share this
message with them!
Thanks a lot for your help,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as a charitable
organization by the Finanzamt für Körperschaften I Berlin, tax number
27/029/42207.
There is confusion across all the hundreds of pages in regard to where
Q60995 should go: on P31, P361, P140, or P279, or on all of them.
And likewise where Q35032 should go: on P31, P361, P140, or P279, or on all
of them.
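One way to inform that discussion is to check how Q60995 and Q35032 are actually used today with each of those properties. A minimal Python sketch against the Wikidata Query Service (the use of `requests` is my own choice; the wd:/wdt: prefixes are predefined on the endpoint):

    import requests

    # Count, per property, how many items currently point to Q60995 or Q35032
    # via P31, P361, P140 or P279, using the Wikidata Query Service.
    QUERY = """
    SELECT ?target ?prop (COUNT(?item) AS ?uses) WHERE {
      VALUES ?target { wd:Q60995 wd:Q35032 }
      VALUES ?prop { wdt:P31 wdt:P361 wdt:P140 wdt:P279 }
      ?item ?prop ?target .
    }
    GROUP BY ?target ?prop
    ORDER BY ?target DESC(?uses)
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "usage-count-sketch/0.1 (example script)"},
    ).json()

    for row in resp["results"]["bindings"]:
        print(row["target"]["value"], row["prop"]["value"], row["uses"]["value"])

The counts per property would show where the community has, in practice, been placing these values, which is a useful starting point for a consensus discussion.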