I have a couple of questions regarding the wiki page ID. Does it always
stay unique to the page, where the page itself is just a placeholder for
any kind of information that might change over time?
Consider the following cases:
1. The first time someone creates page "Moon", it is assigned ID=1. If at
some point the page is renamed to "The_Moon", does ID=1 remain intact?
2. What if we have page "Moon" with ID=1, and someone creates a second page
"The_Moon" with ID=2? Is it possible that page "Moon" is then transformed
into a redirect, so that "Moon" redirects to page "The_Moon"?
3. Is it possible for page "Moon" to become a category "Category:Moon" with
the same ID=1?
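In case it helps frame the discussion, here is a minimal sketch of how one might check these cases against a live wiki via the standard MediaWiki query API; the endpoint and the "Moon"/"The_Moon" titles are just placeholders for whatever wiki and pages are of interest:

```python
# Minimal sketch: look up page IDs and redirect resolution via the MediaWiki
# "action=query" API. The endpoint is an example; any MediaWiki/Wikibase wiki
# exposing api.php should answer the same way.
import requests

API = "https://en.wikipedia.org/w/api.php"

def page_ids(title, follow_redirects=False):
    """Return a mapping of page title -> page ID for the given title."""
    params = {"action": "query", "titles": title, "prop": "info", "format": "json"}
    if follow_redirects:
        params["redirects"] = 1  # resolve redirects and report the target page
    pages = requests.get(API, params=params).json()["query"]["pages"]
    # 'pages' is keyed by page ID ("-1" means the page does not exist)
    return {page["title"]: page_id for page_id, page in pages.items()}

# Does a redirect page keep its own ID, distinct from the target's ID?
print(page_ids("The_Moon"))                         # ID of the page at that title
print(page_ids("The_Moon", follow_redirects=True))  # ID of the redirect target
```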
I'd like to ask if Wikidata could please offer an HDT dump along with the already available Turtle dump. HDT is a binary format for storing RDF data, which is quite useful because it can be queried from the command line, it can be used as a Jena/Fuseki source, and it takes orders of magnitude less space to store the same data. The problem is that generating an HDT file is very impractical: the current implementation needs a lot of RAM to convert a file, and for Wikidata it will probably require a machine with 100-200 GB of RAM. This is infeasible for me because I don't have such a machine, but if you have one to share, I can help set up the rdf2hdt software required to convert the Wikidata Turtle dump to HDT.
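For what it's worth, here is a small sketch of what querying such a dump could look like programmatically, assuming the pyHDT bindings (the `hdt` package on PyPI) and a hypothetical local file name; the exact API may differ between HDT implementations:

```python
# Sketch: query an HDT file directly, without a triple store or loading the
# whole dump into memory. Assumes the pyHDT bindings ("pip install hdt") and
# a hypothetical local file "wikidata.hdt".
from hdt import HDTDocument

doc = HDTDocument("wikidata.hdt")

# Empty strings act as wildcards; this asks for all triples whose subject is
# the entity IRI for Q42.
triples, cardinality = doc.search_triples("http://www.wikidata.org/entity/Q42", "", "")
print("matching triples:", cardinality)
for subject, predicate, obj in triples:
    print(predicate, obj)
```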
A Master's student of mine (José Moreno, in CC) has been working on a
faceted navigation system for (large-scale) RDF datasets called "GraFa".
The system is available here, loaded with a recent version of Wikidata:
Hopefully it is more or less self-explanatory for the moment. :)
If you have a moment to spare, we would hugely appreciate it if you
could interact with the system for a few minutes and then answer a quick
questionnaire that should only take a couple more minutes:
Just for the moment, while the questionnaire is open, we would kindly ask
you to send feedback to us personally (off-list) so as not to affect
others' responses. We will leave the questionnaire open for a week, until
January 16th, 17:00 GMT. After that time we would of course be happy to
discuss anything you might be interested in on the list. :)
After completing the questionnaire, please also feel free to visit the
Issue Tracker, or to list anything you noticed there:
Aidan and José
Last year, we applied for a Wikimedia grant to feed qualified data from Wikipedia infoboxes (i.e. missing statements with references) via the DBpedia software into Wikidata. The evaluation was already quite good, but some parts were still missing, and we would like to ask for your help and feedback for the next round. The new application is here: https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/GlobalFactSync
The main purpose of the grant is:
- Wikipedia infoboxes are quite rich, are manually curated, and have references. DBpedia is already extracting that data quite well (i.e. there is no other software that does it better). However, extracting references has not been a priority on our agenda: they would be very useful to Wikidata, but there are no user requests for them from DBpedia users.
- DBpedia also has the information from all infoboxes of all Wikipedia editions (>10k pages), so we also know quite well where Wikidata is already used, and where information is available in Wikidata or in one language version but missing in another.
- Side goal: bring the Wikidata, Wikipedia, and DBpedia communities closer together.
Here is a diff between the old and the new proposal:
- Extraction of infobox references will still be a goal of the reworked proposal.
- We have been working on the fusion and data comparison engine (the part of the budget that came from us) for a while now, and there are first results:
We took only three properties for now and showed the gain where no Wikidata statement was available; birthDate/deathDate is already quite good. Details here: https://drive.google.com/file/d/1j5GojhzFJxLYTXerLJYz3Ih-K6UtpnG_/view?usp=…
Our plan here is to map all Wikidata properties to the DBpedia Ontology and then have the information needed to compare the coverage of Wikidata with all infoboxes across languages (a small sketch of the per-item check is included after this list).
- We will remove the text extraction part from the old proposal (which is here for your reference: https://meta.wikimedia.org/wiki/Grants:Project/DBpedia/CrossWikiFact). This will still be a focus of our work in 2018, together with Diffbot and the new DBpedia NLP department, but we think it distracted from the core of the proposal. Results from the Wikipedia article text extraction can be added later, once they are available, and discussed separately.
- We proposed to build an extra website that helps to synchronize all Wikipedias and Wikidata, with DBpedia as its backend. While an external website is not an ideal solution, we are lacking alternatives: the Primary Sources Tool is mainly for importing data into Wikidata, not so much for synchronization, and the MediaWiki instances of the Wikipedias do not seem to have any good interfaces to provide suggestions and pinpoint missing info. Especially for this part, we would like to ask for your help and suggestions, either by mail to the list or on the talk page: https://meta.wikimedia.org/wiki/Grants_talk:Project/DBpedia/GlobalFactSync
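To make the coverage/gain comparison above a bit more concrete, the per-item test it relies on is essentially "does this Wikidata item already carry this statement?". Here is a minimal sketch of that check against the Wikidata API (an illustration only, not the actual fusion engine; item Q42 and property P569 are example values):

```python
# Sketch: check whether a Wikidata item already has a claim for a property,
# i.e. the basic test behind "gain where no Wikidata statement was available".
# Illustration only; not the DBpedia fusion/comparison engine itself.
import requests

API = "https://www.wikidata.org/w/api.php"

def has_statement(item_id, property_id):
    """Return True if the item has at least one claim for the property."""
    params = {
        "action": "wbgetclaims",
        "entity": item_id,
        "property": property_id,
        "format": "json",
    }
    claims = requests.get(API, params=params).json().get("claims", {})
    return bool(claims.get(property_id))

# Example: does Q42 already have a date of birth (P569)?
print(has_statement("Q42", "P569"))
```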
We are looking forward to a fruitful collaboration with you and we thank you for your feedback!
All the best
Institut für Informatik
Abt. Betriebliche Informationssysteme, AKSW/KILT
04109 Leipzig DE
tel: +49 177 3277537
I'm interested in installations of Wikibase (the software behind
Wikidata) that are used outside Wikimedia projects. While we have
already collected some, we are interested in getting to know more of them.
Do you run an installation? Do you know of installations of Wikibase "in
the wild"?
Comments and links are much appreciated!
Software Communication Strategist
Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Tel. (030) 219 158 26-0
Is there a Q item in Wikidata for every symbol across all 8,475 languages
(per Glottolog - http://glottolog.org/glottolog/language)?
For example, here's the letter "A" in Wikidata:
Could such items be used in planning for Wikimedia developing into all of
the 7,000 languages by 2030, which Katherine Maher mentioned in August 2017
at Wikimania? Or for Wiktionary?
How many symbols are there in total across all 8,475 languages (per
Glottolog - http://glottolog.org/glottolog/language)?
https://wiki.worlduniversityandschool.org/wiki/Languages says "Unicode 7 has
113,021 assigned code points (excluding reserved and unassigned ones)".
Re the Gutenberg press, how to combine these anew?
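As a rough way to sanity-check figures like the one quoted above, one can count designated code points in the Unicode database that ships with a programming language; the exact total depends on the Unicode version and on what is counted (the sketch below excludes unassigned, surrogate, and private-use code points, so it will not reproduce the Unicode 7 number exactly):

```python
# Sketch: count designated Unicode code points, excluding unassigned (Cn),
# surrogate (Cs), and private-use (Co) categories. The result depends on the
# Unicode version bundled with the Python build in use.
import sys
import unicodedata

count = sum(
    1
    for codepoint in range(sys.maxunicode + 1)
    if unicodedata.category(chr(codepoint)) not in ("Cn", "Cs", "Co")
)
print("Unicode", unicodedata.unidata_version, "->", count, "code points")
```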
- Scott MacLeod - Founder & President
- 415 480 4577
- World University and School
- CC World University and School - like CC Wikipedia with best STEM-centric
CC OpenCourseWare - incorporated as a nonprofit university and school in
California, and is a U.S. 501 (c) (3) tax-exempt educational organization.
The Wikidata team will organize an IRC office hour on January 30th, at 17:00
UTC (18:00 Berlin time). It will take place in the Wikimedia office IRC
channel <http://webchat.freenode.net/?channels=#wikimedia-office>. As usual,
we will present some news from the development team and upcoming projects,
and collect your feedback.
This year, we would like to try something different, and have a topic to
focus on during the office hour. If you have any topic you'd like to bring
for the first meeting, please share it here!
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24