Regarding a current debate in Germany about the publication of the
timetable data of Deutsche Bahn (the German national railway company) and
their willingness to discuss it with other open-data supporters, it may
be a good idea to provide expiration dates for Wikidata records.
In their open letter to Mr. Kreil they announced that providing the
timetable data in an open way may cause problems if, for example, anybody
uses outdated data.
I am considering the task of converting the templates in the gene
articles on Wikipedia (http://en.wikipedia.org/wiki/Portal:Gene_Wiki) to
use/create Wikidata assertions. This involves an extensive update of the
template structure, as well as of the code for the bot that keeps them in
sync with external public databases (https://bitbucket.org/sulab/pygenewiki).
More specifically, I'm thinking about working with a Google Summer of Code
student on this project.
Given a time frame of now through August, would it make sense for us to
pursue this objective directly in the context of Wikidata (through the
public API)? Or would it be better for us to install our own instance of
the Wikibase software (kept in sync with code updates) and develop the new
Gene Wiki bot code locally, with the aim of switching to the public API
later? Or is it too early to consider this project?
I want to get involved and support Wikidata with this important data, but
I'm hesitant to ramp up development (especially with a student) against a moving target.
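For what it's worth, reading items through the public API is already a small amount of code. Below is a minimal sketch in Python (standard library only) of building a wbgetentities request and pulling a label out of its JSON response; the Q-id, the chosen props, and the canned response are illustrative assumptions, not Gene Wiki specifics:

```python
from urllib.parse import urlencode

API = "https://www.wikidata.org/w/api.php"  # public Wikidata endpoint

def wbgetentities_url(ids, props=("labels", "claims"), languages=("en",)):
    """Build a wbgetentities request URL for one or more item ids."""
    params = {
        "action": "wbgetentities",
        "ids": "|".join(ids),
        "props": "|".join(props),
        "languages": "|".join(languages),
        "format": "json",
    }
    return API + "?" + urlencode(params)

def entity_label(response_json, qid, lang="en"):
    """Pull a label out of a wbgetentities JSON response."""
    return response_json["entities"][qid]["labels"][lang]["value"]

# Example against a canned response, so no network access is needed:
canned = {"entities": {"Q42": {"labels": {"en": {"value": "Douglas Adams"}}}}}
print(wbgetentities_url(["Q42"]))
print(entity_label(canned, "Q42"))
```

The same helpers would work against a local Wikibase install by pointing `API` at it, which is what makes the "develop locally, switch later" option relatively cheap.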
Argh... Today the update of the demo servers failed on test-repo due to a
puppet problem. The client thus doesn't work either. Sorry for that -
I'm trying to figure out why this happened, to get the demo servers back,
and I'll let you know. It might be that this will only happen tomorrow (as in European time).
System Administrator and Project Assistant, Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. (030) 219 158 260
Wikimedia Deutschland - Society for the Promotion of Free Knowledge e.V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as charitable by
the Tax Office for Corporations I Berlin, tax number 27/681/51985.
Hi, and sorry for breaking the threading,
I talked to Ariel, who is responsible for the dumps, and she described the
schedule as follows:
The dumps run in this order: new wikis with no run yet go next as soon as
a server completes an existing run. All other wikis are done in "longest
time without a run goes next" order, per list. There are two lists: the
large wikis (de, it, pl, etc.), about 9-10 of those, and the small wikis
(the rest). The English Wikipedia runs by itself, by hand, once a month.
In her estimate, the next Wikidata dump will have its turn in 2-3 days,
which I understand as Feb 27-28.
I hope that helps.
Sorry if I am disturbing you a bit or if my letter arrives at the wrong time.
I have a proposal regarding the future. Every piece of information has two
sides: a receiver and a source.
When a child asks how many citizens San Francisco has, the answer is
different from 744,000.
So the question and the answer depend on who is asking, why, where, etc.
My proposal is to make a global social medium like Wikipedia, but in the
form of "question - answer",
where the asking or answering side is a computer, a human being, or
randomness (time).
Questions are answered by the questions' "suppliers". Suppliers are
organizations, humans, or time,
interested in answering the questions.
Knowledge needs to be free, but we often need less exact knowledge.
Something like: we need to be reminded less often every day that with
every sign we grow older, less strong, and more likely to die. :)
For the moment I have made a single page which, based on time, answers
only one question.
What do you think about it?
I am trying to get the latest Wikidata dump to play with. I have checked
the site http://dumps.wikimedia.org/wikidatawiki/. The latest dump is from
20130215. Since I would like a really recent version (so that when I want
up-to-date data I don't hit wikidata.org with the recentchanges API too
much), I may want to wait for the next dump to build my local copy.
Do you know when we will have the next dump? Or what is the general
policy/schedule for generating dumps?
Thanks a lot.
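Until the next dump lands, one way to stay reasonably current without crawling too hard is to page through `list=recentchanges` using its continuation parameter instead of re-fetching the whole list. A minimal request-building sketch (the `rcprop` fields and the limit are just illustrative choices):

```python
from urllib.parse import urlencode

API = "https://www.wikidata.org/w/api.php"

def recentchanges_url(limit=50, rccontinue=None):
    """Build a recentchanges query; pass the rccontinue value from the
    previous response to page through results incrementally."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|timestamp",
        "rclimit": str(limit),
        "format": "json",
    }
    if rccontinue:
        params["rccontinue"] = rccontinue
    return API + "?" + urlencode(params)

print(recentchanges_url())
```

Each response carries the continuation token for the next page, so a local copy built from a dump can be topped up with a few polite requests rather than a continuous crawl.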
There was a talk (here or on the project talk page, I don't remember)
about identical names of properties. In several languages, sex (= being
male or female) and genus in biology have the same name, which is
prohibited in Wikidata. Now I see ország (rendszertan) - Hungarian for
"kingdom (taxonomy)" - in the recent changes; ország means country, and it
also has the biological meaning (plants, animals...).
A good solution would be if these disambiguating words in parentheses
could disappear when they are applied in the phase 2 client. Then we could
encourage people to use this form.
As we say in Hungary, the goat is fed and the cabbage remains as well.
I am not sure if I was clear; tell me if not.
I have started an RFC putting down my initial thoughts on how to make the
Wikidata API more seamless with the core API.
The goal is to minimize execution time, bandwidth, and server load, while
making it play nicely with the rest of action=query and allowing for
continuations and multi-value data request capabilities.
For now I have only looked at wbgetentities, but the RFC will be updated
with the other modules. Please let me know what you think.
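As an illustration of the batching side of this: wbgetentities already accepts several ids per call, so a client can split a long id list into API-sized chunks. A rough sketch follows; the chunk size of 50 is an assumed per-request cap for non-bot users, not something the RFC specifies:

```python
def chunk_ids(ids, size=50):
    """Split an id list into batches (50 as an assumed per-request cap
    for non-bot users)."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def wbgetentities_params(batch, props="labels|descriptions"):
    """Parameters for one batched wbgetentities call."""
    return {
        "action": "wbgetentities",
        "ids": "|".join(batch),
        "props": props,
        "format": "json",
    }

# 120 hypothetical ids split into 3 batches of at most 50:
batches = chunk_ids(["Q%d" % i for i in range(1, 121)])
print(len(batches))
print(wbgetentities_params(batches[0])["ids"][:11])
```

Folding this kind of batching into action=query-style continuation is exactly where the seams between the two APIs show up today.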