Hey folks :)
On request, Commons is the next sister project to get access to the
data on Wikidata. We'll be doing this on December 2nd. Please help
update and expand https://commons.wikimedia.org/wiki/Commons:Wikidata
and https://www.wikidata.org/wiki/Wikidata:Wikimedia_Commons. Two
caveats: 1) This is restricted to accessing data from the item
connected to the page via sitelink. Access to data from arbitrary
items will follow in January/February. 2) This is not for storing meta
data about individual files. This will come later as part of the
structured data on Commons project
(https://commons.wikimedia.org/wiki/Commons:Structured_data) and be
stored on Commons itself.
Looking forward to seeing what great things this will make possible again!
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
(Society for the Promotion of Free Knowledge). Registered in the register of
associations of the Amtsgericht Berlin-Charlottenburg under number 23855 Nz.
Recognized as charitable by the Finanzamt für Körperschaften I Berlin,
tax number 27/681/51985.
Hi Wikidata folks,
I've been struggling against bug 72348, which leads the dumps to
contain both old-style and new-style records. This makes parsing extra tricky.
I tried to follow the bugzilla trail, but it isn't clear where the bug
stands. Do you have any guesses about when this issue is likely to get
resolved? Days? Weeks? Months?
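In the meantime, a defensive reader can accept both shapes. This is only a sketch: the field names below ("entity" for the old style, "id" for the new) are my assumptions about the two record formats, not confirmed details of bug 72348.

```python
import json

def parse_record(line):
    """Parse one dump record, tolerating two serialization styles.

    Assumed shapes (not confirmed against bug 72348):
      new style: {"id": "Q42", ...}
      old style: {"entity": ["item", 42], ...}
    """
    record = json.loads(line.rstrip().rstrip(","))
    if "id" in record:
        # New style: the entity ID is already a string like "Q42".
        return record["id"], record
    entity = record.get("entity")
    if isinstance(entity, list) and len(entity) == 2:
        # Old style: a [type, number] pair that we turn into an ID string.
        kind, num = entity
        prefix = {"item": "Q", "property": "P"}.get(kind)
        if prefix is not None:
            return "%s%d" % (prefix, num), record
    raise ValueError("unrecognized record style: %r" % line[:80])
```

The same loop can then treat every record uniformly by its ID, whichever style the dump happened to emit.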
Thanks for your help!
Shilad W. Sen
Mathematics, Statistics, and Computer Science Dept.
I haven't seen this mentioned in the context of Wikidata yet, so here:
The latest beta version of the Wikipedia app for iOS (iPhone, iPad, iPod)
shows descriptions from Wikidata as summaries in the search results.
If you have an iOS device and want to see it in action, see the
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
“We're living in pieces,
I want to live in peace.” – T. Moore
I'm bringing this up as my proof-by-construction answer to a
knock-down-drag-out thread earlier where people complained about the
difficulty of running queries against DBpedia and Wikidata.
I think some people will find the product described below to be a faster
road to where they are heading in the short term. In the longer term I am
thinking a v4 or v5 infovore may be able to evaluate the contexts of facts
in Wikidata and thus create a world view which can be quality controlled
for particular outcomes.
Well, Infovore 3.1 happened quickly after the previous release because I made
a quick attempt to bring my Jena dependency up to date, found it was easy to
update, and so I did. The important point is that there is a lot of cool stuff going on
with Jena, such as the RDFThrift serialization format, and also some
Hadoop I/O tools written by Rob Vesse, and tracking the latest version
helps us connect with that. Release page here:
Infovore 3.1 was used to process the Freebase RDF Dump to create a
quality-controlled RDF data set called :BaseKB; generally queries look
the same on Freebase and :BaseKB, but :BaseKB gives the right answers,
faster, and with less memory consumption. This week's release is a preview:
something very close to it is going to become :BaseKB Gold 2, a simpler and
better product than the last Gold release from Spring 2014.
Here are a few reasons:
* Unicode escape sequences in Freebase are now converted to Unicode
characters in RDF
* The rejection rate of triples has dramatically dropped, because of both
changes to Infovore and improvements in Freebase content
* The product is now packaged as a set of files partitioned and sorted on
subject; this means you can download one file and get a sample of facts
about a given topic; there is no longer the "horizontal division"
Between duplicate fact filtering and compression, :BaseKB Now is nearly
half the size of the Freebase RDF Dump.
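The partition-by-subject idea above can be sketched in a few lines. The bucket count and hashing scheme here are illustrative assumptions, not :BaseKB's actual layout:

```python
import hashlib

def bucket_for_subject(subject, n_buckets=8):
    # Stable hash of the subject IRI: every fact about one subject lands
    # in the same bucket, hence in the same downloadable file.
    digest = hashlib.md5(subject.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

def partition_triples(triples, n_buckets=8):
    # Distribute (subject, predicate, object) triples across buckets,
    # then sort each bucket so facts about one topic sit adjacent.
    buckets = [[] for _ in range(n_buckets)]
    for s, p, o in triples:
        buckets[bucket_for_subject(s, n_buckets)].append((s, p, o))
    for b in buckets:
        b.sort()
    return buckets
```

With a layout like this, downloading any one file gives you complete (not sliced) facts about the subjects that hash into it.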
If you're interested please join the mailing list at
Sorry, not sure if this is the right place to post this bug report?
reports quite a few messages like:
*The time allocated for running scripts has expired.* (the same message
repeated twelve times in a row)
I was looking through the configuration trying to debug my issues from my last
email and noticed the list of blacklisted IDs. They appear to be numbers with
special meaning. I was curious about two things: why are they blacklisted, and
what is the meaning of the remaining number?
* 1: I imagine that this just refers to #1
* 23: Probably refers to the 23 enigma
* 42: Life, the universe, and everything
* 1337: leet
* 9001: ISO 9001, which deals with quality assurance
* 31337: Elite
The only number that left me lost was 720101010. I couldn't figure this one
out. This list is located in
Doing a quick grep for jquery.ui.menu within the Wikidata extension folder,
I came up with the following results:
// needs jquery.ui.menu
Just fooling around I decided to comment out the two lines requiring
jquery.ui.menu in the jquery.wikibase/resources.php file. Refreshing the page
I ran into the same error as before, but this time with a module called JSON.
Does the Wikidata extension depend on another extension that adds these
resource loader modules? According to ResourceLoader/Default_modules on
MediaWiki.org, MediaWiki does not ship with jquery.ui.menu. This does seem a
bit weird, because I believe the comment that jquery.ui.autocomplete requires
jquery.ui.menu is correct, and ResourceLoader includes jquery.ui.autocomplete
by default.
Am I missing something? I'm definitely confused at this point.
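If the module really is absent from your install, one possible workaround would be to register it yourself via $wgResourceModules in LocalSettings.php. This is only a sketch: the script path and dependency list below are guesses to adapt, assuming a compatible jquery.ui.menu.js exists somewhere on disk, not a tested recipe.

```php
// Hypothetical workaround: register a jquery.ui.menu module by hand.
// Adjust 'scripts' to wherever a compatible jquery.ui.menu.js lives.
$wgResourceModules['jquery.ui.menu'] = array(
	'scripts' => array( 'resources/jquery.ui/jquery.ui.menu.js' ),
	'dependencies' => array( 'jquery.ui.core', 'jquery.ui.widget' ),
);
```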
As I wrote this email I was called into my manager's office to be informed that
I am being laid off as part of a corporate restructuring. They've decided to
outsource all of IT. So while I am still curious about the answers to the
questions laid out in this email and the previous, this is no longer a project
I will be actively working on. I may still fool around with trying to get a
Wikibase installation set up on my own.
My email for this mailing list will also be changing to my personal email of
zellfaze(a)zellfaze.org beginning some time in the next few days.
So I see there is some work being done on mapping the Wikidata data model
to RDF.
Just a thought: what if you actually used RDF, with Wikidata's concepts
modeled in it, right from the start? And used standard RDF tools, APIs, and
the standard query language (SPARQL) instead of building the whole thing from
scratch?
Is it just me, or was this decision really a colossal waste of resources?
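To make the suggestion concrete, here is the kind of query that direct RDF modeling would enable. This is a hypothetical sketch: the wd:/wdt: prefixes and the entity/property IDs are illustrative assumptions, not the vocabulary of any existing endpoint.

```sparql
# Hypothetical: "ten items that are instances of 'book'", assuming
# Wikidata entities and properties were exposed as RDF directly.
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?item WHERE {
  ?item wdt:P31 wd:Q571 .   # P31 = instance of, Q571 = book (assumed IDs)
}
LIMIT 10
```

Any off-the-shelf SPARQL engine could answer this without bespoke query infrastructure.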
I'm trying to set up Wikibase, and I might have something
misconfigured somewhere, or I may have stumbled upon a bug.
I've installed Wikibase per the instructions, made some
very minor tweaks to the configuration and added my first
property and item.
Going to the item page with ?debug=true, I get an error
message in Firebug:
Error: Unknown dependency: jquery.ui.menu
throw new Error( 'Unknown dependency: ' + module );
I can't seem to add statements to items. I can't see
what I may have done wrong, but I am working off the
assumption that I made a mistake somewhere and that this
isn't a bug.