Greetings to all,
The celebration of Wikidata's 8th birthday has already started, and as
part of it several events are taking place online and offline across
the world. Among them, a five-day data-thon is planned to improve
India-specific Wikidata properties and celebrate Wikidata's birthday by
contributing to the free and open data movement.
The primary objective of this "data-thon" is to add and improve the usage of
India-related properties in Wikidata. There are currently 76
India-specific properties in Wikidata, but most of them are used in only a
small number of statements. This data-thon is therefore organised
to increase the usage of those properties in items and improve the
structured data in Wikidata.
We hope to encourage participation from other Wikidata communities.
Please have a look at the event page and please consider joining the event.
Event Link - https://w.wiki/iUa
Start time - October 25, 2020, 00:00 IST
End time - October 29, 2020 23:59 IST
This is simply marvelous…
The JSON serialization returns just the Wikibase property and value, which may take less effort when extracting the values we need at the moment. TTL returns much more information, which we should be able to make use of later.
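As a minimal sketch of that extraction (the entity ID and values below are an illustrative, hand-trimmed excerpt of the EntityData JSON shape, not fetched live):

```python
# Hand-trimmed, hypothetical excerpt of the Special:EntityData JSON shape,
# reduced to just the fields the extractor below reads.
entity_json = {
    "entities": {
        "Q252": {  # hypothetical example item
            "claims": {
                "P31": [
                    {"mainsnak": {
                        "snaktype": "value",
                        "datavalue": {"type": "wikibase-entityid",
                                      "value": {"id": "Q6256"}}}}
                ]
            }
        }
    }
}

def extract_statements(data):
    """Yield (item, property, value) triples from an EntityData JSON dict."""
    for qid, entity in data.get("entities", {}).items():
        for prop, claims in entity.get("claims", {}).items():
            for claim in claims:
                snak = claim.get("mainsnak", {})
                if snak.get("snaktype") != "value":
                    continue  # skip "novalue"/"somevalue" snaks
                yield qid, prop, snak["datavalue"]["value"]

print(list(extract_statements(entity_json)))
# → [('Q252', 'P31', {'id': 'Q6256'})]
```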
Very exciting. Thank you so very much!
Descriptive Data Management
Discovery Services Division
> Begin forwarded message:
> Date: Fri, 23 Oct 2020 10:38:56 +0700
> From: Fariz Darari
> To: Discussion list for the Wikidata project
> Subject: Re: [Wikidata] Any API available for extraction based on Q#?
> Hello Jackie,
> not necessarily an answer, but you could get description/statements of Q#
> via: https://www.wikidata.org/wiki/Special:EntityData/Q#.ttl
> So, for instance, the (turtle syntax) statements of Indonesia would be:
> On Wed, Oct 21, 2020 at 7:23 PM j s <aa3544bd(a)gmail.com> wrote:
>> I am extremely new to SPARQL and have yet to find how I could feed a file
>> containing a series of Q item numbers in order to extract statements for
>> our needs.
>> I suspect some of you may have succeeded in this. Would you recommend an
>> API tool (or combination of tools) to use to extract all statements from a
>> file that contains a series of Q item number? I must preface to say that I
>> am a metadata person not a software developer.
>> Thank you very much for any pointers!
>> With regards,
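Following the Special:EntityData pattern described above, a small sketch of turning a list of Q numbers into per-entity export URLs (the file name and the choice of .ttl vs .json are assumptions, not a confirmed workflow):

```python
# Sketch: turn a list of Q numbers (e.g. read from a file, one per line)
# into Special:EntityData URLs that can then be fetched one by one.
BASE = "https://www.wikidata.org/wiki/Special:EntityData/{}.{}"

def entitydata_urls(qids, fmt="ttl"):
    """Build one EntityData URL per Q number, skipping blank lines."""
    return [BASE.format(q.strip(), fmt) for q in qids if q.strip()]

# In practice the list would come from the file of Q numbers, e.g.:
#   qids = open("qids.txt").read().splitlines()
print(entitydata_urls(["Q252", "Q42"]))
```

Each resulting URL can be fetched with any HTTP client; requesting `.json` instead of `.ttl` gives the lighter serialization discussed above.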
Wikidata's 8th birthday
<https://www.wikidata.org/wiki/Wikidata:Eighth_Birthday> is next week, and
in honor of this we're hosting "Wikidata Education Week." Join us for fun
events and activities! Don't miss live interviews with Wikimedians who do
amazing things in education with Wikidata! You can find everything you need
to know here
Let's get #Wikidata and #EduWiki trending! Wikidata in education is the
See you next week!
-Sailesh & the Education Team
Wikimedia Foundation | Program Associate, Education
I am pleased to announce the 23 recipients of the *WikiCite* project grants
and eScholarships. The WikiCite initiative focuses on the development of open
citations and linked bibliographic data to serve free knowledge.
There is impressive diversity among these recipients in terms of:
- the types of activities (content creation & upload, outreach & training,
software development, and documentation/localization),
- the topics (everything from Balinese palm-leaf manuscripts, to Brazilian
legislation, to Nigerian newspapers...)
- and the recipient locations (15 countries, the majority of which are in
the global South).
Combined, these grants are valued at $69k USD, yet we received more than
double the number of excellent applications that the budget could support.
To learn about each recipient's project, see this blogpost:
And while I’ve got your attention...
The WikiCite 2020 Virtual Conference is happening on Monday-Wednesday.
Sessions will be live-streamed on several platforms, in time zones across
the globe, and with presentations in English, French, German, Indonesian,
Please come and join us.
*Liam Wyatt [Wittylama]*
WikiCite <https://meta.wikimedia.org/wiki/WikiCite> Program Manager & Okapi
<https://meta.wikimedia.org/wiki/Okapi> Community Liaison
When you call the "wbsearchentities" API, the main label and description
returned for each item are always in English, which seems strange.
Thankfully, it also returns the label for each item in the specified
language - here is an example:
However, it doesn't seem like there's any way to get the *description* of
each item in the specified language. Is there?
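For anyone experimenting with this, here is a sketch of how such a request URL could be built. Whether the `uselang` parameter actually switches the description language is exactly the open question here, so treat it as something to try rather than a confirmed fix:

```python
from urllib.parse import urlencode

# Sketch of a wbsearchentities request URL (no network call here; it
# only assembles the query string).
def search_url(term, lang):
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": lang,  # language to match labels/aliases in
        "uselang": lang,   # requested display language for the response
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

print(search_url("Douglas Adams", "fr"))
```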
WikiWorks · MediaWiki Consulting · http://wikiworks.com