During the data centre switchover routine
<https://wikitech.wikimedia.org/wiki/Switch_Datacenter> on October 10th,
some unexpected problems occurred over the past few days:
- For a few hours, a small part of the data was not accessible. Some
items and lexemes seemed to have disappeared.
- Some data may have been lost, including edits, preference changes, and
user accounts created during a period of about 50 minutes (from
2018-09-13 09:08:17 UTC to 2018-09-13 09:58:26 UTC).
Part of the data has already been restored (edits and revisions). The rest
(user accounts, preferences) will be restored at the beginning of next week.
If you edited Wikidata on September 13th, please check your contributions.
If you encounter any problems in the coming days, such as items not
reappearing or data still missing, let me know.
If you're interested in technical details, you can have a look at the
Phabricator ticket <https://phabricator.wikimedia.org/T206743>. Thanks for
your patience.
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
I don't normally post these map updates to the mailing list but figured I
would give it a go this time.
I just generated some new Wikidata item maps and a new difference image
(compared to March 2018) which highlights various areas of increase in
terms of items with coordinate locations.
I have collected snippets of the map and highlighted the areas and I would
love to be able to attribute the increase in coverage to users or projects.
If you know of any users or projects that caused the areas of increase
highlighted in the blog post below then please let me know.
The areas are:
- Canada & the US
- São Paulo & Brazil
- Scotland & Wales
- Sri Lanka & Maldives
I think this is the same issue.
A bot has just come in and fixed everything. In this case the
formatter URL was left intact, so the old URLs
were still there; the bot just put in the Wayback Machine URLs.
But what the bot did not do was reverse what editors had done
in the past.
In the pages under IMDb ID, ch #######:
1. The rank on the pages is still marked as deprecated.
2. P2241 (reason for deprecation) was added and is still there.
3. Q44374960 (former IMDb character page) was created just for this and
added, and it is still there.
From: Nicholas Humfrey
Sent: Friday, October 5, 2018 7:31 AM
To: "Discussion list for the Wikidata project."
Subject: Re: [Wikidata] Wiki PageID
I have finally got around to starting to decommission dbpedialite.org
Pages now redirect to Wikidata entity URIs.
I won't be renewing the domain name when it expires 2019-05-09.
From: Nicholas Humfrey
< nicholas.humfrey(a)bbc.co.uk >
Date: Monday, 24 April 2017 at 15:51
To: "Discussion list for the Wikidata project."
< wikidata(a)lists.wikimedia.org >
Subject: Re: [Wikidata] Wiki PageID
A number of years ago I was having some very
frustrating times with the identifier instability in DBpedia. Two people looking
up an identifier for the same concept at different times ended up with different
results. So I created a proof of concept, dbpedialite, which uses
stable Wikipedia page IDs as identifiers. At
the time there was a page title edit war between Stoat and
an alternative title.
But now we have Wikidata – which solves this problem much
better – so I should really get on and decommission dbpedialite.
What are you using Wikipedia Page IDs for? Might it be better to store the Wikidata
ID and then look up the Wikipedia page on demand?
There is a search prototype for using Commons with structured data
available for testing and feedback. Please visit the search prototype page
on Commons for information about where to find the prototype and how to use
it. Thanks, see you on the wiki.
Community Relations Specialist
I have a couple of questions regarding the Wiki Page ID. Does it always
stay unique to the page, where the page itself is just a placeholder for
any kind of information that might change over time?
Consider the following cases:
1. The first time someone creates page "Moon", it is assigned ID=1. If at
some point the page is renamed to "The_Moon", ID=1 remains intact. Is
that correct?
2. What if we have page "Moon" with ID=1 and someone creates a second page
"The_Moon" with ID=2. Is it possible that page "Moon" is turned into a
redirect? Then "Moon" would be redirecting to page "The_Moon"?
3. Is it possible for page "Moon" to become a category "Category:Moon" with
the same ID=1?
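A toy model may make cases 1 and 2 concrete. This is illustrative Python only, not MediaWiki internals (the class and field names are invented), but it captures the documented behaviour: a page ID is assigned once and never reused, a rename changes only the title, and converting a page to a redirect changes only its content:

```python
class Page:
    def __init__(self, page_id, title):
        self.page_id = page_id       # assigned at creation, never changes
        self.title = title           # mutable: renames update this
        self.redirect_to = None      # page_id of a redirect target, or None

class PageTable:
    """Toy stand-in for MediaWiki's page table."""
    def __init__(self):
        self._next_id = 1
        self.pages = {}

    def create(self, title):
        page = Page(self._next_id, title)
        self._next_id += 1           # IDs are never reused
        self.pages[page.page_id] = page
        return page

# Case 1: a rename changes only the title, never the ID.
table = PageTable()
moon = table.create("Moon")          # gets ID=1
moon.title = "The_Moon"              # rename
assert moon.page_id == 1

# Case 2: converting a page into a redirect also keeps its ID.
table = PageTable()
moon = table.create("Moon")          # ID=1
the_moon = table.create("The_Moon")  # ID=2
moon.redirect_to = the_moon.page_id  # "Moon" now redirects to "The_Moon"
assert (moon.page_id, the_moon.page_id) == (1, 2)
```

Case 3 would fit the same model: as far as I know, a page move to another namespace (e.g. "Category:Moon") is still just a title change, so the page ID is preserved, but that is worth confirming against the MediaWiki documentation.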
Dear fellow Wikidataists,
did you know that data in Wikidata about Asian countries is 100% complete
for the head of government, yet only 33% complete for the central bank?
Did you know that data in Wikidata about African countries is less complete
than data about European countries?
As an early little birthday gift for the 6th Birthday of Wikidata, we are
proud to release ProWD - Profiling WikiData. ProWD is a tool to analyze
data completeness in Wikidata, which is based on the class (e.g., country),
facet (e.g., continent), and attribute (e.g., central bank) of the entities.
(1) See which entities are complete and which are not.
(2) See the overall completeness of a class of entities.
(3) Compare completeness between different facets of a class.
ProWD is available online at https://prowd-prototype.herokuapp.com/
ProWD tutorial is available online at http://bit.ly/prowd-tut
We would be very happy to have you visit and play around with ProWD. We
would appreciate any insights on how ProWD can be improved further, or
how ProWD might be useful for your Wikidata projects.
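The completeness figures quoted above can be read as a simple ratio. Here is a minimal sketch of that metric (illustrative Python, not ProWD's actual code; the toy data and attribute names are made up for the example):

```python
# Completeness of an attribute over one facet of a class, as a ratio:
# (entities in the facet that have the attribute) / (entities in the facet).

def completeness(entities, facet, attribute):
    """entities: list of dicts with a 'facet' key plus attribute keys."""
    in_facet = [e for e in entities if e.get("facet") == facet]
    if not in_facet:
        return 0.0
    have = sum(1 for e in in_facet if attribute in e)
    return have / len(in_facet)

# Hypothetical toy data: three Asian countries, one missing 'central bank'.
countries = [
    {"facet": "Asia", "head of government": "A", "central bank": "X"},
    {"facet": "Asia", "head of government": "B", "central bank": "Y"},
    {"facet": "Asia", "head of government": "C"},
]

assert completeness(countries, "Asia", "head of government") == 1.0
assert abs(completeness(countries, "Asia", "central bank") - 2/3) < 1e-9
```

Comparing such ratios across facets (e.g. Asia vs. Europe) is, as I understand it, the kind of comparison ProWD surfaces.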
On behalf of ProWD Team
I'd like to ask if Wikidata could please offer an HDT dump along with the already available Turtle dump. HDT is a binary format for storing RDF data, which is pretty useful because it can be queried from the command line, it can be used as a Jena/Fuseki source, and it uses orders of magnitude less space to store the same data. The problem is that it's very impractical to generate an HDT file, because the current implementation requires a lot of RAM to do the conversion; for Wikidata it will probably require a machine with 100-200GB of RAM. This is unfeasible for me because I don't have such a machine, but if you have one to share, I can help set up the rdf2hdt software required to convert the Wikidata Turtle dump to HDT.