The link to microcephaly has become clearer this week:
https://en.wikipedia.org/w/index.php?title=Zika_virus&oldid=704879995#cite_…
states "A complete ZIKV genome sequence [..] was recovered from brain
tissue" (of a fetus whose mother had been infected with Zika virus).
Given that the mass media are currently all over Zika, simple page
view stats are essentially useless for tracking the spread of the
disease - the PLOS Computational Biology article that Anthony has
linked states "Wikipedia data have a variety of instabilities that
need to be understood and compensated for. For example, Wikipedia
shares many of the problems of other internet data, such as highly
variable interest-driven traffic caused by news reporting and other
sources."
However, correlating geolocated view stats or searches with external info like
http://www.healthmap.org/zika/#timeline
might be useful.
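As a rough sketch of what such a comparison could look like (not an existing pipeline): the Wikimedia pageviews REST API can be pulled per article and per project and lined up against an external time series of case counts. The case-count CSV and its columns below are assumptions; HealthMap's actual export format may differ.

# Sketch: compare daily pageviews of the Zika virus article with an
# external time series of reported cases.
# Assumptions: a local CSV with columns date,cases (the real HealthMap
# export may look different); swap the project/title for other languages.
import csv
import requests

PV_URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
          "en.wikipedia/all-access/user/Zika_virus/daily/20160101/20160229")

views = {item["timestamp"][:8]: item["views"]
         for item in requests.get(PV_URL).json()["items"]}

with open("zika_cases.csv") as f:   # hypothetical local export
    cases = {row["date"].replace("-", ""): int(row["cases"])
             for row in csv.DictReader(f)}

for day in sorted(set(views) & set(cases)):
    print(day, views[day], cases[day])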
In addition, if we had some representation of clickstreams for
Zika-related articles in languages spoken in affected areas, this
could help guide the development of Zika-related content in those
languages.
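As a rough illustration of what such a representation could build on (assuming the publicly released Wikipedia clickstream dumps; their exact column layout should be checked against the dataset documentation):

# Sketch: pull the most common sources of traffic into a Zika-related
# article from a clickstream dump. Assumptions: a TSV file with columns
# prev, curr, type, n - verify against the actual release notes.
import csv
from collections import Counter

def top_referrers(path, target="Zika_virus", k=10):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["curr"] == target:
                counts[row["prev"]] += int(row["n"])
    return counts.most_common(k)

# usage (hypothetical local file)
# print(top_referrers("clickstream-enwiki-2016-02.tsv"))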
Beyond Wikipedia, there is a page on Wikidata to coordinate activities
around Zika:
https://www.wikidata.org/wiki/Wikidata:WikiProject_Medicine/Zika .
Cheers,
d.
On Mon, Feb 15, 2016 at 4:24 AM, Dan Andreescu <dandreescu(a)wikimedia.org> wrote:
> On Sun, Feb 14, 2016 at 2:58 PM, Leila Zia <leila(a)wikimedia.org> wrote:
>>
>> Hey Dan,
>>
>> On Sun, Feb 14, 2016 at 3:02 AM, Dan Andreescu <dandreescu(a)wikimedia.org>
>> wrote:
>>
>>>
>>> So, I felt personally compelled in the case of Zika, and the confusing
>>> coverage it has seen, to offer to personally help.
>>
>>
>> Which aspect of the coverage are you referring to as confusing?
>
>
> Well, so the first reports were that 3500 cases of microcephaly were linked
> to Zika in Brazil, since October. If you do the math, with Brazil's birth
> rate of 300,000 per year, 3500 for three months is incredibly high. The
> number went up to 4400 before it was discredited and the latest I read is
> that it's down to 404 [1] and there are claims of over-inflation. That same
> article talks about serious doubts that Zika even has anything to do with
> microcephaly. In reading around some more about the subject, it seems like
> a multi-variate analysis gone wrong.
>
>>
>>
>>>
>>> I can run queries, test hypotheses, and help publish data that could back
>>> up articles. Privacy of our editors is of course still obviously protected,
>>> but that's easier to do in a specific case with human review than in the
>>> general case.
>>
>>
>> I'm up for brainstorming about what we can do and helping. Please keep me
>> in the loop. In general, given that a big chunk of our traffic comes from
>> Google at the moment, it would be great to work with the researchers in
>> Google involved in Google's health related initiatives to produce
>> complementary knowledge to what Google can already tell about Zika (for
>> example, this). I'll reach out to the few people I know to get some more
>> information.
>> Depending on what complementary knowledge we want to produce, working with
>> WikiProject Medicine can be helpful, too.
>
>
> Cool, yeah, I'm nowhere close to knowledgeable on this, I can data-dog
> though :)
>
>
> [1] www.cbc.ca/news/health/microcephaly-brazil-zika-reality-1.3442580
>
> _______________________________________________
> Wikimedia-Medicine mailing list
> Wikimedia-Medicine(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikimedia-medicine
>
Hi all,
Currently I am working on translating three brochures ("folders") about
Wikidata (aimed at GLAMs, businesses and research) into English, and later
Dutch and French (so they can be used in Belgium, France and the Netherlands).
I translated the texts of these folders here:
https://be.wikimedia.org/wiki/User:Romaine/Wikidata
The texts of the brochures have not been fully reviewed yet, so if any native
speaker wants to look at them and fix grammar etc., feel free to do so.
Greetings,
Romaine
What is the canonical, authoritative source to find statistics about
Wikidata?
On Sun, Feb 21, 2016 at 11:41 AM, Markus Krötzsch <
markus(a)semantic-mediawiki.org> wrote:
>
> Is it possible that you have actually used the flawed statistics from the
> Wikidata main page regarding the size of the project? 14.5M items in Aug
> 2015 seems far too low a number. Our RDF exports from mid August already
> contained more than 18.4M items. It would be nice to get this fixed at some
> point. There are currently almost 20M items, and the main page still shows
> only 16.5M.
I see the following counts:
16.5M (current?) - https://www.wikidata.org/wiki/Special:Statistics
19.2M (December 2015) - https://tools.wmflabs.org/wikidata-todo/stats.php
Where do I look for the real number?
Tom
Hi all,
as you know, Tpt has been working as an intern this summer at Google. He
finished his work a few weeks ago and I am happy to announce today the
publication of all scripts and the resulting data he has been working on.
Additionally, we publish a few novel visualizations of the data in Wikidata
and Freebase. We are still working on the actual report summarizing the
effort and providing numbers on its effectiveness and progress. This will
take another few weeks.
First, thanks to Tpt for his amazing work! I had not expected to see such
rich results. He has exceeded my expectations by far and produced much
more transferable data than I expected. Additionally, he also worked
directly on the primary sources tool and helped Marco Fossati to upload a
second, sports-related dataset (you can select it by clicking on the
gears icon next to the Freebase item link in the sidebar on Wikidata, once
you have switched on the Primary Sources tool).
The scripts that were created and used can be found here:
https://github.com/google/freebase-wikidata-converter
All scripts are released under the Apache license v2.
The following data files are also released. All data is released under the
CC0 license. To make this explicit, a comment stating the copyright and the
license has been added to the start of each file; if any script dealing with
the files hiccups due to that line, simply remove the first line.
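Below is a minimal sketch of skipping that license line before parsing; the assumptions that the comment line starts with "#" and that the files are gzipped are mine, so check the actual files first.

# Sketch: iterate over one of the released data files while skipping the
# leading license/copyright comment. Assumption: the comment is a single
# first line starting with '#'; adjust if the files differ.
import gzip

def data_lines(path):
    with gzip.open(path, "rt", encoding="utf-8") as f:
        first = f.readline()
        if not first.startswith("#"):   # no comment line after all
            yield first.rstrip("\n")
        for line in f:
            yield line.rstrip("\n")

# usage (hypothetical local filename)
# for line in data_lines("freebase-missing-statements.tsv.gz"):
#     process(line)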
https://tools.wmflabs.org/wikidata-primary-sources/data/freebase-mapped-mis…
The actual missing statements, including URLs for sources, are in this
file. This was filtered against statements already existing in Wikidata,
and the statements are mapped to Wikidata IDs. This contains about 14.3M
statements (214MB gzipped, 831MB unzipped). These are created using the
mappings below in addition to the mappings already in Wikidata. The quality
of these statements is rather mixed.
Additional datasets that we know meet a higher quality bar have been
previously released and uploaded directly to Wikidata by Tpt, following
community consultation.
https://tools.wmflabs.org/wikidata-primary-sources/data/additional-mapping.…
Contains additional mappings between Freebase MIDs and Wikidata QIDs, which
are not available in Wikidata. These are mappings based on statistical
methods and single interwiki links. Unlike the first set of mappings we had
created and published previously (which required multiple interwiki links
at least), these mappings are expected to have a lower quality - sufficient
for a manual process, but probably not sufficient for an automatic upload.
This contains about 3.4M mappings (30 MB gzipped, 64MB unzipped).
https://tools.wmflabs.org/wikidata-primary-sources/data/freebase-new-labels…
This file includes labels and aliases that appear to be currently missing
from Wikidata items. The quality of these labels is undetermined. The file
contains about 860k labels in about 160 languages, with 33 languages having
more than 10k labels each (14MB gzipped, 32MB unzipped).
https://tools.wmflabs.org/wikidata-primary-sources/data/freebase-reviewed-m…
This is an interesting file as it includes a quality signal for the
statements in Freebase. What you will find here are ordered pairs of
Freebase MIDs and properties, each indicating that the given pair went
through a review process and therefore likely has a higher quality on
average. This covers only those pairs that are missing from Wikidata. The
file includes about 1.4M pairs, which can be used for importing part of the
data directly (6MB gzipped, 52MB unzipped).
Now anyone can take the statements, analyse them, slice and dice them,
upload them, use them for their own tools and games, etc. They remain
available through the primary sources tool as well, which has already led
to several thousand new statements in the last few weeks.
Additionally, Tpt and I created in the last few days of his internship a
few visualizations of the current data in Wikidata and in Freebase.
First, the following is a visualization of the whole of Wikidata:
https://tools.wmflabs.org/wikidata-primary-sources/data/wikidata-color.png
The visualization needs a bit of explanation, I guess. The y-axis (up/down)
represents time, and the x-axis (left/right) represents space / geolocation.
The further down, the closer you are to the present; the further up, the
further back in the past you go. Time is given on a rational scale - the
20th century gets much more space than the 1st century. The x-axis represents
longitude, with the prime meridian in the center of the image.
Every item is placed at its longitude (averaged, if there are several) and at
the earliest point in time mentioned on the item. Items lacking either value
have it propagated from neighbouring items (averaged, if necessary). This is
done repeatedly until the items are saturated.
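For what it's worth, here is a rough sketch of that propagation idea in Python - my reading of the description above, not Tpt's actual code; the real scripts are in the GitHub repository mentioned above.

# Sketch: fill in a missing value (longitude or time) by repeatedly
# averaging over neighbours, as described above. 'graph' maps each item
# to the set of items it is linked to; 'value' maps each item to its
# known value, or None if unknown.
def propagate(graph, value, max_rounds=100):
    for _ in range(max_rounds):
        changed = False
        for item, neighbours in graph.items():
            if value[item] is not None:
                continue
            known = [value[n] for n in neighbours if value[n] is not None]
            if known:
                value[item] = sum(known) / len(known)
                changed = True
        if not changed:   # saturated: nothing left to fill in
            break
    return value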
In order to understand that a bit better, the following image offers a
supporting grid: each horizontal line represents a century (back to the
first century), and each vertical line represents a meridian (with London
in the middle of the graph).
https://tools.wmflabs.org/wikidata-primary-sources/data/wikidata-grid-color…
The same visualizations have also been created for Freebase:
https://tools.wmflabs.org/wikidata-primary-sources/data/freebase-color.png
https://tools.wmflabs.org/wikidata-primary-sources/data/freebase-grid-color…
In order to compare the two graphs, we also overlaid them on each other.
I will leave the interpretation to you, but you can easily see the
strengths and weaknesses of both knowledge bases.
https://tools.wmflabs.org/wikidata-primary-sources/data/wikidata-red-freeba…
https://tools.wmflabs.org/wikidata-primary-sources/data/freebase-red-wikida…
The programs for creating the visualizations are all available in the
GitHub repository mentioned above (plenty of RAM is recommended to run them).
Enjoy the visualizations, the data and the scripts! Tpt and I are available
to answer questions. I hope this will help with understanding and analysing
some of the results of the work that we did this summer.
Cheers,
Denny
Hi Tom,
FYI, the primary sources tool is not dead: besides Freebase, it will
also cater for other datasets.
The StrepHit team will take care of it in the next few months, as per
one of the project goals [1].
The code repository is owned by Google, and the StrepHit team will
collaborate with the maintainers via the standard pull
request/review/merge process.
Cheers,
Marco
[1]
https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidata_Statements_Va…
On 2/22/16 13:00, wikidata-request(a)lists.wikimedia.org wrote:
> From: Tom Morris <tfmorris(a)gmail.com>
> To: "Discussion list for the Wikidata project."
> <wikidata(a)lists.wikimedia.org>
> Subject: Re: [Wikidata] Freebase to Wikidata: Results from Tpt
> internship
> Message-ID:
> <CAE9vqEHGi7KAvOZv4tHhr4P_8LHdYTSJEobkXwvyJa9k+y2SAg(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>> Are there plans for next steps or is this the end of the project as far as
>> the two of you go?
>>
> I'm going to assume that the lack of answer to this question over the last
> four months, the lack of updates on the project, and the fact no one is
> even bothering to respond to issues
> <https://github.com/google/primarysources/issues> means that this project
> is dead and abandoned. That's pretty sad. For an internship, it sounds
> like a cool project and a decent result. As an actual serious attempt to
> make productive use of the Freebase data, it's a weak, half-hearted effort
> by Google.
We are also in the process of deploying this extension on Wikidata in the
near future, so your help would be appreciated.
---------- Forwarded message ---------
From: Amir Ladsgroup <ladsgroup(a)gmail.com>
Date: Sat, Feb 20, 2016 at 2:05 AM
Subject: ORES extension soon be deployed, help us test it
To: wikitech-l <wikitech-l(a)lists.wikimedia.org>, <ai(a)lists.wikimedia.org>
Hey all,
TLDR: The ORES extension [1], which integrates the ORES service [2] with
Wikipedia to make fighting vandalism easier and more efficient, is in the
process of being deployed. You can test it at
https://mw-revscoring.wmflabs.org (enable it in your preferences first).
You probably know ORES. It's an API service that gives the probability of an
edit being vandalism; it also does other AI-related things, like guessing the
quality of articles in Wikipedia. We have a nice post on the Wikimedia
Blog [3], and the media have paid some attention to it [4]. Thanks to Aaron
Halfaker and others [5] for their work in building this service. There are
several tools using ORES to highlight likely vandalism edits (Huggle, gadgets
like ScoredRevisions, etc.), but an extension does this job much more
efficiently.
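For reference, this is roughly what querying the API directly looks like - a sketch only: the /scores/{wiki}/ path, the "reverted" model name and the response shape shown in the comments are assumptions that may differ per wiki and per ORES version, so check the service documentation [2].

# Sketch: ask ORES for the vandalism-related score of a single revision.
# Assumptions: the /scores/{wiki}/ endpoint form, the "reverted" model
# name, and the response layout (revision id -> model -> probability).
import requests

def ores_score(wiki, rev_id, model="reverted"):
    url = "https://ores.wmflabs.org/scores/{}/".format(wiki)
    resp = requests.get(url, params={"models": model, "revids": rev_id})
    resp.raise_for_status()
    data = resp.json()
    return data[str(rev_id)][model]["probability"]["true"]

# usage (hypothetical revision id)
# print(ores_score("enwiki", 712345678))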
The extension, which is being developed by Adam Wight, Kunal Mehta and me,
highlights unpatrolled edits in recent changes, watchlists, related changes
and, in the future, user contributions, if the ORES score of those edits
passes a certain threshold. The GUI design was done by May Galloway. The ORES
API (ores.wmflabs.org) only gives you a score between 0 and 1: zero means the
edit is not vandalism at all, and one means it is vandalism for sure. You can
test its simple GUI at https://ores.wmflabs.org/ui/. It's possible to change
the threshold in your preferences in the recent changes tab (you have options
instead of numbers because we thought numbers are not very intuitive).
Also, we enabled it on a test wiki so you can try it out:
https://mw-revscoring.wmflabs.org. You need to create an account (use a dummy
password) and then enable it in the beta features tab. Note that building an
AI tool to detect vandalism on a test wiki sounds a little bit silly ;) so we
set up a dummy model in which the probability of an edit being vandalism is
the last two digits of the diff id, reversed (e.g. diff id 12345 = score 54%).
On the more technical side, we store these scores in the ores_classification
table so we can do a lot more analysis with them once the extension is
deployed: fun use cases such as the average score of a certain page, of the
contributions of a user, of the members of a category, etc.
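In other words, the dummy model on the test wiki is just this (a sketch of the rule as described above, not the extension's actual code):

def dummy_score(diff_id):
    # Test-wiki dummy model: the last two digits of the diff id, reversed,
    # read as a percentage (e.g. 12345 -> "45" -> "54" -> 0.54).
    last_two = "{:02d}".format(diff_id % 100)
    return int(last_two[::-1]) / 100.0

assert dummy_score(12345) == 0.54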
We passed the security review and we have consensus to enable it on Persian
Wikipedia. We are only blocked on ORES moving from Labs to production
(T106867 [6]). The next wiki is Wikidata; we are good to go once the
community finishes labeling edits so we can build the "damaging" model. We
can enable it on Portuguese and Turkish Wikipedia after March, because s2 and
s3 have database storage issues right now. For other wikis, you need to check
whether ORES supports the wiki and whether the community has finished
labeling edits for ORES (check out the table at [2]).
If you want to report bugs or request features, you can do so here [7].
[1]: https://www.mediawiki.org/wiki/Extension:ORES
[2]: https://meta.wikimedia.org/wiki/Objective_Revision_Evaluation_Service
[3]:
https://blog.wikimedia.org/2015/11/30/artificial-intelligence-x-ray-specs/
[4]:
https://meta.wikimedia.org/wiki/Research:Revision_scoring_as_a_service/Media
[5]:
https://meta.wikimedia.org/wiki/Research:Revision_scoring_as_a_service#Team
[6]: https://phabricator.wikimedia.org/T106867
[7]: https://phabricator.wikimedia.org/tag/mediawiki-extensions-ores/
Best
Hi!
With Wikidata Query Service usage rising and more use cases being
found, it is time to consider a caching infrastructure for results, since
queries are expensive. One of the questions I would like to solicit
feedback on is the following:
Should the default SPARQL endpoint be cached or uncached? If cached,
which default cache duration would be good for most users? The cache, of
course, applies only to the results of the same (identical) query.
Please also note that the following is not an implementation plan, but rather
an opinion poll; whatever we end up deciding, we will announce an actual plan
before we do it.
Also, whichever default we choose, there should be a possibility to get
both cached and uncached results. The question is which one you get when you
access the endpoint with no options. The possible variants are:
1. query.wikidata.org/sparql is uncached; to get a cached result you use
something like query.wikidata.org/sparql?cached=120 to get a result no
older than 120 seconds.
PRO: least surprise for default users.
CON: relies on the goodwill of tool writers; if somebody doesn't know about
the cache option and uses the same query heavily, we would have to ask them
to use the parameter.
2. query.wikidata.org/sparql is cached for a short duration (e.g. 1
minute) by default; if you'd like a fresh result, you do something like
query.wikidata.org/sparql?cached=0. If you're fine with an older result,
you can use query.wikidata.org/sparql?cached=3600 and get a cached result
if it's still in the cache, but by default you never get a result older than
1 minute. This of course assumes Varnish magic can do this; if not, the
scheme has to be amended.
PRO: performance improvement while keeping default results reasonably fresh.
CON: it is not obvious that the result may be stale rather than the freshest
data, so if you update something in Wikidata and query again within a
minute, you can be surprised.
3. query.wikidata.org/sparql is cached for a long duration (e.g. hours) by
default; if you'd like a fresher result, you do something like
query.wikidata.org/sparql?cached=120 to get a result no older than 2
minutes, or cached=0 if you want an uncached one.
PRO: best performance improvement for most queries; works well with
queries that display data that rarely changes, such as lists, etc.
CON: for people who don't know about the cache option, it may be rather
confusing not to be able to get up-to-date results.
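For illustration, here is roughly how a client might use the proposed parameter under option 2 - a sketch only; the cached= parameter does not exist today, and the default freshness shown is just the one proposed above.

# Sketch of a client under proposal 2: results are at most ~1 minute old
# by default; the proposed (not yet existing) cached= parameter lets the
# client force a fresh result or accept an older one.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = "SELECT ?item WHERE { ?item wdt:P31 wd:Q146 } LIMIT 5"

def run(query, max_age=None):
    params = {"query": query, "format": "json"}
    if max_age is not None:
        params["cached"] = max_age   # 0 = force fresh, 3600 = up to 1h old
    return requests.get(ENDPOINT, params=params).json()

fresh = run(QUERY, max_age=0)   # always recomputed
default = run(QUERY)            # at most ~60 seconds old under this proposal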
So we'd like to hear - especially from current SPARQL endpoint users -
what you think about these options and which would work for you.
Also, for users of the WDQS GUI: provided we have cached and
uncached options, which one should the GUI return by default? Should it
always be uncached? Performance there is not a major question - the
traffic to the GUI is pretty low - but rather convenience. Of course, if
you run a cached query from the GUI and the data is in the cache, you can
get results much faster for some queries. OTOH, it may be important in many
cases to be able to access up-to-date content, not the cached version.
I also created a poll: https://phabricator.wikimedia.org/V8
so please feel free to vote for your favorite option.
OK, this letter is long enough already, so I'll stop here and wait to
hear what everybody thinks.
Thanks in advance,
--
Stas Malyshev
smalyshev(a)wikimedia.org
From Stas' answer to https://phabricator.wikimedia.org/T127070 I learned that the Wikidata Query Service does not "allow external federated queries ... for security reasons (it's basically open proxy)."
Now, obviously, endpoints referenced in a federated query via a SERVICE clause have to be open - so any attacker could send his queries directly instead of squeezing them through some other endpoint. The only scenario I can think of is that an attacker's IP is already blocked by the attacked site. If (instead of much more common ways to fake an IP) the attacker chose to do that via federated queries through WDQS, this _could_ result in WDQS being blocked by that endpoint.
This is a quite unlikely scenario - in the seven years I have been on SPARQL mailing lists I cannot remember this kind of attack ever having been reported - but of course it is legitimate to secure production environments against any conceivable attack vector.
However, I think it should be possible to query Wikidata with this kind of query. Federated SPARQL queries are a basic building block for Linked Open Data, and blocking them breaks many uses Wikidata could provide for the linked data cloud. This need not involve the highly protected production environment; it could be solved by an additional unstable/experimental endpoint under another address.
As an additional illustrating argument: there is an immense difference between referencing something in a SERVICE clause and getting a result in a few seconds, and having to use the Wikidata Toolkit instead. To get the initial query of this thread answered by the example program Markus kindly provided at https://github.com/Wikidata/Wikidata-Toolkit-Examples/blob/master/src/examp… (and which worked perfectly - thanks again!), it took me
- more than five hours to download the dataset (in my work environment wired to the DFN network)
- 20 min to execute the query
- considerable time to fiddle with the Java code for the query if I had to adapt it (+ another 20 min to execute it again)
For many parts of the world, or even for users in Germany with a slow DSL connection, the first point alone would prohibit any use. And even with a good internet connection, a new or occasional user would quite probably turn away when offered this procedure instead of getting a "normal" LOD-conformant query answered in a few seconds.
Again, I very much value your work and your determination to set up a service with very high availability and performance. Please make the great Wikidata LOD available in less demanding settings, too. It should be possible for users to do more advanced SPARQL queries for LOD uses in an environment where you cannot guarantee a high level of reliability.
Cheers, Joachim
-----Original Message-----
From: Wikidata [mailto:wikidata-bounces@lists.wikimedia.org] On behalf of Neubert, Joachim
Sent: Tuesday, 16 February 2016 15:48
To: 'Discussion list for the Wikidata project.'
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
Thanks Markus, I've created https://phabricator.wikimedia.org/T127070 with the details.
-----Original Message-----
From: Wikidata [mailto:wikidata-bounces@lists.wikimedia.org] On behalf of Markus Krötzsch
Sent: Tuesday, 16 February 2016 14:57
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
Hi Joachim,
I think SERVICE queries should be working, but maybe Stas knows more about this. Even if they are disabled, this should result in a proper error message rather than in a NullPointerException. Looks like a bug.
Markus
On 16.02.2016 13:56, Neubert, Joachim wrote:
> Hi Markus,
>
> Great that you checked that out. I can confirm that the simplified query worked for me, too. It took 15.6s and revealed roughly the same number of results (323789).
>
> When I loaded the results into http://zbw.eu/beta/sparql/econ_pers/query, an endpoint for "economics-related" persons, it matched with 36050 persons (supposedly the "most important" 8 percent of our set).
>
> What I normally would do to get the corresponding Wikipedia site URLs is a query against the Wikidata endpoint which references the relevant Wikidata URIs via a "service" clause:
>
> PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
> PREFIX schema: <http://schema.org/>
> #
> construct {
> ?gnd schema:about ?sitelink .
> }
> where {
> service <http://zbw.eu/beta/sparql/econ_pers/query> {
> ?gnd skos:prefLabel [] ;
> skos:exactMatch ?wd .
> filter(contains(str(?wd), 'wikidata'))
> }
> ?sitelink schema:about ?wd ;
> schema:inLanguage ?language .
> filter (contains(str(?sitelink), 'wikipedia'))
>     filter (?language in ('en', 'de')) }
>
> This however results in a java error.
>
> If "service" clauses are supposed to work in the wikidata endpoint, I'd happily provide addtitional details in phabricator.
>
> For now, I'll get the data via your java example code :)
>
> Cheers, Joachim
>
> -----Original Message-----
> From: Wikidata [mailto:wikidata-bounces@lists.wikimedia.org] On behalf
> of Markus Kroetzsch
> Sent: Saturday, 13 February 2016 22:56
> To: Discussion list for the Wikidata project.
> Subject: Re: [Wikidata] SPARQL CONSTRUCT results truncated
>
> And here is another comment on this interesting topic :-)
>
> I just realised how close the service is to answering the query. It turns out that you can in fact get the whole result set (currently >324000 items) together with their GND identifiers as a download *within the timeout* (I tried several times without any errors). This is a 63M JSON result file with >640K individual values, and it downloads in no time on my home network. The query I use is simply this:
>
> PREFIX wd: <http://www.wikidata.org/entity/>
> PREFIX wdt: <http://www.wikidata.org/prop/direct/>
>
> select ?item ?gndId
> where {
>    ?item wdt:P227 ?gndId ;  # get GND ID
>          wdt:P31 wd:Q5 .    # instance of human
> }
> ORDER BY ASC(?gndId)
> LIMIT 10
>
> (don't run this in vain: even with the limit, the ORDER clause
> requires the service to compute all results every time someone runs
> this. Also be careful when removing the limit; your browser may hang
> on an HTML page that large; better use the SPARQL endpoint directly to
> download the complete result file.)
>
> It seems that the timeout is only hit when adding more information (labels and wiki URLs) to the result.
>
> So it seems that we are not actually very far away from being able to answer the original query even within the timeout. Certainly not as far away as I first thought. It might not be necessary at all to switch to a different approach (though it would be interesting to know how long LDF takes to answer the above -- our current service takes less than 10sec).
>
> Cheers,
>
> Markus
>
>
> On 13.02.2016 11:40, Peter Haase wrote:
>> Hi,
>>
>> you may want to check out the Linked Data Fragment server in Blazegraph:
>> https://github.com/blazegraph/BlazegraphBasedTPFServer
>>
>> Cheers,
>> Peter
>>> On 13.02.2016, at 01:33, Stas Malyshev <smalyshev(a)wikimedia.org> wrote:
>>>
>>> Hi!
>>>
>>>> The Linked data fragments approach Osma mentioned is very
>>>> interesting (particularly the bit about setting it up on top of an
>>>> regularily updated existing endpoint), and could provide another
>>>> alternative, but I have not yet experimented with it.
>>>
>>> There is apparently this:
>>> https://github.com/CristianCantoro/wikidataldf
>>> though I am not sure what its status is - I just found it.
>>>
>>> In general, yes, I think checking out LDF may be a good idea. I'll
>>> put it on my todo list.
>>>
>>> --
>>> Stas Malyshev
>>> smalyshev(a)wikimedia.org
>>>
>>> _______________________________________________
>>> Wikidata mailing list
>>> Wikidata(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
>> _______________________________________________
>> Wikidata mailing list
>> Wikidata(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
>
> --
> Markus Kroetzsch
> Faculty of Computer Science
> Technische Universität Dresden
> +49 351 463 38486
> http://korrekt.org/
>
> _______________________________________________
> Wikidata mailing list
> Wikidata(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
_______________________________________________
Wikidata mailing list
Wikidata(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata