Hi, folks,
After one night of quick work, I have put together a proof of concept demonstrating the feasibility of combining Wikidata and Clojure logic programming.
The source code is here: https://github.com/mountain/knowledge
An example of an entity: https://github.com/mountain/knowledge/blob/master/src/entities/albert_einstein.clj
Example of types: https://github.com/mountain/knowledge/blob/master/src/meta/types.clj
Example of predicates: https://github.com/mountain/knowledge/blob/master/src/meta/properties.clj
Example of inference: https://github.com/mountain/knowledge/blob/master/test/knowledge/test.clj
We also found it is very easy to get the data in languages other than English.
So, thanks very much for your great work!
But I found the semantic layer of Wikidata is shallow: you can know who Einstein's father and children are, but it cannot be inferred automatically from Wikidata that Einstein's father is the grandfather of Einstein's children.
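To make this concrete, here is a minimal sketch in Clojure's core.logic of the kind of rule I mean (the fact database and entity names are invented for illustration; this is not the exact format used in my repository):

;; Minimal sketch with clojure/core.logic; entity names are illustrative only.
(require '[clojure.core.logic :refer [run* fresh]]
         '[clojure.core.logic.pldb :as pldb])

;; father(child, father) -- facts of the kind Wikidata stores directly
(pldb/db-rel father c f)

(def facts
  (pldb/db
    [father :hans-albert-einstein :albert-einstein]
    [father :albert-einstein :hermann-einstein]))

;; The rule Wikidata cannot express today: a grandfather is the father's father.
(defn grandfathero [x g]
  (fresh [f]
    (father x f)
    (father f g)))

(pldb/with-db facts
  (run* [g] (grandfathero :hans-albert-einstein g)))
;; => (:hermann-einstein)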
So my question is:
- Do we have a long term plan to evolve wikidata towards a semantic-rich dataset?
Regards, Mingli
Hi Mingli,
thanks, this is very interesting, but I think I need a bit more context to understand what you are doing and why.
Is your goal to create a library for accessing Wikidata from Clojure (like a Clojure API for Wikidata)? Or is your goal to use logical inference over Wikidata and you just use Clojure as a tool since it was most convenient?
To your question:
- Do we have a long term plan to evolve wikidata towards a semantic-rich dataset?
There are no concrete designs for adding reasoning features to Wikidata so far (if this is what you mean). There are various open questions, especially related to inferencing over qualifiers. But there are also important technical questions, especially regarding performance. I intend to work out the theory in more detail soon (that is: "How should logical rules over the Wikidata data model work in principle?"). The implementation is then the next step. I don't think that any of this will be part of the core features of Wikidata soon, but hopefully we can set up a useful external service for Wikidata search and analytics (e.g., to check for property constraint violations in real time instead of using custom code and bots).
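As a hypothetical illustration of the kind of check such a service could run (in Clojure, to stay with Mingli's setup; the relations and claims below are invented, and a real service would of course query the live data): find father claims that lack the inverse child claim.

;; Hypothetical sketch: finding "inverse constraint" violations, i.e.
;; father(c, f) claims that have no matching child(f, c) claim.
(require '[clojure.core.logic :as l]
         '[clojure.core.logic.pldb :as pldb]
         '[clojure.set :as set])

(pldb/db-rel father c f)
(pldb/db-rel child p c)

(def claims
  (pldb/db
    [father :hans-albert :albert-einstein]
    [child :albert-einstein :hans-albert]
    [father :eduard :albert-einstein]))   ; inverse child claim is missing

(def father-pairs
  (pldb/with-db claims
    (set (l/run* [q] (l/fresh [c f] (father c f) (l/== q [f c]))))))

(def child-pairs
  (pldb/with-db claims
    (set (l/run* [q] (l/fresh [p c] (child p c) (l/== q [p c]))))))

(set/difference father-pairs child-pairs)
;; => #{[:albert-einstein :eduard]}  -- a constraint violation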
Cheers,
Markus
Thanks, Markus,
About the background:
One part is related to my current work, and I cannot say too much about it. But there is another story I can tell publicly.
After playing with Wikidata for a while, I realised that we have the potential to create a WolframAlpha-like application. To achieve this, all we need is an indexer, a generator, a reasoner, and a traverser.
Take a look at simplenlg ( https://code.google.com/p/simplenlg/ ): it can take claim triples and transform them into question sentences. We then index the answers by those question sentences, where the answers are provided by Wikidata claims.
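Here is a hypothetical sketch of the generator + indexer step (the claims and question templates are invented for illustration; a real generator would use simplenlg instead of hand-written templates):

;; Hypothetical sketch: turn claim triples into question sentences and
;; index the answers by those sentences.
(def claims
  [{:subject "Albert Einstein" :property "father"     :object "Hermann Einstein"}
   {:subject "Albert Einstein" :property "birthplace" :object "Ulm"}])

(defn claim->question [{:keys [subject property]}]
  (case property
    "father"     (str "Who is the father of " subject "?")
    "birthplace" (str "Where was " subject " born?")
    (str "What is the " property " of " subject "?")))

;; Index: question sentence -> answer taken from the Wikidata claim.
(def qa-index
  (into {} (map (juxt claim->question :object) claims)))

(qa-index "Where was Albert Einstein born?")
;; => "Ulm"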
And then we get a QA engine with weak abilities, i.e.:
- it only knows who Einstein's father and children are, and does not know that a grandfather is the father's father;
- it only knows which planets belong to the Solar System, and does not know which is the biggest, the farthest, etc.
So if we add some axioms to the system, together with a reasoner and a traverser, we have the potential to enumerate all possible *simple* questions and answers of human knowledge. Then we get a QA engine with strong abilities.
This is what is on my mind right now. I know these kinds of things are never easy, but they are possible in the near future.
Regards, Mingli
I could offer an interface: https://toolserver.org/~magnus/thetalkpage/
On 07/08/13 10:20, Magnus Manske wrote:
I could offer an interface: https://toolserver.org/~magnus/thetalkpage/
Yes, Magnus, I would also have suggested this next :-)
Mingli, inspired by Magnus' demo, we have indeed been talking about such QA interfaces to Wikidata a bit already. But as you also note, there is still a lot to be done for this to work. I agree that reasoning will play an important role for this (because we don't want to store all "grandfather" relations in Wikidata when they can be inferred easily). We are also in contact with NLP people regarding generic language interfaces ("understanding" questions instead of indexing generated questions). A number of groups have been doing work in this area using DBPedia or Yago, but Wikidata adds new challenges due to the more complex data model.
Markus
Very cool, Magnus!
Does it do real queries on Wikidata, or is it only a UI thing?
Hi, Markus,
About your first question - why did I choose Clojure?
See my answer to Kartsen: https://groups.google.com/forum/#!topic/clojure/W9KwnX1lVCo
On Wed, Aug 7, 2013 at 3:20 PM, Mingli Yuan mingli.yuan@gmail.com wrote:
Very cool, Magnus!
Does it do real queries on Wikidata, or is it only a UI thing?
It does use live Wikidata. "Reasoning" is hacked with a few hardcoded regular expressions ;-)
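Roughly, the idea looks like this hypothetical sketch (not the actual code; the patterns and the property mapping are invented):

;; Hypothetical sketch of regex-based "reasoning": match a question
;; pattern, extract the entity, and map it to a Wikidata property lookup.
(def patterns
  [[#"(?i)who is the father of (.+)\?"  "P22"]    ; P22 = father
   [#"(?i)where was (.+) born\?"        "P19"]])  ; P19 = place of birth

(defn parse-question [q]
  (some (fn [[re property]]
          (when-let [[_ entity] (re-find re q)]
            {:property property :entity entity}))
        patterns))

(parse-question "Who is the father of Albert Einstein?")
;; => {:property "P22", :entity "Albert Einstein"}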
Also, similar to Magnus' Wiri, here is a bot developed by us on Sina Weibo (a Twitter-like microblogging provider in China):
* http://weibo.com/n/%E6%9E%9C%E5%A3%B3%E5%A8%98
We use the dataset from Wikidata with some dirty hacks. It was only a few days of quick work.
We are really excited about the availability of such a big dataset. The potential of Wikipedia and Wikidata is unlimited!
Long live the free knowledge!
Regards, Mingli
On 07/08/13 15:40, Mingli Yuan wrote:
Also, similar to Magnus' Wiri, here is a bot developed by us on Sina Weibo (a Twitter-like microblogging provider in China)
We use the dataset from Wikidata with some dirty hacks. It was only a few days of quick work.
Sounds exciting (and we always like to learn about uses of the data), but could you give a short description in English what is happening there? The above link takes me to a Chinese registration form only ;-)
Markus
Hi, Markus,
It is a very common and simple QA scenario. Here is a translation of the related dialogue:
殷西: @果壳娘 美国的国家元首是谁 (yinxi: @gktan Who is the president of the U.S.A.?)
果壳娘: @殷西 贝拉克·奥巴马 (gktan: @yinxi Barack Obama)
瓦克星: @果壳娘 莎士比亚的出生地在哪里? (wahlque: @gktan Where was Shakespeare born?)
果壳娘: @瓦克星 埃文河畔斯特拉特福 (gktan: @wahlque Stratford-upon-Avon)
......