I would appreciate clarification about what is proposed with regard to exposing the problematic Wikidata ontology on Wikipedia. If the idea involves inserting poor-quality information onto English Wikipedia in order to spur us to fix problems with Wikidata, then I am likely to oppose it. English Wikipedia is not an endless resource of free labor, and we have too few skilled and good-faith volunteers to handle our already enormous scope of work.
Pine ( https://meta.wikimedia.org/wiki/User:Pine )
Hoi Pine, The ontology of Wikidata has nothing to do with English Wikipedia. The notion that English Wikipedia is the only endless resource of free labour is pathetic. This dismissive attitude prevents functional contributions that would benefit the users of Wikimedia projects.
For authors of "scholarly articles" we have an increasing amount of information that is impossible for Wikipedia to include. It does not take much to have a template that shows them (collapsed by default) and links to the "Scholia" information for the paper.
For authors of books we could have a similar template. They could link to *your local library* where you can check if it is available for reading. Alternatively we could link to the "Open Library".
What it would do is provide a SERVICE to our readers that is easy enough to provide, that leverages the data in Wikidata, and that is of high quality. The issue with the ontology has everything to do with the discovery of images on Commons. It cannot get worse than it is; it is dysfunctional. It only works for English, and I understand that is something you do not really notice.
Yes, I do recognise that Wikidata is a wiki. It is a work in progress, and as such its quality and quantity steadily improve, just like English Wikipedia's. Thanks, Gerard
_______________________________________________ Wikidata mailing list Wikidata@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikidata
On 19/10/2018 07:09, Pine W wrote:
You are right, and thankfully this is not what is proposed. The proposal was to offer people who search for Commons media the (maybe optional) possibility to find more results by letting the search engine traverse the "more-general-than" links stored in Wikidata. People have discovered cases where some of these links are not correct (surprise! it's a wiki ;-), and the suggestion was that such glitches would be fixed with higher priority if there were an application relying on them. But even with some wrong links, the results a searcher gets would still include mostly useful hits. Moreover, at least half of the currently observed problems with this approach would lead to fewer results (e.g., dogs would be hard to include automatically in a search for all mammals), but in such cases the proposed extension would simply do what the baseline approach (ignoring the links) would do anyway, so the service would not get any worse. Also, the manual workarounds suggested by some (adding "mammal" to all pictures of some "dog") would be compatible with this, so one could do both to improve the search experience on both ends.
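The subclass-traversal idea described above can be sketched in a few lines. This is a toy illustration, not how the Commons search backend works; the class links and file names are invented, standing in for Wikidata's "subclass of" (P279) statements:

```python
from collections import defaultdict

# Hypothetical "more-general-than" links (child -> parents), standing in
# for Wikidata "subclass of" (P279) statements.
SUBCLASS_OF = {
    "poodle": ["dog"],
    "dog": ["pet"],              # note: "dog -> mammal" is missing, as in the example
    "cat": ["pet", "mammal"],
}

# Hypothetical media files, each tagged with a single class.
MEDIA_TAGS = {
    "img1.jpg": {"poodle"},
    "img2.jpg": {"cat"},
    "img3.jpg": {"pet"},
}

def narrower_terms(term):
    """All classes whose subclass-of chain reaches `term` (incl. itself)."""
    children = defaultdict(set)          # invert the child -> parents map
    for child, parents in SUBCLASS_OF.items():
        for p in parents:
            children[p].add(child)
    result, stack = {term}, [term]
    while stack:
        for c in children[stack.pop()]:
            if c not in result:
                result.add(c)
                stack.append(c)
    return result

def search(term):
    """Files tagged with `term` or any narrower class."""
    terms = narrower_terms(term)
    return sorted(f for f, tags in MEDIA_TAGS.items() if tags & terms)
```

With this data, a search for "pet" finds all three images, while a search for "mammal" misses the poodle because the "dog -> mammal" link is absent; that is the "fewer results, not worse results" behaviour described above.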
Best regards,
Markus
Hi!
possibility to find more results by letting the search engine traverse the "more-general-than" links stored in Wikidata. People have discovered cases where some of these links are not correct (surprise! it's a wiki ;-), and the suggestion was that such glitches would be fixed with higher priority if there would be an application relying on it. But even
The main problem I see here is not that some links are incorrect - which may have bad effects, but it's not the most important issue. The most important one, IMHO, is that there's no way to figure out in any scalable and scriptable way what "more-general-than" means in any particular case.
It's different for each type of object and often inconsistent within the same class (e.g., see the confusion over whether "dog" is an animal, a name of the animal, the name of the taxon, etc.). It's not that navigating the hierarchy would lead us astray - we're not even there yet to have that problem, because we don't even have a good way to navigate it.
Using instance-of/subclass-of alone seems not to be that useful, because a lot of interesting things are not represented that way - e.g., finding out that Donna Strickland (Q56855591) is a woman (Q467) is impossible using only this hierarchy. We could special-case a bunch of those, but given how diverse Wikidata is, I don't think this will ever cover any significant part of the hierarchy unless we find a non-ad-hoc method of doing this.
This also makes it particularly hard to do something like "let's start using it and fix the issues as we discover them", because the main issue here is that we don't have a way to start with anything useful beyond a tiny subset of classes that we can special-case manually. We can't launch a rocket and figure out how to build the engine later - having a working engine is a prerequisite to launching the rocket!
There are also significant technical challenges here - indexing a dynamically changing hierarchy is very problematic, and with our approach to ontology anything can be a class, so we'd have to constantly update the hierarchy. But this is more of a technical challenge, which will come after we have some solution for the above.
Hi Stas,
Thanks for elaborating. I think we could always start by traversing only "subclass of". In spite of its limits, it does work in many areas (e.g., buildings, astronomical objects, vehicles, organisations), even if by far not in all. Where it doesn't work, one would simply not get enough results, but the alternative (not using "subclass of" at all) would just make this problem worse. Any approach to fixing the latter will also help the former.
Now regarding issues such as dog, woman, and many other things, it seems clear that what one would need are inference rules. It should be possible to say somewhere that "if a human is female, then she is also a woman" without having to add the unwanted statement "instance of woman" everywhere. Or "if someone has profession 'programmer', then he/she/they is/are a programmer" -- at least for the purpose of media search. The case of dogs would be more complicated (requiring quantifiers) but still doable.
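A minimal sketch of what such inference rules could look like in code. The rule encoding here is purely hypothetical (Wikidata has no such mechanism today), and it simplifies items to single-valued property maps; the property and item IDs, however, are real Wikidata IDs:

```python
# Each rule: if an item matches all (property, value) conditions, treat it
# as an instance of the given class -- without storing that statement in
# the data. The rule format itself is a made-up illustration.
RULES = [
    # "if a human is female, then she is also a woman"
    # P31 = instance of, Q5 = human, P21 = sex or gender, Q6581072 = female
    ({"P31": "Q5", "P21": "Q6581072"}, "Q467"),      # Q467 = woman
    # "if someone has occupation programmer, they are a programmer"
    # P106 = occupation, Q5482740 = programmer
    ({"P106": "Q5482740"}, "Q5482740"),
]

def inferred_classes(item_statements):
    """Classes an item belongs to according to the rules.

    item_statements is a simplified {property: value} map (real Wikidata
    statements are multi-valued and qualified)."""
    out = set()
    for conditions, cls in RULES:
        if all(item_statements.get(p) == v for p, v in conditions.items()):
            out.add(cls)
    return out

# Illustrative item: a human, female, occupation physicist (Q169470).
donna = {"P31": "Q5", "P21": "Q6581072", "P106": "Q169470"}
```

Applications could then tag or index items with these inferred classes on the fly, which is the "without touching the underlying data" property argued for above.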
Obvious questions arise:

* Would we prefer to maintain such rules somewhere rather than adding the relations they might infer manually? (Probably yes, since one would need far fewer rules than manual statements, which would always add redundancy and cause conflicts -- cf. the taxonomy modelling discussion -- that are not necessary when applications can select which inference rules to use without touching the underlying data.)
* How would the rules look to human editors? (We have made some first proposals for this; see the rules supported by SQID [1]; but one can come up with other options.)
* Where would such rules be managed? (Preferably on Wikidata, but encoding them in statements would be a challenge; another challenge is how to associate rules with entities -- usually they make connections between several entities.)
* How would the rules be applied to the live data, especially if there are many updates? (Doable using known algorithms and based on existing tools, but it still needs some implementation work; I think for a start one could just reduce the update speed of these "inferred tags" and still get a big improvement over the case where nothing of this type is done at all.)
So would this be a mid-term goal to overcome this issue? I would think so, also because there are enough degrees of freedom here to gradually grow this from simple (only allow rules that effectively add some more traversal hints) to powerful (have rules that can use qualifiers, as needed to get from dog to mammal). The main challenge is to find a good approach for community-editing this part without restricting upfront to a few special cases (as for the case of the constraints).
Inference rules come up as potential solutions in many similar tasks where you want users to access/query the data. Imagine someone looking for the brothers of a person (let's assume we'd built an intelligent search for such things) -- again, Wikidata has no concept of "brother", and we would not have any idea how to answer this unless somewhere we had a rule that defines how to find brother relationships from the data that we actually have. This happens a lot when you want users who are not familiar with how we organise the data to find things, but the solution cannot be to add every possible view/inferred statement to Wikidata explicitly.
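The brother example could be expressed the same way: a rule combining two real Wikidata properties, P3373 (sibling) and P21 (sex or gender, with Q6581097 = male). The data model and the names below are invented for illustration:

```python
# Hypothetical rule: X is a brother of Y if X is a sibling of Y (P3373)
# and X's sex/gender (P21) is male (Q6581097).
def brothers(person, statements):
    """statements: {item: {property: [values]}} -- a simplified store."""
    sibs = statements.get(person, {}).get("P3373", [])
    return [s for s in sibs
            if "Q6581097" in statements.get(s, {}).get("P21", [])]

# Purely illustrative data (not real Wikidata items).
data = {
    "ada": {"P3373": ["allegra", "clara"]},
    "allegra": {"P21": ["Q6581072"]},   # female
    "clara": {"P21": ["Q6581097"]},     # male
}
```

A search layer with such a rule could answer "brothers of ada" without any "brother" statement ever being stored.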
Obviously, the rule approach is not something we could deploy anytime soon, but it could be something to work towards ...
Cheers,
Markus
[1] Example rule with explanation of how it was applied to find a grandfather of Ada Lovelace: https://tinyurl.com/y7rgmk7o The qualifier sets (X, Y, Z) are unused here and could be hidden entirely, but this is just a prototype.
On 20/10/2018 00:28, Stas Malyshev wrote:
On Fri, Oct 19, 2018 at 9:47 AM Markus Kroetzsch <markus.kroetzsch@tu-dresden.de> wrote:
Hi Markus, I seem to be missing something. Daniel said, "And I think the best way to achieve this is to start using the ontology as an ontology on wikimedia projects, and thus expose the fact that the ontology is broken. This gives incentive to fix it, and examples as to what things should be possible using that ontology (namely, some level of basic inference)." I think that I understand the basic idea behind structured data on Commons. I also think that I understand your statement above. What I'm not understanding is how Daniel's proposal to "start using the ontology as an ontology on wikimedia projects, and thus expose the fact that the ontology is broken" isn't a proposal to add poor-quality information from Wikidata onto Wikipedia and, in the process, give Wikipedians more problems to fix. Can you or Daniel explain this?
Separately, someone wrote to me off list to make the point that Wikipedians who are active in non-English Wikipedias also wouldn't appreciate having their workloads increased by having a large quantity of poor-quality information added to their edition of Wikipedia. I think that one of the person's concerns is that my statement could have been interpreted as implying something like "it's okay to insert poor-quality information on non-English Wikipedias because their standards are lower". I apologize if I gave the impression that I would approve of a non-English language edition of Wikipedia being on the receiving end of an unwelcome large addition of information that requires significant effort to clean up. Hopefully my response here will address the concerns that I heard off list, and if not then I welcome additional feedback.
Thanks,
Hi!
data on Commons. I also think that I understand your statement above. What I'm not understanding is how Daniel's proposal to "start using the ontology as an ontology on wikimedia projects, and thus expose the fact that the ontology is broken." isn't a proposal to add poor quality information from Wikidata onto Wikipedia and, in the process, give Wikipedians more problems to fix. Can you or Daniel explain this?
While I cannot pretend to have expert knowledge and do not purport to interpret what Daniel meant, I think here we must remember that Wikipedia, while of course of huge importance, is not the only Wikimedia project, so "start using it on Wikimedia projects" does not necessarily mean "start using it on Wikipedia", much less "start adding bad information to Wikipedia". There are other ways to use the data, including imperfect ontologies - e.g., for search, for bot guidance, for quality assurance and editor support, and many others. I am not prescribing a specific scenario here, just reminding us that "using the ontology on Wikimedia projects" can mean a wide variety of things.
Separately, someone wrote to me off list to make the point that Wikipedians who are active in non-English Wikipedias also wouldn't appreciate having their workloads increased by having a large quantity poor-quality information added to their edition of Wikipedia. I think
I am sure that would be a bad thing. But I don't think anything we are discussing here would lead to that happening.
Hi All,
Just to address what Markus was hinting at with inference rules: both positive and negative rules could be stored. Back in the Freebase days, we had those, and they were called "mutexes". We used them for "type incompatible" hints to users and stored those "type incompatible" mutex rules in the knowledge graph. (Freebase was a type-based system, with properties under each type.)
Such as: ORGANIZATION != SPORT
You actually have all those type-incompatibility mutexes in the Freebase dumps handed to you, so you could start there. The biggest one was called the "Big Momma Mutex". Here is an archived email thread to give further context: https://freebase.markmail.org/thread/z5o7nlnb62n5t22o
Anyway, the point is that those rules worked well for us in Freebase, and I can see rules also working wonders in various ways in Wikidata. Maybe it's just a mutex at each class, where multiple statements could hold rules?
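Once stored, such mutex rules are cheap to check. A rough sketch follows; the storage format and class names are invented for illustration, not Freebase's or Wikidata's actual encoding:

```python
# Pairs of classes that no item may instantiate simultaneously, in the
# spirit of Freebase's "type incompatible" mutexes (e.g. ORGANIZATION != SPORT).
MUTEXES = [
    ("organization", "sport"),
    ("human", "taxon"),
]

def violations(classes):
    """Return the mutex pairs violated by an item's set of classes."""
    cs = set(classes)
    return [(a, b) for a, b in MUTEXES if a in cs and b in cs]
```

An editing tool could run this check on save and surface the violated pairs as hints to the user, which is how Freebase reportedly used them.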
Thad +ThadGuidry https://www.google.com/+ThadGuidry
There is already something to handle this kind of "mutex" on Wikidata: "disjoint union of"; see for example its usage on https://www.wikidata.org/wiki/Q180323 . The statements are used on the talk page by templates that use them to generate queries to find instances that violate the mutex: https://www.wikidata.org/wiki/Talk:Q180323 (for example this query https://query.wikidata.org/#select%20%3Fitem%20where%20%7B%0A%09%3Fitem%20wdt%3AP31%2Fwdt%3AP279%2A%20wd%3AQ180323%20%20minus%20%7B%0A%09%09%7B%0A%09%09%09%3Fitem%20wdt%3AP31%2Fwdt%3AP279%2A%20wd%3AQ900457%20%0A%09%09%7D%20union%20%7B%0A%09%09%09%3Fitem%20wdt%3AP31%2Fwdt%3AP279%2A%20wd%3AQ578786%20%0A%09%09%7D%20union%20%7B%0A%09%09%09%3Fitem%20wdt%3AP31%2Fwdt%3AP279%2A%20wd%3AQ405478%20%0A%09%09%7D%20union%20%7B%0A%09%09%09%3Fitem%20wdt%3AP31%2Fwdt%3AP279%2A%20wd%3AQ46993066%20%0A%09%09%7D%20union%20%7B%0A%09%09%09%3Fitem%20wdt%3AP31%2Fwdt%3AP279%2A%20wd%3AQ2253183%20%0A%09%09%7D%0A%09%7D%0A%7D , which unsurprisingly does not find anything, because I don't expect to find a lot of vertebra instances on Wikidata :) )
On Sat, 20 Oct 2018 at 12:09, Thad Guidry thadguidry@gmail.com wrote:
Hi Pine,
As I understood Daniel, he did not talk about inserting low-quality content into any project, Wikipedia or other. What I believe he meant by "using the ontology" is using it to improve search/discovery services that help editors find things (i.e., technical infrastructure, not editorial content). Doing so could yield an additional amount of mostly useful results, but it will not yet be enough to get all the results a user would intuitively expect. Maybe his wording made this sound a bit too dramatic -- I think he just wanted to emphasize the point that any actual use will immediately provide motivation and guidance for Wikidata editors to improve things that are currently imperfect.
I agree with him in that I think we need to identify ways of moving gradually forward, offering the small benefits we can already provide while creating an environment that allows the community to improve things step by step. If we ask for perfection before even starting, we will get into a deadlock where we bind editor resources in redundant tagging tasks instead of empowering the community to improve the situation in a sustainable way.
Cheers,
Markus
On 20/10/2018 06:51, Pine W wrote:
Hi Pine, sorry for the misleading wording. Let me clarify below.
On 19 Oct 2018, at 9:51 p.m., Pine W wrote:
What I meant in concrete terms was: let's start using Wikidata items for tagging on Commons, even though search results based on such tags will currently not yield very good results, due to the messy state of the ontology, and hope people fix the ontology to get better search results. If people use "poodle" to tag an image and it is not found when searching for "dog", this may lead to people investigating why that is and coming up with ontology improvements to fix it.
What I DON'T mean is "let's automatically generate navigation boxes for Wikipedia articles based on an imperfect ontology, and push them on everyone". I mean, using the ontology to generate navigation boxes for some kinds of articles may be a nice idea, and could indeed have the same effect - that people notice problems in the ontology and fix them. But that would be something the local wiki communities decide to do, not something that comes from Wikidata or the Structured Data project.
The point I was trying to make is: the wiki communities are rather good at creating structures that serve their purpose, but they do so pragmatically, along the behavior of the existing tools. So, rather than trying to work around the quirks of the ontology in software, the software should use very simple rules (such as following the subclass relation) and let people adapt the data to this behavior, if and when they find it useful to do so. This approach, over time, provides better results in my opinion.
Also, keep in mind that I was referring to an imperfect *improvement* of search, the alternative being to only return things tagged with "dog" when searching for "dog". I was not suggesting degrading the user experience in order to incentivize editors. I'm rather suggesting the opposite: let's NOT give people a reason to tag images that show poodles with "poodle" and "dog" and "mammal" and "animal" and "pet" and...
On Sat, Oct 20, 2018 at 4:41 PM Daniel Kinzler dkinzler@wikimedia.org wrote:
-- Daniel Kinzler Principal Software Engineer, Core Platform Wikimedia Foundation
Hi Daniel,
Thanks for the explanation. I think that I now better understand what you're proposing. This explanation of the proposal sounds reasonable to me in a way that my earlier understanding of the proposal did not.
By the way, I don't know what your normal work schedule is, but I usually don't expect staff to respond to non-urgent emails over the weekend, although I appreciate it. :) Waiting until Monday is usually fine.
Dear Wikibase Enthusiasts,
if you happen to speak German and if you feel intrigued about the Illuminati, this might be of interest to you:
https://blog.factgrid.de/archives/1151
We will use our upcoming Illuminati-Workshop on Nov. 16/17 to discuss how we can make better use of our Wikibase installation here at Gotha.
https://database.factgrid.de/wiki/Main_Page
The database is filled with metadata of Illuminati documents and (selected) membership information and is supposed to help us with the complexities of our Illuminati wiki (https://projekte.uni-erfurt.de/illuminaten/Main_Page), but we do not yet have the clearest idea of what we have produced here, or of what we possibly can.
If you feel intrigued - we pay travel expenses and accommodation - contact me before Nov. 5, 2018.
Looking forward to an illuminating workshop, Olaf