To preface, an opportunity I see with this project, given its particular nature (open, with the potential for large-scale collaboration), is the ability to engage with language in a manner more firmly founded in a linguistic tradition (like FrameNet) than many other NL*X* projects.
Well, there is a whole research community at the crossroads between computer science and linguistics with Computational Linguistics. The annual ACL conference is taking place just this week: https://acl2020.org/ The CL community may have its own quirks, but at least an understanding of both linguistic problems and issues of technical implementation should be there.
I think it is important to involve the computational linguistics community because they have much of the experience in natural language generation. However, I think it is also important to involve red-blooded linguists and *linguist* computational linguists when discussions of theory and linguistic problems come into play. Much of computational linguistics is quite divorced from linguistics and the linguistic tradition in general.
Thanks,
Chris Cooley
On Thu, Jul 9, 2020 at 4:54 AM Jakob Voß jakob.voss@gbv.de wrote:
Denny Vrandečić wrote:
Accordingly, when I talk with the professors and researchers in this area, also about the proposal here, they are more focussed on specific issues, and don't know that much about the concrete systems (which is understandable - the flow from research to practical systems is more established in many areas). Never mind that when you get to the linguistic side of it, instead of the computer science part, there are even more competing theories, many of which are aimed toward much more encompassing goals and are about covering the whole of language and natural language understanding, which we want to shy away from.
Well, there is a whole research community at the crossroads between computer science and linguistics with Computational Linguistics. The annual ACL conference is taking place just this week: https://acl2020.org/ The CL community may have its own quirks, but at least an understanding of both linguistic problems and issues of technical implementation should be there.
As mentioned by Chris Cooley, the goal will be to create a new wiki, a library of functions, that can support any of these approaches. My dream would be - and I see that Chris had already suggested that - that experts like you and your colleagues create an overview of the state of the art that will be accessible to the community and that will allow us to make a well-informed decision when the time comes as to which path to explore first.
I cannot dictate how anyone organizes references to scholarly publications and software artifacts, but I would at least recommend using Wikidata to do so. We can get nice overviews with Scholia once the references are collected and organized in Wikidata. The current coverage of natural language generation, however, is rather shallow:
https://scholia.toolforge.org/topic/Q1513879
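For illustration, here is a minimal sketch (assuming Python with the requests package and the public Wikidata Query Service endpoint) that lists a few works whose main subject (P921) is natural language generation (Q1513879), which is roughly what the Scholia page above aggregates:

import requests  # assumes the 'requests' package is installed

# Sketch: fetch works tagged with main subject (P921) = natural language
# generation (Q1513879) from the Wikidata Query Service.
QUERY = """
SELECT ?work ?workLabel WHERE {
  ?work wdt:P921 wd:Q1513879 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "abstract-wikipedia-example/0.1"},  # placeholder UA
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["work"]["value"], "-", row["workLabel"]["value"])

The point is simply that once references carry a main subject statement in Wikidata, both Scholia and ad-hoc queries like this can build overviews from them.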
Even if Wikidata is not the best tool to collect references, it will surely play some kind of role in Abstract Wikipedia, so it makes sense to get used to it.
Jakob