Denny Vrandečić wrote:
Accordingly, when I talk with the professors and researchers in this
area, including about this proposal, they are more focussed on specific
issues and don't know that much about the concrete systems (which is
understandable - the flow from research to practical systems is more
established in many areas). Never mind that when you get to the
linguistic side of it, instead of the computer science part, there are
even more competing theories, many of which are aimed toward much more
encompassing goals - covering the whole of language and natural
language understanding - which we want to shy away from.
Well, there is a whole research community at the crossroads of computer
science and linguistics: Computational Linguistics. The annual ACL
conference is taking place just this week: https://acl2020.org/ The CL
community may have its own quirks, but at least an understanding of
both linguistic problems and issues of technical implementation should
be there.
As mentioned by Chris Cooley, the goal will be to create a new wiki, a
library of functions, that can support any of these approaches. My
dream would be - and I see that Chris has already suggested this - that
experts like you and your colleagues create an overview of the state of
the art that will be accessible to the community and that will allow us
to make a well-informed decision, when the time comes, as to which path
to explore first.
I cannot tell anyone how to organize references to scholarly
publications and software artifacts, but I would at least recommend
using Wikidata to do so.
We can get nice overviews with Scholia once the references are
collected and organized in Wikidata. The current coverage of natural
language generation, however, is rather shallow:
https://scholia.toolforge.org/topic/Q1513879
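To illustrate how such topic-tagged references can be retrieved from Wikidata programmatically, here is a minimal Python sketch. It queries the public Wikidata Query Service for works whose "main subject" (property P921) is natural language generation (Q1513879). The query shape is an illustrative assumption on my part, not Scholia's actual implementation:

```python
# Sketch: list publications tagged with a given topic in Wikidata.
# Uses only the standard library; the exact SPARQL pattern is an
# illustrative assumption, not what Scholia itself runs.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://query.wikidata.org/sparql"  # public Wikidata Query Service

def build_query(topic_qid: str, limit: int = 10) -> str:
    """Return a SPARQL query for works with main subject (P921) topic_qid."""
    return f"""
    SELECT ?work ?workLabel WHERE {{
      ?work wdt:P921 wd:{topic_qid} .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """

def fetch_works(topic_qid: str) -> list:
    """Send the query to the endpoint; requires network access."""
    url = ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": build_query(topic_qid), "format": "json"})
    req = urllib.request.Request(
        url, headers={"User-Agent": "refs-sketch/0.1 (example)"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["results"]["bindings"]

# For natural language generation: build_query("Q1513879")
```

Scholia essentially renders the results of queries like this into the overview pages linked above, so adding well-tagged items to Wikidata directly improves that coverage.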
Even if Wikidata is not the best tool to collect references, it will
surely play some kind of role in Abstract Wikipedia, so it makes sense
to get used to it.
Jakob