The on-wiki version of this newsletter can be found here:
We have a logo for Wikifunctions!
The logo vote closed in March
The vote was a tremendous success, with 561 contributors casting their
votes. Following the vote we had to go through several rounds involving the
Legal team and the Design department, with the goal throughout to ensure
that we follow the will of the community. After this long (and costly)
process we are exhausted and yet delighted to present the finalized logo.
The original top-voted logo had to be modified as there was a high risk of
third parties opposing our use of it, and we learned of ongoing lawsuits
over other logos based on lambda characters. It seemed like it would be
only a matter of time before the original top-voted design caused trouble.
So we took a close look at the top three voted community submissions (by NGC
54 <https://ro.wikipedia.org/wiki/Utilizator:NGC_54>, Jon Harald Søby
<https://meta.wikimedia.org/wiki/User:Jon_Harald_S%C3%B8by>, and Steven Liu
Yi <https://meta.wikimedia.org/wiki/User:Stevenliuyi> respectively):
The designer combined elements from each of those: the circumscribed lambda
from the first logo, the incorporation of the Wikipedia W from the second
logo, and the idea of combining the W and the Lambda from the third logo.
We used these elements to enrich the winning proposal, and at the same time
to increase its distinctiveness and add an already protected and very
recognizable element.
A similar process happened with Wikidata, where the original submission was
refined into the final logo.
We hope that you like the logo! We have tested it with a few community
members and have seen positive reactions so far. We have not yet started
the process of registering the mark, or of printing T-shirts, stickers,
and buttons. We will also create the other assets (a scalable version, the
favicon, etc.), but we wanted to wait for your reactions before doing so.
We are very excited to hear from you.
Congratulations again to NGC 54
<https://ro.wikipedia.org/wiki/Utilizator:NGC_54> for creating the winning
proposal, and to Jon Harald Søby
<https://meta.wikimedia.org/wiki/User:Jon_Harald_S%C3%B8by> and Steven Liu
Yi <https://meta.wikimedia.org/wiki/User:Stevenliuyi> for contributing the
elements that became part of the final design.
The on-wiki version of this newsletter is available here:
Common wisdom has it that skills with numbers and programming go hand in
hand: if someone is not good at mathematics, then they'll be no good at the
natural sciences, technology, or engineering. These skills are said to go so
tightly together that people came up with a short acronym for their
combination: STEM.
Given the frequent use of formulas in science, technology, and engineering,
this seems to make sense: if you have a good instinct for numbers, units,
and relations between quantities, then you will more easily intuit
equations and scientific laws. Galileo said that all of science is written
in the language of mathematics, after all.
So, how would that not be true for programming a computer? They are called
computers, after all, because they compute numbers so well. The foundations
of computers are the two digits 1 and 0 and the very fast and repeated
processing of operations on long strings of these two digits.
Last year, a paper in Nature's Scientific Reports
<https://www.nature.com/articles/s41598-020-60661-8> actually tested this
widespread assumption. And, rather surprisingly, it found that STEM-style
numeracy skills were a weak predictor of the ability to learn to program.
Instead, it found a strong correlation between learning to program and
natural language aptitude.
I was very worried about the effort that we would need to undertake in
order to identify and recruit the right people for Wikifunctions: people
who can build a library of natural language generation functions for
hundreds of languages. Where would we find people skilled in both
under-represented languages and programming? Would there be enough of them?
Would they have the time to contribute to Wikifunctions or would they be
busy due to their rare combination of skills?
But as we can infer from the result in the Nature paper, this should turn
out to be easier than I initially feared. All we need to look for is
natural language aptitude, and through that we will cover all the necessary
skills.
It shouldn’t have come as a surprise. Lady Ada Lovelace
<https://en.wikipedia.org/wiki/Ada_Lovelace>, widely known as the world’s
first programmer, proclaimed that we would use programming to work with
art, and that numbers were not the only domain that computers could work
with. She likened programming to poetry. As a counter-point, Donald Knuth
<https://en.wikipedia.org/wiki/Donald_Knuth>, author of The Art of Computer
Programming, estimated that only about 2% of the population are what he
calls “geeks”,
with the mindset necessary for programming. He based this on his own
observations and his life-long work on teaching and outreach about
programming.
But in many of Knuth’s writings, just as in many other introductions to
programming, you will start with examples in mathematics. The first example
in The Art of Computer Programming is Euclid’s algorithm
<https://en.wikipedia.org/wiki/Euclidean_algorithm> to determine the
greatest common divisor, and even before you get to the first section
heading, entitled “Mathematical preliminaries”, he has already talked about
prime numbers and averages, asked you to give a mathematical proof, and had
you formulate a set-theoretic definition. Many other books introducing
programming are no different, often assuming fluency in at least high
school mathematics and sometimes beyond.
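Euclid's algorithm, the very example Knuth opens with, is short enough to sketch in a few lines of Python (a generic illustration, not Knuth's own presentation):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b); when the remainder reaches zero, the
    other number is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

Notably, even this "simplest" example already assumes comfort with remainders and divisors, which is exactly the kind of mathematical prerequisite in question here.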
Is it possible that by relying so much on a strong mathematical foundation
the field of computer science has systematically, if unintentionally,
excluded a large number of people who would otherwise be active
contributors to the world of programming? Can we imagine a more inclusive
approach to programming?
This is the community we should be aiming to grow and foster for
Wikifunctions: one where we do not exclude people because of their lack of
certain skills, such as mathematics. We want to give everyone the ability
to effectively use functions, to create functions, to share and talk about
functions. We should allow for people with different skill sets to
collaborate and reach more than any one of us can do. That is, and always
has been, the special advantage of the Wikimedia projects. Let us make a
concentrated effort to be open and welcoming.
And I think we can do so. To give one example: when Jeff Howard performed
user research for Wikifunctions, he found that many people didn't really
understand what we were aiming for with Wikifunctions. He cited existing
Wikimedia
contributors such as Vigneron
<https://meta.wikimedia.org/wiki/User:VIGNERON> who said that, while they
were excited about using Wikifunctions, they didn’t think they would
necessarily contribute to it. They didn't think of themselves as
programmers.
Earlier this year, we were talking about morphological paradigms and showed
how to create plurals in English. After we published that newsletter, one user saw
it and created a function <https://notwikilambda.toolforge.org/wiki/Z10148>,
tests <https://notwikilambda.toolforge.org/wiki/Z10150>, and an
implementation <https://notwikilambda.toolforge.org/wiki/Z10149> to do the
same thing in French. It was Vigneron!
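To give a flavor of what such a function looks like, here is a minimal sketch of English pluralization in Python (the rules and the function name are illustrative; this is not the actual implementation linked above):

```python
def plural_en(noun: str) -> str:
    """Naive English pluralization, covering only a few regular
    paradigms; real morphology needs many more rules and an
    exception list (child -> children, sheep -> sheep, ...)."""
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"
    # consonant + y -> ies (noun[-2:-1] is safe for 1-letter input)
    if noun.endswith("y") and noun[-2:-1] not in "aeiou":
        return noun[:-1] + "ies"
    return noun + "s"

print(plural_en("bus"))    # → buses
print(plural_en("berry"))  # → berries
print(plural_en("day"))    # → days
```

A French counterpart follows the same shape with different rules, which is exactly the kind of function Vigneron built.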
It will be challenging. It will require new and inclusive ways of product
development to thoughtfully and intentionally ensure Wikifunctions is a
welcoming and inclusive community. But let us all commit to it. Let us be
mindful in the examples we choose, in the tutorials we write, in the
language we use.
I have not been mindful of this concern in many of my talks. My examples
were often drawn from mathematics, and the very first implementation I
presented was a recursive application of addition, using it to calculate a
product. I will aim to do better, and I plan to draw my examples from other
domains, in particular from natural language generation. And whereas I
fully expect us to quickly build up a library of functions in different
areas of STEM, which is of course important, let us be especially mindful
not to emphasize these to the exclusion of other areas and skill sets.
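For reference, that mathematics-heavy first example, multiplication as repeated addition, looks roughly like this in Python (a standalone sketch assuming a non-negative second argument, not the original Wikifunctions implementation):

```python
def multiply(a: int, b: int) -> int:
    """Multiplication via recursive addition:
    a * b = a + a * (b - 1), with a * 0 = 0.
    Assumes b is a non-negative integer."""
    if b == 0:
        return 0
    return a + multiply(a, b - 1)

print(multiply(6, 7))  # → 42
```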
The insight from the Nature paper is a gift to our project. Let us be
careful not to squander it.
(The weekly newsletter is always a collaborative effort by the whole team.
This week’s newsletter in particular benefitted from discussions,
contributions, editing, questions, and comments by James Forrester, Cory
Massaro, Aishwarya Vardhana, Adam Baso, and Nick Wilson. -- Denny)
The recording of the presentation about Wikifunctions and Abstract
Wikipedia, with a Russian translation
<https://www.youtube.com/watch?v=x9NnGIXlvnI&t=20727s>, given at the
conference in Moscow, Russia, organized by Wikimedia RU,
is now available on YouTube. Thanks to Gulnara for the translation!
The video recording from the Data Con LA 2021 Panel on Structured Data
<https://www.youtube.com/watch?v=W3KqygL7yqQ> with Wikifunctions' Denny
Vrandečić, Heather Hedden, and Karen Lopez, hosted by Joe Devon, is now
available.
The Arabic presentation slides about Abstract Wikipedia and Wikifunctions
at WikiArabia by Houcemeddine Turki
<https://meta.wikimedia.org/wiki/User:Csisc> are now online on Meta. The
video recording is expected to be online later. Houcemeddine will also
present an English version of that talk at WikidataCon
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2021> next weekend.
Speaking of WikidataCon
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2021>: next weekend we
celebrate the ninth anniversary of Wikidata! From the 29th to the 31st of
October we have three days full of program, community, and data. This
year's WikidataCon is accessible online and will be co-hosted by Wikimedia
Deutschland <https://www.wikimedia.de/> and Wiki Movimento Brasil. You can
register for WikidataCon 2021 <https://pretix.eu/WDCon21/WDCon21/> for free!
At WikidataCon, on Friday, Tochi Precious of the Igbo community is joined
by Denny Vrandečić in “Igbo and Abstract Wikipedia - a conversation”,
hosted by Silvia Gutiérrez.
Also, we are looking forward to the fiftieth newsletter next week. Expect
something long in the making.
It is not my book, but I think it will interest people around here. It is an easy read, meant to be read by professionals and hobbyists alike. There is no code.
A human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems.
The open access edition of this book was made possible by generous funding from Arcadia – a charitable fund of Lisbet Rausing and Peter Baldwin.
One of the original goals of artificial intelligence research was to endow intelligent agents with human-level natural language capabilities. Recent AI research, however, has focused on applying statistical and machine learning approaches to big data rather than attempting to model what people do and how they do it. In this book, Marjorie McShane and Sergei Nirenburg return to the original goal of recreating human-level intelligence in a machine. They present a human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems that emphasizes meaning—the deep, context-sensitive meaning that a person derives from spoken or written language.
With Linguistics for the Age of AI, McShane and Nirenburg offer a roadmap for creating language-endowed intelligent agents (LEIAs) that can understand, explain, and learn. They describe the language-understanding capabilities of LEIAs from the perspectives of cognitive modeling and system building, emphasizing “actionability”—which involves achieving interpretations that are sufficiently deep, precise, and confident to support reasoning about action. After detailing their microtheories for topics such as semantic analysis, basic coreference, and situational reasoning, McShane and Nirenburg turn to agent applications developed using those microtheories and evaluations of a LEIA's language understanding capabilities.
McShane and Nirenburg argue that the only way to achieve human-level language understanding by machines is to place linguistics front and center, using statistics and big data as contributing resources. They lay out a long-term research program that addresses linguistics and real-world reasoning together, within a comprehensive cognitive architecture.
Sorry, we are a bit swamped with work. The originally planned update did
not work out and had to be postponed, and we didn't have the time to write
another one instead. So we decided to skip it for this week. See you again
next week!
The on-wiki version of this newsletter is available here:
This week we are happy to welcome Cai Blanton
<https://meta.wikimedia.org/wiki/User:CBlanton_(WMF)> to the Wikimedia
Foundation and to the Abstract Wikipedia team! I will let Cai introduce
herself with her own words.
“I am thrilled to be joining WMF as the Senior Engineering Manager for
Abstract Wikipedia. From my beginnings as a full-stack UX-focused software
engineer, I have focused my career on building products that make people’s
lives better, spanning from education to employment technology. At the
heart of it all lies my passion for DIBE (Diversity, Inclusion, Belonging,
and Equity) and drive to create an environment where collaboration is
personal and fun.
“Languages and the nuances of cross-cultural communication have fascinated
me since grade school when I took my first Spanish class. This interest has
only grown through my further language studies, my stint as a linguistics
major, and time living and working abroad in a multinational environment in
Western Europe and Scandinavia. The Abstract Wikipedia vision is
particularly compelling to me for these reasons.
“I look forward to working together with the community to advance this
global vision.”
We are also happy to welcome Adesoji Temitope to our team. Adesoji joined
us together with Lindsay from ThisDot. Adesoji
<https://twitter.com/temitopedavid_> is on Twitter. Here is his
introduction in his own words.
“I am Adesoji Temitope, a software developer at Thisdot. I am currently
based in Lagos, Nigeria.
“I love to play football and first-person shooter games.
“I started learning to code in my last year of high school and worked on a
lot of personal projects using PHP. I luckily got into a school that had a
lab set up by one of the lecturers, and I was able to really start learning
to work with people and had my first live code. I owe a lot of my early
assistance to my brother.
“I really love researching communities, and I am excited about the
opportunity to join the Wikimedia team.”
Please join us in welcoming Cai and Adesoji to the team!
On October 15, Houcemeddine Turki
<https://meta.wikimedia.org/wiki/User:Csisc> will present Wikifunctions
<https://www.wikiarabia2021.com/en_GB/event/propose-2/agenda> at next
week’s WikiArabia conference <https://www.wikiarabia2021.com/en_GB/> organized
by the Wikimedia Algeria User Group
<https://meta.wikimedia.org/wiki/Wikimedia_Algeria>. The presentation will
be in Arabic.
We also presented Wikifunctions and Abstract Wikipedia at the Russian
community's conference in Moscow, Russia, organized by Wikimedia RU.
The presentation was translated live into Russian. Thanks to Gulnara for
the translation! We will link to the recordings when they are available.
We also presented Wikifunctions and Abstract Wikipedia at the German WikiCon
<https://de.wikipedia.org/wiki/Wikipedia:WikiCon_2021> in Erfurt, Germany,
organized by the German-speaking Wikimedia community with support from
Wikimedia Deutschland
<https://meta.wikimedia.org/wiki/Wikimedia_Deutschland>. The presentation
was given in German. We will link to the recordings when they are
available.
Thanks to the communities and organizations for putting on these hybrid
events. It is beautiful to see the communities come together again, and
also to see the effort to keep allowing people to participate online. It is
a lot of work; thank you all for your efforts.
I thank you for your contributions to the Wikifunctions project. As an end user of Wikifunctions, I have been invited to speak at WikiArabia about Wikifunctions and Abstract Wikipedia in Arabic. That is why I developed and implemented several linguistic functions for the Arabic languages:
* Root and Pattern-Based Generator of Lexemes for Arabic Languages (Z10157)
* Pattern-Root Compatibility Verifier for Arabic Languages (Z10160)
* IPA Generator for Diacritized Arabic Script Texts in Tunisian Arabic (Z10163)
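To illustrate the first of these, root-and-pattern generation interleaves a consonantal root into a vocalized template. A simplified sketch in Python (the placeholder notation and function name are my assumptions, not the actual Z10157 code):

```python
def apply_pattern(root: str, pattern: str) -> str:
    """Interleave a triliteral root into a pattern template,
    where the digits 1, 2, 3 stand for the first, second, and
    third root consonants."""
    assert len(root) == 3, "this sketch handles triliteral roots only"
    return "".join(root[int(ch) - 1] if ch in "123" else ch
                   for ch in pattern)

# root k-t-b "writing": the pattern ma12a3 yields maktab "office"
print(apply_pattern("ktb", "ma12a3"))  # → maktab
```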
This involved creating Python code for the three functions, developing test functions, and writing descriptions for them. While developing the functions, I found several issues that could be addressed in the next few months:
1. When a word assigns two Arabic diacritics to a letter, this can cause problems for the system. For example, كَرَّر has two diacritics (a shaddah and a fatha) on its second letter. The shaddah should come below the fatha, as its effect should apply first. The Wikifunctions compilers do not currently handle this correctly, which can harm the processing of languages written in the Arabic script. This should be fixed.
2. The indentation of the source code has to be done by hand after pasting code into the field; there is no automatic indentation for pasted source code. This can hurt the user experience.
3. The mobile edition of the website does not work. Lucas Werkmeister has raised a ticket about this (T291325).
4. All these linguistic functions are taken from reference grammar books. It will be interesting to have a function that assigns a Wikidata item as a reference of a Wikifunctions function.
5. The runtime of the website is significant. Efforts should be made to make the project faster.
6. It will be interesting to align inputs with their corresponding Wikidata items to have better semantics for the functions.
7. System messages are not particularly user-friendly. This can be fixed.
8. The token for the connection to NotWikiLambda does not allow a long session; it disconnects roughly every fifteen minutes.
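Regarding point 1, one standard mitigation (shown here as an illustration; it is not necessarily what Wikifunctions does today) is Unicode canonical normalization, which imposes a single, predictable order on stacked diacritics such as shadda and fatha:

```python
import unicodedata

SHADDA = "\u0651"  # ARABIC SHADDA, canonical combining class 33
FATHA = "\u064E"   # ARABIC FATHA, canonical combining class 30

def canonical(text: str) -> str:
    """NFC-normalize so that the same letter-plus-diacritics
    sequence compares equal regardless of typing order."""
    return unicodedata.normalize("NFC", text)

a = "\u0643" + SHADDA + FATHA  # kaf + shadda + fatha
b = "\u0643" + FATHA + SHADDA  # kaf + fatha + shadda
print(a == b)                        # → False: raw strings differ
print(canonical(a) == canonical(b))  # → True after normalization
```

Normalizing inputs before comparison or further processing would make functions robust to either typing order, although rendering order on screen is a separate, font-level concern.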
Hello. I have recently been thinking about objectivity and subjectivity with respect to natural language generation, in particular in the context of story generation using historical data.
In the near future, digital humanities scholars – in particular historians – could modify collections of data and fine-tune generation-related parameters, watching as the resultant multimodal historical narratives emerge and vary. In this regard, we can envision both computer-aided and automated historical narrative generation tools and technologies.
Could AI be a long-sought objective narrator for historians? Is all narration, or all language use, inherently subjective? What might the nature of “generation-related parameters” and “fine-tuning” be for style and subjectivity when generating natural language and multimodal historical narratives from historical data?
Thank you. Hopefully, these topics are interesting.
Metilli, Daniele, Valentina Bartalesi, and Carlo Meghini. “A Wikidata-based tool for building and visualising narratives.” International Journal on Digital Libraries 20, no. 4 (2019): 417–432.
Metilli, Daniele, Valentina Bartalesi, Carlo Meghini, and Nicola Aloia. “Populating narratives using Wikidata events: An initial experiment.” In Italian Research Conference on Digital Libraries, pp. 159–166. Springer, Cham, 2019.