The on-wiki version of this newsletter can be found here:
https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Updates/2021-05-28
We have talked a lot in the past about what Wikifunctions aims to become: a
Wikimedia project for everyone to collaboratively create and maintain a
library of code functions to support the Wikimedia projects and beyond, for
everyone to call and re-use in the world's natural and programming
languages.
Today, in the tradition of the influential WP:NOT policy on English
Wikipedia <https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_not>,
we publish an essay on what Wikifunctions aims not to be. WP:NOT was
started back in 2001, and was an important influence on the early
development of the English Wikipedia - evidenced by the fact that more than
2 million links to that page exist within the English Wikipedia.
So, without further ado — what Wikifunctions is not:
*Wikifunctions is not an encyclopædia of algorithms* in the sense that we
will have pages for famous and not-so-famous algorithms such as Euclid’s
<https://en.wikipedia.org/wiki/Euclidean_algorithm>, Newton’s
<https://en.wikipedia.org/wiki/Newton%27s_method>, or Dijkstra’s algorithm
<https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm>, aiming to represent
all existing algorithms faithfully and in their historical context. Yes, we
expect to have a function for the greatest common divisor
<https://en.wikipedia.org/wiki/Greatest_common_divisor> (GCD) of two
integers. And there might or might not be one or more implementations
based on Euclid’s algorithm to calculate it. But Wikifunctions would not
be incomplete if there were none, and if, instead, we had only alternative
algorithms for the GCD. If you are looking for an encyclopedic survey of
algorithms, many Wikipedias are actually great resources
<https://en.wikipedia.org/wiki/List_of_algorithms>.
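To make that concrete, here is one possible implementation of a GCD
function based on Euclid’s algorithm, sketched in Python (the function
name is ours, and whether Wikifunctions will carry this exact
implementation is, as said, an open question):

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two integers, via Euclid's
    algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
    until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(48, 18))  # 6
```

A different implementation of the same function could, for instance, use
prime factorization instead, and Wikifunctions would be no poorer for it.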
Unlike an encyclopedic overview of existing algorithms, Wikifunctions will
also invite original work. We will not be restricted to functions that have
been published elsewhere first, and we do not require every function
and implementation to be based on previously published work. Wikifunctions,
much like Wikibooks and very unlike Wikipedia, will be open to novel
contributions. The main criteria for implementations will be: under which
conditions can we run a given implementation, and what resources is it
expected to take?
*Wikifunctions is not an app development site*. We do not expect to make it
possible to create full-fledged, stand-alone apps within Wikifunctions -
there will be no place to store state, we don’t aim to allow calling
external APIs or directly cause changes to other sites, and we don’t aim to
package up apps with icons and UX, etc. We absolutely expect Wikifunctions
to be a very useful resource for app developers, and I can very much
imagine apps that are basically wrappers around one or more functions from
Wikifunctions, but these would still need code and other assets which
wouldn’t be part of Wikifunctions. We are not competing in the area of
no-code or low-code development sites.
*Wikifunctions is not a code hosting service*. Yes, sure, Wikifunctions
will host code, but not for whole projects, merely for individual
functions. There won’t be libraries, apps, or services developed on
Wikifunctions with bug-trackers, forums, etc. There won’t be a Web-based
version control system such as Mercurial or Git running against
Wikifunctions. Again, we hope that there will be libraries, apps, and
services that will rely on functions available in Wikifunctions, but they
would be developed on a different site, such as Gerrit, GitHub, or GitLab.
*Wikifunctions is not a programming language*, nor is it trying to evangelise a
particular language. In fact, Wikifunctions will allow for functions to be
implemented in a multitude of programming languages. The possibility of
composing functions to create higher-level functions may look a
little bit like a new programming language, but it will be extremely
limited compared to most other programming languages, since we only allow
for nested function calls and that’s it.
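The restriction to nested function calls can be sketched in Python; the
helper functions below merely stand in for Wikifunctions functions and are
purely illustrative:

```python
# Composition restricted to nested calls: no variables, loops, or
# mutable state. A "program" is a single expression built out of
# function calls whose arguments are other function calls.

def if_(condition, consequent, alternative):
    return consequent if condition else alternative

def empty(lst):
    return len(lst) == 0

def head(lst):
    return lst[0]

# "The first item of the list, since it is not empty", written as
# one nested expression:
result = if_(empty([3, 1, 4]), "no items", head([3, 1, 4]))
print(result)  # 3
```

Anything beyond such nesting — loops, state, I/O — is out of scope for the
composition mechanism itself.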
*Wikifunctions is not an Integrated Development Environment*. We won't
provide you with an interface for creating and developing software
projects, interfacing with build, testing, and source control systems.
*Wikifunctions is not a question-and-answer Website*. We are not competing
with StackOverflow and similar Websites, where a developer would ask how to
achieve a certain task and have community members discuss and answer the
question. We won’t host code snippets meant to answer such questions;
rather, we will organize code on our site to enable the evaluation of
functions within a library of functions.
*Wikifunctions is not a cloud computing platform*. We do not provide
computing resources and access to services and APIs so that you can run
your computational needs on our platform, either for money or for free. Use
of Wikifunctions’ evaluation platform is to improve access to knowledge
for everyone.
*Wikifunctions is not a code snippet Website*. We are not competing with
sites such as gist, or sites such as rosettacode.org, esolangs.org, or
helloworldcollection.de, where code snippets are collected either to share
them quickly with others or around a specific theme in different
programming languages. The reason for having functions be implemented in
multiple programming languages is not to contrast them and compare them for
the education of the users of Wikifunctions, but in order to be able to
efficiently and effectively evaluate functions in different environments
and to improve the reliability of Wikifunctions as a whole.
*Wikifunctions is not a code education platform*. We are not in the
business of teaching people how to code; the material in Wikifunctions will
not be laid out in a pedagogical order, and we also won’t make sure to
comprehensively cover all topics important for coding. In fact, we aim for
Wikifunctions to be usable for people who don’t know how to code and who
don’t need to learn how to code to use most of Wikifunctions effectively.
Though the Wikifunctions community may well help each other in sharing best
practices, style guides, and tips on how to use the site in different
languages, these will be aimed at the purpose of serving the world's
knowledge.
Wikifunctions is, as far as we can tell, a new kind of Website, aiming for
a new community. We very much hope to work together with many of the tools,
sites, communities, and kinds of systems we have mentioned above: we want to
play well with IDEs, with cloud computing platforms, with app development
sites, and many more — but we aim to be a novel thing, and we hope to carve
out a new and unique space for ourselves: a Wikimedia project for everyone
to collaboratively create and
maintain a library of code functions to support the Wikimedia projects and
beyond, for everyone to call and re-use in the world's natural and
programming languages.
--
In related news, the recording of the keynote about Abstract Wikipedia and
Wikifunctions at this year’s Web Conference is now available online:
http://videolectures.net/www2021_vrandecic_knowledge_equity/
The on-wiki version of this newsletter can be found here:
https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Updates/2021-05-06
In 2018, Wikidata launched a project to collect lexicographical knowledge
<https://www.wikidata.org/wiki/Wikidata:Lexicographical_data>. Several
hundred thousand Lexemes have been created since then, and this year the
tools will be further developed by Wikimedia Deutschland to make the
creation and maintenance of the lexicographic knowledge in Wikidata easier.
The lexicographic extension to Wikidata was developed with the goal that
became Abstract Wikipedia in mind, but a recent discussion within the
community showed me that I have not made the possible connection between
these two parts clear yet. Today, I would like to sketch out a few ideas on
how Abstract Wikipedia and the lexicographic data in Wikidata could work
together.
There are two principal ways to organize a dictionary: either you organize
the entries by ‘lexemes’ or ‘words’ and describe their senses (this is
called the semasiological <https://en.wikipedia.org/wiki/Semasiology>
approach), or you organize the entries by their ‘senses’ or ‘meanings’
(this is called the onomasiological
<https://en.wikipedia.org/wiki/Onomasiology> approach). Wikidata has
intentionally chosen the semasiological approach: the entries in Wikidata
are called Lexemes, and contributors can add Senses and Forms to the
Lexemes. Senses stand for the different meanings that a Lexeme may
regularly invoke, and the Forms are the different ways the Lexeme may be
expressed in a natural language text, e.g. in order to be in agreement with
the right grammatical number, case, tense, etc. The Lexeme “mouse” (L1119
<https://www.wikidata.org/wiki/Lexeme:L1119>) thus has two senses, one for
the small rodent, one for the computer input device, and two forms, “mouse”
and “mice”. For an example of a multilingual onomasiological collaborative
dictionary, one can take a look at the OmegaWiki <http://www.omegawiki.org/>
project, which is primarily organized around (currently 51,000+) Defined
Meanings <http://www.omegawiki.org/Help:DefinedMeaning> and how these are
expressed in different languages.
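The semasiological structure of a Lexeme can be sketched as a small data
model; this is a heavy simplification in Python (the field names are ours,
and the real Wikidata model is considerably richer, with statements,
glosses per language, grammatical features as Items, and so on):

```python
from dataclasses import dataclass

@dataclass
class Sense:
    gloss: str  # one meaning the Lexeme may regularly invoke

@dataclass
class Form:
    representation: str       # how the Lexeme appears in text
    features: tuple           # e.g. grammatical number

@dataclass
class Lexeme:
    lemma: str
    senses: list              # list of Sense
    forms: list               # list of Form

# The English Lexeme "mouse" (L1119): two Senses and two Forms.
mouse = Lexeme(
    lemma="mouse",
    senses=[Sense("small rodent"), Sense("computer input device")],
    forms=[Form("mouse", ("singular",)), Form("mice", ("plural",))],
)
```

An onomasiological dictionary would invert this: entries would be the
meanings, each pointing to the words expressing it in various languages.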
The reason why Wikidata chose the semasiological approach is based on the
observation that it is much simpler for a crowd-sourced collaborative
project, and has much less potential to be contentious. It is much easier
to gather a list of words used in a corpus than to gather a list of all the
meanings referred to in the same corpus. And whereas it is 'simpler', it is
still not trivial. We still want to collect a list of Senses for each
Lexeme, and we want to describe the connections between these Senses:
whether two Lexemes in a language have the same Sense, how the Senses
relate to the large catalog of items in Wikidata, and how Senses of
different languages relate to each other. These are all very difficult
questions that the Wikidata community is still grappling with (see also the
essay on Making Sense <https://www.wikidata.org/wiki/Wikidata:Making_sense>
).
Let’s look at an example.
“Stubbs was probably one of the youngest mayors in the history of the
world. He became mayor of Talkeetna, Alaska, at the age of three months and
six days, and retained that position until his death almost four years ago.
Also, Stubbs <https://en.wikipedia.org/wiki/Stubbs_(cat)> was a cat."
If we want to express that last sentence - “Stubbs was a cat” - we will
have to be able to express the meaning “cat” (here, we will focus entirely
on the lexical level, and will not discuss grammatical and idiomatic
issues; we will leave those for another day). How do we refer to the idea
for cat in the abstract content? How do we end up, in English, eventually
with the word form “cat” (L7-F4 <https://www.wikidata.org/wiki/Lexeme:L7#F4>)?
In French with the word form “chat” (L511-F4
<https://www.wikidata.org/wiki/Lexeme:L511#F4>)? And in German with the
form “Kater” (L303326-F1 <https://www.wikidata.org/wiki/Lexeme:L303326#F1>)?
Note that these three words do not have exactly the same meaning. The
English word cat refers to male and female cats equally. The French word
chat is grammatically masculine: it could refer to a cat generically, for
example if we didn’t know Stubbs’ gender, but a female cat would usually
be referred to using the word “chatte”. The German word Kater, on the
other hand, may only refer to a male cat. If we didn’t know whether Stubbs
was male or female, we would need to use the word “Katze” in German
instead, whereas in French, as said, we could still use “chat”. English
also has words for male cats, e.g. “tom” or “tomcat”, but these are much
less frequently used. Searching the Web for “Stubbs is a cat” returns more
than 10,000 hits, but not a single one for “Stubbs is a tom” nor “Stubbs is
a tomcat”.
In comparison, for Félicette <https://en.wikipedia.org/wiki/F%C3%A9licette>,
the first and so far only cat in space, the articles indeed use the words
“chatte” in French and “Katze” in German.
Here we are talking about three rather closely related languages, and
about a rather simple noun. This should have been a very simple case, and
yet it is not. When we talk about verbs, adjectives, or nouns for more
complex concepts (for example different kinds of human
settlements or the different ways human body parts are conceptualized in
different languages, e.g. arms and hands <https://wals.info/chapter/129>,
terms for colors), it gets much more complicated very quickly. If we were
to require that all words we want to use in Abstract Wikipedia first must
align their meanings, then that would put a very difficult task in our
critical path. So whereas it would indeed have been helpful to Abstract
Wikipedia to have followed an onomasiological approach (how wonderful would
it be to have a comprehensive catalog of meanings!), that approach was
deemed too difficult and a semasiological approach was chosen instead.
Fortunately, a catalog of meanings is not necessary. We can avoid it
because Abstract Wikipedia only needs to generate text, not parse or
understand it. This allows us to get by with a Constructor that,
for each language, uses a Renderer to select the correct word (or other
lexical representation). For example, we could have a Constructor that may
take several optional further pieces of information: the kind of animal,
the breed, the color, whether it is an adult, whether it is neutered, the
gender, the number of them, etc. For each of these pieces of information,
we could mark whether that information must be expressed in the Rendering,
or whether this information is optional and can be ignored, and thus what
is available for those Renderers to choose the most appropriate word. Note
that this is not telling the community how to do it, merely sketching out
one possible approach that would avoid relying on a catalog of meanings.
Each language Renderer could then use the information it needs to select
the right word. If a language has a preference to express the gender (such
as German), it can do so, whereas a language that prefers not to (such as
English) can leave it unexpressed. If the age of the cat matters for the
selection of the word, it can look it up. If the color of the animal
matters (as it does for horses in German
<https://de.wikipedia.org/wiki/Fellfarben_der_Pferde#Die_einzelnen_Fellfarben>),
the respective Renderer can use the information. If a required piece of
information is missing, we could add this to a maintenance queue so that
contributors
can fill it out. If a language should happen not to have a word, a
different noun phrase can be chosen, e.g. a less specific word such as
“animal” or “pet”, or a phrase such as “male kitten”, or “black horse” for
the German word “Rappen”.
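Continuing with the cat example, one illustrative shape such a Renderer
could take in Python (all names, languages, and selection rules here are
ours, purely for illustration, and not a prescription to the community):

```python
def render_animal_noun(language, kind, gender=None):
    """Select a word for an animal, consulting the optional gender
    information only where the language prefers to express it."""
    if kind != "cat":
        return "animal"  # fall back to a less specific word
    if language == "en":
        return "cat"  # English leaves the gender unexpressed
    if language == "fr":
        # masculine "chat" also serves as the generic word
        return "chatte" if gender == "female" else "chat"
    if language == "de":
        # German prefers to express the gender when it is known
        if gender == "male":
            return "Kater"
        return "Katze"  # generic, or a female cat
    return "cat"

print(render_animal_noun("de", "cat", gender="male"))    # Kater
print(render_animal_noun("fr", "cat", gender="female"))  # chatte
```

Note how no shared catalog of meanings is involved: each language’s
Renderer consults the pieces of information it cares about and ignores the
rest.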
But the important design feature here is that we do not need to ensure and
agree on the alignment of meanings of words across different languages. We
do not need a catalog of meanings to achieve what we want.
Now, there are plenty of other use cases for having such a catalog of
meanings. It would be a tremendously valuable resource. And even without
such a catalog, the statements connecting Senses and Items in Wikidata can
be very helpful for the creation and maintenance of Renderers, but these do
not need to be used when the natural text for Wikipedia is created.
This suggestion is not meant to be prescriptive, as said. It will be up to
the community to decide on how to implement the Renderers and what
information to use. In this, I am sketching out an architecture that allows
us to avoid blocking on the availability of a (valuable but very difficult
to create) resource, a comprehensive catalog of meanings aligning words
across many different languages.
Had some fun today with an often used international hero (I never knew he
was so popular worldwide!) who seems to rise up in solving sticky
situations with creative solutions.
https://www.wikidata.org/wiki/Lexeme:L488777
Thanks to Jan Ainali and Jon Harald Søby for assistance on Telegram chat.
https://t.me/c/1325756915/8982
I guess the learning here is:
that any word or phrase, even a person or org name, could potentially be
used as a Verb, and that's fine and we should capture that as a Lexeme when
there is usage and understanding of it in a language. It's also helpful to
give as much context as possible with additional statements to those
namesake Lexemes so that folks 10,000 years from now might know what we were
talking about. :-)
Thad
https://www.linkedin.com/in/thadguidry/
https://calendly.com/thadguidry/
The on-wiki version of this newsletter can be found at
https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Updates/2021-05-11
When we started the development effort towards the Wikifunctions site, we
subdivided the work leading up to the launch of Wikifunctions into eleven
phases, named after the first eleven letters of the Greek alphabet.
- With Phase α (alpha) completed, it became possible to create instances
  of the system-provided Types in the wiki.
- With Phase β (beta), it became possible to create Types on-wiki and to
  create instances of these Types.
- With Phase γ (gamma), all the main Types of the pre-generic function
  model were available.
- This week, we completed Phase δ (delta).
The goal of Phase δ was to provide the capability to evaluate built-in
implementations.
What does this mean? Every function in Wikifunctions can have several
implementations. There are three different ways to express an
implementation:
1. As some code in a programming language, written by the users of
   Wikifunctions: the implementation of a function can be given in any
   programming language that Wikifunctions supports. Eventually we aim to
   support a large number of programming languages, but we will start
   small.
2. As a built-in function, written by the development team: this means
   that the implementation is handled by the evaluator as a black box. We
   hope to rely on only a very small number of required built-in
   functions, since each evaluator needs to implement all built-in
   functions to be usable, and we want to make adding new evaluators (and
   so new programming languages) as easy as possible. A list of built-in
   functions currently available is given below. This list is likely not
   final, but we hope it won’t grow.
3. As a composition of other functions: this means that we use existing
   functions as building blocks in order to implement new capabilities. We
   have published a few examples of composed implementations
   <https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Examples/Function_compos…>.
   The example implementation of the Boolean functions might be
   particularly instructive.
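In that spirit, and echoing the published Boolean examples (though
sketched here in Python rather than as actual composed ZObjects), the
Boolean connectives can be composed from nothing but a conditional:

```python
def if_(condition, consequent, alternative):
    """Python stand-in for a built-in If function."""
    return consequent if condition else alternative

# Compositions: each connective is just a nested call to if_,
# introducing no new primitive of its own.
def and_(x, y):
    return if_(x, y, False)

def or_(x, y):
    return if_(x, True, y)

def not_(x):
    return if_(x, False, True)

print(and_(True, not_(False)))  # True
```

This is why built-ins can stay few: once a small core exists, much of the
rest can be expressed as compositions.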
In Phase δ, we created the infrastructure and interfaces to evaluate
function calls at all, and allowed for built-in implementations. The built-in
functions
<https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Reserved_ZIDs#Core_funct…>
that are currently available are the following:
- If : Boolean, Any, Any ➝ Any — returns the second argument if the first
  is true, else the third
- Value by key : Key reference, Any ➝ Any — returns the value of the given
  key of the given object
- Reify : Any ➝ List(Pair(Key reference, Any)) — transforms any object
  into a list of key-value pairs; it deconstructs the object
- Abstract : List(Pair(Key reference, Any)) ➝ Any — transforms a list of
  key-value pairs into an object; it constructs an object
- Cons : Any, List ➝ List — inserts an object at the front of an ordered
  list of objects
- Head : List ➝ Any — gets the first item of a list
- Tail : List ➝ List — gets the list with the first item removed
- Empty : List ➝ Boolean — returns whether a list contains no items
- First : Pair ➝ Any — extracts the first value of a pair
- Second : Pair ➝ Any — extracts the second value of a pair
- Convert : String ➝ List(Character) — converts a string into a list of
  Characters
- Convert : List(Character) ➝ String — converts a list of Characters into
  a string
- Same : Character, Character ➝ Boolean — compares two characters and
  returns whether they are equal
- Unquote : Quote ➝ Any — unquotes a Quote
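To make a few of those signatures concrete, here are Python stand-ins for
the list and string built-ins (a deliberate simplification: a real
evaluator operates on ZObjects, not on Python lists and strings):

```python
def cons(item, lst):       # Cons : Any, List ➝ List
    return [item] + lst

def head(lst):             # Head : List ➝ Any
    return lst[0]

def tail(lst):             # Tail : List ➝ List
    return lst[1:]

def empty(lst):            # Empty : List ➝ Boolean
    return len(lst) == 0

def convert_string(s):     # Convert : String ➝ List(Character)
    return list(s)

def convert_chars(chars):  # Convert : List(Character) ➝ String
    return "".join(chars)

# Example: drop the first character of a string via the list built-ins.
print(convert_chars(tail(convert_string("mice"))))  # ice
```

Small as they are, this handful of operations is enough for compositions
to express a surprising amount of list and string processing.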
All of the implementations of these built-ins, though simple, are first
drafts, and currently only lightly tested. If you test them and find
issues, please report them on Phabricator or send an email to
abstract-wikipedia@lists.wikimedia.org. We will improve these over the
following weeks.
A new special page lets you evaluate a function call. Here we offer two
screenshots with examples. The first one
shows the call to the If function. The condition is set to true, and thus
the function call should return the consequent (given as the String “this”)
and not the alternative (given as the String “that”). And indeed - the
Orchestration result below the function call shows a normalized result
representing the String “this”:
The second example is taken straight from the phase completion condition on
the Phases planning. Here we check whether an empty list is indeed empty
(we are calling the Z813/Empty function, and the argument, Z813K1/list is
an empty list). The result is true (i.e. the Z40/Boolean with the
Z40K1/identity Z41/true):
We promise to improve the UX before launch! This raw JSON output is mostly
for debugging and internal development purposes as we work on a design
language for the user experience.
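For orientation, the second example (calling Z813/Empty on an empty list)
can also be written down as a plain JSON object. The shape below follows
our reading of the pre-generic function model (Z7 marks a function call,
Z7K1 names the function, Z10 is the list type); treat it as an approximate
sketch and consult the on-wiki function model documentation for the
authoritative syntax:

```python
import json

# Approximate shape of the function call from the second screenshot:
# calling Z813/Empty with its argument Z813K1 set to an empty list.
call = {
    "Z1K1": "Z7",               # the type of this object: function call
    "Z7K1": "Z813",             # the function to call: Empty
    "Z813K1": {"Z1K1": "Z10"},  # the argument: an empty list
}

print(json.dumps(call, indent=2))
```

The orchestration result would then be the Z40/Boolean whose Z40K1/identity
is Z41/true, as shown in the screenshot.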
We are now moving on to Phase ε (epsilon). In this phase we aim to support
user-written implementations in a programming language. Our initial plan is
to support code written in JavaScript and Python.
Since running arbitrary code written by anyone on the Internet has major
security and performance risks, we will follow up the work in this phase
with thorough security and performance reviews working with colleagues
across the Foundation.
We currently do not have an official public test instance of the WikiLambda
system running. Lucas Werkmeister has, in his volunteer capacity, so far
provided us with a regularly updated public test instance, notwikilambda,
for which we are deeply grateful, but that instance has not yet been
updated to support the orchestrator backend (as we still need to document
how to do so). We will continue to not run an instance of our own until
after the security and performance reviews have concluded (but we certainly
won’t stop anyone else from doing so, and can provide some support on the
usual channels if someone wants to set it up).
Following the conclusion of the current Phase, we will move on to Phase ζ
(zeta), which will allow for the third type of implementations,
compositions.
Thanks to the team, thanks to the volunteers, for their great effort in
bringing us so far, and I am excited for the next steps of the project!
(Note: there will not be a newsletter next week.)
The on-wiki version of this newsletter can be found here:
https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Updates/2021-04-29
This week, I want to start with a shoutout to our phenomenal volunteers.
Lexicographical coverage
My thanks to Nikki <https://www.wikidata.org/wiki/User:Nikki> for their
updates on the dashboards about lexicographical coverage
<https://www.wikidata.org/wiki/Wikidata:Lexicographical_coverage>. Since
the first publication of the dashboard, Nikki has kept the dashboards up to
date, re-running them from time to time and updating the page on Wikidata.
They and others have also fixed numerous issues, created more actionable
lists, and added more languages based on other corpora than Wikipedia (most
notably from the Leipzig Corpora Collection
<https://wortschatz.uni-leipzig.de/en>). Thanks also to Mahir
<https://www.wikidata.org/wiki/User:Mahir256>, who also contributed to the
dashboard, particularly covering Bengali, one of our focus languages.
In fact, thanks to Nikki and Mahir, the four main focus languages are now
all covered: we have numbers for Bengali, Malayalam, Hausa, and Igbo. We
are still missing our stretch focus language, Dagbani, because we could
not yet find a corpus. We have reached out to a researcher who has compiled a
Dagbani corpus
<https://www.aflat.org/content/corpus-building-predominantly-oral-culture-no…>,
and we also are exploring how we could use the Dagbani Wikipedia
<https://incubator.wikimedia.org/wiki/Wp/dag> on Incubator
<https://incubator.wikimedia.org/wiki/Incubator:Main_Page>. In the
meantime, we are pleased to see that the Dagbani community has put in a request
for a new Wikipedia edition
<https://meta.wikimedia.org/wiki/Requests_for_new_languages/Wikipedia_Dagbani>
and that they feel that they are ready to graduate from incubator!
Congratulations!
Highlighting the dashboard, and particularly the list of most frequent
missing lexemes, has led to very promising results: coverage in a
number of languages has increased considerably. To list just a few
examples: Polish went from 16% to 32% coverage, German from 53% to 67%,
Czech from 44% to 57% — and Hindi went from a mere 1% to 15%, and Malay
from 15% to an astonishing 53%! Congratulations to those communities and
others for such visible progress.
With an eye on our focus languages, Bengali went from 18% to 28%, Malayalam
is at 21%, whereas Hausa and Igbo both have coverages of below 1%.
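A coverage figure of this kind can be computed in a few lines; the sketch
below counts the share of word occurrences in a corpus that match a known
lexeme form (our simplification, and the dashboard’s actual methodology
may count and normalize words differently):

```python
def coverage(corpus_words, known_forms):
    """Fraction of word occurrences in a corpus that appear in the
    set of known lexeme forms."""
    covered = sum(1 for word in corpus_words if word in known_forms)
    return covered / len(corpus_words)

# A toy corpus in which three of the four occurrences are covered:
print(coverage(["the", "cat", "the", "meowed"], {"the", "cat"}))  # 0.75
```

Counting occurrences (rather than distinct words) is what lets a handful
of very frequent missing lexemes move the percentage so visibly once they
are added.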
Another great tool to see the progress in lexicographical knowledge
coverage in Wikidata is Ordia <https://ordia.toolforge.org/>, developed by Finn
Årup Nielsen <https://meta.wikimedia.org/wiki/User:Fnielsen>. Ordia is a
holistic user experience that allows users to browse and slice and dice the
lexicographic data in Wikidata in real time. We can take a look at the 11,400
Malayalam Lexemes <https://ordia.toolforge.org/language/Q36236>, the 8,724
Bengali Lexemes <https://ordia.toolforge.org/language/Q9610>, 53 Dagbani
Lexemes <https://ordia.toolforge.org/language/Q32238>, 15 Hausa Lexemes
<https://ordia.toolforge.org/language/Q56475>, and the single Lexeme in Igbo
<https://ordia.toolforge.org/language/Q33578>, mmiri, the Igbo word for
water. Thanks to Finn for Ordia!
Making the state of the lexicographical coverage visible shows us that
there is still a lot to do — but also that we are already achieving
noticeable progress! Thanks to everyone contributing.
By the way, the annotation wiki <https://annotation.wmcloud.org/> is
currently having issues. If you would like to help us with running it and
have experience with Vagrant and Cloud VPS based wikis, please drop me a
line on my talk page <https://meta.wikimedia.org/wiki/User_talk:Denny>.
A first running function call!
Lucas Werkmeister <https://meta.wikimedia.org/wiki/User:Lucas_Werkmeister>
consistently keeps being amazing. He is working on GraalEneyj
<https://github.com/lucaswerkmeister/graaleneyj>, a GraalVM
<https://www.graalvm.org/>-based evaluation engine for Wikifunctions,
written in Java. Lucas re-wrote GraalEneyj to be able to call a function
directly from the notwikilambda test-wiki — the very first time that
one of our functions is being evaluated! You can watch that moment in a
Twitch video <https://www.twitch.tv/videos/975239172>.
We are still working on replicating that feat in what will be our
production codebase, and hope to soon connect our backend evaluating
functions with the wiki — this is our goal for the ongoing Phase δ
<https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Phases#Phase_%CE%B4_(del…>
(delta). Congratulations to Lucas for achieving this step!
Delay on logo
There will be a delay on the logo finalization. Please expect another month
or two before we will have news to share about the logo. Due to the legal
nature of some of the involved issues, we have decided to not share details
in public. Sorry for the delay, and I am looking forward to sharing the
next steps in this process.
New documents
We have been working for a while with the Wikimedia Architecture Team on a
number of artefacts around Abstract Wikipedia and Wikifunctions. We have
now published and shared these documents in the Architecture repository
<https://www.mediawiki.org/wiki/Architecture_Repository/Strategy/Goals_and_i…>.
We are aiming to keep publishing our design documents and related
development artefacts, and are happy to invite you to explore this set of
documents.
Based on requests from the community, we also worked on a new example of an
article in abstract content
<https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Examples/Jupiter>. The
example is not complete, and is open to being edited and discussed. Note
that this is not meant to be prescriptive of how abstract content should
look, but is merely a more concrete hypothetical example of what it could
look like. I am confident that the community as a whole will come up with
better abstractions than I did. Please do edit or fork that page.
There will be three approaches towards creating an implementation for a
function in Wikifunctions, and the current and following two phases of
development are each dedicated to one of those approaches: (1) calling a
built-in implementation in the evaluator engine, (2) calling native code
in a programming language, and (3) composing other functions to
implement a new function. In preparation for the upcoming Phase ζ
<https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Phases#Phase_%CE%B6_(zet…>
(zeta), we have created a few examples of function composition
<https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Examples/Function_compos…>
.