As presented at last year's WikidataCon
<https://www.youtube.com/watch?v=e_VxTlBNkyk>, Wikimedia Deutschland has
set out to find new ways for collaboration around Wikidata software
development to enhance the diversity of our movement, increase Wikibase’s
scalability and robustness, and breathe life into our movement’s principle of
knowledge equity. With a grant from Arcadia
<https://www.arcadiafund.org.uk/>, a charitable fund administered by Lisbet
Rausing and Peter Baldwin, we will be able to implement such a
collaboration in the next two years.
Today, we are happy to share an exciting update on the progress of this
project with all of you. After spending the last few months in
conversations with the movement groups interested in joining such
a partnership, we have now reached a point where we can spread the news
about the future partners and projects that will shape this Wikidata
collaboration.
Wikimedia Indonesia, the Igbo Wikimedians User Group and Wikimedia
Deutschland will be joining forces to advance the technical capacities of
the movement around Wikidata development and, in doing so, make the software
and tools more usable by cultures underrepresented in technology, people of
the Global South and speakers of minority languages.
Wikimedia Indonesia, a non-profit organization based in Jakarta, Indonesia
and established in 2008, is dedicated to encouraging the growth,
development & dissemination of knowledge in Indonesian and other languages
spoken in Indonesia. Since its founding, Wikimedia Indonesia has supported the
development of 14 Wikipedias in the languages spoken in Indonesia, 12
regional Wikimedian communities spread across the country, and two
Wikimedia project-based communities.
For this project, in collaboration with Wikimedia Deutschland, Wikimedia
Indonesia wants to build up a software team of its own over the course of
the next two years. The tools this team builds will help under-resourced
language communities contribute to the flourishing of their languages
online through lexicographical data, and will involve local language
communities in contributing lexemes to Wikidata.
Igbo Wikimedians is a group of Wikimedians who are committed to working on
various wiki projects related to the Igbo language
<https://en.wikipedia.org/wiki/Igbo_language> and culture. The user group
organizes projects around community building in the Igbo community and
content improvement for Wikipedia and its sister projects, and it
established its own Wikidata hub in 2021.
The Igbo Wikimedians User Group, through its Wiki Mentor Africa program
<https://m.wikidata.org/wiki/Wikidata:Wiki_Mentor_Africa>, aims to build
technical capacity in African Wikimedia communities by mentoring African
developers in Wikidata tool development. Wikimedia Deutschland will
support the user group in the implementation of their project and
mentoring program.
Wikimedia Deutschland was founded in 2004 as a members’ association
and is located in Berlin, Germany. Wikimedia Deutschland supports
communities like the Wikipedia community, develops software for Wikimedia
projects and the ecosystem of Free Knowledge, and works to improve the
political and legal framework for Wikipedia and for Free Knowledge in
Germany.
Specifically, Wikimedia Deutschland has been working on the development of
Wikidata since 2012. Since then, an active and vibrant community of
volunteer editors and programmers, re-users, data donors, affiliates and
more has formed around Wikidata.
Wikimedia Deutschland will be responsible for the administrative setup of
those collaborations and the communication with Arcadia. We are also happy
to share our experiences and knowledge about establishing software teams,
software development in the Wikidata/Wikibase environment and the Wikidata
community, and to provide support for emerging tech communities.
If you want to find out more about the partnership, you can read up on this
on our project page on Meta
where we will keep updating the community on the progress of this
collaboration. If you have any comments, suggestions or questions please
use the talk page there to get in contact with us.
We are all excited to see those collaborations coming to life!
With kind regards,
Igbo Wikimedians User Group
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Tel. (030) 219 158 26-0
Our vision is a world in which all people can share in humanity’s knowledge,
use it, and add to it. Help us make it happen!
Wikimedia Deutschland — Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as charitable by the
Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
The Community Affairs Committee of the Wikimedia Foundation Board of
Trustees would like to thank everyone who participated in the recently
concluded community vote on the Enforcement Guidelines for the Universal
Code of Conduct (UCoC).
The volunteer scrutinizing group has completed the review of the accuracy
of the vote and has reported the total number of votes received as 2,283.
Out of the 2,283 votes received, 1,338 (58.6%) community members voted for
the enforcement guidelines, and a total of 945 (41.4%) community members
voted against it. In addition, 658 participants left comments, with 77% of
the comments written in English.
We recognize and appreciate the passion and commitment that community
members have demonstrated in creating a safe and welcoming culture, one
that stops hostile and toxic behavior, supports people targeted by such
behavior, and encourages people acting in good faith to be productive on
the Wikimedia projects.
Even at this early stage, this is evident in the comments received. The
Enforcement Guidelines did reach the threshold of support necessary for the
Board to review them. However, we encouraged voters, regardless of how they
were voting, to provide feedback on the elements of the enforcement
guidelines.
We asked the voters to tell us what changes were needed, in case it was
prudent to launch a further round of edits to address community concerns.
Foundation staff who have been reviewing comments have advised us of the
emerging themes. As a result, as the Community Affairs Committee, we have
decided to ask the Foundation to reconvene the Drafting Committee. The
Drafting Committee will undertake another community engagement to refine
the enforcement guidelines based on the community feedback received from
the recently concluded vote.
For clarity, this feedback has been clustered into four sections as follows:
To identify the type, purpose, and applicability of the UCoC training;
To simplify the language for more accessible translation and
comprehension by non-experts;
To explore the concept of affirmation, including its pros and cons;
To review the conflicting roles of privacy/victim protection and the
right to be heard.
Other issues may emerge during conversations, particularly as the draft
Enforcement Guidelines evolve, but we see these as the primary areas of
concern for voters. Therefore, we are asking staff to facilitate a review
of these issues. Then, after the further engagement, the Foundation should
re-run the community vote to evaluate the redrafted Enforcement Guidelines
to see if the new document is ready for official ratification.
Further, we are aware of the concerns with note 3.1 in the Universal Code
of Conduct Policy. Therefore, we are directing the Foundation to review
this part of the Code to ensure that the Policy meets its intended purposes
of supporting a safe and inclusive community without waiting for the
planned review of the entire Policy at the end of the year.
Again, we thank all who participated in the vote and discussion, thinking
about these complex challenges and contributing to better approaches to
working together well across the movement.
*Rosie Stephenson-Goodknight* (she/her)
Acting Chair, Community Affairs Committee
Wikimedia Foundation <https://wikimediafoundation.org/> Board of Trustees
Short version: We need to find solutions to avoid so many Africans
being globally IP-blocked due to our No Open Proxies policy.
Long version:
I'd like to draw attention to an issue which has been getting worse over
the past couple of weeks and months: an increasing number of editors are
getting blocked under the No Open Proxies policy, in particular Africans.
In February 2004, the decision was made to block open proxies on Meta
and all other Wikimedia projects.
According to the no open proxies policy: Publicly available proxies
(including paid proxies) may be blocked for any period at any time.
While this may affect legitimate users, they are not the intended
targets and may freely use proxies until those are blocked [...]
Non-static IP addresses or hosts that are otherwise not permanent
proxies should typically be blocked for a shorter period of time, as it
is likely the IP address will eventually be transferred or dynamically
reassigned, or the open proxy closed. Once closed, the IP address should
be unblocked.
According to the policy page, « Editors can be permitted to edit by
way of an open proxy with the IP block exempt flag. This is granted on
local projects by administrators and globally by stewards. »
I repeat -----> ... legitimate users... may freely use proxies until
those are blocked. Editors can be permitted to edit by way of an
open proxy with the IP block exempt flag <------ it is not illegal to
edit using an open proxy.
Most editors though... have no idea whatsoever what an open proxy is.
They do not understand well what to do when they are blocked.
In the past few weeks, the number of African editors reporting being
blocked due to open proxy has been VERY significantly increasing.
New editors as well as old-timers.
Inexperienced editors but also staff members, presidents of user groups,
organizers of edit-a-thons and various Wikimedia initiatives.
At home, but also during events organized with usergroup members or
trainees, during edit-a-thons, photo uploads sessions etc.
It is NOT an occasional, highly unlikely situation. This has become a
regular occurrence. There are cases and complaints every week. Not one
complaint per week. Several complaints per week.
*This is irritating. This is offending. This is stressful. This is
disrupting activities organized in _good faith_ by _good people_,
activities set up with _our donors' funds_. And the disruption is
primarily taking place in a geographical region supposedly to be
nurtured (per our strategy for diversity, equity, inclusion blahblahblah).*
The open proxy policy page suggests that, should a person be unfairly
blocked, it is recommended:
* to privately email stewards(_AT_)wikimedia.org;
* or alternatively, to post a request (if able to edit, and if the editor
doesn't mind sharing their IP for global blocks or their reasons to
desire privacy (for Tor usage));
* the current message displayed to the blocked editor also suggests
contacting User:Tks4Fish. This editor is involved in vandalism
fighting and is probably the user blocking open proxy IPs the
most. See log.
Option 1: contacting stewards. It seems that they are not answering, or
not quickly, or are requesting lengthy justifications before adding people
to the IP block exemption list.
Option 2: posting a request for unblock on Meta. For those who want to
look at the process, I suggest looking at it and thinking hard about
how a new editor would feel. This is simply incredibly complicated.
Option 3: User:Tks4Fish answers... sometimes...
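For what it's worth, the first diagnostic step can be automated: the standard MediaWiki API has a `list=blocks` module that reports which blocks affect a given IP (global blocks have a similar module on Meta). A minimal sketch — the admin name, block reason and canned response below are invented examples, and a real check needs a live HTTP request:

```python
# Sketch: see which block(s) affect a given IP via the standard
# MediaWiki API module 'list=blocks' (run against any wiki's api.php,
# e.g. https://meta.wikimedia.org/w/api.php). The response parsed below
# is a canned example of the shape the API returns.
from urllib.parse import urlencode

def block_query_url(api_base, ip):
    """Build an API request listing active blocks affecting `ip`."""
    params = {
        "action": "query",
        "list": "blocks",
        "bkip": ip,          # blocks affecting this IP, incl. range blocks
        "bkprop": "by|reason|expiry",
        "format": "json",
    }
    return api_base + "?" + urlencode(params)

def summarize_blocks(response):
    """Reduce an API response to human-readable one-liners."""
    blocks = response.get("query", {}).get("blocks", [])
    return [f"blocked by {b['by']} until {b['expiry']}: {b['reason']}"
            for b in blocks]

# Canned example response (invented names):
sample = {"query": {"blocks": [
    {"by": "SomeAdmin", "expiry": "2022-06-01T00:00:00Z",
     "reason": "[[m:NOP|Open proxy]]"},
]}}
print(summarize_blocks(sample))
```

Something like this could back a friendlier "why am I blocked?" helper than the current wall of policy text.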
As a consequence, most editors concerned by those global blocks...
stay blocked for several days.
We do not know why the situation has rapidly gotten worse recently.
But it got worse. And the reports are spilling all over.
We started collecting negative experiences on this page.
Please note that the people who added their names there are not random
newbies. They are known and respected members of our community, often
leaders of activities and/or representatives of their user groups, who are
confronted with this situation on a REGULAR basis.
I do not know how this can be fixed. Should we slow down open proxy
blocking? Should we add a mechanism and process for an easier and
quicker IP block exemption post-blocking? Should we improve a
process for our editors to be pre-emptively added to the IP block
exemption list? Or what? I do not know what the strategy is to fix
this. But there is a problem. Who should that problem be addressed to?
Who has solutions?
Over the last few months, a small team at the Wikimedia Foundation has been
working on a project that has been discussed by many people in our movement
for many years: building ‘enterprise grade’ services for the high-volume
commercial reusers of Wikimedia content. I am pleased to say that in a
remarkably short amount of time (considering the complexity of the issues:
technical, strategic, legal, and financial) we now have something worthy of
showing to the community, and we are asking for your feedback. Allow me to
introduce you to the Wikimedia Enterprise API project – formerly codenamed
“Okapi”.
While the general idea for Wikimedia Enterprise predates the current
movement strategy process, its recommendations identify an enterprise API
as one possible solution to both “Increase the sustainability of our
movement” and “Improve User Experience.” That is, to simultaneously
create a new revenue stream to protect Wikimedia’s sustainability, and
improve the quality and quantity of Wikimedia content available to our many
readers who do not visit our websites directly (including more consistent
attribution). Moreover, it does so in a way that is true to our movement’s
culture: with open source software, financial transparency, non-exclusive
contracts or content, no restrictions on existing services, and free access
for Wikimedia volunteers who need it.
The team believes we are on target to achieve those goals and so we have
written a lot of documentation to get your feedback about our progress and
where it could be further improved before the actual product is ‘launched’
in the next few months. We have been helped in this process over the last
several months by approximately 100 individual volunteers (from many
corners of the wikiverse) and representatives of affiliate organisations
who have reviewed our plans and provided invaluable direction, pointing out
weaknesses and opportunities, or areas lacking clarity and documentation in
our drafts. Thank you to everyone who has shared your time and expertise to
help prepare this new initiative.
An essay describing the “why?” and the “how?” of this project is now on
Meta. Also published are an extensive FAQ, operating principles, and
technical documentation on MediaWiki.org. You can read these at   and
 respectively. Much of this documentation is already available in
French, German, Italian, and Spanish.
The Wikimedia Enterprise team is particularly interested in your feedback
on how we have designed the checks and balances to this project - to ensure
it is as successful as possible at achieving those two goals described
above while staying true to the movement’s values and culture. For example:
Is everything covered appropriately in the “Principles” list? Is the
technical documentation on MediaWiki.org clear? Are the explanations in the
“FAQ” about free-access for community, or project’s legal structure, or the
financial transparency (etc.) sufficiently detailed?
Meet the team and Ask Us Anything:
The central place to provide written feedback about the project in general
is on the talkpage of the documentation on Meta at:
This Friday (March 19) we will be hosting two “Office hours”
conversations where anyone can come and give feedback or ask questions:
13:00 UTC via Zoom at https://wikimedia.zoom.us/j/95580273732
22:00 UTC via Zoom at https://wikimedia.zoom.us/j/92565175760 (note:
this is Saturday in Asia/Oceania)
Other “office hours” meetings can be arranged on-request on a technical
platform of your choosing; and we will organise more calls in the future.
We will also be attending the next SWAN meetings (on March 21), as well as
the next of the Wikimedia Clinics.
Moreover, we would be very happy to accept any invitation to attend an
existing group call that would like to discuss this topic (e.g. an
affiliate’s members’ meeting).
On behalf of the Wikimedia Enterprise team,
Peace, Love & Metadata
-- Liam Wyatt [Wittylama], Wikimedia Enterprise project community liaison.
*Liam Wyatt [Wittylama]*
WikiCite <https://meta.wikimedia.org/wiki/WikiCite> Program Manager & Wikimedia
Enterprise <https://meta.wikimedia.org/wiki/Okapi> Community Liaison
Please join me in welcoming Luis Bitencourt-Emilio to the Wikimedia
Foundation Board of Trustees. Luis was unanimously appointed to a 3-year
term and replaces a board-selected Trustee, Lisa Lewin, whose term ended in
November 2021.
Currently based in São Paulo, Luis is the Chief Technology Officer at Loft,
a technology startup in the real-estate industry. He brings product and
technology experience from a globally diverse career that has spanned large
technology companies including Microsoft, online networking sites like
Reddit, and a series of entrepreneurial technology ventures focused in the
USA and Latin America. Luis has led product and technology teams across
Latin America, the United States, Europe and Asia. He is passionately
involved in building and promoting the entrepreneurial ecosystem for Latin
America.
software engineering, and data science. At Microsoft, he led engineering
teams shipping multiple Microsoft Office products. At Reddit, he led the
Knowledge Group, an engineering team that owned critical functions such as
data, machine learning, abuse detection and search. He was deeply involved
in Reddit’s growth stage and worked closely with Reddit’s communities in
that evolution. Luis also co-founded a fintech startup to help millennials
manage and automate their finances.
His career has also been shaped by a visible commitment to recruiting
diverse leaders. At Reddit, Luis was a key member of the recruitment
efforts that achieved equal representation of women engineering directors.
Luis says his proudest achievement at Microsoft was building their
Brazilian talent pipeline by working closely with local universities to
place thousands of engineering candidates at Microsoft, as well as his
involvement in expanding global recruitment to markets including Ukraine,
Poland, Great Britain, the EU and Mexico.
Luis was educated in Brazil and the United States, receiving a Bachelor of
Science in Computer Engineering with Honors from the University of
Maryland. He is fluent in Portuguese, Spanish and English. He is also a
proud father and dog lover.
I would like to thank the Governance Committee, chaired by Dariusz
Jemielniak, for this nomination process as well as volunteers in our
Spanish and Portuguese speaking communities who also met with Luis or
shared their experiences.
You can find an official announcement here.
PS. You can help translate or find translations of this message on
Lisa Lewin served from January 2019 until November 2021.
antanana / Nataliia Tymkiv
Chair, Wikimedia Foundation Board of Trustees
*NOTICE: You may have received this message outside of your normal working
hours/days, as I can usually work more as a volunteer during the weekend. You
should not feel obligated to answer it during your days off. Thank you in
advance.*
The Wiki Loves Women team launched a podcast a few weeks ago.
We have released 5 episodes so far, with a frequency of two episodes per
All episodes are available on the usual podcast platforms, or may be
accessed on the Wiki Loves Women website, with additional notes about each
episode.
The latest episode features Angela Lungati, current CEO of Ushahidi.
If you are interested in receiving a brief message on your talk page each
time a new episode is published, please drop your name here:
About Inspiring Open
Inspiring Open is a podcast series from Wiki Loves Women that
celebrates inspirational women whose careers and personal
ethics intersect with the Open movement. Each episode features a dynamic
woman from Africa who has pushed the boundaries of what it means to
build communities and succeed as a collective. As a podcast series, it
is available at anytime, anywhere to amplify the motivational stories of
each guest, as spoken in their own voice. Listen to their personal
journeys in conversation with host Betty Kankam-Boadu.
Join Inspiring Open as we raise the global visibility and profiles of
women who are redefining and reclaiming the Open sector.
Be inspired • Be challenged • Be bold!
This paper (first reference) is the result of a class project I was part of
almost two years ago for CSCI 5417 Information Retrieval Systems. It builds
on a class project I did in CSCI 5832 Natural Language Processing, which I
presented at Wikimania '07. The project ran very late, as we didn't send the
final paper in until the day before New Year's. This technical report was
never really announced that I recall, so I thought it would be interesting
to look briefly at the results.

The goal of this paper was to break articles down into surface features and
latent features and then use those to study the rating system being used,
predict article quality and rank results in a search engine. We used the
[[random forests]] classifier, which allowed us to analyze the contribution
of each feature to performance by looking directly at the weights that were
assigned. While the surface analysis was performed on the whole English
Wikipedia, the latent analysis was performed on the Simple English
Wikipedia (it is more expensive to compute).

= Surface features =

* Readability measures are the single best predictor of quality that I have
found, as defined by the Wikipedia Editorial Team (WET). The [[Automated
Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid Grade
Level]] were the strongest predictors, followed by length of article html,
number of paragraphs, [[Flesch Reading Ease]], [[Smog Grading]], number of
internal links, [[Laesbarhedsindex Readability Formula]], number of words
and number of references. Weakly predictive were number of "to be"s, number
of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number
of external links, and number of relative links. Not predictive (overall -
see the end of section 2 for the per-rating score breakdown): number of h2s
or h3s, number of conjunctions, number of images*, average word length,
number of h4s, number of prepositions, number of pronouns, number of
interlanguage links, average syllables per word, number of nominalizations,
article age (based on page id), proportion of questions, and average
sentence length.
:* Number of images was actually by far the single strongest predictor of
any class, but only for Featured articles. Because it was so good at
picking out Featured articles and somewhat good at picking out A and G
articles, the classifier was confused in so many cases that the overall
contribution of this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly distinctive. It has a strong "signature," with
high predictive value assigned to many features. The Featured class is also
very distinctive. F, B and S (Stop/Stub) contain the most information.
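To make the surface-feature setup concrete, here is a small sketch (not the paper's actual code) of the two pieces involved: computing one readability measure, the Flesch-Kincaid Grade Level, from raw counts, and reading per-feature contributions out of a random forest via scikit-learn's `feature_importances_`. The counts and the training data are invented toy stand-ins:

```python
# Sketch: one readability feature (Flesch-Kincaid Grade Level) plus
# random-forest feature importances, in the spirit of the paper's
# surface analysis. All numbers here are invented toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flesch_kincaid_grade(words, sentences, syllables):
    """Standard Flesch-Kincaid Grade Level formula from raw counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A 100-word, 5-sentence text with 150 syllables:
print(flesch_kincaid_grade(100, 5, 150))

# Train a forest on two made-up features and read off which one
# carries the signal, the way the paper inspects feature weights.
rng = np.random.default_rng(0)
grade = rng.uniform(3, 15, size=200)    # readability grade per article
length = rng.uniform(0, 1, size=200)    # normalized article length (noise)
X = np.column_stack([grade, length])
y = (grade > 9).astype(int)             # quality label depends only on grade
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.feature_importances_)         # the grade feature should dominate
```

Because the toy label depends only on the readability column, the forest assigns nearly all of its importance to that feature, which is the kind of per-feature readout the paper uses.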
:* A is the least distinct class, not being very different from F or G.

= Latent features =

The algorithm used for latent analysis, which is an analysis of the
occurrence of words in every document with respect to the link structure of
the encyclopedia ("concepts"), is [[Latent Dirichlet Allocation]]. This
part of the analysis was done by CS PhD student Praful Mangalath. An
example of what can be done with the result of this analysis is that you
provide a word (a search query) such as "hippie". You can then look at the
weight of every article for the word hippie. You can pick the article with
the largest weight, and then look at its link network. You can pick out the
articles that this article links to and/or which link to this article that
are also weighted strongly for the word hippie, while also contributing
maximally to this article's "hippieness". We tried this query in our system
(LDA), Google (site:en.wikipedia.org hippie), and the Simple English
Wikipedia's Lucene search engine. The breakdown of articles occurring in
the top ten search results for this word for those engines is:
* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl
Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic
Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers brothers]],
[[Stanley Kubrick]]
* Google only: [[Glam Rock]], [[South Park]]
* Simple only: [[African Americans]], [[Charles Manson]],
[[Counterculture]], [[Drug use]], [[Flower Power]], [[Nuclear weapons]],
[[Phish]], [[Sexual liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a
democratic society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]
(See the paper for the articles produced for the keywords philosophy and
economics.)

= Discussion / Conclusion =

* The results of the latent analysis are totally up to your
perception. But what is interesting is that the LDA features predict the
WET ratings of quality just as well as the surface-level features. Both
feature sets (surface and latent) pull out almost all of the information
that the rating system bears.
* The rating system devised by the WET is not distinctive. You can best
tell the difference between, grouped together, Featured, A and Good
articles vs. B articles. Featured, A and Good articles are also quite
distinctive (Figure 1). Note that in this study we didn't look at Starts
and Stubs, but in an earlier paper we did.
:* This is interesting when compared to this recent entry on the YouTube
blog: "Five Stars Dominate Ratings".
* I think a sane, well-researched (with actual subjects) rating system is
well within the purview of the Usability Initiative. Helping people find
and create good content is what Wikipedia is all about. Having a solid
rating system allows you to reorganize the user interface, the Wikipedia
namespace, and the main namespace around good content and bad content as
needed. If you don't have a solid, information-bearing rating system you
don't know what good content really is (really bad content is easy to
spot).
:* My Wikimania talk was all about gathering data from people about
articles and using that to train machines to automatically pick out good
content. You ask people questions along dimensions that make sense to
people, and give the machine access to other surface features (such as a
statistical measure of readability, or length) and latent features (such
as can be derived from document word occurrence and encyclopedia link
structure). I referenced page 262 of Zen and the Art of Motorcycle
Maintenance to give an example of the kind of qualitative features I would
ask people about. It really depends on what features end up bearing
information, to be tested in "the lab". Each word is an example dimension
of quality: we have "*unity, vividness, authority, economy, sensitivity,
clarity, emphasis, flow, suspense, brilliance, precision, proportion,
depth and so on.*" You then use surface and latent features to predict
these values for all articles. You can also say, when a person rates this
article as high on the x scale, they also mean that it has this much of
these surface and these latent features.
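The latent-feature retrieval described above can be sketched in miniature with scikit-learn's `LatentDirichletAllocation` (the paper's actual pipeline, including the link-structure weighting, was more involved; the corpus, query and model sizes below are invented stand-ins): fit topics on a tiny corpus, find the topic the query word loads on, and rank documents by their weight on that topic.

```python
# Toy sketch of LDA-based retrieval: rank documents for a query word by
# their weight on the query word's dominant topic. Corpus, query and
# model sizes are invented stand-ins, not the paper's setup.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "hippie counterculture flower power psychedelic rock",   # doc 0
    "hippie festival woodstock acid rock commune",           # doc 1
    "nuclear weapons cold war treaty deterrence",            # doc 2
    "treaty war diplomacy nuclear arms",                     # doc 3
]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Which topic does the query word load on most?
word_idx = vec.vocabulary_["hippie"]
topic = lda.components_[:, word_idx].argmax()

# Rank documents by their weight on that topic (best match first).
doc_topics = lda.transform(X)
ranking = doc_topics[:, topic].argsort()[::-1]
print(ranking)  # the hippie documents should tend to rank first
```

With a real corpus you would also fold in the link network, as described above, rather than rely on topic weight alone.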
= References =
- DeHoust, C., Mangalath, P., Mingus, B. (2008). *Improving search in
Wikipedia through quality and concept discovery*. Technical Report.
- Rassbach, L., Mingus, B., Blackford, T. (2007). *Exploring the
feasibility of automatically rating online article quality*. Technical
Report.
Tomorrow at the HOPE 2022 conference, I'm giving a talk titled "How to
Run a Top-10 Website, Publicly and Transparently", discussing the impact
of transparency in Wikimedia's technical spaces. A number of people have
expressed interest in watching, including non-technical users, so I'm
advertising it a bit more broadly.
I apologize for the short notice, I didn't realize the stream would be
free to watch until yesterday (thanks Ori!).
Time: 2022-07-23 17:00 UTC (1pm ET) -
If you can't watch it live, a recording will be uploaded later on.
I've documented all of this on-wiki, including the full abstract:
I am of course happy to answer any questions people might have after the
talk.
-- Kunal / Legoktm
The sixth workshop on the topic of "How to maintain bots" is coming up - it
will take place on Friday, July 29th at 16:00 UTC. You can find more
details on the workshop and a link to join here: <
This session will focus on best practices for maintaining bots and tools in
the Wikimedia ecosystem. It will cover a few practices that can help
developers run a bot or a tool with help from others, such as picking a
license, adding co-maintainers to the project, publishing source code,
writing docs, and much more.
To participate in this workshop, you would need basic familiarity with bots
or tools development. You can add your discussion ideas in the etherpad doc
linked from the workshops page.
We look forward to your participation!
On behalf of the SWT Workshops Organization team
Senior Developer Advocate
Wikimedia Foundation <https://wikimediafoundation.org/>