Forwarding this invite to additional people who may wish to participate or
watch the recording.
Thanks, Kinneret. These presentations sound very interesting.
Pine🌲
---------- Forwarded message ---------
From: Kinneret Gordon via Analytics <analytics(a)lists.wikimedia.org>
Date: Mon, Feb 23, 2026 at 5:23 AM
Subject: [Analytics] [Wikimedia Research Showcase] AI and Communities -
February 25 at 17:30 UTC
To: <wiki-research-l(a)lists.wikimedia.org>,
<foundation-optional(a)wikimedia.org>, <wikimedia-l(a)lists.wikimedia.org>,
<analytics(a)lists.wikimedia.org>
Cc: Kinneret Gordon <kgordon(a)wikimedia.org>
Hi everyone,
The February 2026 Research Showcase will be live-streamed this Wednesday,
February 25, at 9:30 AM PT / 17:30 UTC. Find your local time here
<https://zonestamp.toolforge.org/1772040600>. Our theme this month is *AI
and Communities*.
*We invite you to watch via the YouTube
stream: https://www.youtube.com/live/qW5IQJv84HY.* As always, you can join the
conversation in the YouTube chat as soon as the showcase goes live.
This month, we will have two presentations:
*LLMs in Wikipedia: Investigating How LLMs Impact Participation in
Knowledge Communities*
By *Moyan Zhou (University of Minnesota)*

Large language models (LLMs) are reshaping knowledge production as community
members increasingly incorporate them into their contribution workflows.
However, participating in knowledge communities involves more than just
contributing content - it is also a deeply social process shaped by
members' level of expertise. While communities must carefully consider
appropriate and responsible LLM integration, the absence of concrete norms
has left individual editors to experiment and navigate LLM use on their
own. Understanding how LLMs influence community participation across
expertise levels is therefore critical to shaping future norms and
supporting effective adoption. To address this gap, we investigated
Wikipedia, one of the largest knowledge production communities, to
understand participation along three dimensions: 1) how LLMs influence the
ways editors gather knowledge, 2) how editors leverage strategies to align
LLM outputs with community norms, and 3) how other editors in the community
respond to LLM-assisted contributions. Through interviews with 16 Wikipedia
editors of different levels of expertise who had used LLMs for their
edits, we revealed a participation gap, mediated by expertise, in adopting
LLMs for knowledge contributions across knowledge gathering, alignment with
community norms, and peer responses. Based on these findings, we challenge
existing models of novice editors' involvement and propose design
implications for LLMs that support community engagement, highlighting
opportunities for LLMs to sustain mentorship, knowledge transmission, and
legitimacy building through scaffolding and feedback, process documentation,
and LLM disclosure by good-faith editors.

*AI Didn't Start the Fire: Examining the Stack Exchange Moderator and
Contributor Strike*
By *Yiwei Wu (University of Texas at Austin)*

Online communities and their host platforms
are mutually dependent yet conflict-prone. When platform policies clash
with community values, communities have resisted through strikes,
blackouts, and even migration to other platforms. Through such collective
actions, communities have sometimes won concessions, but these have
frequently proved to be temporary. Although previous research has
investigated strike events and migration chains, the processes by which
community-platform conflict unfolds remain obscure. How do
community-platform relationships deteriorate? How do communities organize
collective action? How do the participants proceed in the aftermath? We
investigate a conflict between the Stack Exchange platform and its community
that occurred in 2023 around an emergency arising from the release of large
language models (LLMs). Based on a qualitative thematic analysis of 2,070
messages from Meta Stack Exchange and 14 interviews with community members,
we reveal how the 2023 conflict was preceded by a long-term deterioration
in the community-platform relationship, driven in particular by the
platform's disregard for the community's highly valued participatory role
in governance. Moreover, the platform's policy response to LLMs aggravated
the community's sense of crisis, triggering strike mobilization. We analyze
how the mobilization was coordinated through a tiered leadership and
communication structure, as well as how community members pivoted in the
aftermath. Building on recent theoretical scholarship in social computing,
we use Hirschman's exit, voice, and loyalty framework to theorize the
challenges of community-platform relations evinced in our data. Finally, we
recommend ways that platforms and communities can institute participatory
governance that is durable and effective.
Looking forward to seeing many of you,
Kinneret
--
Kinneret Gordon
Lead Research Community Officer
Wikimedia Foundation <https://wikimediafoundation.org/>
*Learn more about Wikimedia Research <https://research.wikimedia.org/>*
_______________________________________________
Analytics mailing list -- analytics(a)lists.wikimedia.org
To unsubscribe send an email to analytics-leave(a)lists.wikimedia.org
Hello,
Happy second week of February. This email describes two job postings that
may interest you or your colleagues. Earlier emails were sent to
Wiki-research-l
<https://lists.wikimedia.org/postorius/lists/wiki-research-l.lists.wikimedia…>
regarding these positions; please disregard this email if you already saw
the info via Wiki-research-l. I'm consolidating info from both posts.
Position 1, posted at
https://www.demogr.mpg.de/en/career_6122/jobs_fellowships_1910/postdoctoral…,
is for a *postdoctoral researcher*. Quoting from the posting:
“Applications are invited from candidates *who have, or will soon obtain*,
a PhD in demography, sociology, epidemiology, medicine, health economics,
biostatistics, public health, or a related field.
"The successful candidate is expected to work within one or more of the
following areas:
1. *Disease Presence and Disease Impact*
   1. Has disease presence become more or less predictive of disease impact?
   2. How do the timing and patterns of disease accumulation shape the onset
      of disease impact and individual health trajectories, including
      pathways to death?
2. *Diffusion of Medical Progress in Populations*
   1. How does medical progress shape health inequalities within populations?
   2. How are changes in population health linked to how medical progress is
      distributed within and across populations, for example through in- and
      outpatient care?
3. *Population Resilience and Vulnerability*
   1. How can population resilience and vulnerability be measured?
   2. Does medical progress make populations more or less vulnerable?”
Position 2 is for a *principal research scientist*. The job posting is at
https://job-boards.greenhouse.io/wikimedia/jobs/7597104. Quoting Leila (who
is CC'd on this email): “Please note
that if you have applied for the research scientist position which was
shared
<
https://lists.wikimedia.org/hyperkitty/list/wiki-research-l@lists.wikimedia…>
in December 2025, you are automatically considered for this new position
and there is no need to apply again. We will reach out to you if we need
additional information from you.”
Quoting from the job description:
“We are hiring a Principal Research Scientist to join the Wikimedia
Foundation’s Research team <https://research.wikimedia.org/team.html> to
support the Wikimedia communities and the Wikimedia Foundation in the
continued evolution of Wikimedia projects and their decentralized
governance model, ensuring that the projects become multigenerational
<https://meta.wikimedia.org/wiki/Strategy/Multigenerational>.
"Here are some things we’ve worked on that might give you a better sense of
the scope of the work you will be accountable for and part of:
- Understanding Wikipedia administrator recruitment, retention, and attrition
  (Learn more
  <https://meta.wikimedia.org/wiki/Research:Wikipedia_Administrator_Recruitmen…>)
- A set of recommendations for conducting NPOV research on Wikipedia (Learn
  more
  <https://meta.wikimedia.org/wiki/Research:Guidance_for_NPOV_Research_on_Wiki…>)
- Developing a meta-method for analyzing the state of NPOV on Wikipedia
  (Learn more
  <https://meta.wikimedia.org/wiki/Research:A_meta-method_for_analyzing_NPOV_o…>)
"You can learn more about what the team has done in the past six months by
reading our biannual report <https://research.wikimedia.org/report.html>.
"Please note: this is a fully remote role that requires working with senior
leadership and stakeholders across the organization and the research
community in different timezones. It is expected that the Principal
Research Scientist be available for critical meetings and synchronous work
between 15:00 and 19:00 UTC. It is further expected that the candidate is
open to traveling up to four times per year.”
Leila posted some additional information on the Research mailing list. The
thread is viewable at
https://lists.wikimedia.org/hyperkitty/list/wiki-research-l@lists.wikimedia…
Good luck to any applicants.
(Disclaimer: I'm not an employee of either of the hiring organizations.
Please direct any questions to the applicable organization.)
Regards,
Pine🌲
Hello,
I am writing my PhD thesis on Wikipedia, and I found the content you have
written very interesting. Could you share further information about it?
Regards,
Sahire ARSLAN
On Fri, Jan 30, 2026 at 3:00 PM <ai-request(a)lists.wikimedia.org> wrote:
> Send AI mailing list submissions to
> ai(a)lists.wikimedia.org
>
> To subscribe or unsubscribe, please visit
>
> https://lists.wikimedia.org/postorius/lists/ai.lists.wikimedia.org/
>
> You can reach the person managing the list at
> ai-owner(a)lists.wikimedia.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of AI digest..."Today's Topics:
>
> 1. Fwd: [Wikimedia-l] Published: WMCH roundtable report on “Collective
> Intelligence vs Artificial Intelligence” – why we did it and what’s next
> (Pine W)
>
>
>
> ---------- Forwarded message ----------
> From: Pine W <wiki.pine(a)gmail.com>
> To: futureslab(a)wikimedia.de, Wikimedia AI Discussion List <
> ai(a)lists.wikimedia.org>, Wiki Research-l <
> wiki-research-l(a)lists.wikimedia.org>
> Cc: valdelli(a)gmail.com
> Bcc:
> Date: Thu, 29 Jan 2026 11:07:39 -0800
> Subject: [AI] Fwd: [Wikimedia-l] Published: WMCH roundtable report on
> “Collective Intelligence vs Artificial Intelligence” – why we did it and
> what’s next
> Forwarding an email that may be of interest to subscribers of additional
> email lists, and to the Futures Lab team.
>
> Pine🌲
>
>
> ---------- Forwarded message ---------
> From: Ilario valdelli via Wikimedia-l <wikimedia-l(a)lists.wikimedia.org>
> Date: Thu, Jan 29, 2026 at 9:01 AM
> Subject: [Wikimedia-l] Published: WMCH roundtable report on “Collective
> Intelligence vs Artificial Intelligence” – why we did it and what’s next
> To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
> Cc: Ilario valdelli <valdelli(a)gmail.com>
>
>
> Dear all,
>
> I’d like to share that Wikimedia CH and Open Future have published the
> report from our Lausanne roundtable “Collective Intelligence vs Artificial
> Intelligence” (4 Nov 2025), co-organised with Open Future and IMD Business
> School.
>
> I also want to add a bit of context on *why* we did this, because it
> comes out of a very concrete moment for many of us: over the past year, it
> has sometimes felt like “Wikimedia + AI” discussions were happening
> everywhere, yet at the same time the community’s shared space for
> sensemaking was getting thinner — less clarity, less common ground, and
> more uncertainty about where to focus limited time and resources.
> Internally at WMCH, we reached a point where we needed to step back and
> ask: *what should we do in this period, and where should we allocate our
> resources in a way that is useful for the Movement?*
>
> From that starting point, we began working with external partners to
> improve our understanding of what is changing in the information ecosystem.
> Among others, we exchanged ideas with the AI research ecosystem in
> Switzerland
> (e.g., IDSIA - Dalle Molle Institute for Artificial Intelligence of Lugano)
> and, together with input from Open Future, we converged on a simple idea:
> instead of debating AI only through the lens of “editing with AI tools”, we
> should widen the circle and ask experts from outside the Movement what they
> see coming — and what it could mean for a knowledge commons like Wikimedia.
>
> That led to the Lausanne roundtable: a deliberately “cross-ecosystem”
> discussion with people working in areas such as AI development and data
> science, journalism and media, research, and public-interest policy,
> alongside people from the Wikimedia ecosystem. The goal was not to produce
> a single answer, but to map the key tensions and update our collective
> assumptions about the near future — and to do it in a way that is
> transparent and shareable with the wider community.
>
> The report summarises those insights and frames the emergence of a “new
> knowledge loop”, in which AI services increasingly become interfaces to
> knowledge. One concern raised is disintermediation: Wikimedia content can
> be heavily used by machines while fewer humans visit Wikimedia projects
> directly — with potential consequences for participation, feedback loops,
> and long-term sustainability. At the same time, the report argues there is
> no “going back”: AI influence is not avoidable, and the open question is
> how collective intelligence and AI can be combined while keeping Wikimedia
> human-centred.
>
> A quick note on intent and framing: this is a *civic sensemaking* effort.
> It is not a proposal to restrict open access for people, and it is not an
> attempt to tell the Movement what to do. The underlying question is: *how
> do we keep the commons open and accessible to everyone, while making its
> large-scale reuse sustainable and accountable?* In other words, how do we
> avoid a future where knowledge remains technically “open” but becomes
> practically concentrated behind a few AI interfaces?
>
> This report is also meant as a first step. The plan is to develop a
> draft white paper in 2026, building on these insights, to outline possible
> strategic directions and reactions — not only for WMCH, but as a
> contribution that other communities, affiliates, and interested Wikimedians
> can critique, improve, and build on.
>
> Links:
>
> - Meta page (context + deliverables):
> https://meta.wikimedia.org/wiki/Wikimedia_CH/Innovation/CI_vs_AI#Deliverabl…
> - PDF on Commons:
> https://commons.wikimedia.org/wiki/File:Wikimedia_CH_REPORT_AI_v4_web.pdf
>
> Feedback is very welcome.
> --
> Ilario Valdelli
> Innovation programme lead
> Wikimedia CH
> Verein zur Förderung Freien Wissens
> Association pour l’avancement des connaissances libre
> Associazione per il sostegno alla conoscenza libera
> http://www.wikimedia.ch
> _______________________________________________
> Wikimedia-l mailing list -- wikimedia-l(a)lists.wikimedia.org, guidelines
> at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
> https://meta.wikimedia.org/wiki/Wikimedia-l
> Public archives at
> https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org…
> To unsubscribe send an email to wikimedia-l-leave(a)lists.wikimedia.org
> _______________________________________________
> AI mailing list -- ai(a)lists.wikimedia.org
> To unsubscribe send an email to ai-leave(a)lists.wikimedia.org
>