Congratulations to Thorsten Ruprechter, Manoel Horta Ribeiro, Bob West
and Denis
Helic for their paper "Protection from Evil and Good: The Differential
Effects of Page Protection on Wikipedia Article Quality", which received a
Best Paper Honorable Mention at ICWSM 2025 [1].
Paper: https://ojs.aaai.org/index.php/ICWSM/article/view/35896
Video: Listen to Manoel present the research at the June 2025 Wikimedia
Research Showcase: https://www.youtube.com/live/GgYh6zbrrss
For those of you attending ICWSM, I hope you have a chance to get together
and celebrate this important achievement.
Best,
Leila
[1] https://www.icwsm.org/2025/
Hello wiki-research-l community!
We are researchers at Columbia University, Cornell University, Georgetown
University, Harvard University, and Northeastern University. We are
recruiting participants for a remote study to understand how data users
engage with privacy-noised Wikimedia data. We are looking for participants
who:
1. Are at least 18 years old
2. Have experience with quantitative data analysis
3. Are familiar with using Python and Jupyter notebooks for data analysis
In this study, you will conduct data analysis tasks, answer interview
questions, and share your perceptions about Wikimedia data that includes
privacy protections.
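For background, "privacy-noised" data generally means statistics released with calibrated random noise, as in differential privacy. The sketch below is our own minimal illustration of that idea, not the study's actual mechanism; all function names and parameters here are hypothetical:

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def noised_count(true_count, epsilon, sensitivity=1.0):
    """Release a count perturbed by Laplace noise with scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy and a noisier released value.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)  # for a reproducible demo
# A pageview count of 1000 released with epsilon = 1.0 stays close to 1000.
print(noised_count(1000, epsilon=1.0))
```

Analyses of such data have to account for this perturbation, which is the kind of practice the study asks participants about.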
The session will take approximately 1 hour via a Zoom meeting, and it will
be recorded. Upon completion of the interview, you will receive a $50
Amazon gift card as a thank-you for your time.
If you are interested, please fill out this online eligibility survey:
https://neu.co1.qualtrics.com/jfe/form/SV_b3dVIHUf3ZKxaES
We will reach out via email, typically within two weeks, if you are
selected to participate.
Thanks so much!
Hal Triedman, on behalf of the study team
PS: If you’re not available but know others who may be a good fit, we’d
appreciate it if you could forward this call to them!
====
DBpedia Day - Co-located with SEMANTiCS 2025
Vienna, Austria
September 3, 2025
Submission Deadline: July 15, 2025 (11:59 pm, Hawaii time)
Submission Form: https://forms.gle/6KNBMuRsyXs8RiD89
====
How can Large Language Models (LLMs) benefit from structured knowledge
like DBpedia? And how can we improve DBpedia to better serve the next
generation of AI systems?
This session invites talks on the intersection of LLMs and Knowledge
Graphs, with a special emphasis on DBpedia. Our goal is to understand
how to make Linked Data more useful, accessible, and trustworthy for
LLM-based applications—and how to evolve DBpedia in this new
AI-dominated landscape.
= Topics of Interest =
* Retrieval-Augmented Generation (RAG) with DBpedia
* Prompt engineering for KG-aware LLMs
* Query translation: From natural language to SPARQL using LLMs
* Using LLMs to summarize or explain DBpedia data
* LLMs as interfaces for Linked Data consumption
* Automatic ontology alignment and entity linking with LLMs
* Improving LLM factual accuracy with DBpedia as a trusted source
* Challenges in grounding LLM output in structured knowledge
* Scaling and performance considerations for hybrid KG–LLM systems
* Bias, hallucination, and verification in LLMs using DBpedia
* Use cases: e.g., chatbots, semantic search, Q&A systems powered by
DBpedia + LLMs
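To make the RAG bullet above concrete, here is a minimal sketch of the retrieval-plus-prompting pattern. The fact store is a hard-coded stand-in for a live SPARQL query against DBpedia, so all entities, predicates, and function names are hypothetical illustrations rather than part of this call:

```python
def retrieve_facts(entity, store):
    """Look up (predicate, object) pairs for an entity.

    In a real system this would run a SPARQL query against
    https://dbpedia.org/sparql; here a dict stands in for the endpoint.
    """
    return store.get(entity, [])

def build_prompt(question, entity, store):
    """Prepend retrieved facts to the question -- the core RAG step."""
    facts = retrieve_facts(entity, store)
    context = "\n".join(f"- {pred}: {obj}" for pred, obj in facts)
    return (
        "Answer using only the facts below.\n"
        f"Facts about {entity}:\n{context}\n"
        f"Question: {question}"
    )

# Mocked triples standing in for DBpedia data.
mock_store = {
    "Vienna": [("country", "Austria"), ("populationTotal", "2005760")],
}
print(build_prompt("Which country is Vienna in?", "Vienna", mock_store))
```

The resulting prompt would then be passed to an LLM; grounding the answer in retrieved triples is what ties the model's output back to DBpedia.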
We welcome researchers, developers, and industry practitioners working
on concrete tools, early-stage ideas, or critical perspectives.
= Submission Guidelines =
Please submit your proposal by July 15, 2025 (AoE) via:
https://forms.gle/6KNBMuRsyXs8RiD89
Your proposal should include:
* Title
* Abstract (max. 300 words)
* Short biography of the speaker(s)
We are open to a wide range of talk formats: demos, position papers,
success stories, lessons learned, or short idea pitches.
Questions? Reach out to us at dbpedia(a)infai.org or check our event page
https://www.dbpedia.org/blog/dbpedia-day-2025/.
Join us to shape how LLMs and DBpedia can empower each other!
Best regards,
Julia, Milan & Sebastian
DBpedia Team
Hi Maryana,
On Thu, Feb 29, 2024 at 9:04 AM Maryana Iskander <miskander(a)wikimedia.org>
wrote:
> This message will be translated into other languages on Meta-wiki
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_Foundation_Chi…>
>
> العربية • español • français • português • Deutsch• 中文
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_Foundation_Chi…>
>
> You can help with more languages
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_Foundation_Chi…>
>
> Hi everyone,
>
> Since joining the Foundation I have tried to regularly write to you
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_Foundation_Chi…>
> here and elsewhere, and I wanted to share a few updates since my last
> letter. In October 2023
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_Foundation_Com…>,
> I reflected that we were in a period of compounded challenges across the
> world with escalating wars, conflict, and climate reminding us each week
> that global volatility and uncertainty were on the rise. That feels even
> more true now. My instinct then was to ask us to make more time to talk to
> each other and to try and pull closer together. This feels even more needed
> now.
>
> [...]
>
> Finally, our human-led values came up in several conversations about
> Wikimedia’s role in shaping the next generation of artificial intelligence,
> a topic of ongoing discussion in the world
> <https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html?unloc…>,
> in our communities <https://meta.wikimedia.org/wiki/Talk:Future_Audiences>
> , and at the Foundation. This is complemented by ongoing discussions
> about the role of AI-generated content on our platform by various project
> communities.
> <https://en.wikipedia.org/wiki/Wikipedia_talk:Large_language_model_policy>
> A recent effort to contribute to a shared research agenda on AI
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Artificial_intelligence/…>
> can be found here – including the need for more research to understand
> human motivation to contribute to the knowledge commons. It was created by
> a small group working in the open who rushed to publish a ‘bad first
> draft’ that will benefit from more input.
> <https://meta.wikimedia.org/wiki/Special:MyLanguage/Talk:Artificial_intellig…>
>
Thank you for acknowledging the limitations of this document. As we also
noted in the Signpost
<https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2024-03-02/News_…>
at the time, there was indeed some consternation about the lack of
involvement of the volunteer community:
*"While the announcement appears to be speaking on behalf of 'volunteer
contributors', the 'Wikimedians' involved in drafting the document appears
to have consisted exclusively of Wikimedia Foundation staff (largely from
its Research department), according to the attendee list."*
I have to ask though, are there still plans to solicit wider input on this
draft agenda, or at least incorporate more from the numerous related
discussions on AI that have been happening across the movement over the
last several years? (At
https://meta.wikimedia.org/wiki/Artificial_intelligence, some editors
including myself have been trying to keep a list of relevant links, but
it's surely not complete.) Again, I appreciate that your post here invited
"more input" on the agenda's talk page. But it seems that only a single
topic was added there afterwards, and in any case no content updates
<https://meta.wikimedia.org/w/index.php?title=Artificial_intelligence/Bellag…>
have been made to that "bad first draft" since February 2024.
Relatedly, given that the document states that *"Our hope is that many
researchers across industry, government, and nonprofit organizations will
adopt the final research agenda to help support and guide their own
research"*:
Are there plans to solicit input from such external researchers on the
draft? And once this research agenda is finalized, does the Foundation plan
to bring it to their attention? It doesn't seem to have made such efforts
yet; for example, I can't find any mention
<https://lists.wikimedia.org/hyperkitty/search?mlist=wiki-research-l%40lists…>
of it on the Wiki-research-l mailing list (CCing it now).
I thought that maybe this Bellagio document had been a tangential one-off
to make use of an external funding opportunity, and had been abandoned
afterwards. But then I saw that more recently Selena highlighted it in
the "Reflections on 2025 from the Wikimedia Foundation Executive Team"
<https://diff.wikimedia.org/2025/01/29/reflections-on-2025-from-the-wikimedi…>
(as the only concrete outcome regarding AI mentioned in this entire
overview of WMF accomplishments "Over the past year").
Regards, Tilman ([[User:HaeB]])
> [...]
>
> Maryana
>
>
> Maryana Iskander, Wikimedia Foundation CEO
>
>
Hi everyone,
The June 2025 Research Showcase will be live-streamed next Wednesday, June
18, at 9:30 AM PT / 16:30 UTC. Find your local time here
<https://zonestamp.toolforge.org/1750264200>. Our theme this month is *Ensuring
Content Integrity on Wikipedia*.
*We invite you to watch via the YouTube stream:
https://www.youtube.com/live/GgYh6zbrrss.* As always, you can join the
conversation in the YouTube chat as soon as the showcase goes live.
Our presentations this month:
The Differential Effects of Page Protection on Wikipedia Article Quality
By *Manoel Horta Ribeiro (Princeton University)*
Wikipedia strives to be an
open platform where anyone can contribute, but that openness can sometimes
lead to conflicts or coordinated attempts to undermine article quality. To
address this, administrators use “page protection”—a tool that restricts
who can edit certain pages. But does this help the encyclopedia, or does it
do more harm than good? In this talk, I’ll present findings from a
large-scale, quasi-experimental study using over a decade of English
Wikipedia data. We focus on situations where editors requested page
protection and compare the outcomes for articles that were protected versus
similar ones that weren’t. Our results show that page protection has mixed
effects: it tends to benefit high-quality articles by preventing decline,
but it can hinder improvement in lower-quality ones. These insights reveal
how protection shapes Wikipedia content and help inform when it’s most
appropriate to restrict editing, and when it might be better to leave the
page open.
Seeing Like an AI: How LLMs Apply (and Misapply) Wikipedia Neutrality Norms
By *Joshua Ashkinaze (University of Michigan)*
Large language models (LLMs) are
trained on broad corpora and then used in communities with specialized
norms. Is providing LLMs with community rules enough for models to follow
these norms? We evaluate LLMs' capacity to detect (Task 1) and correct
(Task 2) biased Wikipedia edits according to Wikipedia's Neutral Point of
View (NPOV) policy. LLMs struggled with bias detection, achieving only 64%
accuracy on a balanced dataset. Models exhibited contrasting biases (some
under- and others over-predicted bias), suggesting distinct priors about
neutrality. LLMs performed better at generation, removing 79% of words
removed by Wikipedia editors. However, LLMs made additional changes beyond
Wikipedia editors' simpler neutralizations, resulting in high-recall but
low-precision editing. Interestingly, crowdworkers rated AI rewrites as
more neutral (70%) and fluent (61%) than Wikipedia-editor rewrites.
Qualitative analysis found LLMs sometimes applied NPOV more comprehensively
than Wikipedia editors but often made extraneous non-NPOV-related changes
(such as grammar). LLMs may apply rules in ways that resonate with the
public but diverge from community experts. While potentially effective for
generation, LLMs may reduce editor agency and increase moderation workload
(e.g., verifying additions). Even when rules are easy to articulate, having
LLMs apply them like community members may still be difficult.
Best,
Kinneret
--
Kinneret Gordon
Lead Research Community Officer
Wikimedia Foundation <https://wikimediafoundation.org/>
*Learn more about Wikimedia Research <https://research.wikimedia.org/>*
* Apologies if you receive multiple copies of this call *
==================================================================
======================== Call for Papers ==============================
11th Workshop on Formal and Cognitive Reasoning (FCR-2025)
Location: Potsdam, Germany
Deadline for submission: July 4th, 2025
Workshop: September 16, 2025
https://fcr.krportal.org/2025/
Co-located with the 48th German Conference on Artificial Intelligence
(KI 2025),
September 16, 2025, Potsdam, Germany
==================================================================
Aims and Scope
----------------------
In real-life AI applications, information is usually pervaded by
uncertainty and subject to change, and thus requires non-classical
systems. At the same time, psychological findings indicate that human
reasoning cannot be completely described by classical logical systems.
Explanations for this include incomplete knowledge, incorrect beliefs,
and inconsistencies. A wide range of reasoning mechanisms, such as
analogical or defeasible reasoning, have to be considered, possibly in
combination with machine learning methods. The field of knowledge
representation and reasoning offers a rich palette of methods for
uncertain reasoning, both to describe human reasoning and to model AI
approaches.
This series of workshops aims to address recent challenges and to
present novel approaches to uncertain reasoning and belief change in
their broad senses, and in particular, provide a forum for research work
linking different paradigms of reasoning. A special focus is on papers
that provide a base for connecting formal-logical models of knowledge
representation and cognitive models of reasoning and learning,
addressing formal and experimental or heuristic issues. Previous events
of the Workshop on “Formal and Cognitive Reasoning” and joint workshops
took place in Dresden (2015), Bremen (2016), Dortmund (2017), Berlin
(2018), Kassel (2019), Bamberg (2020, online), Berlin (2021, online),
Trier (2022, online), Berlin (2023), and Würzburg (2024).
We welcome papers on the following and any related topics:
Action and change
Agents and multi-agent systems
Analogical reasoning
Argumentation theories
Belief change and belief merging
Cognitive modelling and empirical data
Common sense and defeasible reasoning
Computational thinking
Decision theory and preferences
Inductive reasoning and cognition
Knowledge representation in theory and practice
Learning and knowledge discovery in data
Neuro-symbolic AI
Nonmonotonic and uncertain reasoning
Ontologies and description logics
Probabilistic approaches to reasoning
Syllogistic reasoning
Keynote
------------
Barbara Kaup, University of Tübingen, Germany
Workshop Organizers and Co-Chairs
--------------------------------------------------
Özgür Lütfü Özçep Universität Hamburg, Germany
Nele Rußwinkel Universität zu Lübeck, Germany
Kai Sauerwald FernUniversität in Hagen, Germany
Diedrich Wolter Universität zu Lübeck, Germany
Important Dates
----------------------
Deadline for Submission: July 4, 2025
Notification of Authors: August 16, 2025
Camera-ready Paper: September 1, 2025
Workshop: September 16, 2025
Submission and Publication Details
-----------------------------------------------
Long technical papers as well as short position papers and abstracts of
published works are welcome.
Further submission details:
- Papers should be formatted in CEUR style (one-column style) with
header and footer disabled. The author kit can be found at
http://ceur-ws.org/Vol-XXX/CEURART.zip. The length of each paper is
limited to 20 pages (including references and acknowledgements).
- All papers must be written in English and submitted in PDF format via
the EasyChair system: https://easychair.org/conferences/?conf=fcr2025
- One of the authors is expected to participate in the workshop and
present their paper.
Publication details:
- The accepted papers will be made available in full to the workshop
participants. As in previous years, we plan to release the informal
workshop proceedings with CEUR.
- The authors may decide to include only an abstract of their paper
(instead of their full paper) in the informal workshop proceedings
available via CEUR. For that, a request must be posted to the workshop
organizers shortly after the acceptance notification.
Call for Participation
21st Reasoning Web Summer School
September 25-28, 2025, Istanbul, Turkey
https://2025.declarativeai.net/events/reasoning-web
************************************************************************************************************
We are happy to announce that the 21st edition of the Reasoning Web
Summer School (RW 2025) will take place from September 25-28, 2025 in
Istanbul, Turkey. RW 2025 is part of Declarative AI 2025, which also
includes the 9th International Joint Conference on Rules and Reasoning
(RuleML+RR) and DecisionCAMP 2025, both held from September 22-24, 2025.
The purpose of the Reasoning Web Summer School is to disseminate recent
advances in reasoning techniques and relevant topics related to
ontologies, rules, logic, the semantic web, linked data, and knowledge
graph applications. The summer school is primarily intended for
individuals who are currently pursuing or have recently completed
postgraduate degrees (PhD or MSc). However, the school also welcomes the
participation of researchers at later career stages who wish to become
acquainted with the area or deepen their understanding of recent
developments. The RW school is a great venue for meeting like-minded
researchers and exchanging with an engaging and approachable group of
international lecturers!
*** Summer School Program ***
As in previous years, the summer school will feature 8 tutorials
delivered by researchers who are experts in the area. Here are the
confirmed speakers and topics for this year's school:
* Camille Bourgaux: Inconsistency-Tolerant Semantics Based on
Preferred Repairs
* Esra Erdem, Aysu Bogatarkan, Muge Fidan: Human-Centered ASP
Applications: Representation and Reasoning
* Patrick Koopmann: Explaining Reasoning Results for Description
Logic Ontologies
* Markus Krötzsch: Modern Datalog: Concepts, Methods, Applications
* Antonella Poggi: From One-Level to Multi-Level Ontology-Based Data
Access
* Francesco Ricca and Giuseppe Mazzotta: ASP Essentials: Modelling
and Efficient Solving
* Luciano Serafini: Neuro-Symbolic Artificial Intelligence
* Przemyslaw Walega: Reasoning about Time in DatalogMTL
Tutorial abstracts and speaker bios can be found on the RW 2025 website:
https://2025.declarativeai.net/events/reasoning-web/program
*** Applications & Registration ***
To participate in RW 2025, you will need to submit a short application,
with information on your academic and research background and motivation
for attending the school. You can do so by filling out the following form:
https://docs.google.com/forms/d/1aIIeJdHS2zash1gCMmqw2_UnFMNO7DMA7X8SyI8x3b…
or alternatively, by sending the organizers an email with all of the
information requested on the form.
Applications will be reviewed on a rolling basis, and notifications will
be sent within 1-2 weeks from the time of application. Successful
applicants will receive information on how to pay the registration fee
to confirm their spot in the school.
The regular fee of 300€ (incl. VAT) applies to applications received on
or after June 1st (a lower fee of 240€ applies to local participants
with a Turkish affiliation). The registration fee includes access to the
lectures, lunches, and coffee breaks for the four days, as well as a
social event.
Note that students who participate in RW 2025 are also encouraged to
apply to the RuleML+RR Doctoral Consortium. Summer school participants
who have a paper accepted at the Doctoral Consortium can attend the
relevant Doctoral Consortium session without registering for the conference.
***
If you require additional information, please get in touch with the chairs.
* Alessandro Artale, Free University of Bozen-Bolzano, Italy
artale(a)inf.unibz.it
* Meghyn Bienvenu, CNRS & University of Bordeaux, France
meghyn.bienvenu(a)u-bordeaux.fr
****************************************************************************************
FCAI 2025 @ ECAI 2025
Foundations and Future of Change in Artificial Intelligence
October 25/26, Bologna, Italy
https://fcai2025.machine-reasoning.org/
Deadline: July 13, 2025
Workshop co-located with the
28th European Conference on Artificial Intelligence (ECAI 2025)
****************************************************************************************
Changing information cuts across nearly every task and process
that we aim to formalize computationally. Consequently, making sense of
how to change information is a central aspect and precursor for further
advancements in many domains. Naturally, approaches to describe changes,
to deal with change, and to conduct changes have been developed in very
different areas of artificial intelligence. These approaches generally
consider changing from different angles and highlight diverse aspects
that sometimes complement each other. For instance, in database theory,
much work has been devoted to transactions as the main representation of
change and the study of how that affects the computational complexity of
querying such databases. On the other hand, researchers in belief change
have investigated the axiomatics and semantics of different kinds of changes
in formal theories. Recent advancements in Machine Learning pose new and
exciting challenges in formal approaches to change, which seem
conceptually different from classical approaches to change.
This workshop aims to bring together researchers from different areas of
AI and beyond who work on change in their respective areas and see
potential in bridging approaches, or in radically advancing existing
approaches to change by combining them with new ideas and perspectives. We
also invite works that provide general insights on change that are
important for multiple areas of artificial intelligence or even for
computer science in general.
**********************
*** List of Topics ***
The workshop welcomes contributions on every topic related to the formal
treatment of change, the evolution of representations in artificial
intelligence, and systems that implement such approaches. The following
list of potential topics is not exhaustive:
• Position papers on the foundations and future of change
• Logics for the representations of changes or reasoning about changes
• Belief change theory
• Repair in databases and ontologies
• Database update and querying
• Dynamic complexity theory
• Approaches to the meaning and semantics of change, e.g., conditionals
and plausibility
• Alternative meanings of change
• Theories of aspects and kinds of changes, like inconsistency, time, or
ontologies of change
• Foundations of editing, retraining or learning of subsymbolic
representations
• Learning as a change process
• Algorithms to compute changes
• Approaches to track changes
• Philosophical aspects of change
• Updating incomplete information
• Dynamics of logic and database systems
• Evolution and versioning
• Reasoning about update programs
********************************
*** Deadlines and Submission ***
• Paper submission: July 13, 2025
• Notification: August 3, 2025
• Workshop: October 25/26, 2025 (tentative)
There are two types of submissions:
• Full papers. Full papers should be at most 18 pages (one column
format), excluding references and acknowledgments. Papers already
published or accepted for publication at other conferences are also
welcome, provided that the original publication is mentioned in a
footnote on the first page and the submission at FCAI falls within the
authors’ rights. In the same vein, papers under review for other
conferences can be submitted with a similar indication on their front page.
• Extended Abstracts. Extended abstracts should be at most 5 pages
(one column format), excluding references and acknowledgments. The
abstracts should introduce work that has recently been published, is
under review, or is ongoing research at an advanced stage. We highly
encourage attaching to the submission a preprint/postprint or a
technical report. Such extra material will be read at the discretion of
the reviewers. Submitting already published material may require
permission from the copyright holder.
Submission will be through the EasyChair conference system:
https://easychair.org/my/conference?conf=fcai2025
The accepted papers will be made available electronically in the CEUR
Workshop Proceedings series as informal proceedings
(http://ceur-ws.org/). The copyright of the papers remains with the
authors. Full papers will be indexed by dblp.org, but extended abstracts
published in the CEUR proceedings will not be.
*****************
*** PC Chairs ***
• Maria Vanina Martinez (Artificial Intelligence Research Institute
(IIIA-CSIC), Barcelona, Spain)
• Nina Pardal (University of Huddersfield, UK)
• Kai Sauerwald (FernUniversität in Hagen, Germany)
***************************
*** Programme committee ***
• Theofanis Aravanis (University of the Peloponnese, Greece)
• Franz Baader (TU Dresden, Germany)
• Giovanni Casini (CNR - ISTI, Italy)
• Thomas Eiter (TU Wien, Austria)
• Eduardo Fermé (University of Madeira, Portugal)
• Giorgos Flouris (FORTH-ICS, Greece)
• Laura Giordano (Università del Piemonte Orientale, Italy)
• Miika Hannula (University of Helsinki, Finland)
• Andreas Herzig (IRIT, Université Paul Sabatier, France)
• Anthony Hunter (University College London, UK)
• Gabriele Kern-Isberner (University of Dortmund, Germany)
• Phokion Kolaitis (University of California, USA)
• Juha Kontinen (University of Helsinki, Finland)
• Arne Meier (University of Hannover, Germany)
• Tommie Meyer (University of Cape Town and CAIR, South Africa)
• Rafael Penaloza (University of Milano-Bicocca, Italy)
• Leon van der Torre (University of Luxembourg, Luxembourg)
• Matthias Thimm (University of Hagen, Germany)
• Ivan Varzinczak (Université Sorbonne, France)
• Frank Wolter (University of Liverpool, UK)
***************************
*** Further Information ***
For further information, please visit the FCAI webpage:
https://fcai2025.machine-reasoning.org/
Please feel free to contact the organizer of FCAI 2025.
Information on the venue and registration can be obtained from the ECAI
2025 website:
https://ecai2025.org/
"Gender, Technology & Power" conference
Warsaw, 2-4 September 2025
Dear Colleagues,
The abstract submission deadline for the "Gender, Technology & Power"
conference has been extended until June 25, 2025.
We invite you to submit your work to this interdisciplinary event designed
as a meeting point for different perspectives – academic, practitioner and
activist.
You can find the call for abstracts and detailed information about the
event on the conference website:
https://gentechpower2025.uw.edu.pl/
The conference is organized with the support of COST Action CA21118
"Platform Work Inclusion Living Lab" and the International Sociological
Association Working Group 10 "Digital Sociology".
Best,
Rosie
rosiestep <https://en.wikipedia.org/wiki/User:Rosiestep> / Rosie
Stephenson-Goodknight (she/her)
Pacific time zone (UTC−07:00 / UTC−08:00)