Dear Wikibase and Wikidata Communities,
As mentioned in our latest announcements about upcoming strategic changes,
we decided to pour our energy into ecosystem-related activities such as
federation and data governance. As you might know, our Linked Open Data
Strategy
<https://meta.wikimedia.org/wiki/LinkedOpenData/Strategy2021/Wikibase> is
built on the idea that our products are capable of building synergies as
well as growing sustainably.
With this research project, we hope to gain a more nuanced understanding of
which projects are perceived to belong in which product category. We aim
to synthesize and map how people currently use the three LOD products, what
their desired use cases are, and where the boundaries between these
platforms may need clarification or additional support.
This process is part of a broader strategy to ensure our hosting
infrastructure remains sustainable and mission-aligned.
A key outcome will be a clearer understanding of:
- What kinds of projects are best suited for each LOD product
- Where overlap, duplication, or ambiguity exists
- How we can make more mission-aligned, predictable decisions on growth
and hosting
Process Design
You will be invited to participate in an asynchronous card sort exercise
where you will be asked to sort imagined projects into four categories:
1. Wikibase Cloud
2. Wikidata
3. Wikibase Suite or other ways of hosting Wikibase
4. Wikibase is not a suitable software for this use case
Participants will be asked to explain their thought process aloud so that
we can ensure we’re considering the full dimensions of each use case
(e.g., technical needs, scope, resource requirements). The results will
inform product strategy across Wikibase Cloud (WBC), Wikibase Suite
(WBS), and Wikidata (WD), while also offering key insights into
ambivalent or complex areas that require further discussion.
This approach ensures everyone can participate in their own time, without
needing to attend multiple meetings.
We’re offering two ways to participate:
1. Card Sort Research Activity (Primary Method)
- Asynchronous (early to mid-July)
- Approx. 30 minutes
- No technical expertise required
- Focused on sorting and classifying use cases
- Screen and microphone recorded
- Optional: follow-up interview (if you’d like to talk more)
2. Extended Stakeholder Group (Optional Additional Involvement)
Includes:
- Asynchronous activity (above)
- Research insights discussion (synchronous, 30–45 minutes, end of July)
- Async policy draft review (end of August)
- Group session (August, 1.5–2 hours)
Total time: ~3.5 hours
Compensation: 35 EUR (if eligible)
Participation
We aim to recruit at least 30, ideally up to 50 community members across
the LOD ecosystem to participate in the research activities, including:
- Wikibase Cloud users
- Self-hosted Wikibase users
- Wikidata users who are invested in the broader Wikibase ecosystem
- At least 10 marginalized knowledge holders
Join us!
We will send a link to participate in the card-sort within the next week.
If you would like to participate in the Extended Stakeholder Group, please
reach out to us via email: annie.kim(a)wikimedia.de. We will share the
opportunities as they come up, likely at the end of July and the end of
August, respectively.
Thank you for your continued care and contributions. We're looking forward
to exploring this research with you! :)
On behalf of the Linked Open Data Ecosystem,
Valerie Wollinger (She/Her)
Community Communications Manager Wikibase
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30-577 11 62-0
https://wikimedia.de
Keep up to date! Current news and exciting stories about Wikimedia,
Wikipedia and Free Knowledge in our newsletter (in German): Subscribe
now <https://www.wikimedia.de/newsletter/>.
Imagine a world in which every single human being can freely share in
the sum of all knowledge. Help us to achieve our vision!
https://spenden.wikimedia.de
Wikimedia Deutschland — Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Charlottenburg, VR 23855 B. Recognized as a non-profit by the tax office
for corporations I Berlin, tax number 27/029/42207. Executive Director:
Franziska Heine
On Thu, Sep 5, 2024 at 8:15 PM Physikerwelt <wiki(a)physikerwelt.de> wrote:
>
> Dear Luca,
>
> the communication was good I think.
>
> However, I don't understand the decision process. Who is the
> responsible person, e.g., the product manager, who eventually decided
> to cut WDQS into pieces?
>
> Moritz
The decision was made jointly by the Search Team at the Wikimedia
Foundation and the people in charge of Wikidata at Wikimedia
Deutschland. In particular, Lydia Pintscher is ultimately responsible
for all Wikidata product decisions at WMDE.
I would like to stress that this decision was not taken lightly. We have
known for three years that Blazegraph is on the verge of failing; we
wrote a playbook in case of dramatic failure,[1] and we evaluated
several alternatives to Blazegraph,[2] each with its pros and cons.
While evaluating which way to go, we knew that no solution would be a
"magic wand" that magically solves all our problems. In fact, no magic
solution exists; each option comes with its own load of problems and
costs. We came to terms with the fact that we needed more time, and the
split was a harsh but ultimately effective way to buy some time in the
transition to the next backend.
Hope this helps.
L.
[1] https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/WDQS_backend_up…
[2] https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/WDQS_backend_up…
Happy Halloween, Samhain (Q207365) <https://www.wikidata.org/wiki/Q207365> and
spooky season! The *WikidataCon 2025* event is finally here and I've
compiled a handy list of links for those who attend...so find the
tricks'n'treats
below:
- Event Page: WikidataCon 2025
<https://www.wikidata.org/wiki/Event:WikidataCon_2025> - you can
register for the event to receive further updates and information
- Schedule: Available here on-wiki
<https://www.wikidata.org/wiki/Event:WikidataCon_2025/Program>, or on
Pretalx <https://pretalx.com/wikidatacon-2025/schedule/>
- Meeting room: Jitsi Meet: WikidataCon 2025
<https://meet.jit.si/WikidataCon2025>
- Live captions: startEVE <https://wmde.org/uEu8U> app. Open it in a
separate browser tab or on your smartphone. Languages available: Arabic
(Egypt), Chinese (Cantonese, Traditional), English (US), French, German,
Hindi, Portuguese (Brazil).
- You can also join us via the YouTube Livestream
<https://www.youtube.com/live/zUnDL8jZU5M>, but please note: you won't
be able to interact directly with the speakers or Jitsi room attendees.
- Collaborative Notetaking: help us document the event with our WikidataCon
2025 Etherpad <https://etherpad.wikimedia.org/p/WikidataCon_2025#L5>
- Sessions will be archived on Commons:
- Presentation slides
<https://commons.wikimedia.org/wiki/Category:WikidataCon_2025_presentations>
- Session video recordings
<https://commons.wikimedia.org/wiki/Category:WikidataCon_2025_videos>
to be uploaded shortly after the event
We can't wait to see you there!
On behalf of the WikidataCon 2025 Organising Team,
--
*Danny Benjafield*
Community Communications Manager
Wikidata For Wikimedia Projects
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30-577 11 62-0
https://wikimedia.de
Hi everyone,
I’m thrilled to let you know that Wikidata has been recognized as a
digital public good. With this, Wikidata joins Wikipedia and
Govdirectory as officially recognized digital public goods in the
Wikimedia movement.
Wikidata has always been the mostly invisible powerhouse in the
background, which makes this recognition especially meaningful. It’s an
acknowledgement of all the work we are doing here to provide the world
with open, reliable and trustworthy data to make technology more open
and inclusive. Thank you to all of you and everyone who has been along
for the ride for the past 13 years. Here is to many more years to
come.
https://diff.wikimedia.org/2025/10/29/building-an-internet-for-everyone-wik…
Cheers
Lydia
--
Lydia Pintscher - https://lydiapintscher.de - WD:Q18016466
Portfolio Lead for Wikidata
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 30 5771162-0
https://wikimedia.de
Hi everyone,
Wikidata turns 13 this October, and it’s time to start preparing for the
birthday celebrations! Since launching on 29 October 2012, Wikidata has
grown into a thriving open knowledge base, thanks to all of you.
Each year, we come together to mark this milestone. Events and
contributions happen throughout October and November 2025, across regions
and languages, online and offline.
Why we celebrate:
- To recognize the work and impact of the Wikidata community
- To connect and engage with others who care about open knowledge
- To share Wikidata with new audiences and grow the movement
How you can take part:
🎉 Join an Event: Birthday celebrations are already being planned. Whether
you’re after a hands-on editathon or an informal hangout, there are many
ways to get involved. 👉 Keep an eye on the list of events
<https://www.wikidata.org/wiki/Wikidata:Thirteenth_Birthday/Calendar> and
find one near you.
🎈 Host Your Own: Organize something with your local community. A short
Wikidata intro, an editathon, or even a cake-cutting meetup -- every
contribution counts. 👉 Visit the birthday page for support and resources
<https://www.wikidata.org/wiki/Wikidata:Thirteenth_Birthday/Run_an_event>
to help you promote it.
🎁 Prepare a Birthday Gift: Every year, volunteers create “birthday
presents” for the community: new features, playful content, documentation,
and more. 👉 Take a look at previous gifts
<https://www.wikidata.org/wiki/Wikidata:Twelfth_Birthday/Presents> and add
your idea to this year’s list.
💬 Join the Online Call: On October 29, we’ll host a call where people can
present their gifts, share updates, and celebrate live with the global
community. Details will be published soon.
💡 Need support for your event or gift idea? Microgrants are available
through Wikimedia Deutschland to help cover costs. You can apply until
September 1st via this Lime Survey form
<http://lime.wikimedia.de/index.php/897743?lang=en>.
For questions, feel free to reply here, contact me directly, or leave a
note on the talk page
<https://www.wikidata.org/wiki/Wikidata_talk:Thirteenth_Birthday>. If you'd
like to discuss WikidataCon 2025, which will take place during the birthday
period, feel free to reach out to my colleague Danny
<https://meta.wikimedia.org/wiki/User:Danny_Benjafield_(WMDE)>.
Looking forward to celebrating together again!
Cheers,
--
Mohammed S. Abdulai
*Community Communications Manager, Wikidata*
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0) 30 577 116 2466
https://wikimedia.de
Grab a spot in my calendar for a chat: cal.com/masssly.
A lot is happening around Wikidata - keep up to date with the status
updates <https://www.wikidata.org/wiki/Wikidata:Status_updates>.
Dear Wikidata community,
As a Wikidata birthday present, we're excited to announce the release of
*Broomstick*, a new tool to uncover Lexemes that can be improved on
Wikidata. Broomstick makes it easy for anyone to help complete and
improve Wikidata’s lexicographical data across different languages.
Try Broomstick: https://broomstick.toolforge.org
Documentation: https://www.wikidata.org/wiki/Wikidata:Broomstick
What can you do with Broomstick?
Broomstick currently uncovers:
- General issues (empty Lexemes, Lexemes without Senses, Lexemes without
Forms, missing external identifiers, missing usage examples)
- Sense issues (e.g. missing predicate for (P9970) or troponym of
(P5975))
- Form issues (e.g. missing grammatical features, missing pronunciation
audio (P443))
- Misplaced statements (e.g. item for this sense (P5137) located at the
Lexeme level instead of the Sense level)
Broomstick supports languages available on Wikidata. You can request new
languages or suggest queries through the project's talk page on Wikidata,
by including the following information:
- Language request: autonym, QID, and Wikimedia language code
- Query request: scope (all languages/specific languages), and the
proposed query
We'll also be joining Wikidata's 13th Birthday Presents call on October
29th at 17:00 UTC with a 2-minute demo of Broomstick. You can find the link
to join and more details about the call here:
https://www.wikidata.org/wiki/Event:Wikidata_Thirteenth_Birthday/Presents_%…
Happy birthday, Wikidata!
Cheers,
Kartika
--
*Kartika Sari*
Community Communication Staff
Wikicollabs
TLDR: We present Nemo, a new Wikidata query tool that can answer
queries, extract subsets, and perform analyses in ways that SPARQL
alone can't. It also lets you combine Wikidata with other data sources.
Dear all,
Nemo [1] is a graph rule engine that can be used to query and process
data (in many forms, online or offline). It's free and open source [2],
and there is a no-install Web application to use it:
https://tools.iccl.inf.tu-dresden.de/nemo/
As an early birthday present, we have just released Nemo v0.9, which
adds features that make Nemo a useful tool for working with Wikidata
content in new ways. This email is a short(ish) intro and teaser towards
this -- feedback is very welcome.
## What does Nemo do?
Think of it as an upgrade to the SPARQL query service, with the
following differences:
- You can do more powerful data transformations that would time out in
SPARQL or not be possible at all
- You can use and combine data from multiple sources (Wikidata SPARQL
results, RDF, CSV, local files or online data)
- Processing in part happens on your computer, avoiding timeouts
- You can run Nemo in a browser (easy) or on the command line (for
heavier jobs)
Nemo still lets you focus on the data, hiding technicalities and
low-level issues. It's more than SPARQL, but much simpler than Python.
## How does that work?
You write "queries" -- or rather little "programs" -- in a simple
language based on if-then rules. Here is an example that uses no
external data at all:
https://tinyurl.com/2muju6sy (find common ancestors of two people)
Technically, this is a logic program in (a variant of) Datalog. Using a
few more Nemo features, you can use such rules with Wikidata content:
https://tinyurl.com/2mzfutcj (find common ancestors of Ada and Moby)
Btw you can share any Nemo program by sharing a link (the URL updates as
you type).
## Slow down, I never heard of "Datalog". How do I read this?
It's actually quite simple. Data is represented in "facts" such as
"father(Alice, Bob)", which we could use to say that Alice has father
Bob. A bit like triples in RDF/SPARQL, but you can have any number of
parameters (as in, say, "degree(Alice, MSc, Physics, 2025, TUDresden)").
Facts are used to compute new facts using rules like this:
uncle(?child, ?bro) :- parent(?child, ?p), brother(?p, ?bro) .
The ?... parts are variables, ":-" means "IF", and "," means "AND". So
the rule says:
?child has uncle ?bro IF
?child has a parent ?p AND ?p has a brother ?bro.
In a way, rules are like simple SPARQL query patterns, the result of
which you store as new facts. The power of Datalog is that you can use
these facts in future rule applications, producing more information step
by step rather than in one huge SPARQL query.
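To make this step-by-step derivation concrete, here is a toy naive-Datalog evaluator in Python. This is purely illustrative (it is NOT how Nemo works internally): it applies the uncle rule above, plus a recursive ancestor rule, over a few made-up facts until no new facts can be derived (a fixpoint):

```python
# Toy naive-Datalog evaluation, for illustration only.
# Facts are tuples; rules are applied repeatedly until nothing changes.
parent = {("alice", "bob"), ("bob", "carol")}  # parent(alice, bob), ...
brother = {("bob", "dave")}                    # brother(bob, dave)

# Rule: uncle(?child, ?bro) :- parent(?child, ?p), brother(?p, ?bro) .
uncle = set()
# Rules: ancestor(?x, ?y) :- parent(?x, ?y) .
#        ancestor(?x, ?z) :- ancestor(?x, ?y), parent(?y, ?z) .
ancestor = set(parent)  # base case

changed = True
while changed:          # iterate to a fixpoint
    changed = False
    for child, p in parent:
        for p2, bro in brother:
            if p == p2 and (child, bro) not in uncle:
                uncle.add((child, bro))
                changed = True
    for x, y in list(ancestor):
        for y2, z in parent:
            if y == y2 and (x, z) not in ancestor:
                ancestor.add((x, z))
                changed = True

print(sorted(uncle))     # [('alice', 'dave')]
print(sorted(ancestor))  # [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
```

Note that derived facts (here, ancestor facts) feed back into the next round of rule applications; that recursion is exactly what plain SPARQL patterns lack.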
## Why not just use SPARQL?
The Ada/Moby example above can also be solved by a SPARQL query, though
the query will time out on WDQS. However, Nemo can also do things that
are outright impossible even with the most powerful SPARQL services.
The "Examples" button on the Web app shows some of the possibilities:
- Query for things that SPARQL cannot do in principle, such as the
longest winning streak of your favourite sports team ("Winning streaks
in sports")
- Combine third-party data with Wikidata on the fly ("Old trees", "CO2
emitting countries")
- Do multi-step analyses that would be very complex to express in SPARQL
("Empty classes in Wikidata")
- Directly query RDF data without a SPARQL service ("Wikipedia articles
vs. labels")
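To make the first point concrete, a "longest winning streak" is an aggregation over an ordered sequence, which a single SPARQL query cannot express. A plain-Python sketch of the computation (with made-up results, not data from Wikidata; in Nemo this would be written as recursive rules):

```python
# Longest winning streak over an ordered list of results ("W" = win).
# Illustrative data only; this is the computation SPARQL cannot express
# in one query, since it needs state carried along the sequence.
results = ["W", "W", "L", "W", "W", "W", "L", "W"]

longest = current = 0
for r in results:
    # extend the current streak on a win, reset it otherwise
    current = current + 1 if r == "W" else 0
    longest = max(longest, current)

print(longest)  # 3
```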
## What's behind it?
At its heart, Nemo is an in-memory data processing engine, written in
Rust. The data model is relational, but weakly typed (like RDF, CSV, and
JSON) rather than strongly typed (like SQL).
The Web app runs locally, in your browser. Your program and any local
data you might use (with "Add input files") will not be uploaded
anywhere [3]. Even in the browser, it is feasible to work with larger
files (millions of facts), but there are limits (don't try to import the
whole Wikidata dump there). For SPARQL, Nemo tries to optimise by
querying only for the values that your program needs. This is why some
of the examples can import from SPARQL queries like "?s ?p ?o" without
actually downloading all of Wikidata.
Nemo runs an extension of Datalog enriched with SPARQL-style datatypes
and "filter" functions, aggregates, and negation (both must be
stratified, i.e., used in non-recursive ways). As usual in Datalog, the
order of rules does not matter at all (although the examples are all
ordered following the "natural" processing pipeline). This
"declarativity" allows Nemo to automatically optimise rule applications
and data imports.
Some more academic documentation is found on our publication page:
https://github.com/knowsys/nemo/wiki/Publications
## Limitations? Future plans?
Loads (of both). Key limitations from a Wikidata perspective include
missing support for dates and geocoordinates (workaround: use SPARQL to
decompose these into several numbers). You might also find that more
data processing functions should be implemented (let us know). The web
app could benefit from richer result display and downloading options.
In the mid term, we plan to support more data formats, notably JSON, for
native import. We are also looking into programming features to
structure longer programs. However, we would also like to hear from you
to decide where to go next.
We have a detailed handbook [4] but more Wikidata-related materials and
tutorials might be desirable. Again, let us know what you think.
Nemo is a university-based OSS project and still a prototype, so bear
with us if you discover bugs. We will try to answer your queries asap,
and we also have a public user chatroom [5]. Thanks are due to all
contributors [6], and for v0.9.0 especially to Alex Ivliev, Lukas
Gerlach, and Maximilian Marx.
Cheers,
Markus
[1] https://knowsys.github.io/nemo-doc/
[2] https://github.com/knowsys/nemo
[3] However, if you use Nemo with data from SPARQL, then some data might
be sent to the SPARQL endpoint (your SPARQL query for a start, but
possibly also specific data values your program needs data for).
[4] https://knowsys.github.io/nemo-doc/
[5] https://gitter.im/nemo/community or simply #nemo_community:gitter.im
[6] https://github.com/knowsys/nemo/graphs/contributors
--
Prof. Dr. Markus Kroetzsch
Knowledge-Based Systems Group
Faculty of Computer Science
TU Dresden
+49 351 463 38486
https://kbs.inf.tu-dresden.de/
Hi everyone,
The next Wikidata + Wikibase Office Hour
<https://www.wikidata.org/wiki/Wikidata:Events#Office_hours> will be held
on Wednesday, October 15th, at 16:00 UTC (18:00 Berlin) on the Wikidata
Telegram channel <https://t.me/joinchat/IeCRo0j5Uag1qR4Tk8Ftsg>.
*What’s the Wikidata + Wikibase Office Hour?*
It’s your chance to:
✅ Hear about what the WMDE development team has been working on and our
plans for the year ahead
✅ Ask questions, discuss topics that matter to you, and share your thoughts
on all things Wikidata and Wikibase
✅ Connect with fellow community members and discover what everyone’s been
up to.
📅 Make sure to mark your calendars!
Cheers,
--
Mohammed S. Abdulai
*Community Communications Manager, Wikidata*
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0) 30 577 116 2466
https://wikimedia.de
Grab a spot in my calendar for a chat: cal.com/masssly.