On Thu, Sep 5, 2024 at 8:15 PM Physikerwelt <wiki(a)physikerwelt.de> wrote:
>
> Dear Luca,
>
> The communication was good, I think.
>
> However, I don't understand the decision process. Who is the
> responsible person, e.g., the product manager, who eventually decided
> to cut WDQS into pieces?
>
> Moritz
The decision was made jointly by the Search Team at the Wikimedia
Foundation and the people in charge of Wikidata at Wikimedia
Deutschland. In particular, Lydia Pintscher is ultimately responsible
for all Wikidata product decisions at WMDE.
I would like to stress that this decision was not taken lightly.
We have known for three years that Blazegraph is on the verge of
failing; we wrote a playbook in case of dramatic failure,[1] and we
evaluated several alternatives to Blazegraph,[2] each with its pros
and cons.
While evaluating which way to go, we knew that no solution would be a
"magic wand" that would magically solve all our problems - in fact, no
"magic solution" exists, and each option comes with its own problems and
costs. We came to terms with the fact that we needed more time, and
the split was a harsh but ultimately effective way to buy that time
during the transition to the next backend.
Hope this helps.
L.
[1] https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/WDQS_backend_up…
[2] https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/WDQS_backend_up…
Hello, we would like to speak with AI or ML developers, data scientists,
engineers, or anyone with experience with embeddings, LLMs, knowledge
graphs, or any kind of semantic or meaning-based search.
We are arranging *30-minute interviews* to better understand:
- What AI/ML developers need from a meaning-based search for Wikidata
- Which features and filters matter most to you
- How you would use the results in your AI or data workflows
*Interested?* Then please book a chat with us:
https://greatquestion.co/wikimediadeutschland/sa00xu39
After you book, someone will be in touch shortly.
No interview preparation is required, just your experience with AI, ML, or
data workflows. Prior knowledge of Wikidata or vector search is helpful
but not necessary.
("What's a Wikidata Vector Database?" I hear you say...)
Play with it here: Wikidata Search <https://wd-vectordb.wmcloud.org/>
Read about it here: Wikidata:Embedding Project
<https://www.wikidata.org/wiki/Wikidata:Embedding_Project>
Thank you!
--
*Danny Benjafield*
Community Communications Manager
Wikidata For Wikimedia Projects
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30-577 11 62-0
https://wikimedia.de
Keep up to date! Current news and exciting stories about Wikimedia,
Wikipedia and Free Knowledge in our newsletter (in German): Subscribe now.
<https://www.wikimedia.de/newsletter/>
Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us to achieve our vision!
https://spenden.wikimedia.de
Wikimedia Deutschland — Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations at Amtsgericht Charlottenburg, VR 23855 B.
Recognized as charitable by the Finanzamt für Körperschaften I Berlin,
tax number 27/029/42207. Executive Directors: Franziska Heine
Dear community members,
We would like to inform you that the *Revitalising UK History 2:
Multilingual Expansion
<https://meta.wikimedia.org/wiki/Event:Revitalizing_UK_History_2>* online
event has been *rescheduled* due to ongoing technical issues with the
Programs & Events Dashboard <https://outreachdashboard.wmflabs.org/>.
*New Date & Time:*
*Saturday, 17 January 2026*
4:00–6:00 PM (Nigeria, Africa/Lagos time)
3:00–5:00 PM (UK time)
This upcoming session will focus on improving the multilingual coverage of
underrepresented UK historical figures on Wikidata. We will be working on
enhancing descriptions in languages such as Welsh, Scots, Cornish, Gaelic,
Manx Gaelic, Igbo, Hausa, Yoruba, French, Spanish, and more.
You can find the updated event page and registration details here:
https://meta.wikimedia.org/wiki/Event:Revitalizing_UK_History_2
Thank you for your understanding, and we look forward to having you join us
for this important editing and learning session.
Warm regards,
*Josef Anthony*
Event Coordinator
Idoko Joseph (JosefAnthony)
Wikimedia Volunteer | Open Knowledge Advocate
*View my contributions
<https://meta.wikimedia.org/wiki/User:JosefAnthony/Contributions>*
snapquery (https://github.com/WolfgangFahl/snapquery) provides "FAIR
Management of Query Sets to Mitigate Query Rot in Knowledge Graphs".
snapquery hides the technical details of queries, so you can just run
snapquery cats --limit 3
[
{
"item": "http://www.wikidata.org/entity/Q378619",
"itemLabel": "Q378619"
},
{
"item": "http://www.wikidata.org/entity/Q498787",
"itemLabel": "Muezza"
},
{
"item": "http://www.wikidata.org/entity/Q677525",
"itemLabel": "Orangey"
}
]
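The JSON above is easy to post-process. A minimal Python sketch, assuming
only the output shape shown, that maps each Q-identifier to its label
(note the first item's label falls back to its Q-id, likely because the
item has no English label):

```python
import json

# Sample rows as printed by `snapquery cats --limit 3` above
raw = """[
  {"item": "http://www.wikidata.org/entity/Q378619", "itemLabel": "Q378619"},
  {"item": "http://www.wikidata.org/entity/Q498787", "itemLabel": "Muezza"},
  {"item": "http://www.wikidata.org/entity/Q677525", "itemLabel": "Orangey"}
]"""

def qid(item_uri: str) -> str:
    """Extract the Q-identifier from a Wikidata entity URI."""
    return item_uri.rsplit("/", 1)[-1]

records = json.loads(raw)
labels = {qid(r["item"]): r["itemLabel"] for r in records}
print(labels)  # {'Q378619': 'Q378619', 'Q498787': 'Muezza', 'Q677525': 'Orangey'}
```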
https://github.com/WolfgangFahl/snapquery/issues/56 calls for
implementing tiny subsets of Wikidata, such as:
- countries and capitals
- timezones
- airports
- languages
- units
as outlined in https://phabricator.wikimedia.org/T329368.
With snapquery, each of these subsets just needs a proper name and query
definition, and you can run the query to get the list and cache it
locally as YAML, JSON, CSV, etc.
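A minimal Python sketch of the name-plus-cache idea; the query names,
SPARQL text, cache directory, and function names below are illustrative
assumptions, not snapquery's actual API:

```python
import json
from pathlib import Path

# Illustrative named queries for two of the proposed tiny subsets.
# The SPARQL text is an assumption, not snapquery's actual definition.
NAMED_QUERIES = {
    "countries_and_capitals": (
        "SELECT ?country ?capital WHERE { "
        "?country wdt:P31 wd:Q6256 ; wdt:P36 ?capital . }"
    ),
    "timezones": "SELECT ?tz WHERE { ?tz wdt:P31 wd:Q12143 . }",
}

def cache_result(name: str, rows: list,
                 cache_dir: Path = Path(".snapquery_cache")) -> Path:
    """Write query result rows to a local JSON cache file keyed by query name."""
    cache_dir.mkdir(exist_ok=True)
    path = cache_dir / f"{name}.json"
    path.write_text(json.dumps(rows, indent=2))
    return path

# Usage with rows obtained elsewhere (e.g. from a WDQS run of the named query):
path = cache_result("timezones", [{"tz": "http://www.wikidata.org/entity/Q6655"}])
print(path.name)  # timezones.json
```

The same cached rows could then be re-serialized as YAML or CSV without
re-running the query against the live endpoint.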
Creating the envisioned GitHub project should be much easier these days,
and with combined community and AI support, we should quickly be able to
pull this off.
If you'd like to participate, please comment on the Phabricator task.
Wolfgang and Tim
--
BITPlan - smart solutions
Wolfgang Fahl
Web:http://www.bitplan.de
BITPlan GmbH, Willich - HRB 6820 Krefeld, Steuer-Nr.: 10258040548, Geschäftsführer: Wolfgang Fahl