(Note: This is only an early heads-up, to be prepared. Google Code-in
has NOT been announced yet, but last year, GCI mentors asked for more
time in advance to identify tasks to mentor. Here you are. :)
* Do you have small, self-contained bugs you'd like to see fixed?
* Does your documentation need specific improvements?
* Does your user interface have some smaller design issues?
* Does your Outreachy/Summer of Code project welcome small tweaks?
* Would you enjoy helping someone port your template to Lua?
* Does your gadget code use some deprecated API calls?
* Do you have tasks in mind that welcome some research?
Google Code-in (GCI) is an annual contest for 13-17 year old students.
GCI 2019 has not yet been announced but usually takes place from late
October to December. It is not only about coding: we also need tasks
covering design, documentation, outreach/research, and QA.
Read https://www.mediawiki.org/wiki/Google_Code-in/Mentors , add
your name to the mentors table, and start tagging tasks in Wikimedia
Phabricator by adding the #gci-2019 project tag.
We will need MANY mentors and MANY tasks, otherwise we cannot make it.
Last year, 199 students successfully worked on 765 tasks supported by
39 mentors. For some achievements from the last round, see
Note that "beginner tasks" (e.g. "Set up Vagrant") and generic
tasks (e.g. "Choose and replace 2 uses of Linker::link() from the
list in T223010") are very welcome.
We also have more than 400 unassigned open #good-first-bug tasks:
Could you mentor some of these tasks in your area?
Please take a moment to find/update [Phabricator etc.] tasks in your
project(s) which would take an experienced contributor 2-3 hours. Read
, ask if you have any questions, and add your name to
Thanks (as we will not be able to run this without your help),
Andre Klapper (he/him) | Bugwrangler / Developer Advocate
[Please forward to interested colleagues]
We are proud to announce that the DBpedia Databus website
<https://databus.dbpedia.org/> and the SPARQL API
(documentation: <http://dev.dbpedia.org/Download_Data>) are in public beta now.
The system is usable (eat-your-own-dog-food tested) following a “working
software over comprehensive documentation” approach. Due to its many
components (website, sparql endpoints, keycloak, mods, upload client,
download client, and data debugging), we estimate approximately six
months in beta to fix bugs, implement all features and improve the
details. If you have any feedback or questions, please use
<https://forum.dbpedia.org/>, the “report issues” button, or
The full document is available at:
We are looking forward to the feedback and discussion at the 14th
DBpedia Community Meeting at SEMANTiCS 2019 in Karlsruhe
on September 12th or online.
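The SPARQL API mentioned above should be reachable with any client that speaks the standard W3C SPARQL 1.1 Protocol. As a minimal sketch using only the Python standard library - the endpoint URL and query below are placeholder assumptions for illustration, not taken from this announcement or its documentation:

```python
# Build a SPARQL 1.1 Protocol request with the standard library only.
# The endpoint URL used below is a hypothetical placeholder; substitute
# the real endpoint from the linked documentation.
from urllib.parse import urlencode
from urllib.request import Request

def build_sparql_request(endpoint: str, query: str) -> Request:
    """Build a GET request per the SPARQL 1.1 Protocol: the query text
    goes in the URL-encoded ?query= parameter, and the Accept header
    asks for JSON result bindings."""
    url = endpoint + "?" + urlencode({"query": query})
    return Request(url, headers={"Accept": "application/sparql-results+json"})

# Hypothetical usage (no network call is made here):
req = build_sparql_request(
    "https://example.org/sparql",  # placeholder endpoint
    "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5",
)
```

Passing the resulting `Request` to `urllib.request.urlopen` would execute the query; the JSON body then follows the SPARQL Query Results JSON format.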
The DBpedia Databus is a platform for capturing the effort invested by
data consumers who needed better data quality (fitness for use) in
order to use the data, and for giving those improvements back to the
data source and other consumers. The DBpedia Databus enables anybody to
build automated DBpedia-style extraction, mapping, and testing for any
data they need. Databus incorporates features from DNS, Git, RSS, online
forums, and Maven to harness the collective work of data consumers.
Professional data consumers worldwide have already built stable
cleaning and refinement chains for all available datasets, but their
efforts are invisible and not reusable. Deep, cleaned data silos exist
beyond the reach of publishers and other consumers, trapped locally in
their pipelines.
*Data is not oil that flows out of inflexible pipelines*. Databus breaks
existing pipelines into individual components that together form a
decentralized but centrally coordinated data network, in which data can
flow back to previous components, to the original sources, or end up
being consumed by external components.
The Databus provides a platform for re-publishing these files with very
little effort (leaving file traffic as the only cost factor) while
offering the full benefits of built-in system features such as automated
publication, structured querying, automatic ingestion, as well as
pluggable automated analysis, data testing via continuous integration,
and automated application deployment *(software with data)*. The impact
is highly synergistic: just a few thousand professional consumers and
research projects can expose millions of cleaned datasets on par with
what has long existed in deep silos and pipelines.
1 Billion interconnected, quality-controlled Knowledge Graphs by 2025
As we are inverting the paradigm from a publisher-centric view to a data
consumer network, we will open the download valve to enable discovery of
and access to massive amounts of data that is cleaner than what the
original sources publish. The main DBpedia Knowledge Graph - cleaned
data from Wikipedia in all languages and Wikidata - alone has 600k file
downloads per year, complemented by downloads at over 20 chapters such
as the Spanish DBpedia <http://es.dbpedia.org/>, as well as over 8
million daily hits on the
main Virtuoso endpoint. Community extensions from the alpha phase, such
as <https://databus.dbpedia.org/propan/lhd/linked-hypernyms>, are being
loaded onto the bus and consolidated, and we expect their number to
reach over 100 by the end of the year. Companies and organisations who
have previously uploaded their backlinks
<https://github.com/dbpedia/links> will be able to migrate to the
Databus. Other datasets are being cleaned and posted. In two of our
research projects, LOD-GEOSS and PLASS <http://plass.io/>, we will
re-publish open datasets, clean them, and create collections, which
will result in DBpedia-style knowledge graphs for energy systems and
supply chains.
The *full document* is available at:
At this time it is not feasible to edit Wikidata. Performance is abysmal;
timeouts occur all the time. Other websites are quite alright, so it must
be Wikidata that has a problem.
The weekend is my prime time to edit Wikidata. PLEASE...
I really try not to spam the chat too much with pointers to my work on
Abstract Wikipedia, but this one is probably also interesting for Wikidata
contributors. It is the draft of a chapter submitted to Koerner and
Reagle's Wikipedia@20 book, and talks about knowledge diversity in the
light of centralisation through projects such as Wikidata.
The public commenting phase is open until July 19, and comments are very welcome:
"Collaborating on the sum of all knowledge across languages"
About the book: https://meta.wikimedia.org/wiki/Wikipedia@20
Link to chapter: https://wikipedia20.pubpub.org/pub/vyf7ksah
We're seeking volunteers with a wide variety of skills and backgrounds to join the Program Committee for the 2020 LD4 Conference: May 13th and 14th, 2020, in College Station, Texas, USA, at the Texas A&M Hotel and Conference Center <https://www.texasamhotelcc.com/> at Texas A&M University. The Linked Data for Production initiative <http://www.ld4p.org>, supported by the Andrew W. Mellon Foundation, is hosting this conference to bring together anyone passionate about the adoption of linked data in libraries. There were quite a few sessions on Wikidata during the 2019 conference, so we expect it will also be well-represented in 2020.
Our contemporary venue accommodates up to 200 people, with flexibility for multiple simultaneous activities and proximity to local cultural attractions. What kind of community meeting space would you like to create? How can this community gathering best advance the adoption of linked data in all kinds of libraries? Join the Program Committee and shape an event you are excited to participate in!
To serve on the Program Committee, you should plan to attend the conference in College Station, Texas on May 13th and 14th, 2020, and be available for bi-weekly teleconference calls beginning in September.
Fill in this short form <https://forms.gle/MfDNKiWLy5tBeRCAA> by September 6th to indicate your interest in joining the Program Committee. Committee members will be selected based on a diversity of skills, interests, and backgrounds and will be notified by September 13th.
For those not joining the Program Committee, keep an eye out for a call for proposals and other announcements about the 2020 LD4 Conference!
Hilary Thorsen (Linked Data for Production project) and Christine Fernsebner Eslao (Harvard University)
2020 LD4 Conference Co-Chairs
Wikimedian in Residence
Linked Data for Production Project
Digital Library Systems and Services
Stanford, CA 94305
I thank you for your efforts. I invite you to see my Wikimania report at https://meta.m.wikimedia.org/wiki/Wikimedia_France/Micro-financement/Wikima…. I am waiting for the video of my session, entitled "Wikidata and Health: Current situation and perspectives".
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Sorry for cross-posting!
Reminder: Technical Advice IRC meeting this week **Wednesday 3-4 pm UTC**
Questions can be asked in English!
The Technical Advice IRC Meeting (TAIM) is a weekly support event for
volunteer developers. Every Wednesday, two full-time developers are
available to help you with all your questions about MediaWiki, gadgets,
tools, and more! This can be anything from "how to get started" to "who
would be the best contact for X" to specific questions about your project.
If you know already what you would like to discuss or ask, please add your
topic to the next meeting:
Hope to see you there!
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30 219 158 26-0
Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment.
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognised as a non-profit by
the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.