We invite all registered users to vote on the 2021 Community Wishlist
Survey[1]. You can vote until 21 December for as many different wishes as
you want.
The Survey collects wishes for new and improved tools for experienced
editors. After the voting, we will do our best to grant your wishes,
starting with the most popular ones.
We, the Community Tech team[2], are one of the Wikimedia Foundation[3]
teams. We create and improve editing and wiki moderation tools. What we
work on is decided based on the results of the Community Wishlist
Survey. Once a year, you can submit wishes. After two weeks, you can
vote on the ones that you're most interested in. Next, we choose wishes
from the survey to work on. Some of the wishes may be granted by
volunteer developers or other teams.
We are waiting for your votes. Thank you!
[1]
https://meta.wikimedia.org/wiki/Special:MyLanguage/Community_Wishlist_Surve…
[2] https://meta.wikimedia.org/wiki/Special:MyLanguage/Community_Tech
[3] https://meta.wikimedia.org/wiki/Special:MyLanguage/Wikimedia_Foundation
Kind regards,
Szymon Grabarczuk (he/him)
Community Relations Specialist
Wikimedia Foundation <https://wikimediafoundation.org/>
Datavalues with complex structure (time, quantity, globecoordinate) have properties (e.g. latitude, longitude, precision, amount, unit, etc.) that link from a node that is the value of a statement, reference, or qualifier. In my question, I'm going to refer to those nodes as "value nodes".
When performing a SPARQL query at the WD Query Service (example: https://w.wiki/ptp), these value nodes are identified by an IRI such as wdv:742521f02b14bf1a6cbf7d4bc599eb77 (http://www.wikidata.org/value/742521f02b14bf1a6cbf7d4bc599eb77). The local name part of this IRI seems to be a hash of something. However, when I compare it with the hash values in the snak JSON returned from the API for the same value node (see https://gist.github.com/baskaufs/8c86bc5ceaae19e31fde88a2880cf0e9 for the example), the hash associated with the value node (35976d7cb070b06a2dec1482aaca2982df3fedd4 in this case) bears no relationship to the local name part of the IRI for that value node.
This situation differs from that of identifiers for references, whose IRIs (wdref:8eb6208639efa82b5e7e4c709b7d18cbfca67411 = http://www.wikidata.org/reference/8eb6208639efa82b5e7e4c709b7d18cbfca67411 in this example) are clearly formed from the hash associated with the reference in the snak JSON returned from the API (8eb6208639efa82b5e7e4c709b7d18cbfca67411).
I am using the JSON returned by the API after writing new data so that I can track those data later via the Query Service. So I would like to know whether there is a way that the value node IRIs can be determined from information in the JSON returned from the API. This is easy to do for reference and statement IRIs, but it is not obvious for value nodes.
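To make the reference case concrete, here is a minimal sketch (Node.js
18+ with its built-in fetch; Q42 is just an illustrative item) of how
the wdref: IRIs can be assembled directly from the API JSON. It is this
step that I cannot reproduce for value nodes:

    // Build wdref: IRIs from the reference hashes in wbgetclaims JSON.
    const url = 'https://www.wikidata.org/w/api.php' +
        '?action=wbgetclaims&entity=Q42&format=json';

    fetch( url )
        .then( ( resp ) => resp.json() )
        .then( ( data ) => {
            for ( const claims of Object.values( data.claims ) ) {
                for ( const claim of claims ) {
                    for ( const ref of claim.references || [] ) {
                        // The hash in the JSON is usable as-is as the
                        // local name of the reference IRI:
                        console.log(
                            'http://www.wikidata.org/reference/' +
                            ref.hash );
                    }
                }
            }
        } );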
Thanks!
Steve Baskauf
--
Steven J. Baskauf, Ph.D.
Data Science and Data Curation Specialist
Jean & Alexander Heard Libraries, Vanderbilt University
Nashville, TN 37235, USA
Office: Eskind Biomedical Library, EMB 111
Phone: (615) 343-4582
https://my.vanderbilt.edu/baskauf/
https://www.mediawiki.org/wiki/Scrum_of_scrums/2020-12-09
= 2020-12-09 =
== Callouts ==
* If you know anyone who might have something web performance related to
talk about (including outside of the Foundation), please point them to our
FOSDEM devroom CfP. Deadline is Dec 16:
https://github.com/wikimedia/fosdem21-web-performance-cfp Thanks!
* RelEng: 1.36.0-wmf.22 is the last train of the year. Next week is the
last deployment week of the year.
*No updates:* Tech Comm, Anti-Harassment Tools, Editing, Product
Infrastructure, Parsing, Inuka, UI Standardization, OKAPI, Analytics, Cloud
Services, Quality and Test Engineering, Machine Learning Platform,
Research, Search Platform, Security, SRE, Wikidata, German Technical
Wishlist
== SoS Meeting Bookkeeping ==
* Updates:
* asynchronous retrospective on the value of this experiment.
* adding new section to notes template for cross-cutting work
== Product ==
=== Community Tech ===
* Blocked by:
* Blocking:
* Updates:
** We've now moved to the voting phase of the Community Wishlist Survey.
We've accepted 270 proposals and at the time of writing there are 2,571
supporting votes (
https://meta.wikimedia.org/wiki/Community_Wishlist_Survey_2021/Tracking )
=== Growth ===
* Blocked by:
* Blocking:
* Updates:
** Working on various bits and pieces (frontend, backend, ops) of
https://wikitech.wikimedia.org/wiki/Add_Link
=== iOS native app ===
* Blocked by:
* Blocking:
* Updates:
** Recent release even more stable than last.
** Working on language variants and other bug fixes.
=== Android native app ===
* Blocked by:
* Blocking:
* Updates:
** Working on the watchlist feature, which was scoped to be a
well-sized project while the team is between Product Managers.
=== Web ===
* Blocked by:
* Blocking:
* Updates:
** WVUI-Vector integration
*** Preparing for Security Readiness Review:
https://phabricator.wikimedia.org/T257579
*** Product metrics and performance instrumentation
*** Client-side error logging: https://phabricator.wikimedia.org/T249826
** Designs for language switching re-design(s) in Desktop Improvements
Program
*** https://phabricator.wikimedia.org/T268514
=== Structured Data ===
* Blocked by:
* Blocking:
* Updates:
** Working on Commons Special:MediaSearch
** Released a tool to assess quality of media search image results:
https://media-search-signal-test.toolforge.org/ (warning: you may see NSFW
images)
=== Abstract Wikipedia ===
* Updates:
** Continuing work on using ZType data to enforce structure when editing
ZObjects.
** Helping our Outreachy interns get started doing data analysis of
template/module usage.
** Great modelling conversations with SRE Service Ops and Architecture;
thank you.
=== Language ===
* Blocked by:
* Blocking:
* Updates:
** Apertium has now been migrated to the `deployment-pipeline` and is
available as a service. Thanks to Alexandros Kosiaris (SRE) for helping
with the process!
=== Library ===
* Blocked by:
* Blocking:
* Updates:
** Wrapping up work on Wikilinks (we hope to get a PR merged this week)
== Technology ==
=== Fundraising Tech ===
* Blocked by:
* Blocking:
* Updates:
** Readying a CentralNotice feature that lets logged-in users filter
out banner types in their user preferences; we hope to deploy it very
soon after the current fundraiser ends. We will be asking the core team
for feedback on the user preference UI change.
https://phabricator.wikimedia.org/T268646,
https://gerrit.wikimedia.org/r/604279
** fixing some session timeout bugs in the synchronization of data from our
bulk mail sender to CiviCRM
** form tweaks to help when we switch over from raising money for the
annual fund to raising money for the endowment
** More work on dockerized dev environment:
https://phabricator.wikimedia.org/T262975
=== Platform ===
* Blocked by:
* Blocking:
* Updates:
** API Portal bug unblocking (next week soft launch)
** ParserCache work (some bugs occurring from use of ParserOutput)
** Shellbox (MediaWiki on Kubernetes)
** Sockpuppet Detection API
** Task recommendations API
=== Engineering Productivity ===
==== Performance ====
* Blocked by:
* Blocking:
* Updates:
** Blog post:
https://calendar.perfplanet.com/2020/human-performance-metrics/
** Another one from Timo about Excimer will be published there in a few
days.
==== Release Engineering ====
* Blocked by:
* Blocking:
* Updates:
** Thanks to Andrew and Arturo for their help with nested VM support on WMCS
instances
** Deployments
*** Last week: 1.36.0-wmf.20 [[phab:T263186]] <!--
https://phabricator.wikimedia.org/T263186 -->
*** This week: 1.36.0-wmf.21 [[phab:T263187]] <!--
https://phabricator.wikimedia.org/T263187 -->
*** Next week: 1.36.0-wmf.22 [[phab:T263188]] <!--
https://phabricator.wikimedia.org/T263188 -->
*** Rest of the year: https://wikitech.wikimedia.org/wiki/Deployments
(nothing!)
Hi All,
Every year we stop deployments for the last full week of the year.
As we head into those final weeks, I wanted to send out a reminder that
next week is the final deployment week and that wmf/1.36.0-wmf.22 will
be the last train release of the year.
The deployment calendar on Wikitech
<https://wikitech.wikimedia.org/wiki/Deployments> is up-to-date and is the
canonical source for the deployment schedule.
Thank you!
-- Tyler
Forwarding this from upstream kubernetes mailing list.
The TL;DR is that with the release due in September 2021 (assuming
that happens as planned), Docker will no longer be usable as a
Container Runtime Engine for vanilla Kubernetes. And that's it. All
other uses of Docker remain unchanged.
Given the support cycle of 12 months after a release is out, that
gives us a bit less than 2 years to evaluate the available
replacements, settle on one, and draft and implement a migration plan.
It's a pretty early warning, which is nice.
The above sounds more complicated than it will probably prove to be,
for what it's worth (although the devil is always in the details). As
far as running services in our Wikimedia production Kubernetes clusters
goes, we purposely never invested in Docker-specific
features/customizations, choosing to treat Docker as a replaceable part
of the infrastructure, which should make this easier than initially
thought.
I've created a task for tracking: https://phabricator.wikimedia.org/T269684
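As a trivial first step in that evaluation, we can check which runtime
each node currently reports. A minimal sketch, assuming the official
@kubernetes/client-node JavaScript client and kubeconfig access (purely
illustrative, not part of any plan):

    // Print each node's reported container runtime, e.g.
    // 'docker://19.3.x' vs 'containerd://1.4.x'.
    // Requires: npm install @kubernetes/client-node
    const k8s = require( '@kubernetes/client-node' );

    const kc = new k8s.KubeConfig();
    kc.loadFromDefault(); // ~/.kube/config, or in-cluster config

    const api = kc.makeApiClient( k8s.CoreV1Api );
    api.listNode().then( ( res ) => {
        for ( const node of res.body.items ) {
            console.log( node.metadata.name + ': ' +
                node.status.nodeInfo.containerRuntimeVersion );
        }
    } ).catch( console.error );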
---------- Forwarded message ---------
From: Davanum Srinivas <davanum(a)gmail.com>
Date: Sun, 6 Dec 2020, 05:53
Subject: Kubelet / Docker / dockershim
To: Kubernetes developer/contributor discussion
<kubernetes-dev(a)googlegroups.com>,
<kubernetes-sig-node(a)googlegroups.com>
Folks,
If you haven't seen the discussions around $SUBJECT, please see [1]
and [2]. Tl;dr Please evaluate and switch to CRI implementations that
are or will be available in the community (like containerd, cri-o
etc).
For those who want to continue to use Docker as their runtime, please
see [3] and [4]. There will be changes to how you deploy/run your
clusters as and when the Mirantis/Docker folks come up with a migration
plan for a separate (new!) external CRI implementation. So watch that
space.
If you have issues or concerns, we can chat in the sig-node Slack
channel or meetings (or drop a reply to this note).
Thanks,
Dims
[1] https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/
[2] https://kubernetes.io/blog/2020/12/02/dockershim-faq/
[3] https://twitter.com/justincormack/status/1334976974083780609
[4] https://github.com/Mirantis/cri-dockerd
--
Davanum Srinivas :: https://twitter.com/dims
--
Alexandros Kosiaris
Principal Site Reliability Engineer
Wikimedia Foundation
Hi,
we released OOUI v0.41.0 last Thursday.
It will roll out on the normal train tomorrow, Tuesday, 08 December.
Highlights in this release since v0.40.0:
- Accessibility enhancements on PopupWidget (keyboard tabbing order)
and ToggleSwitchWidget.
Thanks to volunteer contributor Edwin Tam.
- Icon optimization, resulting in less data sent to our users for
several often-used icons.
Thanks to Thiemo Kreuz for the contributions here.
- Additional 'volume*' & 'network', 'networkOff' icons.
Thanks Matthew Williams and Sudhanshu Gautam for the design work.
- It also contains a deprecating change: passing a string to
`OO.ui.infuse()` is deprecated; use an HTMLElement or jQuery collection
instead.
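For example, migrating is typically a one-line change ('my-field' is a
hypothetical element ID):

    // Deprecated: passing the element's id as a string:
    // var widget = OO.ui.infuse( 'my-field' );

    // Pass the HTMLElement (or a jQuery collection) instead:
    var widget = OO.ui.infuse( document.getElementById( 'my-field' ) );
    // ...or equivalently: OO.ui.infuse( $( '#my-field' ) );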
You can find details on additional new features, code-level, styling
and interaction design amendments, and all improvements since v0.38.0
in the full changelog[0].
If you have any further queries or need help dealing with deprecating
changes, please let me know.
As always, interactive demos[1] and library documentation are
available: general documentation lives on mediawiki.org[2], and
comprehensive generated code-level documentation, interactive demos,
and tutorials are hosted on doc.wikimedia.org[3].
OOUI version: 0.41.0
MediaWiki version: 1.36.0-wmf.21
Date of deployment to production: Regular train, starting Tuesday 08 December
[0] - https://gerrit.wikimedia.org/g/oojs/ui/+/v0.41.0/History.md
[1] - https://doc.wikimedia.org/oojs-ui/master/demos/#widgets-mediawiki-vector-ltr
[2] - https://www.mediawiki.org/wiki/OOUI
[3] - https://doc.wikimedia.org/oojs-ui/master/
Best,
Volker
Apologies for cross-posting
Dear all,
We are proud to announce DBpedia Archivo (https://archivo.dbpedia.org),
an augmented ontology archive and interface to implement FAIRer
ontologies. Each ontology is rated with up to 4 stars measuring basic
FAIR features. We discovered 890 ontologies, reaching on average 1.95
out of 4 stars. Many of them have no or unclear licenses, or have
issues with retrieval and parsing.
# Community action on individual ontologies
We would like to call on all ontology maintainers and consumers to help
us increase the average star rating of the web of ontologies by fixing
and improving its ontologies. You can easily check an ontology at
https://archivo.dbpedia.org/info. If you are an ontology maintainer just
release a patched version - archivo will automatically pick it up 8
hours later. If you are a user of an ontology and want your consumed
data to become FAIRer, please inform the ontology maintainer about the
issues found with Archivo.
The star rating is very basic and only requires fixing small things.
However, the impact on technical and legal usability can be immense.
# Community action on all ontologies (quality, FAIRness, conformity)
Archivo is extensible and allows contributions to give consumers a
central place to encode their requirements. We envision fostering
adherence to standards and strengthening incentives for publishers to
build a better (FAIRer) web of ontologies.
1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D.
Kontokostas) enables easy testing of ontologies. Archivo offers free
SHACL continuous integration testing for ontologies. Anyone can
implement their SHACL tests and add them to the SHACL library on
GitHub; a small example shape follows after this list. We believe that
there are many synergies, i.e. SHACL tests for your ontology are
helpful for others as well.
2. We are looking for ontology experts to join DBpedia and discuss
further validation (e.g. stars) to increase the FAIRness and quality of
ontologies. We are forming a steering committee and also a PC for the
upcoming Vocarnival at SEMANTiCS 2021. Please message
hellmann(a)informatik.uni-leipzig.de if you would like to join. We
would like to extend the Archivo platform with relevant visualisations,
tests, editing aides, mapping management tools and quality checks.
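To give a flavour of such a test, here is a minimal example shape in
SHACL/Turtle (the ex: prefix and the specific constraint are
illustrative) of the kind that could be added to the library; it
requires every owl:Class in an ontology to carry an rdfs:label:

    @prefix sh:   <http://www.w3.org/ns/shacl#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/shapes#> .

    # Illustrative test: every class should have at least one label.
    ex:ClassLabelShape
        a sh:NodeShape ;
        sh:targetClass owl:Class ;
        sh:property [
            sh:path rdfs:label ;
            sh:minCount 1 ;
        ] .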
# How does Archivo work?
Each week Archivo runs several discovery algorithms to scan for new
ontologies. Once discovered, Archivo checks them every 8 hours. When
changes are detected, Archivo downloads, rates, and archives the latest
snapshot persistently on the DBpedia Databus.
# Archivo's mission
Archivo's mission is to improve the FAIRness (findability,
accessibility, interoperability, and reusability) of all available
ontologies on the Semantic Web. Archivo is not a guideline; it is fully
automated and machine-readable, and it enforces interoperability with
its star rating.
- Ontology developers can implement against Archivo until they reach
more stars. The stars and tests are designed to guarantee the
interoperability and fitness of the ontology.
- Ontology users can better find, access and re-use ontologies.
Snapshots are persisted in case the original is not reachable anymore,
adding a layer of reliability to the decentralized web of ontologies.
Let’s all join together to make the web of ontologies more reliable and
stable,
Johannes Frey, Denis Streitmatter, Fabian Götz, Sebastian Hellmann and
Natanael Arndt
Paper: https://svn.aksw.org/papers/2020/semantics_archivo/public.pdf