I will be upgrading the cloud-vps openstack install on Thursday,
beginning around 16:00 UTC. Here's what to expect:
- Intermittent Horizon and API downtime (maybe an hour or two total)
- Inability to schedule new VMs (also for an hour or two)
Toolforge users will be unaffected by this outage. Existing, running
services and VMs on cloud-vps should also be unaffected.
In case you want to follow along at home, this is tracked as
https://phabricator.wikimedia.org/T356287
-Andrew + the WMCS team
_______________________________________________
Cloud-announce mailing list -- cloud-announce(a)lists.wikimedia.org
List information: https://lists.wikimedia.org/postorius/lists/cloud-announce.lists.wikimedia.…
Hello everyone,
The third edition of the Language & Internationalization newsletter (April
2024) is available at this link: <
https://www.mediawiki.org/wiki/Wikimedia_Language_engineering/Newsletter/20…
>.
This newsletter is compiled by the Wikimedia Language team. It provides
updates from the January–March 2024 quarter on new feature development,
improvements in various language-related technical projects and support
efforts, details about community meetings, and ideas for getting
involved in projects.
To stay updated, you can subscribe to the newsletter on its wiki page. If
you have any feedback or ideas for topics to feature in the newsletter,
please share them on the discussion page, accessible here: <
https://www.mediawiki.org/w/index.php?title=Talk:Wikimedia_Language_enginee…
>.
Cheers,
Srishti
On behalf of the WMF Language team
*Srishti Sethi*
Senior Developer Advocate
Wikimedia Foundation <https://wikimediafoundation.org/>
I've swapped the secondary dev.toolforge.org bastion to a new server
running Debian 12. As usual, the new SSH fingerprints have been
published on Wikitech[0].
The new bastion no longer has the full set of packages that were
previously installed for Grid Engine usage. If a package that you would
find useful is missing from the new bastion, please file a new
Phabricator task in the Toolforge project[1].
If no major issues are found, I will also swap the main
login.toolforge.org bastion to a new server in a few days. I'll send a
separate announcement when that happens.
[0]: https://wikitech.wikimedia.org/wiki/Help:SSH_Fingerprints/dev.toolforge.org
[1]: https://phabricator.wikimedia.org/tag/toolforge/
Taavi
--
Taavi Väänänen (he/him)
Site Reliability Engineer, Cloud Services
Wikimedia Foundation
(If you don't work with the pagelinks table, feel free to ignore this message)
Hello,
Here is an update and reminder on the previous announcement
<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists.wikimedia.org/…>
regarding normalization of links tables that was sent around a year ago.
As part of that work, the pl_namespace and pl_title columns of the
pagelinks table will soon be dropped, and you will need to use
pl_target_id joined against the linktarget table instead. This is
essentially identical to the templatelinks normalization that happened a
year ago.
Currently, MediaWiki writes new rows to both schemas of pagelinks on all
wikis except English Wikipedia and Wikimedia Commons (we will start
doing so on those two wikis next week). We have started to backfill the
data in the new schema, but it will take weeks to finish on large wikis.
So if you query this table directly, or your tools do, you will need to
update them accordingly. I will write a reminder before dropping the old
columns once the data has been fully backfilled.
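To illustrate the change, here is a sketch of how a typical lookup might be rewritten, mirroring the earlier templatelinks migration (the title 'Example' and namespace 0 are placeholders, not part of the announcement):

```sql
-- Old schema (pl_namespace / pl_title, soon to be dropped):
SELECT pl_from
FROM pagelinks
WHERE pl_namespace = 0
  AND pl_title = 'Example';

-- New schema: join pl_target_id against the linktarget table instead.
SELECT pl_from
FROM pagelinks
JOIN linktarget ON pl_target_id = lt_id
WHERE lt_namespace = 0
  AND lt_title = 'Example';
```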
You can keep track of the general long-term work in T300222
<https://phabricator.wikimedia.org/T300222> and the specific work for
pagelinks in T299947 <https://phabricator.wikimedia.org/T299947>. You can
also read more on the reasoning in T222224
<https://phabricator.wikimedia.org/T222224> or the previous announcement
<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists.wikimedia.org/…>
.
Thank you,
--
*Amir Sarabadani (he/him)*
Staff Database Architect
Wikimedia Foundation <https://wikimediafoundation.org/>
Is https://wikitech.wikimedia.org/wiki/Operating_system_upgrade_policy
accurate?
It shows support ending as follows:
Buster: September 2023
Bullseye: September 2025
I have the impression that VPS support for Buster is ending in May or June
of this year.
Also, if I look at an instance's OS in Horizon I see
debian-12.0-bookworm (deprecated 2024-04-10)
I'm not clear why this would be deprecated already.
Thanks for clarifying.
TL;DR: If you start to notice new or noisy puppet failures on your VMs,
please notify me directly or open a phab ticket and assign it to me
(Andrew).
==
What's happening:
Over the last few weeks I've been upgrading cloud-vps puppet servers to
newer builds that support the latest version of the puppet config
language, version 7. That's done in almost all cases; there are a few
project-local puppetmasters that I've been hesitant to touch directly,
so in those cases I've opened Phabricator tickets and assigned them to
the project admins. For clarity, I've been using 'puppetserver'
terminology for the new servers, whereas the older servers were
generally called 'puppetmasters.' [0]
Now that most servers are upgraded, it's time for me to flip the setting
that causes them to actually use the version 7 parser and compiler. In
almost all cases this will be backwards-compatible with the existing
catalogs but we may turn up a few edge cases that require repair.
What you need to do:
If you have one of those phab tickets about puppetservers open for your
project, please respond on the ticket so I know you're there and know
what your plan is.
All other users, please reach out to me if you start seeing new or
surprising puppet failures and I'll help sort out the transition.
-Andrew
[0] https://wikitech.wikimedia.org/wiki/Help:Project_puppetserver
Hi all!
This is to let you know that Toolforge continuous jobs now support
health checks!
To use them, pass `--health-check-script ./script.sh` when creating
your job. You can also provide the script as a string, like this:
`--health-check-script "cat /etc/os-release"`. Toolforge will
periodically attempt to execute your health-check script inside your
running job, and will restart the job if the script completes with an
exit code of 1.
Note: if you use a script file for the health check, do not forget to
make the file executable (chmod u+x script.sh). If Toolforge can't
execute your health-check script, your job will never start.
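As an illustration, a minimal health-check script might look like the sketch below. This is hypothetical, not an official example: the heartbeat-file convention is an assumption (the job itself would have to touch the file), and only the `--health-check-script` flag comes from the announcement above.

```shell
#!/bin/bash
# healthcheck.sh -- hypothetical sketch of a health-check script.
# Toolforge restarts the continuous job when this exits with code 1.

# Healthy only if the heartbeat file exists and was modified within the
# last 5 minutes; the job is assumed to touch this file while working.
check_heartbeat() {
    local heartbeat="${1:-heartbeat.txt}"
    [ -f "$heartbeat" ] && [ -n "$(find "$heartbeat" -mmin -5)" ]
}

check_heartbeat "$@"
```

Remember the chmod u+x step from the note above before passing the file to `--health-check-script ./healthcheck.sh`.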
Also a reminder that you can find this and smaller user-facing updates about
the Toolforge platform features here:
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Changelog
Original task: https://phabricator.wikimedia.org/T335592
--
Ndibe Raymond Olisaemeka
Software Engineer - Technical Engagement
Wikimedia Foundation <https://wikimediafoundation.org/>
Hi,
Toolforge's Harbor instance (image registry) will be briefly down for a
version upgrade from 2.9.0 to 2.10.1 tomorrow, Thursday 4 April, at 9:00 UTC.
https://phabricator.wikimedia.org/T354507
This should not affect any tools that are not using the new build service,
nor any tools that are already running.
https://wikitech.wikimedia.org/wiki/Help:Toolforge/Build_Service
If you are using the build service, you will not be able to run any new
builds, or to start a job or a webservice from an image built with the
build service, while Harbor is down. The outage is expected to last a
few minutes.
We will send an update before starting maintenance work, and once
everything is back up and running.
Cheers,
--
Slavina Stefanova (she/her)
Software Engineer | Developer Experience
Wikimedia Foundation
Hello!
In order to conserve resources and prevent botnet hijacking, cloud-vps
users have a few maintenance responsibilities. This spring, two of these
duties have come due: an easy one and a hard one. Tl;dr: visit
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge, claim
your projects, and replace any hosts still running Debian Buster.
-- #1: Claim your projects --
This one is easy. Please visit the following wiki page and make a small
edit in your project(s) section, indicating whether you are or aren't
still using your project:
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge
This serves a couple of purposes. It allows us to identify and shut down
abandoned or no-longer-useful projects, it provides us with some updated
info about who cares about a given project (often useful for future
contact purposes), and it increases visibility into projects that are
used but unmaintained.
Regarding that last item: if you know that you depend on a project but
are not an admin or member of that project, please make a note of that
on the above page as well!
-- #2: Replace Debian Buster --
This one may require some work. Long-term support for the Debian Buster
OS release is quickly running out (ending June 30), so VMs running
Buster need to be replaced with hosts running a newer Debian version.
You may or may not be responsible for Buster instances; you can see a
breakdown of remaining Buster hosts on either of these pages:
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge (you
should be visiting that page anyway, because of item 1)
https://os-deprecation.toolforge.org/
More details about this process can be found here:
https://wikitech.wikimedia.org/wiki/News/Buster_deprecation
Typically, in-place upgrades of VMs don't work all that well, so my
advice is to start fresh with a new server running Bookworm and migrate
workloads to the new host. I've found Cinder volumes to be a big help in
this process; once all of your persistent data and config is on a
detachable volume, it's fairly straightforward to move, and it will make
future upgrades that much easier.
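That volume workflow can be sketched roughly as follows (a hedged sketch, not official instructions: the device name /dev/sdb and mount point /srv/data are assumptions; check lsblk after attaching the volume in Horizon to find the actual device):

```shell
# On the old VM, after attaching a Cinder volume in Horizon:
sudo mkfs.ext4 /dev/sdb        # first use only -- this erases the volume
sudo mkdir -p /srv/data
sudo mount /dev/sdb /srv/data
# ... copy persistent data and config onto /srv/data ...
sudo umount /srv/data

# Detach the volume in Horizon, attach it to the new Bookworm VM, then:
sudo mkdir -p /srv/data
sudo mount /dev/sdb /srv/data
```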
WMCS staff will be standing by to assist with any quota changes you
might need for this move; you can open a quota request ticket at
https://phabricator.wikimedia.org/project/view/2880/ -- and, as always,
we'll do our best to support you on IRC and on the cloud mailing list.
Thank you for your support and attention!
-Andrew + the WMCS team
Quarry will move to k8s on Monday 2024-04-01. Part of this will involve
exporting and importing the database, as well as syncing the NFS. As a
result, queries run during the cutover window may be lost. As always,
don't rely on Quarry to save queries; keep any important queries local
to your system and copy them into Quarry.
Thank you
--
*Vivian Rook (They/Them)*
Site Reliability Engineer
Wikimedia Foundation <https://wikimediafoundation.org/>