Similar to the earlier removal of text fields from the wiki replicas during the MediaWiki comment storage refactor, we are going to remove the deprecated "user text" columns from the views, in preparation for their actual removal upstream in the MediaWiki schema. The column drops are tracked and explained at https://phabricator.wikimedia.org/T223406. Tables with names such as <tablename>_compat will not change in structure. The change is scheduled for Monday, May 27th.
The fields being dropped from the views are:
revision: rev_user and rev_user_text.
archive: ar_user and ar_user_text.
ipblocks: ipb_by and ipb_by_text.
image: img_user and img_user_text.
oldimage: oi_user and oi_user_text.
filearchive: fa_user and fa_user_text.
recentchanges: rc_user and rc_user_text.
logging: log_user and log_user_text.
Ideally, tools that connect to the replicas should gather this information from the corresponding entries in the actor table instead; as with the comment table change, the data is already there for you to start using. The alternative is to use the related <tablename>_compat table, which won't change in a user-visible way at this time.
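To illustrate the migration, here is a toy sketch of the join pattern using an in-memory SQLite database. The schemas are heavily simplified and the data is invented; column names follow the MediaWiki actor-migration schema, but on the actual replicas the join column and view definitions may differ, so check the current view definitions before adapting a real query.

```python
import sqlite3

# Toy in-memory stand-in for the wiki replicas; schemas are simplified
# and the rows are invented, purely to demonstrate the join pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actor (
    actor_id   INTEGER PRIMARY KEY,
    actor_user INTEGER,
    actor_name TEXT
);
CREATE TABLE revision (
    rev_id        INTEGER PRIMARY KEY,
    rev_actor     INTEGER,
    rev_timestamp TEXT
);
INSERT INTO actor    VALUES (7, 42, 'ExampleUser');
INSERT INTO revision VALUES (1001, 7, '20190527120000');
""")

# Old pattern (stops working once the columns are dropped):
#   SELECT rev_user, rev_user_text FROM revision WHERE rev_id = 1001;
# New pattern: join through rev_actor to the actor table instead.
row = conn.execute("""
    SELECT a.actor_user, a.actor_name
    FROM revision r
    JOIN actor a ON a.actor_id = r.rev_actor
    WHERE r.rev_id = 1001
""").fetchone()
print(row)  # (42, 'ExampleUser')
```

The same join shape applies to the other listed tables (e.g. ar_actor, log_actor), with the foreign-key column named per table.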
Brooke Storm
Operations Engineer
Wikimedia Cloud Services
bstorm@wikimedia.org
IRC: bstorm_
Hi!
On 2019-06-03 14:00 UTC+2 (next Monday) we will be rebuilding the
cloudservices1003 server, which holds the designate service that serves
DNS requests for CloudVPS and Toolforge.
We have a backup server, cloudservices1004, so we don't expect much
downtime. However, DNS queries are frequent, and some of them may fail
while we stabilize the DNS service.
Please reach out to the WMCS team if you need more details or have any doubts.
regards.
--
Arturo Borrero Gonzalez
Operations Engineer / Wikimedia Cloud Services
Wikimedia Foundation
As part of the effort to retire labstore1003 and move to more modern hardware with some redundancy, we will begin switching the mounts for /data/scratch to the new server on 2019-05-28. Please avoid using the /data/scratch NFS mount during the maintenance, which starts next Tuesday at 18:00 UTC and should last around an hour until it is announced over; the mount will be changing location and will be somewhat unstable in the meantime.
NFS changes have occasionally caused wider problems within Toolforge in the past, but this one should be fairly low-impact since it does not affect /data/project or /home.
Brooke Storm
Operations Engineer
Wikimedia Cloud Services
bstorm@wikimedia.org
IRC: bstorm_
Good news from the Wikimedia Hackathon in Prague! We now have some
newer language runtimes for Node.js and Python3 available for
Kubernetes webservices. These newer versions match the versions that
were added for grid engine webservices when we upgraded to Debian
Stretch.
These new versions are available in parallel with the older Node.js
6.11 and Python 3.4 versions. This will be the pattern used in the
future when we add newer language runtime versions, so that migrations
are a bit easier for existing users. The new type names are:
* node10
* python3.5
== Node.js 10 ==
$ webservice --backend=kubernetes node10 shell
Defaulting container name to interactive.
Use 'kubectl describe pod/interactive -n bd808-test' to see all of
the containers in this pod.
If you don't see a command prompt, try pressing enter.
$ nodejs --version
v10.4.0
$ npm --version
6.5.0
$ logout
Session ended, resume using 'kubectl attach interactive -c
interactive -i -t' command when the pod is running
Pod stopped. Session cannot be resumed.
== Python 3.5 ==
$ webservice --backend=kubernetes python3.5 shell
Defaulting container name to interactive.
Use 'kubectl describe pod/interactive -n bd808-test' to see all of
the containers in this pod.
If you don't see a command prompt, try pressing enter.
$ python3 --version
Python 3.5.3
$ logout
Session ended, resume using 'kubectl attach interactive -c
interactive -i -t' command when the pod is running
Pod stopped. Session cannot be resumed.
Bryan, on behalf of the Toolforge admin team
--
Bryan Davis Wikimedia Foundation <bd808@wikimedia.org>
[[m:User:BDavis_(WMF)]] Manager, Technical Engagement Boise, ID USA
irc: bd808 v:415.839.6885 x6855
Hi!
On 2019-05-16 13:00 UTC there will be a maintenance operation in one of the
Wikimedia Foundation datacenter racks that affects 2 of our servers running
virtual machines [0]. There is a risk that this maintenance operation will
result in power loss on those servers, affecting the virtual machines running
on them. However, there is no way to know for sure whether there will be any
outage at all.
If you are an admin of any of the VMs in the list and you want the VM
reallocated to other servers prior to the operation, please get in touch
with us as soon as possible. Remember that, right now, reallocating a VM to
another server means briefly shutting it down.
Here is a list of affected virtual machines:
cloudvirt1028.eqiad.wmnet:
af-puppetdb01.automation-framework.eqiad.wmflabs
bastion-eqiad1-02.bastion.eqiad.wmflabs
fridolin.catgraph.eqiad.wmflabs
cloud-puppetmaster-02.cloudinfra.eqiad.wmflabs
cloudstore-dev-01.cloudstore.eqiad.wmflabs
commtech-nsfw.commtech.eqiad.wmflabs
clm-test-01.community-labs-monitoring.eqiad.wmflabs
cyberbot-exec-iabot-01.cyberbot.eqiad.wmflabs
deployment-db05.deployment-prep.eqiad.wmflabs
deployment-memc05.deployment-prep.eqiad.wmflabs
deployment-sca01.deployment-prep.eqiad.wmflabs
deployment-pdfrender02.deployment-prep.eqiad.wmflabs
ign.ign2commons.eqiad.wmflabs
integration-slave-docker-1050.integration.eqiad.wmflabs
integration-castor03.integration.eqiad.wmflabs
api.openocr.eqiad.wmflabs
osmit-umap.osmit.eqiad.wmflabs
builder-envoy.packaging.eqiad.wmflabs
jmm-buster.puppet.eqiad.wmflabs
a11y.reading-web-staging.eqiad.wmflabs
adhoc-utils01.security-tools.eqiad.wmflabs
util-abogott-stretch.testlabs.eqiad.wmflabs
canary1028-01.testlabs.eqiad.wmflabs
stretch.thumbor.eqiad.wmflabs
tools-worker-1023.tools.eqiad.wmflabs
tools-proxy-04.tools.eqiad.wmflabs
tools-docker-builder-06.tools.eqiad.wmflabs
tools-sgewebgrid-generic-0904.tools.eqiad.wmflabs
tools-sgeexec-0942.tools.eqiad.wmflabs
tools-sgeexec-0941.tools.eqiad.wmflabs
tools-sgeexec-0940.tools.eqiad.wmflabs
tools-sgeexec-0939.tools.eqiad.wmflabs
tools-sgeexec-0937.tools.eqiad.wmflabs
tools-sgeexec-0929.tools.eqiad.wmflabs
tools-sgeexec-0921.tools.eqiad.wmflabs
tools-sgeexec-0920.tools.eqiad.wmflabs
tools-sgeexec-0911.tools.eqiad.wmflabs
tools-sgeexec-0909.tools.eqiad.wmflabs
toolsbeta-proxy-01.toolsbeta.eqiad.wmflabs
vconverter-instance.videowiki.eqiad.wmflabs
perfbot.webperf.eqiad.wmflabs
wdhqs-1.wikidata-history-query-service.eqiad.wmflabs
cloudvirt1014.eqiad.wmnet:
commonsarchive-prod.commonsarchive.eqiad.wmflabs
deployment-imagescaler03.deployment-prep.eqiad.wmflabs
dumps-5.dumps.eqiad.wmflabs
dumps-4.dumps.eqiad.wmflabs
incubator-mw.incubator.eqiad.wmflabs
webperformance.integration.eqiad.wmflabs
saucelabs-01.integration.eqiad.wmflabs
integration-puppetmaster01.integration.eqiad.wmflabs
maps-puppetmaster.maps.eqiad.wmflabs
maps-wma.maps.eqiad.wmflabs
mwoffliner3.mwoffliner.eqiad.wmflabs
mwoffliner1.mwoffliner.eqiad.wmflabs
phlogiston-5.phlogiston.eqiad.wmflabs
discovery-testing-01.shiny-r.eqiad.wmflabs
snuggle-enwiki-01.snuggle.eqiad.wmflabs
canary-1014-01.testlabs.eqiad.wmflabs
tools-sgeexec-0901.tools.eqiad.wmflabs
wdqs-test.wikidata-query.eqiad.wmflabs
Toolforge won't be affected by this operation.
You can read more details about the datacenter operation itself in phabricator [1].
Sorry for the short notice,
regards.
[0] Cloud Services: reallocate workload from rack B5-eqiad
https://phabricator.wikimedia.org/T223148
[1] Install new PDUs into b5-eqiad https://phabricator.wikimedia.org/T223126
--
Arturo Borrero Gonzalez
Operations Engineer / Wikimedia Cloud Services
Wikimedia Foundation
To move the maps project/home NFS and the scratch share off of the old labstore1003 machine and onto much faster, newer hardware, I'm going to begin rsyncing data across.
This is just to announce that the sync is starting soon and to encourage people to reach out on the #wikimedia-cloud channel if it is hitting performance hard, in particular on the maps servers or on the scratch share.
Brooke Storm
Operations Engineer
Wikimedia Cloud Services
bstorm@wikimedia.org
IRC: bstorm_