On Mon, Feb 11, 2019 at 11:45 AM Jean-Frédéric wrote:
I have long been managing some of my tools using non-interactive provisioning
scripts, historically with shell scripts, and increasingly moving towards Ansible playbooks.
Both methods boil down to:
* SSH onto bastion host
* `become tool`
* Execute steps: git pull, install dependencies, etc.
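The steps above could be sketched as a single non-interactive run. Everything concrete below (tool name, bastion hostname, repo path) is a placeholder, and `become <tool> <command>` accepting a one-shot command is an assumption about the wrapper:

```shell
#!/bin/sh
# Sketch of the steps above as one non-interactive run. The tool name,
# bastion hostname, and repo path are placeholders, and `become <tool> <cmd>`
# accepting a one-shot command is an assumption about the wrapper.
TOOL="mytool"
BASTION="login.tools.wmflabs.org"
STEPS='cd "$HOME/src" && git pull'
CMD="become $TOOL sh -c '$STEPS'"
# ssh would run the quoted command remotely; printed here rather than run:
printf 'ssh %s %s\n' "$BASTION" "$CMD"
```

An Ansible playbook effectively does the same thing: connect over SSH and run each step as a remote command.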
I have not always been able to fulfill my 'non-interactive' requirement.
For my projects which require Node dependencies, I did have to manually drop into a shell
(webservice --backend=kubernetes nodejs shell) in order to run npm.
(I also had a bit of a hard time trying to manage crontabs using Ansible, as the
`crontab` executable override seems to do all kinds of magic. I tinkered
until I reached a "looks like it works!" point ^_^)
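One non-interactive approach to crontab management, whether from a playbook or a script, is to render the whole crontab to a file and install it with `crontab <file>`, which replaces the current crontab without invoking an editor. The entry below is a placeholder, and the install line is left commented here since it rewrites the live crontab:

```shell
# Render the desired crontab to a temp file, then install it wholesale.
# Toolforge's crontab wrapper may rewrite entries, so check `crontab -l` after.
CRONFILE=$(mktemp)
printf '%s\n' '0 * * * * $HOME/bin/update.sh' > "$CRONFILE"
# crontab "$CRONFILE"   # replaces the tool account's crontab (run as the tool)
```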
As I was reading through the Trusty migration docs, it is somewhat hinted that
virtualenvs (for Python dependencies) should also be created in an interactive container
shell, and not from the bastion host.
Can someone help me with the following questions?
* Is it appropriate to create Python virtual envs from the bastion host?
"It depends." A python virtual environment should be built with the
same Python version and supporting libraries that it will be used
with. Currently our bastions have a runtime that matches the grid
engine. This runtime does not match the Docker containers that are
used on the Kubernetes cluster. If you are building/updating a venv
that will be used by a container on the Kubernetes cluster, you should
use `webservice shell` to get an environment that matches the expected runtime.
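A sketch of that flow, with paths following the common Toolforge layout (an assumption here): first get a container shell from the bastion, then build the venv inside it so it links against the container's Python:

```shell
# From the bastion, get a shell inside a container matching the web runtime:
#   webservice --backend=kubernetes python shell
# Then, inside that shell, create the venv and install dependencies.
VENV="$HOME/www/python/venv"                   # placeholder path
REQ="$HOME/www/python/src/requirements.txt"    # placeholder path
python3 -m venv "$VENV"
if [ -f "$REQ" ]; then
    "$VENV/bin/pip" install -r "$REQ"
fi
```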
* Is there a recommended way to execute commands inside a
Kubernetes container remotely / in a non-interactive fashion (e.g. using a tool like …)?
There is an open Phabricator task about finding a way to do this
(<https://phabricator.wikimedia.org/T169695>), but there has not been
much activity there. I think this core feature is possible when
using `kubectl` directly, so in theory we just need to figure out how
to adapt `webservice shell` to make it easier for the typical
Toolforge maintainer to use.
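For reference, the `kubectl`-direct version might look roughly like this. The label selector and remote command are placeholders, and the commands are printed rather than executed here since they need cluster access:

```shell
# Build the two commands a deploy script would run against the cluster:
# look up the tool's pod by label, then exec a one-shot command in it.
SELECTOR="name=mytool"                 # placeholder label
REMOTE='cd $HOME/src && git pull'      # placeholder remote steps
CMDS=$(cat <<EOF
POD=\$(kubectl get pods -l $SELECTOR -o jsonpath='{.items[0].metadata.name}')
kubectl exec "\$POD" -- sh -c '$REMOTE'
EOF
)
printf '%s\n' "$CMDS"
```

`kubectl exec POD -- COMMAND` runs a single command in the container and exits, which is the non-interactive counterpart of what `webservice shell` does interactively.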
* In general, am I doing something fundamentally at
odds with the Toolforge environment with such configuration management?
I don't think you are fundamentally at odds with what we would like to
support. It is however probably a bit ahead of what our tooling easily
supports. Unfortunately we have not been able to put as much work into
our tooling and processes for the Kubernetes cluster as we would like
in the last year or two. I don't want to make grand promises, but I
think this will be changing in the coming months. We do have a
quarterly goal for the current quarter (January-March 2019) to upgrade
the core Kubernetes software we are using. Hopefully we will be able
to continue to build on this with additional changes to support our
Kubernetes users and remove some of the arbitrary barriers that
Toolforge maintainers are currently struggling with.
Bryan Davis
Wikimedia Foundation <bd808(a)wikimedia.org>
Manager, Technical Engagement
[[m:User:BDavis_(WMF)]]
Boise, ID USA
irc: bd808 v:415.839.6885 x6855