Between steps 1 and 2, did you insert “webservice stop”?  If not, try that!  :-)

Sent from my iPhone

On Jan 11, 2020, at 5:08 PM, Maciej Jaros <> wrote:


I tried the migration path described here:

That doesn't seem to be working for me (or at least not for my dna tool).

Some problems:
  1. `webservice status` on grid engine doesn't show the PHP version; it shows "Your webservice of type lighttpd is running".
  2. When I do `webservice --backend=kubernetes php7.3 start`:
    1. nothing is shown in my error.log
    2. and the main page of the dna tool returns a 503.
  3. I also tried with a default setup:
    1. `echo -e "[Default]\n--backend=kubernetes" > $HOME/.webservicerc`
    2. `webservice start` -> not working 😞 (it starts, but dna returns a 503)
  4. Setting a default PHP version seems not to be allowed:
    1. This does not work: `echo -e "[Default]\n--backend=kubernetes php7.3" > $HOME/.webservicerc`
    2. `webservice start` then shows errors.
    3. It would be nice to be able to set the PHP version somewhere so that I can just do `webservice start/stop`.
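For what it's worth, here is a sketch of the config attempt that seems closest to working. My guess (an assumption on my part, not confirmed by any docs) is that `.webservicerc` only accepts option flags like `--backend`, while the type (`php7.3`) is a positional argument and has to stay on the command line:

```shell
# Assumption (not verified): .webservicerc takes option flags only,
# so --backend can go in the file but the positional type cannot.
printf '[Default]\n--backend=kubernetes\n' > "$HOME/.webservicerc"
cat "$HOME/.webservicerc"

# The PHP version would then still be passed explicitly each time:
#   webservice php7.3 start
#   webservice php7.3 stop
```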

Also, I'm not sure what all that `kubectl config` stuff and the kubectl alias are supposed to do. I assume it is obvious to someone who uses kubectl, but I just don't know that tool and have never used this kind of container system before. I guess I'm not the only one 😉
I did do the context switch and alias thing before starting the webservice, like a nice user 🙂. It's just that I don't know if it is even required, and I also don't know whether the webservice needs to be stopped while doing this. Some more information would be useful to keep migration time shorter for all tool owners.
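For anyone else who has never touched kubectl: as far as I can tell the context-switch step boils down to standard `kubectl config` subcommands, roughly like this ("toolforge" is just a placeholder context name here; use whatever the migration page actually says):

```shell
# Standard kubectl subcommands; "toolforge" is a placeholder context name.
if command -v kubectl >/dev/null 2>&1; then
    kubectl config get-contexts || echo "no kubeconfig found"  # list contexts from ~/.kube/config
    # kubectl config use-context toolforge                     # switch the active context
else
    echo "kubectl not installed on this host"
fi
```

As I read it, this only changes which cluster your own kubectl commands talk to, so it is not obvious to me that the webservice needs to be stopped first.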

Oh, I should probably mention that my service started on the Toolserver, so I was on Tool Labs from the start. I might have some leftover config which could be causing problems, though I only found a very basic lighttpd config. The PHP code is very old, but to my knowledge it runs fine on PHP 7.


Bryan Davis (2020-01-09 22:57):
I am happy to announce that a new and improved Kubernetes cluster is
now available for use by beta testers on an opt-in basis. A page has
been created on Wikitech [0] outlining the self-service migration process:

* 2020-01-09: 2020 Kubernetes cluster available for beta testers on an
opt-in basis
* 2020-01-23: 2020 Kubernetes cluster general availability for
migration on an opt-in basis
* 2020-02-10: Automatic migration of remaining workloads from 2016
cluster to 2020 cluster by Toolforge admins

This new cluster has been a work in progress for more than a year
within the Wikimedia Cloud Services team, and a top priority project
for the past six months. About 35 tools are currently running on what we are
calling the "2020 Kubernetes cluster". This new cluster is running
Kubernetes v1.15.6 and Docker 19.03.4. It is also using a newer
authentication and authorization method (RBAC), a new ingress routing
service, and a different method of integrating with the Developer
account LDAP service. We have built a new tool [1] which makes the
state of the Kubernetes cluster more transparent and on par with the
information that we already expose for the grid engine cluster [2].

With a significant number of tools managed by Toolforge administrators
already migrated to the new cluster, we are fairly confident that the
basic features used by most Kubernetes tools are covered. It is likely
that a few outlying issues remain to be found as more tools move, but
we have confidence that we can address them quickly. This has led us
to propose a fairly short period of voluntary beta testing, followed
by a short general availability opt-in migration period, and finally a
complete migration of all remaining tools which will be done by the
Toolforge administration team for anyone who has not migrated their tools by then.

Please help with beta testing if you have some time and are willing to
seek help on IRC, Phabricator, and the mailing list for any early
adopter issues you may encounter.

I want to publicly praise Brooke Storm and Arturo Borrero González for
the hours that they have put into reading docs, building proof of
concept clusters, and improving automation and processes to make the
2020 Kubernetes cluster possible. The Toolforge community can look
forward to more frequent and less disruptive software upgrades in this
cluster as a direct result of this work. We have some other feature
improvements in planning now that I think you will all be excited to
see and use later this year!


Bryan (on behalf of the Toolforge admins and the Cloud Services team)

Wikimedia Cloud Services mailing list