A lot of things are in the works for which I'll either add an agenda item to the weekly or have a follow-up meeting, but it has reached the point where a preface email will make any of those discussions more efficient. /Please/ 'ack' this with a response, because there are things in here that affect everyone on the team and are difficult to rewind.
== On OpenStack and Ubuntu/Debian ==
In Austin we had said that the long-tailed, delayed, and (some would say) tortuous march of Neutron should mean we stick with Liberty and Trusty for the time being to avoid the historic moving-target problem. In making the annual plan and lining up the many changes that have to occur in the next 15 months, it became clear that if we do all of this in series, instead of in parallel, we will never make it. We have to shift more sand under our feet than feels entirely comfortable. That means moving to Mitaka https://www.openstack.org/software/mitaka/ before/as we target Neutron, in order to mix in Jessie with backports (which also has Mitaka). The update to Mitaka has a few challenges -- primarily that the designate project made significant changes https://docs.openstack.org/designate/pike/admin/upgrades/mitaka.html. I think I would like to stand up new hypervisors ASAP once the main deployment is running Mitaka, so we can have customer workloads testing it for as long as possible. This in theory sets us up for an N+1 upgrade path on Debian through Stretch and Pike. https://phabricator.wikimedia.org/T169099#3959060
== On monitoring and alerting ==
Last Oct I made a task https://phabricator.wikimedia.org/T178405 to update some of our alerting logic, and in Austin we talked about how to improve our coverage and move towards a rotation-based workflow. The move to having a 'normal' on-call rotation, and especially one where we take better advantage of our time-zone spread, is going to require more sophisticated management than we have now, primarily escalations and more complicated alerting and acknowledgement logic.
This came to the forefront again with the recent loss of labvirt1008. AFAICT the hypervisor rebooted in <=4m https://phabricator.wikimedia.org/T187292#3971877 and so did not alert. There is also the problem of it coming back up and not alerting on the "bad" state where client instances are shut down. We reviewed that behavior and are in agreement that instances starting by default on hypervisor startup has more downsides than up, but it should still be an alert-able errant state. I created a wmcs-team contact group https://gerrit.wikimedia.org/r/c/410525/ and added a check https://gerrit.wikimedia.org/r/c/413452/ that changes our new normal to be at least one instance running on every active hypervisor. Then I proceeded to add a bunch of checks https://gerrit.wikimedia.org/r/q/topic:%2522openstack%2522+(status:open%20OR%20status:merged), adjust existing checks to alert wmcs-team, and change some checks to 'critical' that were not.
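For anyone who hasn't looked at the Gerrit change yet, the shape of the hypervisor check is roughly the following. This is a simplified Python sketch of the idea only, not the code that was merged; it assumes virsh is available locally and uses the usual Nagios-style exit codes.

    #!/usr/bin/env python3
    # Simplified sketch only -- the real check is in the Gerrit change above.
    # OK if at least one libvirt domain is running on this hypervisor,
    # CRITICAL if none are (our new alert-able errant state), UNKNOWN if virsh fails.
    import subprocess
    import sys

    OK, CRITICAL, UNKNOWN = 0, 2, 3

    try:
        # 'virsh list --name' prints the names of running domains, one per line
        output = subprocess.check_output(['virsh', 'list', '--name'], text=True)
    except (OSError, subprocess.CalledProcessError) as error:
        print('UNKNOWN: could not query libvirt: %s' % error)
        sys.exit(UNKNOWN)

    running = [name for name in output.splitlines() if name.strip()]
    if running:
        print('OK: %d instance(s) running' % len(running))
        sys.exit(OK)

    print('CRITICAL: no instances running on an active hypervisor')
    sys.exit(CRITICAL)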
The icinga setup in some ways makes single-tenant assumptions that we'll have to work through, such as: 'critical' alerts all of opsen, and it is also the only way to override the default configuration of never re-alerting. At the moment none of the checks that alert purely wmcs-team, and not all of opsen, will re-alert. Some checks may double-alert where WMCS roots are in both groups. There is also a coverage issue: there are checks where it may make sense for those of us in this group to receive alerts 24/7, or at lower warning thresholds, but it would cause fatigue to alert all of opsen. I have made a change for myself that has the following effect: regular critical alerts are on a standard 'awake' schedule and wmcs-team alerts are still 24/7. Andrew, Madhu, and I have been on a 24/7 alerting schedule for a long time now, and I think shifting to 24/7 for wmcs-team things is an interim step for all of us. This has the side effect of requiring that everything we want to be alerted to 24/7 alerts the wmcs-team contact group.
I am going to schedule a meeting to review what is currently alerting wmcs-team, so that we can talk as a group both about what alerts now and about what should. I want everyone to walk away knowing what pages could be sent out and the basics of what they mean. I want everyone in the group to walk away feeling comfortable with our transitional strategy, and acknowledging as a group what things we need to know about 24/7. We can talk about how to take advantage of our time-zone spread in this arrangement, and briefly talk about what it would mean to move to something based on pagerduty/victorops.
The introduction of wmcs-team should also allow us to have our own IRC alerting to #wikimedia-cloud-feed (or wherever), in combination with #wikimedia-operations. One of the complaints it seems we have all had is that while treating IRC as persistent for alerting is problematic, it is even more problematic in a channel as noisy as #wikimedia-operations.
Chico has expressed a desire to contribute while IRC is dormant, and we have begun a series of 1:1 conversations about our environment. He has been working on logic to alert on a portion of puppet failures https://gerrit.wikimedia.org/r/c/411315 rather than every puppet failure. This, to my mind, does not mean we have solved the puppet flapping issue, but it's also not doing us any good to be fatigued by an issue we do not have time to investigate and that has been seemingly benign for a year. I am considering whether we should move this to tools.checker, increase retries on our single puppet alerting logic, and add alerting for it to the main icinga. Hopefully we can talk about this in our meeting.
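To make the intent concrete, here is a rough Python sketch of the "portion of failures" idea. The thresholds, input format, and hostnames are made up for illustration; Chico's actual logic is in the change linked above.

    #!/usr/bin/env python3
    # Rough sketch of threshold-based puppet failure alerting; the percentages
    # and the input source are hypothetical, not what the Gerrit change does.
    import sys

    OK, WARNING, CRITICAL = 0, 1, 2
    WARN_PCT, CRIT_PCT = 10.0, 25.0  # made-up thresholds

    def check(last_run_failed):
        """last_run_failed: dict of hostname -> True if the last puppet run failed."""
        total = len(last_run_failed)
        failed = sum(1 for broken in last_run_failed.values() if broken)
        pct = 100.0 * failed / total if total else 0.0
        if pct >= CRIT_PCT:
            return CRITICAL, 'CRITICAL: %.1f%% of hosts failing puppet (%d/%d)' % (pct, failed, total)
        if pct >= WARN_PCT:
            return WARNING, 'WARNING: %.1f%% of hosts failing puppet (%d/%d)' % (pct, failed, total)
        return OK, 'OK: %.1f%% of hosts failing puppet (%d/%d)' % (pct, failed, total)

    if __name__ == '__main__':
        # Toy input; in practice this would come from wherever we track last-run state.
        code, message = check({'tools-worker-1001': False, 'tools-worker-1002': True})
        print(message)
        sys.exit(code)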
== Naming (the worst of all things) ==
==== cloud ====
We have continued to phase out the word 'Lab', and even some networking equipment https://phabricator.wikimedia.org/T187933 has made the change. As part of the Debian and Neutron migrations we need to replace or re-image many of our servers, and it seems like the ideal time to adopt a 'cloud' variant naming replacement. In our weekly meeting I proposed 'cld' as an outright replacement for 'lab'. In discussions on ops-l it seems 'lab' => 'cloud' is most desired for simplicity and readability. 'cloud' as a prepend seems fine to me, and I don't anticipate objections within the team, so I'm considering it decided (most of us are on ops-l).
==== labtest ====
Lab[test]* needs to be changed as well. The 'test' designation here has confused everyone who is not Andrew or myself numerous times over the last year(s). For clarity: the lab[test] environment is a long-lived staging and PoC ground for OpenStack provider testing where we need actual integration with hardware, or where functionality cannot be tested in an openstack-on-openstack way. Testing the VXLAN overlay, for instance, is in this category. Migration strategy for upgrade paths of OpenStack itself, especially where significant networking changes are made, is in this category. Hypervisor integration where kernel versions need to be vetted, and package updates need to be canaried, is in this category. Lab[test] will never have tenants or projects other than ourselves. This has not been obvious, and as an environment it has been thought to be transient, temporary, and/or customer facing at various points.
My first instinct was to fold the [test] naming into whatever next-phase normal prepend we settle on (i.e. cloud). Bryan pointed out that making it more difficult to discern between customer-facing equipment and internal equipment is a net negative, even if it did away with the confusion we are living with now. I propose we add an indicator of [i] for this internal "cloud" equipment, and *nothing with this indicator will ever be customer facing* (a small sketch of the distinction follows the examples below). The current indicator of [test] is used both for hiera targeting via regex.yaml and as a human indicator.
lab => cloud
cloudvirt1001 cloudcontrol1001 cloudservices1001 cloudnodepool1001
labtest => cloudi
cloudicontrol2003 cloudivirt2001 cloudivirt2002
I'm open to suggestions, but we need to settle on something this week.
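To illustrate what the [i] indicator buys us for targeting (the kind of hostname matching we do today in regex.yaml), here is a trivial Python sketch using the proposed names from above; it is illustrative only, not the hiera mechanism itself.

    import re

    # Anything in the cloud* namespace carrying the [i] indicator is internal
    # and will never be customer facing; everything else potentially is.
    INTERNAL = re.compile(r'^cloudi')

    hosts = [
        'cloudvirt1001', 'cloudcontrol1001', 'cloudservices1001', 'cloudnodepool1001',
        'cloudicontrol2003', 'cloudivirt2001', 'cloudivirt2002',
    ]

    for host in hosts:
        kind = 'internal (never customer facing)' if INTERNAL.match(host) else 'customer facing'
        print('%s: %s' % (host, kind))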
==== deployments and regions (oh my) ====
I have struggled with this damn naming thing for so long I am numb to it :) I have the following theory: there is no defensible naming strategy, only ones that do not make you vomit.
===== Current situation =====
We have been working with the following assumptions: a "deployment" is a superset of an OpenStack setup (keystone, nova, glance, etc) where each "deployment" is a functional analog. i.e. even though striker is not an OpenStack component, it is a part of our OpenStack ...stack and as such is assignable to a particular deployment. deployment => region => component(s)[availability-zones]. We currently have 2 full and 1 burgeoning deployment: main (customer facing, in eqiad), labtest (internal use cases, in codfw), and labtestn (internal PoC Neutron migration environment). FYI, in purely OpenStack ecosystem terms, the shareable portions between regions are keystone and horizon.
role::wmcs::openstack::main::control
deployment -> region --> availability zone
main -> eqiad --> nova
So far this has been fine and was a needed classification system to make our code multi-tenant at all. We are working with several drawbacks at the moment: labtest is a terrible name (as described above); labtestn is difficult to understand; if we pursue the labtest and labtestn strategy we end up with mainn; regions and availability zones are not coupled to deployment naming; and these names, while distinct, do not lend themselves to cohesive expansion. On and on, and nothing will be perfect, but we can do a lot better. I have had a lot of issues finding a naming scheme that we can live with here, such as:
* 'db' in the name issue
* 1001 looking like a host issue
* labtest is a prepend (labtestn is not)
* unclarity on internal/staging/PoC usage and customer facing
* schemes that provide hugely long and impractical names
===== Proposed situation =====
I am not particularly enamored with any naming solution; all the ones I've tried end up with their own oddities and particular ugliness.
[site][numeric] (deployment)
  -> [site][numeric]r ('r' postfix for region)
  --> [site][numeric]r[letter postfix for row] (availability zone -- an indicator for us that I expect will last a long time)
# eqiad0 is now 'main' and will be retired with Neutron. It also will not match the consistent naming for region, etc.
# legacy to be removed
# role::wmcs::openstack::eqiad0::control
eqiad0 -> eqiad --> nova

# Once the current nova-network setup is retired we end up at deployment 1 in eqiad
eqiad1 -> eqiad1r --> eqiad1rb --> eqiad1rc

# role::wmcs::openstack::codfwi1::control
codfwi1 -> codfwi1r --> codfwi1rb
codfwi2 -> codfwi2r --> codfwi2rb
This takes our normal datacenter naming ([dc provider][airport]) and adds an 'i' for internal use cases, along with a numeric postfix for the deployment per site, and further postfixes for sub-names such as "region" or "availability-zone". It's not phonetic but it could work. I am going to drop a few links I've walked through in the bottom section (#naming). My only ask is: if you have a concern, please suggest an alternative that is thought out to at least 3 deployments per site and differentiates "internal" and "external" use cases. I can change our existing deployments without too much fanfare; these are basically key namespaces in hiera and class namespaces in Puppet at the moment. I won't bother updating the regions or availability zones that exist now in place -- until redeployment. It becomes decidedly more fixed as we move into more eqiad deployments (as I have no plans to change the existing eqiad deployment in place). This is influenced by my experience naming things in the networking world, where there are multiple objects tied together to achieve a desired end, such as: foo-in-rule-set, foo-interface, foo-out-rule-set, foo-provider-1, etc.
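To make the construction explicit, here is a toy Python sketch of the proposed convention. In reality these names are just strings in hiera keys and Puppet class namespaces; the helper functions are purely illustrative.

    # Purely illustrative helpers for the proposed naming convention.

    def deployment(site, index, internal=False):
        """[site][optional 'i' for internal][numeric], e.g. eqiad1 or codfwi1."""
        return '%s%s%d' % (site, 'i' if internal else '', index)

    def region(site, index, internal=False):
        """Deployment name plus an 'r' postfix, e.g. eqiad1r."""
        return deployment(site, index, internal) + 'r'

    def availability_zone(site, index, row, internal=False):
        """Region name plus a row letter, e.g. eqiad1rb."""
        return region(site, index, internal) + row

    print(deployment('eqiad', 1))                    # eqiad1
    print(region('eqiad', 1))                        # eqiad1r
    print(availability_zone('eqiad', 1, 'b'))        # eqiad1rb
    print(availability_zone('codfw', 1, 'b', True))  # codfwi1rb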
I, absurdly, have more to write but this is enough for a single email. Implications for Neutron actually happening, Debian, next wave of reboots, team practices, and more will be separate. Please ack this and provide feedback or I'm a runaway train.
Best,