On 2/29/24 19:23, Andrew Bogott wrote:
If we're containers-only but without openstack, we would also need to replace or abandon a bunch of other things:
- DNSaaS
I have been thinking about this as well. Most of the DNSaaS usage is because of nova, isn't it?
I believe the DNS abstraction + integration that kubernetes offers via the Service resource is very powerful. That, together with the ingress and the external ops/dns.git, may cover all our use cases.
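For illustration, a minimal sketch of that Service-based DNS using the official kubernetes Python client (the tool name, namespace and ports are made up, not anything we run today):

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  # Hypothetical example: expose a tool's web pods inside the cluster
  svc = client.V1Service(
      metadata=client.V1ObjectMeta(name="mytool-web"),
      spec=client.V1ServiceSpec(
          selector={"app": "mytool"},
          ports=[client.V1ServicePort(port=80, target_port=8000)],
      ),
  )
  core.create_namespaced_service(namespace="tool-mytool", body=svc)

  # Cluster DNS (CoreDNS) then resolves the name automatically:
  #   mytool-web.tool-mytool.svc.cluster.local
  # and an Ingress (plus ops/dns.git for the public zone) covers external names.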
Can you give an example of a DNS entry that we would lose if going containers-only?
- DBaaS
This is 100% true.
I honestly don't know whether:
a) trove is capable of scheduling DBs as containers
b) there is a k8s-native DBaaS solution
- Something to manage auth and multi tenancy
The auth/multitenancy in Toolforge is done via LDAP/Striker/maintain-kubeusers, no? Keystone has very little role in Toolforge.
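Just to illustrate the kind of multi-tenancy primitive k8s gives us for free here (a rough sketch, not what maintain-kubeusers actually does; the tool and namespace names are made up):

  from kubernetes import client, config

  config.load_kube_config()
  rbac = client.RbacAuthorizationV1Api()

  # Hypothetical: bind the LDAP user "mytool" to the built-in "edit" ClusterRole,
  # scoped to its own namespace only -> per-tool isolation without keystone.
  binding = {
      "apiVersion": "rbac.authorization.k8s.io/v1",
      "kind": "RoleBinding",
      "metadata": {"name": "mytool-edit", "namespace": "tool-mytool"},
      "subjects": [{"kind": "User", "name": "mytool",
                    "apiGroup": "rbac.authorization.k8s.io"}],
      "roleRef": {"kind": "ClusterRole", "name": "edit",
                  "apiGroup": "rbac.authorization.k8s.io"},
  }
  rbac.create_namespaced_role_binding(namespace="tool-mytool", body=binding)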
- Object storage UI
This is 100% true.
Also, this is maybe not the most difficult thing to implement ourselves, if we really need this at all.
- Persistent volumes
I think we may implement similar semantics using k8s PV/PVCs, but I could be wrong.
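Something along these lines, as a rough sketch (the storage class name and size are assumptions, and whether ReadWriteMany works depends on the backing storage):

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  # Hypothetical: a tool claims 5Gi of shared storage instead of an NFS mount
  pvc = client.V1PersistentVolumeClaim(
      metadata=client.V1ObjectMeta(name="mytool-data"),
      spec=client.V1PersistentVolumeClaimSpec(
          access_modes=["ReadWriteMany"],        # shared across pods, NFS-style
          storage_class_name="nfs-client",       # assumption: some RWX-capable class
          resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
      ),
  )
  core.create_namespaced_persistent_volume_claim(namespace="tool-mytool", body=pvc)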
I'm curious to hear more about the advantage of promoting containers off of VMs and onto metal. My understanding is that the performance cost of virtualization is very small (although non-zero). What are the other advantages of making containers first-class?
Some ideas:
== easier maintenance and operation ==
My experience is that maintaining and operating a k8s cluster is way easier than maintaining and operating an openstack cluster.
Upgrades are faster. Components are simpler. They break less.
I can't count how many engineering hours we have spent dealing with rabbitmq problems, or galera problems, or some random openstack API problem.
We just don't seem to have any of this with k8s.
Don't be afraid to tell me if I'm being biased here.
== easier for users ==
Today, use-cases that don't fit in Toolforge are told to move into Cloud VPS. This implies full system administration that in most cases isn't needed. A container engine with fewer platform restrictions than Toolforge may suffice in some cases.
I believe that, in the long run, the community may benefit from having the fallback from Toolforge be a managed k8s, rather than VM hosting.
== storage may be lighter for containers ==
If I'm not mistaken, in ceph we don't do any de-duplication for common blocks. This means that we store things like linux kernel images for each VM instance we have. Well, the whole base system.
If the debian base system is 1G and we have 1000 VMs, that's 1 TB of data. With 3 ceph copies, 3 TB worth of hardware.
Storage for container images may be lighter in comparison, due to their layered filesystem nature.
This is just a theory anyway.
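For what it's worth, the back-of-the-envelope arithmetic as a tiny Python sketch (the sizes are the assumptions from above, not measurements):

  # Assumed numbers from the paragraph above, not measurements
  base_system_gb = 1      # Debian base system per VM image
  vm_count = 1000
  ceph_replicas = 3

  vm_storage = base_system_gb * vm_count * ceph_replicas
  print(vm_storage)       # 3000 GB: the base system duplicated per VM, times 3 copies

  # With layered container images, a shared base layer is stored once in the
  # registry and reused by every image built on top of it.
  container_storage = base_system_gb * ceph_replicas
  print(container_storage)  # 3 GB for that same base layer (ignoring per-image layers)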
== futuristic tech vs old tech ==
Containers feel like the future; VMs feel like old tech. I know this is not a very elaborate argument, it's just a feeling I have from time to time.
On 2/29/24 10:50 AM, David Caro wrote:
Is this to replace toolforge only? Or CloudVPS as a whole?
Yes, CloudVPS as a whole. Stop offering VM hosting, and only offer Container hosting.
If CloudVPS, I think there are many use-cases covered now that would not be possible (or would be very different) inside k8s, especially for self-managed infra (what would be an openstack project now).
I would like to collect such a list for future reference.
Also, a lot of users are VM-bound, so moving to containers would not be easy for them (or even possible, ex. the beta cluster). Not saying whether the
Not sure if the beta cluster is a good example in this context.
Could we find any other strongly VM-bound users?
proposal is to continue supporting those or not (ex. drop the CloudVPS offering and replace it with k8s as a service).
Maybe we could aim to offer 3 levels of k8s:
* via toolforge
** the PaaS we know, the most abstracted, via the APIs we are creating
* via managed k8s
** you get a dedicated namespace for you, unrestricted within the defined limits (rough sketch below)
* via self managed k8s
** you get a whole cluster (a virtual one, again, maybe via vcluster or similar)
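For the "managed k8s" tier, provisioning could be as small as a namespace plus a quota, roughly like this (everything here, names and limits, is made up):

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  # Hypothetical tenant: one dedicated namespace...
  core.create_namespace(client.V1Namespace(
      metadata=client.V1ObjectMeta(name="project-someproject"),
  ))

  # ...unrestricted within defined limits, enforced by a ResourceQuota
  quota = client.V1ResourceQuota(
      metadata=client.V1ObjectMeta(name="someproject-quota"),
      spec=client.V1ResourceQuotaSpec(hard={
          "requests.cpu": "8",
          "requests.memory": "16Gi",
          "persistentvolumeclaims": "5",
      }),
  )
  core.create_namespaced_resource_quota(namespace="project-someproject", body=quota)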