📖 clarify semantic meaning of the Version fields #11564
base: main
Conversation
… to Kubernetes distribution version Signed-off-by: Riccardo Piccoli <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED.
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This PR is currently missing an area label, which is used to identify the modified component when generating release notes. Area labels can be added by org members by writing /area <component> in a PR comment. Please see the labels list for possible areas. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Welcome @rccrdpccl!
Hi @rccrdpccl. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I don't mind adding "distribution" after "Kubernetes version" in the godoc, since many companies use a custom versioning schema. As a matter of fact, though, the codebase is full of assumptions that this field has a direct correlation with the Kubernetes version being installed. Top of mind, we have similar assumptions in KCP/CABPK, most probably also in the topology controller and the E2E test framework, and I would not be surprised if we have assumptions about the version in other places as well.

Also, the K8s version being installed is a key piece of information we need for implementing support for any new K8s version whenever it introduces a change that requires special handling (and this happens more often than you might expect); the entire test matrix for our supported versions is also based on this information.

This makes me wonder if we are going down a slippery path if we start decoupling the Kubernetes distribution version from the actual Kubernetes version being installed.

@vincepri @enxebre @sbueringer opinions? Should we move this discussion to an issue first?
Also @chrischdi |
Generally speaking, moving towards better supportability for distros to handle their own versioning makes sense to me. That should hopefully come without any loss or impact for KCP/CABPK/... and for the special-handling scenarios, while still enabling other building blocks to replace KCP/CABPK in a meaningful manner. A less convoluted path to satisfy both scenarios might be a new field for the distro version: then we have semantics for both, and it is up to implementers how to leverage them together. I agree with Fabrizio that this deserves thorough discussion.
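To make the two-field idea concrete, here is a minimal sketch assuming a hypothetical `distributionVersion` field; these are not the actual Cluster API types, just an illustration of how the semantics could be split:

```go
// Package v1beta1sketch is a hypothetical illustration, not the real
// Cluster API API types: spec.version keeps its Kubernetes-version
// semantics, while a new, free-form field carries the distribution version.
package v1beta1sketch

// MachineSpecSketch shows only the version-related fields.
type MachineSpecSketch struct {
	// Version is the Kubernetes version of the node, e.g. "v1.30.4".
	// Core controllers (KCP, CABPK, topology, the E2E framework) could
	// keep relying on its semver semantics.
	Version *string `json:"version,omitempty"`

	// DistributionVersion is a hypothetical field for the distribution's
	// own version, e.g. "4.17.1" for OpenShift. Core Cluster API would
	// not interpret it; providers and distributions decide how to use it.
	DistributionVersion *string `json:"distributionVersion,omitempty"`
}
```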
My take on it. Yes, we have a lot of assumptions all over our code base that the version is the actual Kubernetes version (plus or minus some suffix, which is accepted/ignored per semver). But not only that: we also have a huge ecosystem using Cluster API that, up until today, can assume that the version is the Kubernetes version. By changing that, we are not only breaking our own code that relies on that assumption but also everyone building on top of Cluster API that uses that version. We also can't simply change the semantics of the field by updating the godoc while keeping all of our usages that assume the version is the Kubernetes version the same. About some specific cases:
These cases show that it was necessary in the past to rely on the version being the Kubernetes version. While some of those cases eventually become irrelevant because the respective Kubernetes versions go out of support, we have other cases where this might never happen (e.g. KCP / CABPK being able to figure out the correct kubeadm config apiVersion based on Machine.spec.version).

In general, I think we have to be able, going forward, to implement Kubernetes-version-specific behavior. By changing the semantics of this field to "Kubernetes distribution version" we won't be able to do this anymore, which in the worst case means we won't be able to support a new Kubernetes version. Furthermore, if we also couldn't rely anymore on the apiserver itself knowing its correct Kubernetes version, we won't be able to leverage new apiserver features in our controller code until all Kubernetes versions that don't support them are out of support (and we couldn't even implement checks that CAPI is running against a supported kube-apiserver version).

That being said, I'm definitely open to discussions about, for example, adding an additional field. I think this definitely requires an issue, and it probably would be good to write a small proposal to get a better idea of what we want to achieve.
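As a minimal illustration of the kind of version-dependent behavior described above, here is a sketch using only the standard library; it is not the actual KCP/CABPK code, and the version cut-off and returned apiVersions are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kubeadmAPIVersionFor picks a kubeadm config apiVersion from a Kubernetes
// version string such as "v1.30.4". This only works if spec.version really
// is a Kubernetes semver; a distribution version such as "4.17.1" would
// silently select the wrong branch.
func kubeadmAPIVersionFor(k8sVersion string) string {
	parts := strings.SplitN(strings.TrimPrefix(k8sVersion, "v"), ".", 3)
	minor := 0
	if len(parts) >= 2 {
		minor, _ = strconv.Atoi(parts[1])
	}
	if minor >= 31 { // illustrative cut-off, not the real mapping
		return "kubeadm.k8s.io/v1beta4"
	}
	return "kubeadm.k8s.io/v1beta3"
}

func main() {
	fmt.Println(kubeadmAPIVersionFor("v1.30.4")) // kubeadm.k8s.io/v1beta3
	fmt.Println(kubeadmAPIVersionFor("4.17.1"))  // wrong branch: not a Kubernetes version
}
```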
What this PR does / why we need it:
The aim of this PR is to clarify the semantic meaning of the Version fields. At the moment, the documentation heavily implies that version fields should contain a Kubernetes version.
However, this is problematic when working with Kubernetes distributions, especially those with a versioning schema totally different from Kubernetes's (e.g. OpenShift).
UX problem
At the moment, to install a k8s distribution, the user needs to work out which k8s version corresponds to the desired k8s distribution version. However, this is not always possible: in the OpenShift case, for example, a given release is based on a specific k8s Z patch, and other Z releases of the same Y version might be based on that same k8s Z patch. As a practical example, OpenShift 4.17.0 is based on Kubernetes 1.30.4 and OpenShift 4.17.1 is also based on 1.30.4, making a bidirectional conversion impossible.
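A small sketch of why the mapping cannot be inverted; the two releases are the ones from the example above, and the map is purely illustrative:

```go
package main

import "fmt"

// openShiftToK8s maps a distribution version to the Kubernetes version it
// is based on. The mapping is many-to-one, so it cannot be inverted.
var openShiftToK8s = map[string]string{
	"4.17.0": "v1.30.4",
	"4.17.1": "v1.30.4", // same underlying Kubernetes patch release
}

func main() {
	// Going from distribution version to Kubernetes version is fine...
	fmt.Println(openShiftToK8s["4.17.0"]) // v1.30.4

	// ...but the inverse is ambiguous: "v1.30.4" maps back to both
	// 4.17.0 and 4.17.1, so a Kubernetes version alone cannot express
	// which distribution release the user wants.
	for distro, k8s := range openShiftToK8s {
		if k8s == "v1.30.4" {
			fmt.Println(distro)
		}
	}
}
```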
Kubernetes version references
Unfortunately, this is not straightforward, as there are multiple references to explicit Kubernetes versions.