[processor/k8sattributes] update internally stored pods with replicaset/deployment information #37088
base: main
Conversation
@@ -1095,6 +1096,26 @@ func (c *WatchClient) addOrUpdateReplicaSet(replicaset *apps_v1.ReplicaSet) {
		c.ReplicaSets[string(replicaset.UID)] = newReplicaSet
	}
	c.m.Unlock()

	for _, pod := range c.Pods {
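Since the new loop is cut off in the hunk above, here is a rough sketch of what updating already-stored pods when a replica set arrives could look like. The `Pod`, `ReplicaSet`, and `updatePodsForReplicaSet` names and fields are simplified stand-ins for illustration, not the processor's actual internal types.

```go
package sketch

// Sketch only: simplified stand-ins for the processor's internal types.
type ReplicaSet struct {
	UID        string
	Name       string
	Namespace  string
	Deployment string // name of the owning Deployment, if any
}

type Pod struct {
	Name           string
	Namespace      string
	ReplicaSetUID  string // owner reference recorded when the pod was stored
	ReplicaSetName string
	DeploymentName string
}

// updatePodsForReplicaSet revisits pods that were stored before their replica
// set was known and fills in the replica set / deployment attributes.
func updatePodsForReplicaSet(pods map[string]*Pod, rs *ReplicaSet) {
	for _, pod := range pods {
		if pod.ReplicaSetUID != rs.UID {
			continue
		}
		pod.ReplicaSetName = rs.Name
		pod.DeploymentName = rs.Deployment
	}
}
```

A real implementation would take the client's mutex around the map access, as the surrounding code in the hunk does.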
If we can avoid looping over the Pods cache by waiting for all of the syncs to happen first, that would be great. I left a comment back at the issue: #37056 (comment)
Thanks for the initial review @ChrsMark! Currently, the issue even with wait_for_metadata is that during the initial sync, pods already present in the cluster can arrive at the informer before their replica sets. However, maybe there is a way to first wait for the initial replicaSet informer sync to complete before proceeding with the pod informer. I will try to see if this can be done, as I'm also not quite happy with the current solution of iterating over all pods.
It seems the solution is much simpler than initially thought. I changed the order of the informer initialisation to start the replicaSet informer before the pod informer, and that fixed the issue. So far I have verified this manually, but I will look into providing an e2e test, similar to the current ones but with a collector restart.
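For reference, a minimal sketch of that ordering with client-go's cache package, assuming hypothetical informer fields on the client rather than the processor's actual wiring:

```go
package sketch

import (
	"errors"

	"k8s.io/client-go/tools/cache"
)

// Sketch only: hypothetical client type; the real WatchClient in the
// processor has more fields and different wiring.
type watchClient struct {
	replicasetInformer cache.SharedIndexInformer
	podInformer        cache.SharedIndexInformer
}

// start brings the replica set informer up and waits for its cache to sync
// before starting the pod informer, so pod events never arrive ahead of the
// replica sets they reference.
func (c *watchClient) start(stopCh <-chan struct{}) error {
	go c.replicasetInformer.Run(stopCh)
	if !cache.WaitForCacheSync(stopCh, c.replicasetInformer.HasSynced) {
		return errors.New("replica set informer failed to sync")
	}

	go c.podInformer.Run(stopCh)
	if !cache.WaitForCacheSync(stopCh, c.podInformer.HasSynced) {
		return errors.New("pod informer failed to sync")
	}
	return nil
}
```

Blocking on `cache.WaitForCacheSync` for the replica set informer ensures its cache is populated before the pod informer delivers any events, so a pod's replica set is already known when the pod is stored.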
LGTM
Description
This PR adapts the k8sattributes processor to update its internally stored pods with information about their respective replica sets and deployments once the replica set information is received via the informer.
This situation can occur during the processor's initial sync, where the replica set information for existing pods may arrive after the pods themselves have already been received and stored internally.
Link to tracking issue
Fixes #37056
Testing
Added unit test
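A sketch of what such a unit test might assert, reusing the hypothetical `Pod`/`ReplicaSet` types and the `updatePodsForReplicaSet` helper sketched earlier in this thread (illustrative only, not the processor's actual test code):

```go
package sketch

import "testing"

// Sketch of a test for the scenario this PR addresses: a pod is stored before
// its replica set, and the deployment name is filled in once the replica set
// arrives. Names and values are illustrative.
func TestPodUpdatedWhenReplicaSetArrivesLate(t *testing.T) {
	pods := map[string]*Pod{
		"pod-uid": {Name: "my-pod", ReplicaSetUID: "rs-uid"},
	}

	// The replica set shows up after the pod has already been stored.
	rs := &ReplicaSet{UID: "rs-uid", Name: "my-deployment-abc123", Deployment: "my-deployment"}
	updatePodsForReplicaSet(pods, rs)

	if got := pods["pod-uid"].DeploymentName; got != "my-deployment" {
		t.Fatalf("expected deployment name to be filled in, got %q", got)
	}
}
```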
Documentation