We are currently exploring scaling out our Kubernetes cluster (downsizing from 8 GB RAM nodes to 4 GB RAM nodes), and we noticed that there's quite a lot of variation in memory usage across the core-dump-handler pods.
It appears that handlers that have processed a crash have significantly higher memory usage than those that have not.
Could this be resolved?
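As a stopgap while investigating, one option might be to cap the handler container's memory via a resources block so a post-crash pod can't grow unbounded on the smaller nodes. This is only a sketch: the exact container name and limit values below are illustrative, not taken from the core-dump-handler chart.

```yaml
# Hypothetical resources stanza for the core-dump-handler DaemonSet container.
# Values are placeholders; tune them to observed usage on the 4GB nodes.
resources:
  requests:
    memory: "64Mi"   # baseline usage before any crash is handled
  limits:
    memory: "256Mi"  # hard cap; the pod is OOM-killed and restarted if exceeded
```

A limit like this would trade occasional handler restarts for predictable per-node memory headroom, which may matter more on 4 GB nodes.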