DaemonSet controller doesn't notice when a node gains resources, e.g. when another Pod exits #46935
Comments
@k82cn There are no sig labels on this issue. Please add a sig label by:
/sig apps
One option is similar to #42002; I'm thinking of handling it by letting the scheduler place the Pod when there's enough resource.
@kubernetes/sig-apps-bugs @kubernetes/sig-scheduling-bugs
/assign |
I've always thought that there is some kind of periodic resync which checks all daemons and should handle this issue. It's even mentioned in some code comments. |
@lukaszo wrote:
I thought so too, but as far as I can tell, the default period is 12 hours and the randomized value could be up to 2x that, so we can't really rely on it for this kind of problem.
So I think this explains why the DaemonSet does not appear to resync: it's using the default period, which is a random value between 12h and 24h.
Eventually, StatefulSets should also be pushed to use AddEventHandler. We don't want controller loops to depend on tight resync intervals. The event handlers on secondary caches are paramount for complementing the resync logic of the main controllers: e.g., once a Node has more resources (a status change on the Node?), a watch event in the secondary (Node) cache of the DS controller should trigger a resync of all the DaemonSets that want to run on that node.
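The pattern being proposed can be sketched roughly as follows. The types below are simplified stand-ins for client-go's SharedInformer machinery and the real controller types, purely to show the shape of "watch event on the secondary cache → immediate requeue of affected DaemonSets":

```go
package main

import "fmt"

// Illustrative stand-ins; not the real Kubernetes API types.
type Node struct {
	Name           string
	AllocatableCPU int
}

type DaemonSet struct {
	Name         string
	NodeSelector func(Node) bool // which nodes this DS wants to run on
}

// daemonSetsToRequeue decides which DaemonSets to put back on the work
// queue when a Node update shows it gained resources.
func daemonSetsToRequeue(dss []DaemonSet, old, new Node) []string {
	var names []string
	if new.AllocatableCPU > old.AllocatableCPU {
		for _, ds := range dss {
			if ds.NodeSelector(new) {
				names = append(names, ds.Name)
			}
		}
	}
	return names
}

// NodeInformer mimics AddEventHandler on a secondary (Node) cache.
type NodeInformer struct {
	onUpdate []func(old, new Node)
}

func (i *NodeInformer) AddEventHandler(h func(old, new Node)) {
	i.onUpdate = append(i.onUpdate, h)
}

// Update simulates a watch event reaching the secondary cache.
func (i *NodeInformer) Update(old, new Node) {
	for _, h := range i.onUpdate {
		h(old, new)
	}
}

func main() {
	var workqueue []string // stand-in for the controller's rate-limited queue
	dss := []DaemonSet{{Name: "fluentd", NodeSelector: func(Node) bool { return true }}}

	informer := &NodeInformer{}
	informer.AddEventHandler(func(old, new Node) {
		// React to the watch event immediately instead of waiting for
		// the 12h-24h periodic resync.
		workqueue = append(workqueue, daemonSetsToRequeue(dss, old, new)...)
	})

	informer.Update(Node{"node-1", 2}, Node{"node-1", 4})
	fmt.Println(workqueue)
}
```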
Nope, it depends on Pod events (e.g. add, update, delete) :(. That avoids putting too much info in the Node object. In the scheduler, it caches that info in In DaemonSet's
Automatic merge from submit-queue (batch tested with PRs 49488, 50407, 46105, 50456, 50258)

Requeue DaemonSets if non-daemon pods were deleted.

**What this PR does / why we need it**: Requeue DaemonSets if non-daemon pods were deleted.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46935

**Release note**:
```release-note
None
```
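The fix can be sketched as follows: when a pod is deleted and it is *not* owned by a DaemonSet, the freed resources might now fit a pending daemon pod, so the controller requeues the DaemonSets targeting that node. The types and the `isOwnedByDaemonSet` helper below are illustrative stand-ins, not the upstream code:

```go
package main

import "fmt"

// Illustrative stand-in for a pod with owner information.
type Pod struct {
	Name      string
	NodeName  string
	OwnerKind string // e.g. "DaemonSet", "ReplicaSet"
}

func isOwnedByDaemonSet(p Pod) bool { return p.OwnerKind == "DaemonSet" }

// onPodDelete returns the node whose DaemonSets should be requeued, or ""
// when the deletion is of a daemon pod itself (already handled by the
// controller's normal daemon-pod event handlers).
func onPodDelete(p Pod) string {
	if isOwnedByDaemonSet(p) {
		return ""
	}
	return p.NodeName
}

func main() {
	// A non-daemon pod exiting frees resources on node-1, so its
	// DaemonSets get requeued; a daemon pod deletion does not take
	// this path.
	fmt.Println(onPodDelete(Pod{"web-1", "node-1", "ReplicaSet"}))
	fmt.Println(onPodDelete(Pod{"fluentd-x", "node-1", "DaemonSet"}))
}
```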
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Kubernetes version (use `kubectl version`): master branch
What happened:
This is a follow-up of #45628. There's a case where a node does not have enough CPU or memory to create the DaemonSet's pod; later I delete pods on that node, so it now satisfies the request, but the DS controller does not notice.