
DaemonSet cannot create the needed pod, even when the node has enough resources after the other DaemonSet is deleted #58868

Closed
chentao1596 opened this issue Jan 26, 2018 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/apps Categorizes an issue or PR as relevant to SIG Apps.

Comments

@chentao1596

What happened:
I created two DaemonSets; only one of them could create its pod, because the node's resources were not enough for both.
I deleted the normal one, but the other still did not create the needed pod.

What you expected to happen:
After the deletion, the other DaemonSet should quickly create the needed pod.

How to reproduce it (as minimally and precisely as possible):
My environment:
1 master + 1 node
Two DaemonSets with identical resource requests, 700m CPU + 700Mi memory, but the node can only fit one of the two pods (e.g. its memory is not enough for 2 pods); see the sketch below.
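
For concreteness, a minimal sketch of the kind of setup described. The names ds-a/ds-b, the pause image, and the exact node capacity are assumptions for illustration, not from the original report:

```sh
# Two DaemonSets with identical requests (700m CPU, 700Mi memory).
# Assumes a single schedulable node whose allocatable memory is below
# 1400Mi, so only one of the two pods can fit.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-a                          # placeholder name
spec:
  selector:
    matchLabels:
      app: ds-a
  template:
    metadata:
      labels:
        app: ds-a
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # any small image works
        resources:
          requests:
            cpu: 700m
            memory: 700Mi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-b                          # placeholder name
spec:
  selector:
    matchLabels:
      app: ds-b
  template:
    metadata:
      labels:
        app: ds-b
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 700m
            memory: 700Mi
EOF

# Only one pod fits. In this release the DaemonSet controller schedules
# daemon pods itself, so the losing pod is simply not created rather
# than left Pending.
kubectl get pods -o wide
```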

Anything else we need to know?: none

Environment:

  • Kubernetes version (use kubectl version):
    build by myself:
    Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0-alpha.1.1355+6477f2b0fdbc0b", GitCommit:"6477f2b0fdbc0b7e72dccba5b5123e72244e9909", GitTreeState:"clean", BuildDate:"2018-01-25T03:06:42Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0-alpha.1.1355+6477f2b0fdbc0b", GitCommit:"6477f2b0fdbc0b7e72dccba5b5123e72244e9909", GitTreeState:"clean", BuildDate:"2018-01-25T02:48:14Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: none
  • OS (e.g. from /etc/os-release): CentOS Linux release 7.3.1611 (Core)
  • Kernel (e.g. uname -a): Linux host-10-10-10-53 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 18 11:25:45 CST 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: ansible
  • Others: none
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jan 26, 2018
@chentao1596
Author

chentao1596 commented Jan 26, 2018

/cc @k82cn
I found you handled a similar case for when a non-daemon pod is deleted:
#46935
Please take a look, thank you!

@k82cn
Member

k82cn commented Jan 26, 2018

I deleted the normal one, but the other still did not create the needed pod.

delete pod or DS?

@k82cn
Member

k82cn commented Jan 26, 2018

/sig apps

@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 26, 2018
@chentao1596
Author

chentao1596 commented Jan 27, 2018

delete pod or DS?

delete DS.
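
That is, the whole DaemonSet is deleted, which frees the node's resources once its pod is garbage-collected. A sketch of the step, reusing the assumed ds-a/ds-b names from above:

```sh
# Delete the DaemonSet whose pod was running; its pod is garbage-
# collected, freeing roughly 700m CPU / 700Mi memory on the node.
kubectl delete ds ds-a

# Expected: the other DaemonSet (ds-b) creates its pod once the
# resources are free. Reported: it does not, at least not promptly.
kubectl get pods --watch
```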

@kow3ns kow3ns added this to Backlog in Workloads Feb 26, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 27, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
