
service-node-port-range and ip_local_port_range collision #6342

Closed
sp-joseluis-ledesma opened this issue Jan 15, 2019 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sp-joseluis-ledesma
Contributor

1. What kops version are you running? The command kops version will display this information.

Version 1.11.0 (git-2c2042465)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?
aws

4. What commands did you run? What is the simplest way to reproduce this issue?
There is a collision between the --service-node-port-range (kube-apiserver flag, which defaults to 30000-32767) and the sysctl setting net.ipv4.ip_local_port_range (10240-65535).
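For illustration only (not kops code), a minimal sketch of the check involved: read the node's ephemeral port range from /proc and compare it against the NodePort range. The 30000-32767 values below are assumed from the kube-apiserver default mentioned above.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Assumed kube-apiserver --service-node-port-range default, for illustration.
	nodePortLo, nodePortHi := 30000, 32767

	// Read the kernel's ephemeral port range (e.g. "10240	65535").
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_local_port_range")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read ip_local_port_range:", err)
		os.Exit(1)
	}
	fields := strings.Fields(string(data))
	localLo, _ := strconv.Atoi(fields[0])
	localHi, _ := strconv.Atoi(fields[1])

	// The two ranges collide if they overlap anywhere.
	if nodePortLo <= localHi && localLo <= nodePortHi {
		fmt.Printf("collision: NodePorts %d-%d overlap ephemeral range %d-%d\n",
			nodePortLo, nodePortHi, localLo, localHi)
	} else {
		fmt.Println("no overlap")
	}
}
```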

5. What happened after the commands executed?
When creating a service which requires a NodePort, the assigned port may already be in use by an existing connection on any node or master.

6. What did you expect to happen?
Kops should configure the nodes to avoid this collision, e.g. by setting net.ipv4.ip_local_reserved_ports to the ports used for NodePort services.
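As an illustrative sketch only (the actual kops change may look different), reserving the NodePort range comes down to writing the range into net.ipv4.ip_local_reserved_ports, equivalent to `sysctl -w net.ipv4.ip_local_reserved_ports=30000-32767`:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// 30000-32767 matches the kube-apiserver --service-node-port-range default.
	reserved := "30000-32767"

	// Writing to this file marks the range as reserved so the kernel never
	// hands these ports out as ephemeral source ports. Requires root.
	err := os.WriteFile("/proc/sys/net/ipv4/ip_local_reserved_ports", []byte(reserved), 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to reserve ports:", err)
		os.Exit(1)
	}
	fmt.Println("reserved NodePort range:", reserved)
}
```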

7. Please provide your cluster manifest.
N/A

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report or into a gist and provide the gist link here.

N/A

9. Anything else we need to know?
N/A

@cmedley2

We ran into this as well using the latest kops AMI and kops version 1.10 on AWS.

I agree it would be nice if kops could set the reserved ports to prevent the collision, or at least have the kops default for ip_local_port_range not collide with the default Kubernetes service-node-port-range.

@justinsb
Member

Do we know who is setting ip_local_port_range to 10240-65535? Is this using the latest kops AMI?

@cmedley2

No, still not sure what was setting it, and yes, it was built using k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17.

@justinsb
Member

You're right - it looks like we picked it up from here: andsens/bootstrap-vz@98de220#diff-cef01cc693f764f04635775422fb3584. I'm guessing we picked that up when we rebased to pick up the ENI driver :-(

@sp-joseluis-ledesma
Contributor Author

Either way, I think it would be safer to just ensure we have reserved the nodePortRange so no collision is possible whatever the ip_local_port_range value is.
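To illustrate what "reserved whatever the ip_local_port_range value is" means, a rough sketch (not kops code; 30000-32767 is the assumed default range) that verifies the whole NodePort range appears in the node's ip_local_reserved_ports:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseReserved expands entries like "30000-32767,8080" into a set of ports.
func parseReserved(s string) map[int]bool {
	ports := map[int]bool{}
	for _, part := range strings.Split(strings.TrimSpace(s), ",") {
		if part == "" {
			continue
		}
		if lohi := strings.SplitN(part, "-", 2); len(lohi) == 2 {
			lo, _ := strconv.Atoi(lohi[0])
			hi, _ := strconv.Atoi(lohi[1])
			for p := lo; p <= hi; p++ {
				ports[p] = true
			}
		} else {
			p, _ := strconv.Atoi(part)
			ports[p] = true
		}
	}
	return ports
}

func main() {
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_local_reserved_ports")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	reserved := parseReserved(string(data))

	// 30000-32767 is the default --service-node-port-range; adjust if overridden.
	for p := 30000; p <= 32767; p++ {
		if !reserved[p] {
			fmt.Printf("port %d is not reserved; collisions are still possible\n", p)
			os.Exit(1)
		}
	}
	fmt.Println("entire NodePort range 30000-32767 is reserved")
}
```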

@cmedley2

I agree.

@justinsb
Member

You're (both) right - I'm working to get that PR merged. Sorry it took me so long to understand, and thanks for figuring it out & fixing!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
