
Try kube-proxy via ipvs instead of iptables or userspace #17470

Closed · thockin opened this issue Nov 19, 2015 · 45 comments

Labels: area/kube-proxy, sig/network (Categorizes an issue or PR as relevant to SIG Network.)

Comments

@thockin (Member) commented Nov 19, 2015

We should see if we can make ipvs do everything we need - it should perform even better than iptables. A benchmark is in order.

Notes:

root@kubernetes-minion-32zi:/home/thockin# ipvsadm -A -t 10.9.8.7:12345 -s rr
root@kubernetes-minion-32zi:/home/thockin# ipvsadm -a -t 10.9.8.7:12345 -m -r 10.244.1.27:9376
root@kubernetes-minion-32zi:/home/thockin# ipvsadm -a -t 10.9.8.7:12345 -m -r 10.244.1.28:9376

root@kubernetes-minion-32zi:/home/thockin# ip addr add 10.9.8.7/32 dev eth0

root@kubernetes-minion-32zi:/home/thockin# curl 10.9.8.7:12345
hostB

root@kubernetes-minion-32zi:/home/thockin# curl 10.9.8.7:12345
hostA

root@kubernetes-minion-32zi:/home/thockin# docker run -ti busybox wget -qO- 10.9.8.7:12345
hostB

root@kubernetes-minion-32zi:/home/thockin# docker run -ti busybox wget -qO- 10.9.8.7:12345
hostA

"masq" mode is DNAT not SNAT src ip is preserved.

We have to assign the VIP to some interface in the root NS. This is a bit ugly in that traffic to the VIP on ports NOT exposed by a service (e.g. 22) gets delivered to the host. I think we can fix that by adding another catchall rule for the VIP. I don't know if there are limits on the number of local IPs.
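One possible shape for such a catchall, sketched with the VIP and port from the notes above (untested and only an illustration; a real implementation would need to cover every exposed port):

# Reject traffic to the VIP on any TCP port not exposed by a service,
# so e.g. port 22 on the VIP does not fall through to the host's sshd.
iptables -A INPUT -d 10.9.8.7/32 -p tcp ! --dport 12345 -j REJECT --reject-with tcp-reset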

Not sure if there is an atomic batch update command, but it does handle batch invocation at least.

There are several scheduling policies, but rr (round-robin) seems sufficient, maybe lc (least-connection); sh (source hashing) seems to give us client affinity.

We can configure timeouts.
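For reference, a rough sketch of how the scheduler and timeout knobs above are driven with ipvsadm (the values here are arbitrary examples):

# Choose a scheduler per virtual service: rr (round-robin), lc (least-connection),
# or sh (source hashing) for client affinity; -E edits an existing virtual service.
ipvsadm -A -t 10.9.8.7:12345 -s rr
ipvsadm -E -t 10.9.8.7:12345 -s sh

# Inspect and tune the TCP / TCP-FIN / UDP timeouts (in seconds).
ipvsadm -L --timeout
ipvsadm --set 900 120 300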

We'll need to do something for node-ports, probably still iptables. I think this (and the other tricks we pull for load-balancers) will be the biggest challenge.

@BenTheElder busy? :)

@BenTheElder (Member) commented:

Pretty busy right now: last round of midterms and final projects, and soon I have finals. We're out for the holidays in about 3 weeks though (done for sure by December 12th).

I'll be sure to take a look if/when I can find the time though!

@thockin (Member, Author) commented Nov 19, 2015

I was kidding :)

@BenTheElder (Member) commented:

Ah, whizzed right over my head. :)

I very much enjoy working in OSS, though. If I don't get wrapped up in something else I may have to get back into k8s tinkering again.

I'll stop cluttering this issue for now though :)


@hw-qiaolei (Contributor) commented:

@thockin I like this idea. Using iptables for LB seems limited (in LB algorithms) and less graceful (thousands of iptables rules).

I noticed that Andrey Sibiryov from Uber also gave a session, "Kernel load-balancing for Docker containers using IPVS", at DockerCon 2015 EU. Please see the DockerCon 2015 EU agenda.

@thockin (Member, Author) commented Nov 23, 2015

Yeah, I think this is actually not a very hard project, but I'd want to see some graphs.

@aledbf (Member) commented Dec 4, 2015

Video: DockerCon 2015 EU, "Kernel load-balancing for Docker containers using IPVS" (https://www.youtube.com/watch?v=oFsJVV1btDU)

@thockin (Member, Author) commented Dec 4, 2015

Yeah, IPVS works. I tried it out a few months back, but I was missing a piece of the recipe.

@guybrush commented Dec 4, 2015

It would also be cool to have k8s Services utilize the ipvs features, like persistence and selecting the balancing strategy (and even weights?):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector: 
    project: my-service
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
      strategy: "rr"
      persistence: true

@guilhem commented Dec 8, 2015

Reminds me of something :) #3760 (comment)

@feiskyer (Member) commented:

Interesting.

@qoke commented Jan 12, 2016

While poking around other threads, I found this: moby/libnetwork#852, and this: https://github.com/kobolog/gorb, which may be of interest.

@davidopp (Member) commented:

@kubernetes/huawei

@smarterclayton (Contributor) commented:

Spoke with some folks internally; we think this is a good path of investigation (although expensive).

@thockin (Member, Author) commented May 22, 2016

Expensive in what regard?

@smarterclayton (Contributor) commented May 23, 2016 via email

@feiskyer (Member) commented:

Maybe we could make kube-proxy pluggable, so everyone can integrate their own implementation as needed.

@thockin (Member, Author) commented May 23, 2016

That's sort of the idea. We will build a few implementations into it, but people who want other types of proxies can do their own.

@fasaxc (Contributor) commented Jun 3, 2016

@thockin I gave this a spin with the goal of understanding how it might interact with iptables policy rules such as those used by Calico. I haven't pulled on every thread; at first it didn't look too promising :-(, but it now looks promising, see the follow-up below. After:

  • running with Calico networking on a pair of computes and one master node
  • creating a pair of nginx pods behind a service
  • stopping kube-proxy
  • removing kube-proxy's iptables rules
  • adding the service IP to a local dummy
ip link add type dummy
ip addr add 10.100.0.50 dev dummy0
  • manually configuring the service IP as an ipvs VIP
ipvsadm -A -t <service-ip> -p -s rr
ipvsadm -a -t <service-ip> -m -r <pod-ip>
ipvsadm -a -t <service-ip> -m -r <pod-ip>

(-p invokes 'persistent' mode, which allows for omitting the port.)

I found that:

  • ipvs uses its own connection tracking, which requires a flag to be set before it updates nf_conntrack (see the follow-up below). Without that, the standard way of accepting return packets in iptables (-m conntrack --ctstate ESTABLISHED) doesn't fire. Instead, they hit our -m conntrack --ctstate INVALID -j DROP rule.
  • It looks like ipvs provides an iptables extension that should do the trick but I couldn't get that to match packets; it may be bugged or I may have misused it somehow. It looks like -m ipvs --vdir REPLY should match the right packets but that doesn't seem to fire in any of the tables that I tried (I tried all the nat tables, all the mangle tables and all the filter tables).

I also found that the requirement to have the service IP on a local dummy interface on the host was a bit of a pain. Running a command such as curl <service IP> on the host chooses <service IP> for its source IP by default, which breaks return traffic if the request gets sent to another host. I'm sure that can be sorted with appropriate routing rules but it might be a bit fiddly.

If I remove Calico's -m conntrack --ctstate INVALID -j DROP rule then packets start flowing, so it does look like that's the main issue to solve.

@aledbf (Member) commented Jun 3, 2016

ipvs uses its own connection tracking, which seems to bypass nf_conntrack

you need to enable connection tracking using "sysctl -w net.ipv4.vs.conntrack=1"
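A minimal sketch of applying and persisting that setting (the sysctl.d path is an assumption about the host layout):

# Let IPVS populate nf_conntrack so -m conntrack --ctstate ESTABLISHED matches return traffic.
sysctl -w net.ipv4.vs.conntrack=1

# Optionally persist across reboots (assumes a standard sysctl.d setup).
echo 'net.ipv4.vs.conntrack = 1' | sudo tee /etc/sysctl.d/99-ipvs-conntrack.conf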

@fasaxc (Contributor) commented Jun 6, 2016

@aledbf Thanks for the tip, I'll give that a try.

@fasaxc (Contributor) commented Jun 6, 2016

I retested with the net.ipv4.vs.conntrack flag set, and then it seems to work with Calico's existing -m conntrack --ctstate ESTABLISHED rule. I tested:

  • connecting from a pod to one on the same host via a service IP
  • connecting from a pod to one on another host via a service IP
  • connecting from a host to service IP.

All seemed to work as expected and policy was being applied.

I did not try connecting from a pod to its own service IP. There still might be wrinkles there due to the need to SNAT those packets.

@fasaxc (Contributor) commented Jun 6, 2016

Update: while I had the rig set up, I checked the latter case and it also seems to work as expected. I manually inserted an iptables rule that masqueraded looped-back traffic.

@smarterclayton (Contributor) commented:

Having to create a dummy IP for each service would be unfortunate. Is there a way around that?

@resouer (Contributor) commented Jun 21, 2016

It seems Docker 1.12 will ship Services based on IPVS.

@hhzguo commented Nov 29, 2016

@fasaxc: Thanks a lot for the guide. I am a newbie with iptables; can you detail how to add masquerading for the looped-back traffic?

Thanks a lot,
Henry

@fasaxc (Contributor) commented Dec 8, 2016

@hhzguo I did a manual test that matched on the specific IP addresses I was expecting, nothing that was production ready, I'm afraid.
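Roughly, a rule of this shape would do what is described above (a sketch with placeholder addresses, not production ready):

# Masquerade "hairpin" traffic, i.e. a pod reaching a backend of its own service
# that happens to be itself, so the reply is forced back through the node
# instead of being short-circuited inside the pod.
iptables -t nat -A POSTROUTING -s <pod-ip>/32 -d <pod-ip>/32 -j MASQUERADE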

@starsdeep commented Dec 21, 2016

@fasaxc

ip link add type dummy
ip addr add <service-ip> dev dummy0
ipvsadm -A -t <service-ip> -p -s rr
ipvsadm -a -t <service-ip> -m -r <pod-ip>
ipvsadm -a -t <service-ip> -m -r <pod-ip>

This only works for requests from a container, but does not work for requests from the host, because ipvs does not support client and director on the same machine. Is there any workaround?

@starsdeep commented Dec 21, 2016

@thockin

root@kubernetes-minion-32zi:/home/thockin# ipvsadm -A -t 10.9.8.7:12345 -s rr
root@kubernetes-minion-32zi:/home/thockin# ipvsadm -a -t 10.9.8.7:12345 -m -r 10.244.1.27:9376
root@kubernetes-minion-32zi:/home/thockin# ipvsadm -a -t 10.9.8.7:12345 -m -r 10.244.1.28:9376
root@kubernetes-minion-32zi:/home/thockin# ip addr add 10.9.8.7/32 dev eth0
root@kubernetes-minion-32zi:/home/thockin# curl 10.9.8.7:12345

Does this really work? It seems that ipvs does not support client and director on the same machine.

@thockin (Member, Author) commented Dec 23, 2016 via email

@starsdeep commented Dec 30, 2016

@thockin @fasaxc

I manually tested this:

  • stop kube-proxy
  • remove ALL iptables rules
  • restart docker and flanneld
  • add ipvsadm rules
  • curl <service-ip>:<port> in a container works; however, curl <service-ip>:<port> on the host does not work

More information:

After clearing ALL iptables rules and restarting docker and flanneld, the iptables on my host are as follows:

vagrant@master:~$ sudo iptables-save
# Generated by iptables-save v1.4.21 on Fri Dec 30 07:28:19 2016
*raw
:PREROUTING ACCEPT [62761:27768513]
:OUTPUT ACCEPT [61564:28873910]
COMMIT
# Completed on Fri Dec 30 07:28:19 2016
# Generated by iptables-save v1.4.21 on Fri Dec 30 07:28:19 2016
*mangle
:PREROUTING ACCEPT [62772:27769085]
:INPUT ACCEPT [62441:27740645]
:FORWARD ACCEPT [322:27900]
:OUTPUT ACCEPT [61576:28875158]
:POSTROUTING ACCEPT [61886:28902338]
COMMIT
# Completed on Fri Dec 30 07:28:19 2016
# Generated by iptables-save v1.4.21 on Fri Dec 30 07:28:19 2016
*nat
:PREROUTING ACCEPT [294:20065]
:INPUT ACCEPT [194:11858]
:OUTPUT ACCEPT [816:49477]
:POSTROUTING ACCEPT [802:48637]
:DOCKER - [0:0]
:FLANNEL - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.64.0/20 -j FLANNEL
-A POSTROUTING ! -s 192.168.64.0/20 -d 192.168.64.0/20 -j MASQUERADE
-A FLANNEL -d 192.168.64.0/20 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
COMMIT
# Completed on Fri Dec 30 07:28:19 2016
# Generated by iptables-save v1.4.21 on Fri Dec 30 07:28:19 2016
*filter
:INPUT ACCEPT [53103:24760430]
:FORWARD ACCEPT [8:2122]
:OUTPUT ACCEPT [52333:25783147]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT
# Completed on Fri Dec 30 07:28:19 2016

test pods and svc:

vagrant@master:~$ kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2032906785-eu3of   1/1       Running   0          14m       192.168.70.2   kube-node-1
nginx-2032906785-gfrjr   1/1       Running   0          14m       192.168.67.2   kube-node-2

vagrant@master:~$ kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.254.0.1      <none>        443/TCP   37m
nginx        10.254.81.238   <none>        80/TCP    14m

ipvs rules:

vagrant@master:~$ sudo ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.81.238:80 rr
  -> 192.168.67.2:80              Masq    1      0          0
  -> 192.168.70.2:80              Masq    1      0          0

dummy interface

vagrant@master:~$ ip a | grep dummy0 -A 3
19: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 9a:75:f0:fb:6b:45 brd ff:ff:ff:ff:ff:ff
    inet 10.254.81.238/32 scope global dummy0
       valid_lft forever preferred_lft forever
20: dummy1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 36:bd:dd:56:2e:d0 brd ff:ff:ff:ff:ff:ff

curl <service-ip>:<port> in a container works:

vagrant@master:~$ sudo docker run -ti busybox wget -qO- 10.254.81.238:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

curl <service-ip>:<port> on the host does not work:

vagrant@master:~$ curl 10.254.81.238:80
curl: (7) Failed to connect to 10.254.81.238 port 80: Connection timed out

@kobolog commented Jan 4, 2017

IPVS definitely supports client and director on the same machine. I just tried re-running your example @starsdeep and it worked:

[root@aws ~]# ip link add eth1 type dummy
[root@aws ~]# ip addr add 10.8.8.8/32 dev eth1
[root@aws ~]# ipvsadm -A -t 10.8.8.8:10000 -s rr
[root@aws ~]# docker run -d nginx
b57c0a31491efa19f1820100ad123952f6d2ec6f60eaf923e0e220dcb5b69578
[root@aws ~]# ipvsadm -a -t 10.8.8.8:10000 -m -r $(docker inspect -f '{{ (index .NetworkSettings.Networks "bridge").IPAddress }}' b57c):80
[root@aws ~]# curl -sS --head 10.8.8.8:10000
HTTP/1.1 200 OK
Server: nginx/1.11.8
Date: Wed, 04 Jan 2017 17:16:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Dec 2016 14:23:08 GMT
Connection: keep-alive
ETag: "5862794c-264"
Accept-Ranges: bytes

Maybe there are some other third-party networking toolkits running on that box that might interfere with the experiment?

@matthiasr commented:
It seems there is more concrete work ongoing for this: #38969, #38817

Related mail thread: https://groups.google.com/forum/#!topic/kubernetes-sig-network/59HG9SlypBc

@guybrush commented Apr 3, 2017

There have been 2 talks at KubeCon Berlin regarding IPVS; the links were reposted by @ddysher below.

@ghost commented Apr 6, 2017

We have a tested implementation of an IPVS kube-proxy in #44063. Busy tying all of the related issues and PRs together now.

@ChenLingPeng (Contributor) commented:

@kobolog I found that your script only works when the director visits local containers. When trying from another director, no response comes back. Do you have any idea how I can reach a container on HostB when the request comes from HostA in IPVS mode?

@kobolog commented Apr 12, 2017

@ChenLingPeng Normally you'd want IPVS running either on your origin or your destination host in this kind of setup: by default IPVS NAT only does DNAT, so if you have an IPVS-in-the-middle then the response will not hit it on the way back, and from the origin's point of view it's going to be a martian packet.

There are a few ways to have IPVS-in-the-middle if you want it anyway. For example, you can use a variation of source-based routing where each backend has multiple IPs (one per IPVS) and then configure static routes on them so that traffic is routed back to the corresponding IPVS host. Example: you have two IPVS hosts, IPVS-A with 10.0.0.1 and IPVS-B with 10.0.0.2, and a number of backends. Each backend would have at least two IP addresses configured, one per IPVS host. Let's say BE-A has IPs 10.0.1.1 and 10.0.1.2. In this case, you add it to IPVS-A as 10.0.1.1 and to IPVS-B as 10.0.1.2. This allows you to configure static routes on BE-A that essentially instruct the networking stack to route all traffic coming in on 10.0.1.1 back to 10.0.0.1 and all traffic coming in on 10.0.1.2 back to 10.0.0.2, as sketched below.
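A sketch of what that could look like on BE-A, using the example addresses above (the routing table numbers are arbitrary):

# Replies sourced from 10.0.1.1 go back via IPVS-A, replies from 10.0.1.2 via IPVS-B.
ip rule add from 10.0.1.1 lookup 101
ip rule add from 10.0.1.2 lookup 102
ip route add default via 10.0.0.1 table 101
ip route add default via 10.0.0.2 table 102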

@murali-reddy commented Apr 22, 2017

I have been testing the IPVS-based service proxy in kube-router [1] for Kubernetes over the last couple of days. Here are my observations.

First, it seems hard to use direct routing. We would need to assign the VIP (cluster IP/node IP) to the pods, and the pods and nodes would have to be in the same L2 domain, since IPVS only rewrites the MAC and must send the packet directly to the pod. So for a viable solution we have to use ipvs masquerade mode. But that requires the reverse traffic from the pods to go back through the node, so that the source IP is replaced with whichever cluster IP/node IP was used. One solution is to do both SNAT (to replace the source IP with the node's IP) and DNAT, in which case we still need iptables rules to do the SNAT. But doing SNAT breaks network policy enforcement, because we lose the source IP. However, if you are using host-gw [2] or cloud-gateway-based [2] routing for pod-to-pod connectivity, then things just fall into place without needing SNAT. With node-port services, when the client is outside the cluster, the reverse traffic will not hit the IPVS node, so traffic to a node port needs both SNAT and DNAT (see the sketch below); the source IP itself has no significance for non-pod clients, so network policy is not an issue in that case. Of course this is not a problem unique to IPVS, but even for an IPVS proxier we will still need iptables rules to support the --cluster-cidr and --masquerade-all flags.

[1] https://github.com/cloudnativelabs/kube-router/blob/master/app/controllers/network_services_controller.go
[2] https://github.com/coreos/flannel/blob/master/Documentation/backends.md
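A hedged sketch of the node-port SNAT+DNAT combination described above (placeholders throughout; node port 30080 is just an example, and the xt_ipvs match is only one of several ways to select the traffic):

# IPVS DNATs the node port to a pod.
ipvsadm -A -t <node-ip>:30080 -s rr
ipvsadm -a -t <node-ip>:30080 -m -r <pod-ip>:80

# SNAT the same flows to the node IP so replies from the pod return via this node.
sysctl -w net.ipv4.vs.conntrack=1
iptables -t nat -A POSTROUTING -m ipvs --vproto tcp --vaddr <node-ip> --vport 30080 -j MASQUERADE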

@ddysher (Contributor) commented May 2, 2017

Continuing @starsdeep's experiment, but it works for me both in a container and on the host (only one host). Is there anything specific in your network environment, @starsdeep?

# prepare local kubernetes cluster
$ sudo ./hack/local-up-cluster.sh
$ sudo kill -9 $KUBE_PROXY_PID

# run two nginx pods
$ kubectl run --image nginx --replicas=2 nginx

# expose deployment
$ kubectl expose deployment nginx --port=80 --target-port=80

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   3m
nginx        10.0.0.185   <none>        80/TCP    4s

$ kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-348975970-7x18g   1/1       Running   0          49s       172.17.0.3   127.0.0.1
nginx-348975970-rtqrz   1/1       Running   0          49s       172.17.0.4   127.0.0.1

# Add dummy link
$ sudo ip link add type dummy
$ sudo ip addr add 10.0.0.185 dev dummy0

# Add ipvs rules; real server should use nat mode, since host is essentially
# the gateway.
$ sudo ipvsadm -A -t 10.0.0.185:80
$ sudo ipvsadm -a -t 10.0.0.185:80 -r 172.17.0.3:80 -m
$ sudo ipvsadm -a -t 10.0.0.185:80 -r 172.17.0.4:80 -m
$ sudo ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.185:80 wlc
  -> 172.17.0.3:80                Masq    1      0          1
  -> 172.17.0.4:80                Masq    1      0          1

# Works in container
$ docker run -ti busybox wget -qO- 10.0.0.185:80
<!DOCTYPE html>
// truncated

# Works in host
$ curl 10.0.0.185:80
<!DOCTYPE html>
// truncated

To use dr mode, I've created another dummy interface in the pod as well...

# continue above setup;
$ PID=$(docker inspect -f '{{.State.Pid}}' k8s_nginx_nginx-348975970-rtqrz_default_b1661284-2eeb-11e7-924d-8825937fa049_0)
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/$PID/ns/net /var/run/netns/$PID
$ sudo ip link add type dummy
$ sudo ip link set dummy1 netns $PID
$ sudo ip netns exec $PID ip addr add 10.0.0.185 dev dummy1
$ sudo ip netns exec $PID ip link set dummy1 up
# same for the other pod
$ sudo ipvsadm -D -t 10.0.0.185:80
$ sudo ipvsadm -A -t 10.0.0.185:80
$ sudo ipvsadm -a -t 10.0.0.185:80 -r 172.17.0.3:80 -g
$ sudo ipvsadm -a -t 10.0.0.185:80 -r 172.17.0.4:80 -g    
$ docker run -ti busybox wget -qO- 10.0.0.185:80
<!DOCTYPE html>
// truncated

// ignored setting arp_ignore/arp_announce (see the sketch below)

Just a quick and dirty experiment.
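For completeness, the arp_ignore/arp_announce settings skipped above would typically look something like this for DR mode (a sketch, applied inside each pod's netns where the VIP lives on dummy1, reusing $PID from the commands above):

# Keep the pod from answering or advertising ARP for the VIP on its dummy interface.
$ sudo ip netns exec $PID sysctl -w net.ipv4.conf.all.arp_ignore=1
$ sudo ip netns exec $PID sysctl -w net.ipv4.conf.all.arp_announce=2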

Also, the links from @guybrush above are outdated; reposting here:
http://youtu.be/4-pawkiazEg (huawei)
http://youtu.be/KJ-A8LYriGI (comcast)

@m1093782566 (Contributor) commented May 2, 2017

@ddysher

"but it works both in container and in host (only one host)"

I assume you need at least 2 hosts, and should try to visit a container (on the other host) from a host via the VIP. In my observation, the response won't come back.

@ddysher (Contributor) commented May 2, 2017

@m1093782566 Yeah, I'd think so, but I haven't had time to look at it yet.

I'm playing with a single host since @starsdeep only used vagrant@master.

@warmchang (Contributor) commented:

nice job!

k8s-github-robot pushed a commit that referenced this issue Aug 30, 2017
Automatic merge from submit-queue (batch tested with PRs 51377, 46580, 50998, 51466, 49749)

Implement IPVS-based in-cluster service load balancing

**What this PR does / why we need it**:

Implement IPVS-based in-cluster service load balancing. It can provide a performance enhancement and some other benefits to kube-proxy compared with the iptables and userspace modes. Besides, it also supports more sophisticated load-balancing algorithms than iptables (least connections, weighted, hash, and so on).

**Which issue this PR fixes**

#17470 #44063

**Special notes for your reviewer**:


* Since the PR is a bit large, I split it and moved the commits related to the ipvs util pkg to PR #48994. Hopefully that makes it easier to review.

@thockin @quinton-hoole @kevin-wangzefeng @deepak-vij @haibinxie @dhilipkumars @fisherxu 

**Release note**:

```release-note
Implement IPVS-based in-cluster service load balancing
```
@cmluciano commented:

/close

IPVS is now in alpha form
