A question about the registration of the consul cluster #6665
Comments
I think you would have to register "global" services on all of the server (master) nodes to have HA. But as I understand it, services should be registered with a client node, and they should register via localhost:8500. This prevents the client's IP address from being returned when that client node is down.
@spawluk Thanks for your answer. I tried registering the service with the client via localhost, but it still doesn't work: when the client is down, the service registered on it is still unavailable.
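For reference, "registering via localhost" usually means sending the service definition to the local agent's HTTP API (`PUT http://localhost:8500/v1/agent/service/register`). A minimal sketch of building such a payload; the service name `web`, port, and health endpoint are made up for illustration:

```python
import json

def registration_payload(name, port, health_path="/health"):
    """Build a payload for Consul's local-agent registration endpoint
    (PUT http://localhost:8500/v1/agent/service/register).
    Registering through the *local* agent ties the service's health
    to the node that actually runs it."""
    return {
        "Name": name,
        "Port": port,
        "Check": {
            # The agent that receives this registration is the one
            # that runs this HTTP health check.
            "HTTP": f"http://localhost:{port}{health_path}",
            "Interval": "10s",
        },
    }

payload = registration_payload("web", 8080)
print(json.dumps(payload, indent=2))
```

Because the registering agent owns the health check, the behavior described above is expected: if that agent's node dies, its registrations go with it.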
First, I have some questions.
Hi @PumpkinSeed, my problem matches that description, but the suggested solution did not resolve it.
I would need to replicate the situation to give you a clear answer. What happens if you try to register the service via a Consul agent? That should solve the issue by having the services not connect directly to the servers, but I'm not totally sure about it.
I'm not sure if I understood correctly what you are doing. I'll try to describe what I understood:
Do I understand correctly? If so, that is correct Consul behavior. The Consul node you register a service with becomes the source of truth for that service's existence. That node, for example, executes the health checks of the registered service and passes the results to the server (master) nodes. If you kill the source of truth, the service is down. The way we work around this is to run many servers with the same service, each server with its own Consul client instance connected to the server nodes. As I understand it, this is the intended Consul architecture.
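The layout described above, one client agent per service host, all joined to the same server nodes, might be sketched as a client-agent configuration like this; the node name and server addresses are hypothetical:

```json
{
  "server": false,
  "node_name": "app-host-1",
  "data_dir": "/opt/consul",
  "retry_join": ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
}
```

Each copy of the service registers with its own local agent, so losing one host removes only that one instance from the catalog while the other copies stay discoverable.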
@spawluk I understand, thanks, and I still have some questions. This kind of architecture requires us to start multiple instances of the same service and register them with different agents to keep the service available.
In other words, this architecture requires the service itself to be clustered. If the service is a single node, then after its consul-agent goes down, the single-node service registered on it will no longer be discoverable. Is it correct to say that the high availability of a Consul cluster depends on the services themselves being highly available?
First I have to say that I have never used k8s, so I may be wrong here. I think the Consul authors intended that every Kubernetes container running a service should have a Consul client installed alongside the service. Then, when you scale the service, you add a Consul client for each copy of it. With that approach it does not matter that Consul is down in one of your containers: it also disables that one single service instance, but there are other copies of it in the cluster, hence HA.
If the service is a single node and the consul-agent it is registered with goes down, does that mean the service will be unavailable?
I think you misunderstood service discovery. The Consul cluster's nodes reach consensus among themselves, which means that once you register a service, you should see it on all nodes of the cluster. Ideally you have 3 server nodes that are independent of each other but share state; you can follow the installation guide here. On the other hand, you should have Consul clients on each pod where you want service discovery. These clients should connect to the servers. If they don't connect, they won't do anything with requests, because the clients forward all requests directly to one of the server nodes.
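The three-server layout described above might look like the following server-agent configuration; this is a sketch with hypothetical addresses, not a production setup (`bootstrap_expect` tells the agents to wait until 3 servers have joined before electing a leader):

```json
{
  "server": true,
  "bootstrap_expect": 3,
  "node_name": "consul-server-1",
  "data_dir": "/opt/consul",
  "client_addr": "0.0.0.0",
  "retry_join": ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
}
```

With the servers sharing state via consensus, any client agent joined to them sees the same catalog regardless of which node a service was registered through.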
Howdy all, Issues on GitHub for Consul are intended to be related to bugs or feature requests, so we recommend using our other community resources instead of asking here.
If you feel this is a bug, please open a new issue with the appropriate information.
I built a Consul cluster: 3 servers, 1 client.
I registered all the services with one of the server nodes. Now those services are unavailable after that node went down.
How can this problem be solved?