Error: etcdserver: user name is empty when auth.rbac.enabled set to false #2433
@Diluka I could not reproduce this issue by following your steps. Could you verify that you are using the latest available chart? By the way, I noticed a few differences in the command needed to get this working; could you check whether you made a typo when copying it?
-helm install etcd bitnami/etcd -n compose --set auth.rbac.enabled=false
+helm install --name etcd bitnami/etcd --namespace compose --set auth.rbac.enabled=false
After that it worked without issues:
$ kubectl exec -it $POD_NAME -n compose -- etcdctl put /message Hello
OK
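For reference, the two command forms in the diff above differ only in how the release name is passed: Helm 2 used the --name flag, while Helm 3 takes the release name as the first positional argument. A minimal side-by-side, reusing the release and chart names from the diff:
# Helm 2: release name passed via --name
$ helm install --name etcd bitnami/etcd --namespace compose --set auth.rbac.enabled=false
# Helm 3: release name is positional
$ helm install etcd bitnami/etcd -n compose --set auth.rbac.enabled=false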
helm version is 3.
You're right! I have just updated Helm to 3 to avoid similar situations in the future. 😅 Anyhow, I followed your steps again and it worked without issues for us. Is this issue consistently reproducible for you, e.g. when deploying in a new, empty namespace without an existing PVC? Also, could you confirm that you did not apply any other changes to the standalone chart apart from disabling auth.rbac.enabled?
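A quick way to check for leftovers from a previous attempt before re-testing (a sketch assuming the compose namespace used earlier in the thread; adjust the namespace as needed):
# any PVC left behind by a previous etcd release would show up here
$ kubectl get pvc -n compose
# confirm no release named "etcd" is still installed in the namespace
$ helm list -n compose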
root@ecs-6189:~# kubectl create namespace abc
namespace/abc created
root@ecs-6189:~# helm install etcd bitnami/etcd -n abc --set auth.rbac.enable=false
NAME: etcd
LAST DEPLOYED: Wed Apr 29 00:38:43 2020
NAMESPACE: abc
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
etcd can be accessed via port 2379 on the following DNS name from within your cluster:
etcd.abc.svc.cluster.local
To set a key run the following command:
export POD_NAME=$(kubectl get pods --namespace abc -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- etcdctl put /message Hello
To get a key run the following command:
export POD_NAME=$(kubectl get pods --namespace abc -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- etcdctl get /message
To connect to your etcd server from outside the cluster execute the following commands:
kubectl port-forward --namespace abc svc/etcd 2379:2379 &
echo "etcd URL: http://127.0.0.1:2379"
* As rbac is enabled you should add the flag `--user root:$ETCD_ROOT_PASSWORD` to the etcdctl commands. Use the command below to export the password:
export ETCD_ROOT_PASSWORD=$(kubectl get secret --namespace abc etcd -o jsonpath="{.data.etcd-root-password}" | base64 --decode)
root@ecs-6189:~# export POD_NAME=$(kubectl get pods --namespace abc -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
root@ecs-6189:~# kubectl exec -it $POD_NAME -- etcdctl p
root@ecs-6189:~# kubectl exec -it $POD_NAME -n abc -- etcdctl put /message Hello
{"level":"warn","ts":"2020-04-28T16:40:12.795Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-3d051b8d-5064-4f9b-8726-7e6ea8dee2fa/127.0.0.1:2379","attempt":0,"error":"rpc error: code = InvalidArgument desc = etcdserver: user name is empty"}
Error: etcdserver: user name is empty
command terminated with exit code 1
I created a new namespace, and it still failed.
We were able to reproduce this issue in a Minikube environment. However, I'm unsure why this happens and need to continue investigating. Also, it looks like the NOTES are not taking the "auth.rbac.enable=false" setting into account.
We'll keep you posted.
Hi @Diluka, after some troubleshooting I found the issue. First of all, we should use `auth.rbac.enabled=false` (the command in your output above uses `auth.rbac.enable=false`, without the trailing "d"). Afterwards, it is possible that because of the misconfigured deployment the etcd data was persisted, so when you deployed the new statefulset it stopped working. To confirm that, you would see an entry like this in the pod logs:
To fix this issue, either deploy with a different name or in a different namespace, with the proper `auth.rbac.enabled=false` flag, or first remove the previous release and its PVC:
$ helm del -n abc etcd
$ kubectl delete pvc -n abc data-etcd-0
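Putting that together, a sketch of the full clean-up and redeploy sequence, assuming the release name etcd, namespace abc, and PVC data-etcd-0 shown above (note the correctly spelled auth.rbac.enabled flag):
# remove the misconfigured release and its persisted data
$ helm del -n abc etcd
$ kubectl delete pvc -n abc data-etcd-0
# reinstall with the correct flag
$ helm install etcd bitnami/etcd -n abc --set auth.rbac.enabled=false
# verify that writes work without authentication
$ export POD_NAME=$(kubectl get pods -n abc -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
$ kubectl exec -it $POD_NAME -n abc -- etcdctl put /message Hello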
This Issue has been automatically marked as "stale" because it has not had recent activity for 15 days. It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the 5 days since it was marked as "stale", we are proceeding to close this Issue. Do not hesitate to reopen it later if necessary.
I am seeing this issue again with the above command.
Can you please create a new issue specifying your concrete use case and filling in the issue template? With that, we should have all the info available.
@simonbowen It should be Anyway, I still have the same issue; I'll try fixing it or disabling the probes.
@nicovak If I remember correctly, I tried it both ways and neither worked.
@simonbowen I saw the ternary condition in the Helm statefulset template; you are right.
helm install etcd-dns bitnami/etcd --set auth.rbac.create=false,readinessProbe.enabled=false,livenessProbe.enabled=false,startupProbe.enabled=false --namespace kube-system
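One possible way to verify that deployment, following the etcdctl usage from the chart NOTES quoted earlier (the label selector assumes the release name etcd-dns and namespace kube-system from the command above):
$ export POD_NAME=$(kubectl get pods -n kube-system -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd-dns" -o jsonpath="{.items[0].metadata.name}")
# with RBAC disabled, no --user flag should be needed
$ kubectl exec -it $POD_NAME -n kube-system -- etcdctl put /message Hello
$ kubectl exec -it $POD_NAME -n kube-system -- etcdctl get /message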
Which chart:
bitnami/etcd:3.4.7
Describe the bug
Installing the chart with auth.rbac.enabled=false still fails: etcdctl commands against the deployed cluster return "Error: etcdserver: user name is empty".
To Reproduce
Steps to reproduce the behavior:
helm install etcd bitnami/etcd -n compose --set auth.rbac.enabled=false
export POD_NAME=$(kubectl get pods --namespace compose -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -n compose -- etcdctl put /message Hello
Expected behavior
OK
Version of Helm and Kubernetes:
helm version:
kubectl version:
Additional context
It works when using the default namespace.
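For comparison, the equivalent commands in the default namespace, assembled from the reproduction steps above (this is the variant reported to work):
$ helm install etcd bitnami/etcd --set auth.rbac.enabled=false
$ export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=etcd,app.kubernetes.io/instance=etcd" -o jsonpath="{.items[0].metadata.name}")
$ kubectl exec -it $POD_NAME -- etcdctl put /message Hello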