Closed
Description
The same chart errors under helm v3, while helm v2 is fine.
Error logs:
Every 1.0s: helm template php -f a.yaml | grep okok -A 8 Mon Aug 19 16:39:09 2019
Error: YAML parse error on php/templates/deploy.yaml: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
# ci envs
{{- $env := .Values.env | quote -}}
{{- $name := .Values.deploy.name | quote -}}
{{- $namespace := .Values.deploy.namespace | quote -}}
{{- $project := .Values.deploy.project | quote -}}
{{- $ciUser := .Values.deploy.publish_user | quote -}}
{{- $ciTime := .Values.deploy.publish_at | quote -}}
{{- $replicas := .Values.deploy.replicas | default 1 | quote -}}
{{- $image := .Values.deploy.image | quote -}}
# {{- $domain := .Values.domain | default "{{ $namespace }}-{{ $project }}-{{ $env }}.newops.haodai.net" | quote -}}
{{- $domain := .Values.domain | default "{{ $CI_DOMAIN_NAME }}" | quote -}}
{{- $nodePort := .Values.nodePort | quote -}}
---
# Deployment
# updated_at: {{ $ciUser }}
# username: {{ $ciTime }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ $name }}
  namespace: {{ $namespace }}
  annotations:
    namespace: {{ $namespace }}
    project: {{ $project }}
    publish_user: {{ $ciUser }}
    publish_at: {{ $ciTime }}
spec:
  replicas: {{ $replicas }}
  template:
    metadata:
      labels:
        name: {{ $name }}
    spec:
      restartPolicy: {{ .Values.restartPolicy | default "Always" | quote }}
      nodeSelector:
        env: {{ $env }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecrets | default "devuser-harborkey" | quote }}
      volumes:
      {{- include "php.nfs" . }}
      containers:
        - name: {{ $name }}
          image: {{ $image }}
          imagePullPolicy: {{ .Values.imagePullPolicy | default "Always" | quote }}
          securityContext:
            capabilities:
              add: ["SYS_PTRACE"]
          ports:
            - containerPort: {{ .Values.containerPort | default 80 }}
          # health probes
          {{- include "php.probe" . }}
          # resources
          resources:
          {{- include "php.resource" . }}
          # volume mounts
          volumeMounts:
          {{- include "php.mounts" . }}
          # environment variables
          env:
          {{- include "php.envs" . }}
          {{- include "php.mysql" . }}
          {{- include "php.codis" . }}
          {{- include "php.log" . }}
          # startup command and args
          command: {{ .Values.command }}
          args: {{ .Values.args }}
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: {{ $name }}
  labels:
    name: {{ $name }}
  namespace: {{ $namespace }}
spec:
  # okok
  {{- if eq .Values.env "online" }}
  type: NodePort
  ports:
    - port: {{ .Values.service.port | default 80 }}
      targetPort: {{ .Values.containerPort | default 80 }}
      # nodePort: {{ required "nodeport required" $nodePort }}
  selector:
    name: {{ $name }}
  {{- else }}
  ports:
    - port: {{ .Values.service.port | default 80 }}
      targetPort: {{ .Values.containerPort | default 80 }}
  selector:
    name: {{ $name }}
  {{- end }}
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $name }}
  namespace: {{ $namespace }}
  annotations:
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
  # - host: {{ if eq $.Values.env "pre" }}
  #   {{- $namespace }}-{{ $project }}-pre.newops.example.net
  #   {{ else if eq $env "test" }}
  #   {{- $namespace }}-{{ $project }}-test.newops.example.net
  #   {{ else }}
  #   {{- $domain }}
  #   {{- end }}
  - host: {{ if eq .Values.env "\"pre\"" }}
      {{- $namespace }}-{{ $project }}-pre.newops.example.net
      {{ else }}
      {{- $domain }}
      {{- end }}
    http:
      paths:
        - path: /
          backend:
            serviceName: {{ $name }}
            servicePort: {{ .Values.service.port | default 80 }}
---
# no quote, will cause 'did not find expected key'
# envinfo: {{ $env }}|{{ if eq $env "\"pre\"" }} env is known pre {{ else }} env: {{ $env }} {{ end }}
# envinfo2: {{ $env }}|{{ if eq $env "pre" }} env is known pre {{ else }} env: {{ $env }} {{ end }}
envinfo3: {{ $.Values.env }}|{{ if eq $.Values.env "pre" }} env is known pre {{ else }} env: {{ $env }} {{ end }}
# comment is ok, normal yaml is not
# envinfo: {{ $env }}|{{ if eq $env "\"pre\"" }} env is known pre {{ else }} env: {{ $env }} {{ end }}
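What the envinfo lines above probe is a quoting subtlety: $env was assigned with .Values.env | quote, so its value carries literal double quotes, and comparisons behave accordingly. A sketch of the comparison semantics, assuming env: pre in the values:

{{ $env := .Values.env | quote }}
{{ eq $env "pre" }}           # false: $env is the 5-character string "pre", quotes included
{{ eq $env "\"pre\"" }}       # true
{{ eq $.Values.env "pre" }}   # true: the raw value was never quoted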
The values file (a.yaml):
# env: pre
env: "online"
deploy:
  name: test
  # namespace: nsdefault
nodePort: 80
domain:
helm v2 is OK:
Every 1.0s: helmv2 template php -f a.yaml | grep okok -A 8    Mon Aug 19 16:42:45 2019

  # okok
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      # nodePort: "80"
  selector:
    name: "test"
Output of helm version:
[wen@234 k8snew helm]$ helm version
version.BuildInfo{Version:"v3.0.0-alpha.2", GitCommit:"97e7461e41455e58d89b4d7d192fed5352001d44", GitTreeState:"clean", GoVersion:"go1.12.7"}
[wen@234 k8snew helm]$ helmv2 version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
Output of kubectl version:
[wen@234 k8snew helm]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[wen@234 k8snew helm]$
Cloud Provider/Platform (AKS, GKE, Minikube etc.):
non-cloud
derkoe commented on Sep 4, 2019
I get the same error message with our SonarQube chart for OpenShift. The same install works with Helm v2.
The error message points to line 16, which seems quite strange (but this might reference the rendered version).
I tried some variants; it works after removing the dash ("-") after the else on line 18. So the variant without the trailing dash works, and the one with it doesn't.
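A minimal sketch of that failure mode (hypothetical template and value, not the SonarQube chart itself): an extra trim marker can glue two keys onto one rendered line, which produces exactly this parse error.

# templates/demo.yaml -- note the trailing "-}}" on the if
metadata:
  name: test
  {{- if .Values.prod -}}
  env: prod
  {{- end }}

With prod set, this renders as:

metadata:
  name: testenv: prod

Line 2 now contains two "key: value" separators, so the parser reports "yaml: line 2: mapping values are not allowed in this context". Dropping the trailing dash ({{- if .Values.prod }}) keeps the newline and renders valid YAML.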
Here's the fix for our chart: https://github.com/porscheinformatik/helm-charts/pull/9
Maybe this helps to fix it.
bacongobbler commented on Oct 30, 2019
I can't seem to reproduce this error with 3.0.0-rc.1. Please feel free to comment here if you have a case that can reproduce this issue so we can look into it further. Thanks!
chinglinwen commented on Oct 31, 2019
Hi @bacongobbler, I've extracted the related files to help reproduce the issue:
helmv3err.tar.gz
bacongobbler commented on Oct 31, 2019
Thanks. Re-opening to investigate further.
jjangga0214 commented on Dec 29, 2019
@bacongobbler
This is a very simple example I ran into:
$ helm upgrade --install --set-file foo=path/to/foo bar mylocalchart --dry-run
Error: YAML parse error on mylocalchart/templates/secrets.yaml: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context
System info: Ubuntu 19.04, 64-bit.
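A plausible reconstruction of that failure (the template lines below are assumed, not mylocalchart's actual secrets.yaml): --set-file reads the whole file into the value, so if path/to/foo contains a colon or multiple lines, an unquoted reference renders invalid YAML.

# breaks when foo contains ":" or newlines
password: {{ .Values.foo }}

# parseable: quote escapes the content
password: {{ .Values.foo | quote }}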
Grannath commented on Jan 8, 2020
I also have an issue with this. When I call helm upgrade --install in our CI pipeline, it fails with such an error. When I template locally, the result looks good, and a server-side dry run accepts it with no issues. I'll investigate further and try to find out what is going on, potentially with a simplified example. But at least so far I don't see anything close to the given line number that could be wrong.
EDIT: I found the problem, it seems. I was trying to select between a startup and a readiness probe based on the kube version, but the semverCompare always evaluated to true, regardless of how I did the comparison. So: a) a potentially broken semverCompare function, and b) an absolutely terrible error message.
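A sketch of the kind of version gate meant here (assumed, not Grannath's actual template; probe path and port are made up):

{{- if semverCompare ">=1.16-0" .Capabilities.KubeVersion.Version }}
          startupProbe:
            httpGet: {path: /healthz, port: 80}
{{- else }}
          readinessProbe:
            httpGet: {path: /healthz, port: 80}
{{- end }}

Note that an offline helm template run fills .Capabilities.KubeVersion with a built-in default rather than the cluster's real version, so the comparison can evaluate differently locally than in CI. The "-0" suffix is also needed for semverCompare to match pre-release versions such as GKE's v1.16.9-gke.2.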
matharoo commented on Feb 3, 2020
I was also getting a similar error:
....../deployment.yaml unable to parse YAML: error converting YAML to JSON: yaml: line 7: mapping values are not allowed in this context
The error was reporting line number 7, which would have meant the line kind: Deployment. However, since the conditional statements had already been rendered away, the line numbering effectively started from kind: Deployment, and the real line 7 was:
app.environment: {{ .Values.global.runtimeEnvironment | default ("production") }}
So my fix was to change it to:
app.environment: {{ default "production" $parent.environment }}
After this I ran helm lint and no errors were found 🙂
WarheadsSE commented on Apr 15, 2020
If I may offer an observation: often this is not a bug in Helm, but rather the rendered template content being invalid according to the K8s specs.
Unfortunately, the message is not clear, and depending on the complexity of the template being rendered, it could point tens of lines away.
Case in point, my own: while attempting to add a helper template in a library chart, I inadvertently had "one too many" end-of-line trims (-}}). As a result, the output looked a little "off".
technosophos commented on May 1, 2020
As far as I can tell, all of the errors presented here are because the rendered YAML content was invalid.
In most cases, it's because Kubernetes' YAML parser is more strict, and does not allow an unquoted value that itself contains a colon. You need to quote such values, which in your templates means piping them through quote. This is almost always the right pattern for you to use with string values. (It will protect you against cases where your values look like numbers or bools as well.)
The other big cause of this problem is chomping whitespace by using {{- instead of {{.
Unfortunately the YAML parser reports its own line numbers (i.e. the lines of the rendered YAML), not the template's line numbers, because it has no awareness of what the template looked like. There is no way for us to reconstruct the original template line that caused the problem; that information is not ascertainable, since we don't know which original line actually caused the YAML formatting error.
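For instance, a minimal sketch of that pattern (the value name foo is assumed): with a value containing a colon, the unquoted form renders invalid YAML while the quoted form stays parseable.

# values.yaml
foo: "bar: baz"

# breaks -- renders as "data: bar: baz"
data: {{ .Values.foo }}

# works -- renders as "data: \"bar: baz\""
data: {{ .Values.foo | quote }}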
However, there is a fairly easy way to find the cause with helm template: use the --debug flag. Note that with --debug on, it prints the rendered YAML even when it fails to parse, and you can easily see the bad data: bar: field on line 5 of the rendered YAML.
I am re-marking this as closed, as I have not found any case of this issue that is a bug in Helm. The --debug flag should give you the information you need to find and fix YAML formatting errors.
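For example, with the chart from this thread:

$ helm template php -f a.yaml --debug

prints the fully rendered manifests even when YAML parsing fails, so the reported line number can be matched against the rendered output directly.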