
High CPU usage with low number of containers #1774

Open
@matt-jordan

Description


Environment

Version: v0.24.1 (note: we are using this version due to the Prometheus label issues in later versions)
OS: Ubuntu 16.04, 4.4.0-31-generic, x86_64
Docker: Docker version 1.12.6, build 78d1802
Containers: 4 (including cadvisor)
Cores: 2

Docker

Dockerfile:

FROM google/cadvisor:v0.24.1

ENTRYPOINT ["/usr/bin/cadvisor", "-logtostderr", "-profiling=true"]

Docker command:

docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8085:8080 \
  --detach=true \
  --name=test-cadvisor \
  docker-registry:5000/test/test-cadvisor:latest

Problem

cAdvisor CPU usage is extremely high, to the point that it impacts the performance of the other containers. Typical CPU usage is between 20% and 110%, usually sitting at around 60-80%.

Output of the following profiling is attached:

go tool pprof -png -output=out.png http://localhost:8085/debug/pprof/profile

Note that I ran the profiling twice, just to compare results. I won't presume to interpret them, other than to say that they look broadly similar, with a lot of time spent in syscall.Syscall / syscall.Syscall6 as well as in memory allocation and what I presume is garbage collection.
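
In case it helps with reproducing, the same /debug/pprof endpoints (standard net/http/pprof paths, enabled here by -profiling=true) also expose a heap profile, which can help separate the allocation/GC cost from the syscall time; a minimal sketch:

# CPU profile over the default 30s window, rendered as a call graph
go tool pprof -png -output=cpu.png "http://localhost:8085/debug/pprof/profile?seconds=30"
# Heap profile, to see where the allocations noted above come from
go tool pprof -png -output=heap.png http://localhost:8085/debug/pprof/heap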

At first I thought it might be the same issue as #735 or kubernetes/kubernetes#23255 , but I haven't seen any invocations of du show up in the output of ps.

It's also interesting to note that we don't see the same CPU hit on all of our nodes. To date, we see it mostly on instances running containers that use net=host and that spawn a significant number of processes/threads within their containers. These containers wrap some legacy monolith applications that deviate from the "usual" operational model of web apps.
[attached profiling graphs: out, out2]

Activity

dashpole (Collaborator) commented on Nov 30, 2017

A number of performance tweaks have been made since then. From a quick scan, it looks like a large amount of that time is spent handling external requests. Might be worth measuring the number of requests cadvisor is serving, and seeing what its resource usage is when it is not handling any requests.
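
For example, a rough way to take that measurement (the container name and published port here are the ones from the report above): stop or block the external scrapers, then compare cAdvisor's own usage and check who is still connecting:

# cAdvisor's own CPU/memory while no requests are being served
docker stats --no-stream test-cadvisor
# established connections to the published port (8085 in the report above)
ss -tn state established '( sport = :8085 )'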

ZOXEXIVO commented on Mar 17, 2018

Same issue: 8-core x2 CPU with 9 running containers; cAdvisor takes ~13% of the processor resources.

schabrolles commented on Apr 14, 2018

Same here: 1-core VM with ~10 containers (running but doing nothing). cAdvisor takes 18% CPU.

elsbrock commented on May 1, 2018

Same here. Lots of iowait, and cadvisor's used memory (per top) was > 2G. Running cAdvisor version v0.27.4 (492322b). Turned out there were too many open files.
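
For anyone hitting the same thing, a quick check of the descriptor count against the limit looks roughly like this (assuming the cadvisor process is visible from the host):

CADVISOR_PID=$(pidof cadvisor)
# file descriptors currently held by cadvisor
ls /proc/"$CADVISOR_PID"/fd | wc -l
# the per-process limit it is running against
grep 'Max open files' /proc/"$CADVISOR_PID"/limits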

bmerry commented on Jul 2, 2018

I'm seeing something similar (CPU usage consistently 30-70%). Profile below:
[attached pprof graph: cadvisor]

This profile is from a machine running only 12 containers, and we've seen it on machines with even fewer, but there doesn't seem to be a clear pattern. Restarting cadvisor doesn't resolve the problem, but rebooting the machine does. We have noticed that the affected machines have been gradually losing available memory over time, which /proc/meminfo shows is going into Slab memory, and slabtop shows that 80%+ of the slab memory is dentries. We're not sure which direction the causality runs, i.e. whether cadvisor is hammering the filesystem and causing lots of dentries to be cached, or whether something else on the machine is causing memory issues and that is making cadvisor CPU-heavy. The machine from which this profile was taken is also not showing the same memory leak, so it may be a red herring.
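
For reference, these are the kinds of checks we used to watch the slab growth (slabtop's sort flag may differ between versions):

# total slab memory, and how much of it the kernel considers reclaimable
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo
# largest slab caches by size; dentry sitting at the top is what we see here
slabtop -o -s c | head -n 15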

We're running cadvisor 0.29.0 with /usr/local/bin/cadvisor --port 5003 --logtostderr=true. Some Docker info from the machine from which the profile was taken:

Containers: 71
 Running: 12
 Paused: 0
 Stopped: 59
Images: 80
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.13.0-41-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 125.9GiB
Name: lab1
ID: GRBO:TBRS:Z5BJ:4RVS:HBEA:V24X:Q33J:66JI:7D4F:XX2O:ZD2M:WPYS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

dashpole (Collaborator) commented on Jul 2, 2018

@bmerry the graph you show looks about like what I would expect. Assuming your containers don't have anything in their r/w layer, most CPU usage generally comes from reading cgroup files, of which there are many for the memory cgroup.

Not sure about the memory leak. I'm not super familiar with the cgroup implementation, but I know they aren't real files, and are "stored" at least partially in the dentry cache. Accessing cgroup files repeatedly should make the dentry cache grow, but it should be reclaimed when free memory gets low.
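
As a rough illustration of how many files that is, something like the following gives the counts (paths assume cgroup v1 mounted under /sys/fs/cgroup):

# control files exposed by a single memory cgroup
ls /sys/fs/cgroup/memory/ | grep -c '^memory\.'
# memory control files across the whole hierarchy, each of which is a separate read into the kernel
find /sys/fs/cgroup/memory -name 'memory.*' | wc -l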

bmerry commented on Jul 2, 2018

@dashpole thanks for taking a look. I've noticed that the pprof report claims ~4% CPU usage, which doesn't match what top reports (40%+). I confirmed this by taking another profile: the graph header reports ~4% while top showed 30%+ over the entire 30s. So there is something odd in the way the profiler is working - perhaps it only profiles user time? htop shows the CPU usage is mostly in the kernel.

Running perf top, the top hits are

  31.04%  [kernel]                                      [k] memcg_stat_show
  18.61%  [kernel]                                      [k] memcg_sum_events.isra.22
   9.41%  [kernel]                                      [k] mem_cgroup_iter
   6.94%  [kernel]                                      [k] css_next_descendant_pre
   6.11%  [kernel]                                      [k] _find_next_bit
   3.96%  [kernel]                                      [k] mem_cgroup_usage.part.43
   1.75%  [kernel]                                      [k] find_next_bit
   1.38%  [kernel]                                      [k] mem_cgroup_node_nr_lru_pages
   1.22%  [kernel]                                      [k] nmi

which does seem to confirm your idea that it's related to cgroups.

The machine with the graph (using ~40% CPU) has 14669 files in /sys/fs/cgroup (counted with find . | wc), while another machine using ~1% CPU has 9935. So the file count alone doesn't seem to explain the order-of-magnitude differences in CPU usage.

FWIW, I've just tried with cadvisor 0.30.2 and --disable_metrics=tcp,udp,disk,network - no improvement. Is there anything else I should try turning off?

Accessing cgroup files repeatedly should make the dentry cache grow, but it should be reclaimed when free memory gets low.

I'm hoping so - for now it's been "leaking" slowly enough that there hasn't been any memory pressure. On one machine the memory was in SUnreclaim rather than SReclaimable, but when I dropped the dentry cache manually the memory was returned.

bmerry commented on Jul 2, 2018

It seems like it's definitely some slow path in the kernel. Simply running time cat /sys/fs/cgroup/memory/memory.stat takes 0.376s on the affected machine, and 0.002s on an unaffected machine. The affected machine has about 50% more cgroups, but that doesn't explain a >100x slowdown.
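
(For comparison, timing the docker subtree separately shows whether the slowness is specific to the root file; the second path below assumes the cgroupfs driver and may differ on other setups:)

time cat /sys/fs/cgroup/memory/memory.stat > /dev/null
time cat /sys/fs/cgroup/memory/docker/memory.stat > /dev/null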

That sounds more like a kernel issue than cadvisor's problem, and if I get time I may try to take it up on the LKML, but if you have any suggestions on fixing or diagnosing it, I'll be happy to hear them.

This is probably a separate issue from the original report, where pprof showed high CPU usage. @ZOXEXIVO @schabrolles are you seeing the same behaviour I am?

bmerry commented on Jul 19, 2018

After some discussions on the linux-mm mailing list and some tests, it sounds like the problem may be "zombie" cgroups: cgroups that have no processes and have been deleted but still have memory charged to them (in my case, from the dentry cache, but it could also be from page cache or tmpfs). These are still iterated over when computing the top-level memory stats. We had a service that was repeatedly failing and being restarted (by systemd), which probably churned through a lot of cgroups over a few weeks.
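
A rough way to estimate how many zombie cgroups are hanging around is to compare what the kernel reports against what is visible in the filesystem; on the kernels involved here the num_cgroups column in /proc/cgroups appears to still include removed-but-not-yet-freed cgroups, so a large gap is a hint:

# cgroups the memory controller is tracking (num_cgroups column)
grep '^memory' /proc/cgroups
# memory cgroup directories actually visible
find /sys/fs/cgroup/memory -type d | wc -l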

I still need to experiment with ways to fix the underlying problem (including checking whether it is better in newer kernels), but I'd like to find out if there is a way to work around it. In particular, we tend to use cadvisor only to get per-Docker-container metrics into Prometheus, and not so much for aggregate or system metrics (we have node-exporter for that). So if there is a way we can turn off collection of /sys/fs/cgroup/memory/memory.stat and /sys/fs/cgroup/memory/system.slice/memory.stat while still collecting memory stats on individual Docker containers, that would probably help.

dashpole (Collaborator) commented on Jul 19, 2018

We don't have the option to disable collection of the root cgroup. We have an option --docker_only, but that keeps the root cgroup around.

bmerry commented on Jul 19, 2018

We don't have the option to disable collection of the root cgroup. We have an option --docker_only, but that keeps the root cgroup around.

Do you think that would be reasonably easy for someone not familiar with the code to implement as a command-line option, or is it pretty core?

dashpole (Collaborator) commented on Jul 19, 2018

I would rather not add a flag for that, but you can just remove the registration of the raw factory here: https://github.com/google/cadvisor/blob/master/manager/manager.go#L335, which would turn off collection for all cgroups that are not containers. I am planning to introduce a command-line flag in the future to control which factories are used, which should allow this behavior without rebuilding cAdvisor.

bmerry commented on Jul 26, 2018

That didn't work for me:

F0726 15:36:56.808759    7808 cadvisor.go:168] Failed to start container manager: no known factory can handle creation of container

I also tried taking out the / case in this line and running with --docker_only: it runs, but from strace I can see that it is still opening /sys/fs/cgroup/memory/memory.stat (as well as a bunch of cgroups under /sys/fs/cgroup/memory/system.slice, despite the --docker_only).
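
For reference, this is roughly how I watched which files it opens (with cadvisor running as a plain host process):

strace -f -e trace=open,openat -p "$(pidof cadvisor)" 2>&1 | grep -E 'memory\.stat|system\.slice'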

[20 more comments not shown]

dashpole (Collaborator) commented on Jan 30, 2020

Try using --disable_metrics=sched,percpu,diskIO,disk,network,tcp,advtcp,udp,process, which covers all of the optional metrics.

tmpjg commented on Jan 31, 2020

@dashpole

Same result with --disable_metrics=sched,percpu,diskIO,disk,network,tcp,advtcp,udp,process:

[attached Grafana screenshot]

dashpole (Collaborator) commented on Jan 31, 2020

@tmpjg how many cores is your node?

tmpjg commented on Jan 31, 2020

@dashpole 4 cores.

processor       : 0
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 1
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 2
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 3
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

Hardware        : BCM2835
Revision        : a020d3
Serial          : 00000000e14271a3
Model           : Raspberry Pi 3 Model B Plus Rev 1.3

qingwave (Contributor) commented on Feb 11, 2020

There is a discussion about a cgroup bug, see "Showing /sys/fs/cgroup/memory/memory.stat very slow on some machines"; it may be useful for this issue.

futianshi1314 commented on Feb 29, 2020

CentOS kernel: 4.4.180-2.el7.elrepo.x86_64
Kubernetes v1.12.3
The kubelet shows high CPU usage; pprof shows cadvisor spending a long time in syscalls reading cgroup memory stats.

[attached pprof screenshot]

futianshi1314 commented on Feb 29, 2020

@theojulienne @bmerry
Hi, I have the same issue.

CentOS Linux release 7.6.1810 (Core)
kernel:4.4.180-2.el7.elrepo.x86_64
Kubernetes v1.12.3

However, echo 2 > /proc/sys/vm/drop_caches never completes, the CPU load stays high, and there are too many "kworker" kernel threads. Which kernel version can drop caches with echo 2? I will reinstall the system. Thanks.

[attached screenshot]

theojulienne commented on Feb 29, 2020

I would suggest upgrading to a newer kernel, or, if that's not an option, disabling the code that is polling this stat file.

There are some more details on this from the kernel side from the RHEL folks, with a nice summary of the patches and trace scripts: https://bugzilla.redhat.com/show_bug.cgi?id=1795049

The bpftrace script they used to check for this has also been published, which might be useful for folks wanting to determine whether this is the issue they are observing.

futianshi1314 commented on Mar 1, 2020

@theojulienne OK, I'll try it later and get back to you soon. Thanks.

fengpeiyuan commented on Aug 24, 2020

The kernel release version with the bugfix is listed at https://bugzilla.redhat.com/show_bug.cgi?id=1795049

I never did get all the way to the bottom of it. Some step in creating and destroying cgroups interacts in a non-deterministic way with the reads that cadvisor does to cause cached dentries that keep the cgroups alive as zombies. There is a patch (which I assume will go into the next Linux release) which makes the stats collection a lot faster and thus reduces the impact, but doesn't prevent the zombies in the first place.

I gave up trying to fix the problem, and now we have a cron job that times reading /sys/fs/cgroup/memory/memory.stat and if it takes too long, drops the dentry cache.
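
Roughly, the cron job is along these lines (the threshold is illustrative, and GNU date is assumed for the millisecond timing):

#!/bin/bash
# if reading the root memory.stat gets slow, flush reclaimable dentries and inodes
THRESHOLD_MS=200
start=$(date +%s%N)
cat /sys/fs/cgroup/memory/memory.stat > /dev/null
elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
if [ "$elapsed_ms" -gt "$THRESHOLD_MS" ]; then
    sync
    echo 2 > /proc/sys/vm/drop_caches
fi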

mikedanese (Contributor) commented on Oct 8, 2020

Performance of memory.stat was improved in https://spinics.net/lists/cgroups/msg21876.html

9Mad-Max5 commented on Jan 11, 2022

Hi,

I'm having similar issues, with close to 30% CPU usage running cadvisor on a QNAP NAS.
I think I have tried everything I could find to tackle this issue, but without any solid result.
It cost me a lot of nerves to get the out.png from my server, as the native support isn't that good.

[attached pprof graph: out2]

Could anybody tell me from this picture what is going wrong?

linfangrong commented on Mar 13, 2025
