Docker stats memory usage is misleading #10824
Comments
+1 I think it would make sense to make this change.
Yes, the reported memory is not right.
@spf13
I switched to the proficient label.
Try setting the --memory parameter of docker run, then check your …
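For example, after starting a container with a limit (docker run --memory=32m ...), the applied limit and current usage can be read back from the memory cgroup. A minimal sketch, assuming a cgroup v1 layout; the docker/<container_id> path is a placeholder that depends on the host's cgroup driver and Docker configuration:

```python
# Minimal sketch, assuming cgroup v1 and the default "docker/<container_id>"
# memory cgroup path; both depend on the host's cgroup driver and Docker
# configuration, so treat the path argument as a placeholder.
import sys

def read_int(path):
    with open(path) as f:
        return int(f.read())

cgroup = sys.argv[1]  # e.g. /sys/fs/cgroup/memory/docker/<container_id>
print("limit:", read_int(f"{cgroup}/memory.limit_in_bytes"))
print("usage:", read_int(f"{cgroup}/memory.usage_in_bytes"))
```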
#dibs ping @wonderflow, I think this is what you are trying to fix. Just send a PR.
Hi @resouer @wonderflow,
"Also recommend using standard unix postfixes here, so it 1.311G not 1.311 GiB" I'm not sure if I should change. @crosbymichael |
@wonderflow thanks for the fixes!
I can't find your comment on GitHub. Can you give me more details about it? Thanks.
Just want to point out that this has had me chasing phantom memory leaks for more hours than I care to admit ;) top says 18.4 MB, docker stats says 192.9 MB.

Update: Here's the part that worries me: if I set a memory limit of 32 MB on the container, it looks like my container will be killed even though resident memory never goes above 18.4 MB.
Is this resolved? I have the same problem as @pdericson with my apps running in Docker containers (on my local machine): top shows 15 MB in RES, while docker stats shows a slowly, linearly increasing memory usage that climbs to around 250 MB over 12 hours.
Arguably, cache space is also reclaimable, meaning that, under pressure, it can be reclaimed instead of triggering an OOM. Whether to subtract it or not would be a debate in itself. I chose to do it here to follow the lead of Docker. Even if this is not the best decision (and I have no opinion on that), doing the opposite would introduce confusion, and confusion rarely does good. See: moby/moby#10824
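For anyone who wants to reproduce the figure this approach yields, here is a minimal sketch (not Docker's actual implementation) that reads the cgroup v1 memory files and subtracts the cache counter reported in memory.stat from memory.usage_in_bytes. The docker/<container_id> path is an assumption and varies by distribution and cgroup driver.

```python
# Minimal sketch of the "usage minus cache" figure discussed in this thread
# (not Docker's actual implementation). Assumes cgroup v1; the
# /sys/fs/cgroup/memory/docker/<container_id> path is an assumption.
import sys

def usage_minus_cache(cgroup_dir):
    with open(f"{cgroup_dir}/memory.usage_in_bytes") as f:
        usage = int(f.read())
    cache = 0
    with open(f"{cgroup_dir}/memory.stat") as f:
        for line in f:
            key, value = line.split()
            if key == "cache":  # page cache charged to this cgroup
                cache = int(value)
                break
    return usage - cache  # reclaimable page cache excluded

if __name__ == "__main__":
    print(usage_minus_cache(sys.argv[1]))
```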
@theonlydoo If I understand correctly, the memory usage in … Hope this helps :)
@coolljt0725 so …
OOM reaping is handled by the kernel though.
@Sergeant007, your suggested container memory usage formula does not match the sum of process memory usage for me. When I compare memory used from cgroups (minus cache) to the sum of process RSS, I get a not-so-small difference:

Does ps not reflect the actual memory usage of processes in the container the way some other tools do? This is in a CentOS 7.3 container running on Amazon Linux 2016.09.
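For comparison, a rough way to get the "sum of process RSS" side of this measurement without relying on ps is to walk the cgroup's pid list and add up VmRSS from /proc. This is only a sketch, under the same cgroup v1 assumption as above, and the two numbers are not expected to match exactly: shared pages are counted once per process here, while the cgroup counter also includes tmpfs and kernel-side allocations that no process's RSS reflects.

```python
# Rough sketch, under the same cgroup v1 assumption as above: sum VmRSS for
# every process listed in the container's memory cgroup. The path argument is
# hypothetical, e.g. /sys/fs/cgroup/memory/docker/<container_id>.
import os
import sys

def summed_rss(cgroup_dir):
    with open(os.path.join(cgroup_dir, "cgroup.procs")) as procs:
        pids = procs.read().split()
    total_kb = 0
    for pid in pids:
        try:
            with open(f"/proc/{pid}/status") as status:
                for line in status:
                    if line.startswith("VmRSS:"):
                        total_kb += int(line.split()[1])  # value is in kB
        except FileNotFoundError:
            pass  # the process exited while we were iterating
    return total_kb * 1024  # bytes

if __name__ == "__main__":
    print(summed_rss(sys.argv[1]))
```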
@axelabs, I'm not sure if ps works 100% accurately, but even my method described above does not pretend to be fully correct. It just fixes the original behavior, which was really, really misleading. So in your particular measurement the difference was 32 MB out of 27 GB of consumed memory, which is about 0.1%. I'm not sure there is an easy way to achieve something better, and I'm not even sure which one (RSS_USED or MEM_USED) was correct.
If anyone is interested in why it could be misleading: I got headaches trying to understand those misleading stats until reading this article.
Shouldn't taking the value of "rss": 495616 be enough?
Hi @omasseau, it was so long ago (6 years) that I'm not even sure whether "just taking rss" was correct or not. As far as I remember, subtracting filesystem caches was the only reliable way. If you're working on this issue, just play with the "steps to reproduce" I posted on Apr 9, 2017 to reconfirm the numbers. I'm not actively working with Docker on my current project, so I can't investigate...
So I have 1 GB of RAM to play with, yet docker stats is reporting 1.3G used on a 2G system. It is reporting pages used for disk caching as well as truly used memory, which is "technically" used but will be freed by the OS once we run low on RAM.

Instead, stats should just report (used - buffers/cached), with the buffers/cached figure in brackets.

Otherwise people using this information may be misled as to how bad things really are ( http://www.linuxatemyram.com/ ).

Also recommend using standard unix postfixes here, so it's 1.311G, not 1.311 GiB.
cc @crosbymichael
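To make the proposal concrete, here is a small illustrative sketch (not the actual docker stats code) of the display format being asked for: used minus cache first, with the cache figure in brackets and short unit suffixes. The numbers in the example are made up.

```python
# Illustrative sketch only (not the actual docker stats code): the kind of
# display the issue proposes, i.e. "used minus cache" first, with the cache
# amount in brackets, using short unit suffixes.
def human(n_bytes):
    for suffix, factor in (("G", 1024 ** 3), ("M", 1024 ** 2), ("K", 1024)):
        if n_bytes >= factor:
            return f"{n_bytes / factor:.3f}{suffix}"
    return f"{n_bytes}B"

def format_mem(usage_bytes, cache_bytes):
    return f"{human(usage_bytes - cache_bytes)} ({human(cache_bytes)} cache)"

# Made-up numbers: ~1.4 GiB charged to the cgroup, ~900 MiB of it page cache.
print(format_mem(1408 * 1024 ** 2, 900 * 1024 ** 2))
```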