The /var/lib/docker/aufs/mnt folder takes up a lot of disk space, and that space is not reclaimed when removing images, containers, and volumes. Only upon a docker restart is the mnt folder cleared.
How come there's such a leak? Maybe there should be a way to clean the mnt folder?

Activity

catrixs commented on Apr 10, 2015

We notice this problem too. We use devicemapper, and this is our $DOCKER/devicemapper/mnt. We only run two containers:

```
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9b714d68fd registry.intra.weibo.com/weibo_rd_content/web_v4:WEB_V4_RELEASE_V2.3.23.37 "/docker_init.sh" 25 hours ago Up 25 hours v4
3258ca115ef3 registry.intra.weibo.com/weibo_rd_common/cadvisor:0.7.1 "/usr/bin/cadvisor - 9 weeks ago Up 9 weeks cadvisor
```

Docker info:

```
# docker info
Containers: 2
Images: 249
Storage Driver: devicemapper
 Pool Name: docker-8:6-3276803-pool
 Pool Blocksize: 65.54 kB
 Data file: /data0/docker1.3.2-fs/devicemapper/devicemapper/data
 Metadata file: /data0/docker1.3.2-fs/devicemapper/devicemapper/metadata
 Data Space Used: 4.89 GB
 Data Space Total: 107.4 GB
 Metadata Space Used: 12.36 MB
 Metadata Space Total: 2.147 GB
 Library Version: 1.02.89-RHEL6 (2014-09-01)
Execution Driver: native-0.2
Kernel Version: 2.6.32-431.11.2.el6.toa.2.x86_64
Operating System: <unknown>
```

Docker version:

```
[root@77-109-144-bx-core ~]# docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa/1.3.2
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa/1.3.2
```

xiaods commented on Apr 10, 2015

Have you tried a newer docker version?

thaJeztah commented on Apr 10, 2015

I'm trying to determine if this is related to #11113 and #10991, or at least this part:

> Right now we keep the container rootfs mounted (in /var/lib/docker/devmapper/mnt/) after container launch, and it is unmounted once the container has exited. But this creates problems of devices leaking into the mount namespace of other containers, and that in turn does not allow a container to exit or to be removed. We give up after 10 seconds, in the process leaving behind active devices or unreclaimable space in the thin pool.
thaJeztah commented on Apr 10, 2015

@corradio could you provide the output of uname -a, docker version, and docker -D info? Also (if you're not already doing so), could you try and test with the current release or release candidate of docker?
corradio commented on Apr 10, 2015

uname -a:

```
Linux infra1-par 3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
```

docker -D info:

```
Containers: 22
Images: 565
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 609
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 8
Total Memory: 31.26 GiB
Name: ******
ID: OG4X:E7QY:ISBZ:I5XY:FXJG:BJ6O:TCMD:QAM2:ZB5X:RAU3:AHWK:COWS
Debug mode (server): false
Debug mode (client): true
Fds: 132
Goroutines: 103
EventsListeners: 1
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
```
thaJeztah commented on Apr 10, 2015

Thanks, looks like you missed docker version though :)

corradio commented on Apr 10, 2015
Sorry, here's the output of docker version:

```
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
```
thaJeztah commented on Apr 10, 2015

Thanks, @corradio. I have labeled this as a bug, but there's a chance that this duplicates an existing issue; there are a number of issues related to Docker sometimes not properly unmounting container filesystems.
/ping @unclejack, perhaps you know if this is already tracked by an existing issue (and which one)?
stevenschlansker commented on Apr 20, 2015

We have similar problems with the btrfs driver:
poelzi commented on Jul 29, 2015

We use docker as a build system. Currently I have to delete /var/lib/docker once a week or so because of this image leakage.
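Rather than wiping /var/lib/docker wholesale, a periodic cleanup of exited containers and dangling images can keep a build host in check. A sketch, with assumptions: the `--filter status=exited` and `--filter dangling=true` flags are my guesses at the 1.7-era CLI and should be checked against the installed version; with `DRYRUN=1` the commands are only printed, not executed.

```shell
#!/bin/sh
# Sketch of a periodic build-host cleanup instead of deleting
# /var/lib/docker. The filter flags are assumptions about the 1.7-era
# CLI; verify them on your version. DRYRUN=1 prints instead of running.
cleanup() {
    run='sh -c'
    [ "${DRYRUN:-0}" = "1" ] && run='echo'
    $run 'docker rm $(docker ps -aq --filter status=exited)'
    $run 'docker rmi $(docker images -q --filter dangling=true)'
}

DRYRUN=1 cleanup   # show what a weekly cron job would run
```

Note this only helps when Docker itself can unmount the layers; the leaked-mount case discussed above still needs manual intervention.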
poelzi commented on Jul 29, 2015

```
root@gitlabcirunner2:/var/lib# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
```

This happens with aufs, and we had the problem with devicemapper as well.
cpuguy83 commented on Sep 15, 2015

Is this related to docker-in-docker? There are known issues with aufs over aufs, as well as with cleaning up subvolumes on btrfs.
Note that with devicemapper you cannot use a static docker binary, as it will cause corruption issues leading to exactly this.
Is this still a problem? Please make sure to include the output of docker info as well. Thanks!
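Since the advice above differs between aufs, btrfs, and devicemapper, a small helper to pull the driver out of `docker info` can be handy. A sketch: the parsing is based on the `docker info` text quoted earlier in this thread, and the demo below feeds it that sample rather than a live daemon.

```shell
#!/bin/sh
# Sketch: extract the storage driver from `docker info` output, since
# the failure modes discussed above differ per driver. The field format
# is taken from the docker info output quoted in this issue.
storage_driver() {
    awk -F': ' '/^Storage Driver/ {print $2}'
}

# On a real host:  docker info | storage_driver
# Demo using the output quoted earlier in this issue:
printf 'Containers: 2\nImages: 249\nStorage Driver: devicemapper\n' | storage_driver
# -> devicemapper
```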
thaJeztah commented on Oct 20, 2015

Is this still an issue with the current release?
icecrime commented on Sep 10, 2016

Cc @tonistiigi @mlaventure.
thaJeztah commented on Sep 14, 2016

Improved management is currently being worked on in #26108, and tracked through #22871.
thaJeztah commented on Sep 14, 2016

Given that the issues mentioned in #12265 (comment) are already tracked through separate issues, let me close this issue to avoid duplicating things.
kiwenlau commented on Jul 18, 2017

I'm facing the same problem as @corradio. /var/lib/docker/aufs/mnt/ takes up a lot of disk space.
I only run 8 containers, but there are 80 directories in /var/lib/docker/aufs/mnt/, so maybe this is the problem.
I removed stopped containers, untagged images, and unused volumes, but nothing helped. Do I have to restart docker to solve it? I have this problem in a production environment...
And I checked docker#12265 (comment); in fact, those issues don't seem to be exactly the same as our problem.
kiwenlau commented on Jul 18, 2017

I reduced the disk usage from 83% to 19% just by restarting docker. Why does this happen?
In addition, it is not a good idea to restart docker in a production environment. Is there a better solution?
mlaventure commented on Jul 19, 2017

@kiwenlau it's quite likely that, for some reason, when those containers shut down, docker couldn't remove the directory because the shm device was busy. This tends to happen often on the 3.13 kernel. You may want to update to the 4.4 kernel supported on trusty 14.04.5 LTS.
The reason the usage disappeared after a restart is that the daemon probably tried again and succeeded in cleaning up the leftover data from stopped containers.
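The leaked shm mounts described here can usually be spotted in the mount table. A sketch, with assumptions: the path layout follows this thread, and the sample data below is fake; on a real host you would read /proc/mounts instead.

```shell
#!/bin/sh
# Sketch: spot leftover per-container shm mounts in a mount table.
# Path layout is an assumption based on this thread; sample data is
# fake. Real usage would be:  leaked_shm < /proc/mounts
leaked_shm() {
    # field 2 of each mount-table line is the mount point
    awk '$2 ~ /\/docker\/containers\/.*\/shm$/ {print $2}'
}

printf '%s\n' \
  'proc /proc proc rw 0 0' \
  'shm /var/lib/docker/containers/deadbeef0001/shm tmpfs rw 0 0' \
  | leaked_shm
# -> /var/lib/docker/containers/deadbeef0001/shm
```

If such an entry belongs to a container that no longer exists, unmounting it by hand (possibly lazily, with `umount -l`) may free the directory without restarting the daemon.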
kiwenlau commented on Jul 19, 2017

@mlaventure Thx! I've checked the kernel version; it's 3.13.
I will probably update the kernel to solve the problem completely.
warmchang commented on Feb 3, 2018

👍
anshumansworld commented on Sep 10, 2020

This will delete all containers, volumes, and networks. Beware!!