Running out of loopback devices #19
Comments
I think I've figured out what I need to do. I need to unmount /var/lib/docker before destroying the container.
Confirmed: Unmounting /var/lib/docker on exit fixes the error. Question: is this a more general issue that just shows its head early with devicemapper? If you mount something inside the docker container but don't unmount it before you destroy the container, does that mountpoint live on somewhere on the host forever? Just wondering if the cgroups mounts also need cleaning up on exit.
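(A minimal sketch of that cleanup, assuming the container's entrypoint is a shell script and /var/lib/docker is the only mount that needs releasing:)

```bash
#!/bin/sh
# Unmount /var/lib/docker when the entrypoint exits, so the loop
# devices backing it can be released.
cleanup() {
    umount /var/lib/docker 2>/dev/null || true
}
trap cleanup EXIT

# ... start docker, run the workload, etc.
```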
I am seeing the same issue when using wrapdocker inside of a drone build container. The idea is that my build process outputs a docker container. After about ten builds I get:
But I am using an unmodified version of wrapdocker, so I'm unsure, @d11wtq, whether your solution is right for me. Again, only restarting my drone machine temporarily fixes the problem. The minimal .drone.yml file that reproduces this is:
Where build.sh is:
Any thoughts? Thanks.
I also hit a problem with the same symptoms. I can see loop devices leaking in the command output. I haven't been able to fix the leaking, but I was able to mitigate it with an addition to the containers. What I found is that sometimes the next available loop device would be a higher number than the existing device nodes. I was able to fix that particular case by invoking this scriptlet in the container before docker is started, which ensures two free loopback devices exist:
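(The scriptlet itself didn't survive above; a sketch of the idea it describes, assuming loop devices use block major number 7 and that `losetup -f` reports the next unused device:)

```bash
#!/bin/sh
# Ensure at least two free loopback device nodes exist before
# starting docker inside the container.
ensure_loop() {
    num="$1"
    dev="/dev/loop$num"
    # Nothing to do if the node already exists as a block device.
    [ -b "$dev" ] && return 0
    # Loop devices are block devices with major number 7;
    # the minor number matches the loop index.
    mknod "$dev" b 7 "$num"
}

# losetup -f prints the next unused loop device, e.g. /dev/loop5.
# That number can be higher than the highest existing /dev/loop* node.
NEXT=$(losetup -f)
NEXT=${NEXT#/dev/loop}

ensure_loop "$NEXT"
ensure_loop "$((NEXT + 1))"
```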
Maybe it'd be possible to add something like that ^ into wrapdocker, if it helps anyone.
The solution for me appears to be to simply run:
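(The command itself was lost above; given that wrapdocker records the daemon's pid in /var/run/docker.pid, it was presumably something along the lines of:)

```bash
kill "$(cat /var/run/docker.pid)"
```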
at the end of my build script to gracefully stop the docker daemon started by wrapdocker. The number of loopback devices in use then stops growing.
I did encounter a situation in which
In this case I use the following for graceful shutdown of docker -d:
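(The snippet was lost above; a sketch of a graceful shutdown, assuming the pidfile at /var/run/docker.pid:)

```bash
# Stop the daemon, then wait until it has actually exited so that
# its loop devices and mounts are released before the container dies.
pid=$(cat /var/run/docker.pid)
kill "$pid"
while kill -0 "$pid" 2>/dev/null; do
    sleep 1
done
```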
I'm using this to stop wrapdocker: it seems the pid used by `service docker` is not always the same as wrapdocker's :)
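(Again the snippet was lost; since the point is not to trust the pidfile, a sketch might match the daemon by its command line instead. Docker 1.x started the daemon as `docker -d`, which is the assumption here:)

```bash
# Find and stop the daemon started by wrapdocker by command line,
# rather than via the pidfile that `service docker` uses.
pkill -f 'docker -d' || true
```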
@rohanpm it serves me well, thank you
@rohanpm Works for me too, thank you! I quoted your solution here: http://paislee.io/how-to-build-and-deploy-docker-images-with-drone/
I'll close this issue since it appears to be solved. But feel free to re-open/comment if it's not the case! Thanks, |
@rohanpm I've used your script in moby/moby#9117 - if you can confirm that's OK, that would be great :)
@SvenDowideit, sure, no problem. (I saw the later discussion about maybe doing the same thing from golang code too.)
ya :) much +1 to getting something done!
I think this should be reopened and @rohanpm's contribution added to wrapdocker. Running dind on a CentOS 7 host results in the same error due to not enough loopback devices.
Oh, sure. I'll be happy to look at a PR for this. Thanks a lot!
I'm having the same issue running dind on CentOS 7; it can't seem to actually start the docker daemon within the container.
The build script itself can trap errors and do the loop-device cleanup from the handler:

```bash
#!/bin/bash

handle_error() {
    echo "FAILED: line $1, exit code $2"
    echo "Remove loop device here ...."
    exit 1
}

# On any failing command, call handle_error with the line number
# and exit code of the failure.
trap 'handle_error $LINENO $?' ERR
set -e

# start your build here ...
```

Just my two cents.
I'm reopening but I don't use CentOS so I don't know how to help. I hope someone has a better idea!
I am currently still running into "Running out of loopback devices" on Ubuntu 14.04 LTS with Docker 1.5 on the host machine. I have tried the following. Not sure where I am going wrong with this. Please help.
This happens on the standard Amazon Linux AMI on EC2 as well.
I am seeing this issue on CoreOS 668.2.0. The script by @rohanpm does not work in my instance. I get the following output:
NOTE #1: The dind mesos-slaves require a /var/lib/docker volume in order to use the aufs driver instead of the lvm-loop one. The latter leaks loop devices when used with docker-compose (compare jpetazzo/dind#19). The /var/lib/docker volumes are mounted from the host using /var/tmp/mesosslave1 and /var/tmp/mesosslave2; these can be deleted after a run.

NOTE #2: When using boot2docker on Mac, the /Users directory is mounted into the boot2docker VirtualBox machine using vboxsf. Those vboxsf mounts are not sufficient to be mounted into a dind container as /var/lib/docker; any "docker pull" on those volumes will then fail.
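(For illustration, a hedged sketch of how such a slave container might be started; the image name jpetazzo/dind stands in for whatever dind image is actually used:)

```bash
# Give the dind container its own /var/lib/docker from the host so
# the aufs driver is used instead of the loop-device-leaking lvm-loop.
docker run --privileged \
    -v /var/tmp/mesosslave1:/var/lib/docker \
    jpetazzo/dind
```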
Just to add, on CoreOS 717.3.0 this seems to work just fine with loopback devices starting with
Problem still happening on CoreOS.
+1 Debian, Docker 1.6.2
Same problem on CentOS 7.1
The first DinD container seems to start fine every time now.
I added a retry loop, and by the 3rd attempt both containers start and stay running.
For those of you using DinD for CI/testing, please have a look at this new blog post!
@jpetazzo thx for the post. After several weeks of pain and suffering I ended up exposing the socket. I just wanted to hear from someone else that it's not the worst solution.
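(The socket-exposing approach, sketched; my-ci-image is a placeholder:)

```bash
# Skip docker-in-docker entirely: bind-mount the host's Docker socket
# so the CI container talks to the host daemon directly.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-ci-image
```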
There is now an official docker:dind image upstream! I invite you to test it, since it is actively maintained. Thank you!
Firstly, thanks for this. While on the surface it's good fun, it's really useful for developing against docker itself (my use case).
I have changed the wrapdocker script slightly to use `-s devicemapper` instead of aufs, so that I don't need to use a volume for each container (and therefore, in theory, have more disposable containers). However, after I've started and stopped some number of containers (it feels like 10-15, but I haven't counted), docker refuses to start in any further containers, with the log output:
At this point, I can't figure out how to clean up whatever is using the loopback devices. Nothing shows up in `df -a` on the host machine, nor in the current (new) container, so I end up resorting to restarting the entire system. Do I need to add something else to wrapdocker if I'm using devicemapper? Or would this be considered a bug in docker itself?
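(For reference, the daemon invocation inside wrapdocker with the storage-driver switch described above would look roughly like this, using docker 1.x flag syntax; the log path is illustrative:)

```bash
docker -d -s devicemapper >/var/log/docker.log 2>&1 &
```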