Cannot start container: Getting the final child's pid from pipe caused "EOF" #40835
Comments
Same issue.

I have the same issue ...
@kolyshkin ptal
We are seeing this as well. Does anybody have any findings on the culprit, or a potential workaround? Right now the only recourse we have is to restart the entire docker service.
Same issue here. This issue tends to happen when there are a large number of containers running on the host (33 this time).
I'm also affected.
Seeing this on a fresh lab deploy, no other containers running, CentOS 7, with userns enabled. My Debian 10 environment isn't seeing any issues. Disabling userns made the issue go away.
Just found the following indicating that it's a configuration issue on my side:
I am facing a similar issue when trying to build a docker image:
The above solution doesn't seem to work.
I will try it ASAP and report back.
It may not matter, but I got the same error message when creating a deployment on GKE (ver. 15.12.2):

```yaml
resources:
  limits:
    cpu: 1100m
    memory: 3000m # correct: 3000M
  requests:
    cpu: 1100m
    memory: 3000m # correct: 3000M
```

JFYI.
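The unit suffix is the whole bug here: in Kubernetes resource quantities a lowercase `m` means milli-units, so `3000m` of memory is 3 bytes rather than 3 GB. A minimal sketch of the difference; the `to_bytes` helper is hypothetical, written only to illustrate the two suffixes:

```shell
# Hypothetical helper: convert a Kubernetes-style quantity to bytes.
# "m" is milli (1/1000) and "M" is megabytes (10^6), so the two values
# in the manifest above differ by nine orders of magnitude.
to_bytes() {
  case "$1" in
    *m) echo $(( ${1%m} / 1000 )) ;;     # milli-units, truncated
    *M) echo $(( ${1%M} * 1000000 )) ;;  # megabytes
    *)  echo "$1" ;;                     # plain bytes
  esac
}
to_bytes 3000m   # prints 3          -- effectively no memory at all
to_bytes 3000M   # prints 3000000000 -- the intended ~3 GB
```

With a 3-byte memory limit the container process is killed during bootstrap, which plausibly surfaces as the pipe EOF error rather than a clean OOM message.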
I ran into the same issue on a brand new Debian 10 and a new Debian 9 provisioned using docker-machine. There were 2 files in /etc/systemd/system/; I removed /etc/systemd/system/docker.service and restarted the service.
How to fix?
There are probably multiple causes of this error. I ran into it because k3s had also been installed on my CentOS 7.6 docker server. Uninstalling k3s (https://rancher.com/docs/k3s/latest/en/installation/uninstall/) appears to have fixed it.
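A hedged sketch of that cleanup, assuming a default k3s server install: the k3s docs linked above document an uninstall script at /usr/local/bin/k3s-uninstall.sh (agent installs get k3s-agent-uninstall.sh instead); restarting docker afterwards is my own assumption, not part of those docs:

```shell
# Run k3s's own uninstall script if present (path per the linked k3s docs),
# then restart docker so it starts from a clean state. Guarded so the
# snippet is a no-op on hosts that never had k3s installed.
K3S_UNINSTALL=/usr/local/bin/k3s-uninstall.sh
if [ -x "$K3S_UNINSTALL" ]; then
  "$K3S_UNINSTALL"
  systemctl restart docker
  status="k3s removed"
else
  status="k3s not installed"
fi
echo "$status"
```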
I solved it by removing
I fixed the issue in my case by increasing the resource limits of the pod. In a previous case it was the cores, in the most recent case it was the memory.
In my case (CentOS 7.8), it was
I read that they had removed the value after the =
Okay, I just found another interesting post on another forum; you can find more here: https://serverfault.com/questions/1017994/docker-compose-oci-runtime-create-failed-pthread-create-failed/1018402 It seems there is no bug...
In my case the OS is running on a dedicated server without virtualization.
Ran into the same issue with Debian on WSL1 while following this: https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly. docker-compose --version was complaining that I needed WSL2. I made sure to install docker-compose via pip3 and placed its path before the other bins.
I just restarted the docker service (systemctl restart docker.service) and it works for me.
Same as @deanmax. Commenting
Setting user.max_user_namespaces or restarting docker only seemed to fix the issue temporarily; it keeps coming back. Any update from someone on the moby team?
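For the userns-related reports in this thread, one thing worth checking is whether the kernel allows user namespaces at all: on some RHEL/CentOS 7 kernels user.max_user_namespaces defaults to 0, which makes the clone inside runc fail and can surface as exactly this EOF. A small read-only check sketch (the value 15000 mentioned below is an arbitrary assumption, not a recommendation):

```shell
# Read the current user-namespace limit from procfs; 0 means user
# namespaces are effectively disabled, which breaks dockerd's
# userns-remap mode. Falls back to "unknown" on kernels without the knob.
val=$(cat /proc/sys/user/max_user_namespaces 2>/dev/null || echo "unknown")
if [ "$val" = "0" ]; then
  echo "user namespaces disabled (user.max_user_namespaces=0)"
else
  echo "user.max_user_namespaces=$val"
fi
```

If it reports 0, something like `sysctl -w user.max_user_namespaces=15000` (persisted via a drop-in file under /etc/sysctl.d) is the commonly suggested fix, though as this comment notes it may only help temporarily.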
This error would typically be seen due to OOM.
@cpuguy83 My system is not running any containers and has 30 GB of free memory. The issue appears before a container is started (or maybe during the start). It occurs regularly, every ~30 days. Only a reboot helps.
@ceecko The error is during container bootstrap. While it is "before the container starts", it is part of the startup process. Because of the nature of the crash, it is really difficult to debug where it is coming from.
This error has nothing to do with OOM; we see it all the time without any OOM notifications or any memory threshold being hit.
@cpuguy83 once it occurs, it's easy to reproduce - happens in 100% of cases :)
In my case the swap size was not enough, and it can be quick-fixed like this.
For me, I only run the way 3 fix; you can put it in a cronjob to run it every few hours/days.
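A quick way to check whether a host is in the no-swap situation described in this thread; this is a read-only sketch that only inspects state, and any actual swap sizing would be site-specific:

```shell
# Report total swap in MiB from /proc/meminfo; hosts hitting this EOF
# under memory pressure sometimes turn out to have no swap at all.
swap_mib=$(awk '/^SwapTotal:/ {print int($2/1024)}' /proc/meminfo)
if [ "${swap_mib:-0}" -eq 0 ]; then
  echo "no swap configured"
else
  echo "swap total: ${swap_mib} MiB"
fi
```

Adding swap would then be the usual fallocate/mkswap/swapon sequence; sizes and file paths depend on the host, so I won't guess at them here.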
I have this problem occur occasionally on one of our Jenkins machines (a Jenkins agent, to use the current name for a slave), while I did not see it on the other(s). I have a sandbox job and printed some info. It looks like the kernel version and docker version differ between these servers. The sysctl values differ, I guess because of the (quite) different kernel versions. Also, the problematic server does not have any swap at all (not sure if that could have an effect, as it has plenty of RAM). This is where I did not see this docker problem happen (I did not check any logs etc., just as a simple Jenkins user): This is where it occasionally happens.
In my k8s environment, it seems the sandbox container couldn't find the user-created container's resources (pid, files, etc.), as the user-created container had finished too fast.
The error no longer occurred after I added a 'sleep 5' command in my container.
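For reference, a minimal sketch of that workaround in a pod spec; the image name and the `my-app` entrypoint are placeholders, and the 5-second delay is simply the value that happened to work for this commenter:

```yaml
# Hypothetical pod fragment: delay the real entrypoint so the sandbox
# container has time to observe the workload container's pid and files
# before the process can exit.
containers:
  - name: app
    image: example/my-app:latest   # placeholder image
    command: ["sh", "-c", "sleep 5 && exec /usr/local/bin/my-app"]
```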
I've had the same issue starting WebODM. Turns out there was an option called Commenting out those options for
I'm also getting this problem on build machines that remain up and running for a long time and spawn a lot of containers.
I get an error like:
every time I try to do a "docker build" or "docker run",
so my guess is that the actual bug is in the containerd layer.
I see you're running an older version of docker and a version of containerd (v1.4) that reached EOL; if you have a system to test on, perhaps you could check whether the problem still occurs on current versions of containerd (and docker).
Getting the same issue running hello-world on LXD on Container Station.
I can run hello-world directly, but docker commands don't work with Container Station and it has issues with opening ports, whereas LXD can open ports without any issues. Is there a way to get it to work on LXD?
I haven't done so myself, but there are some tutorials here:
Same issue here:
I'm using CentOS 7.9 and it was fixed by setting
You are a life saver! In my case the problem was related to fail2ban filling up the available IP rules. The last line of
Solution 1!!!
I made a change to a cloudflare container config file, not even adding lines, just modifying a pre-existing one. docker compose down as usual, and docker compose up -d throws the error. Trying to restart sites with 1 second of planned downtime has turned into 30 minutes and counting :( pid max is set very high (> 4 million). I only have 5 containers running, and was running this same cloudflare container perfectly fine with 3 times as many for the past several months. Restarting containerd did not help. This only seems to be an issue with this cloudflare container, and I tried rolling back to two previous releases that would have been latest during development. I can create new services, but am afraid to take down any existing services to find out if those will work. sleep 5 before and after the run command did nothing as well. I'm out of ideas.
Strato's new V-Server generation (current for a few months now) no longer uses Virtuozzo and luckily no longer seems to have this limitation.
Works for me with
Have you found any solution?
The same issue is seen in docker version 26.1.3.
Description
Intermittently containers cannot be started and docker returns the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"EOF\"": unknown.
Restarting the machine resolves the issue.
Steps to reproduce the issue:
Describe the results you received:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"EOF\"": unknown.
Describe the results you expected:
Container should start
Additional information you deem important (e.g. issue happens only occasionally):
The issue happens only occasionally.
It appears to be connected to #37722 and docker/for-linux#856
Feel free to close if there's nothing Docker can do about it.
Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.):
VM

`/var/log/messages` entries: