Can't start service #825

Open
budimanjojo opened this issue Oct 16, 2019 · 4 comments

Comments

@budimanjojo
  • [x] This is a bug report
  • [ ] This is a feature request
  • [x] I searched existing issues before opening this one

Starting additional containers with docker-compose up, after several containers are already running, fails with this error:

Cannot start service servicename: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown

Expected behavior

Service started normally

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2f86
 Built:             Tue Oct  8 00:59:59 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2f86
  Built:            Tue Oct  8 00:58:31 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker info:

Client:
 Debug Mode: false

Server:
 Containers: 11
  Running: 8
  Paused: 0
  Stopped: 3
 Images: 20
 Server Version: 19.03.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.15.0
 Operating System: Ubuntu 18.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3GiB
 Name: myhostname
 ID: number
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional environment details (AWS, VirtualBox, physical, etc.)

I'm running Ubuntu 18.04.3 with kernel 4.15.

@budimanjojo
Author

I've tried everything, and nothing works. I've been using this VPS for two years without any problems, and this suddenly started happening after I did a docker pull to update my services. Now I can have at most 8 containers running; I have to stop one container before I can start another. It's weird. Here's what I tried:

ERROR: for servicename  Cannot start service servicename: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 40\"": unknown
  • I also suspected running out of memory/CPU, because it only happens once about 8 containers are running. But this is what docker stats shows:
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
f07a69492cfd        servicename         0.01%               11.99MiB / 3GiB     0.39%               2.71kB / 0B         0B / 0B             9
c53d0578b648        servicename         0.00%               10.31MiB / 3GiB     0.34%               9.39kB / 5.95kB     0B / 0B             22
7279a769ad2b        servicename         0.43%               12.86MiB / 3GiB     0.42%               123kB / 146kB       0B / 0B             13
6bef8a4d2fcf        servicename         0.00%               14.92MiB / 3GiB     0.49%               4.04kB / 0B         0B / 0B             6
ed78eeaf1325        servicename         10.54%              33.27MiB / 3GiB     1.08%               2.53MB / 870kB      0B / 0B             8
567f4b34716b        servicename         0.00%               6.258MiB / 3GiB     0.20%               3.93MB / 113kB      0B / 0B             1
f53f66f613fc        servicename         0.39%               7.773MiB / 3GiB     0.25%               465kB / 433kB       0B / 0B             4
741dc20d9034        servicename         0.56%               145.8MiB / 3GiB     4.75%               391kB / 2.19MB      0B / 0B             32

And free -h shows:

              total        used        free      shared  buff/cache   available
Mem:           3.0G        405M        2.0G         20M        656M        2.6G
Swap:          768M          0B        768M

So I'm really confused now. Can somebody help me with this? Since this is a VPS, I can't upgrade the Linux kernel; I'm currently on:

-> uname -r
4.15.0
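For what it's worth, this is a guess rather than something confirmed in this thread: the runc error about "getting the final child's pid from pipe caused EOF" often surfaces when the kernel refuses to clone new processes, e.g. because a PID or task limit has been reached, which would also explain a hard cap on the number of running containers. A sketch of checks to run on the host (paths are standard Linux interfaces, but availability varies by distro and cgroup version):

```shell
#!/bin/sh
# Sketch: inspect limits that can prevent new containers from starting.
# Each read falls back to "unknown" where the interface is absent.

# System-wide cap on PIDs (processes + threads).
echo "kernel.pid_max:     $(cat /proc/sys/kernel/pid_max 2>/dev/null || echo unknown)"

# System-wide cap on threads.
echo "kernel.threads-max: $(cat /proc/sys/kernel/threads-max 2>/dev/null || echo unknown)"

# Per-user process limit for the current shell.
echo "ulimit -u:          $(ulimit -u)"

# cgroup v1 pids-controller limit for Docker's cgroup, if present.
echo "docker pids.max:    $(cat /sys/fs/cgroup/pids/docker/pids.max 2>/dev/null || echo unknown)"
```

If any of these values are unexpectedly low, that would line up with a hard cap on container count.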

@hellojukay
hellojukay commented Dec 18, 2019

I have the same problem in k8s:

Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "runner-xb5qjal9-project-5447-concurrent-0s5bnn": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
[root@k8s02v sudoers.d]# uname -r
3.10.0-862.el7.x86_64
[root@k8s02v sudoers.d]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
[root@k8s02v sudoers.d]# docker info
Containers: 5
 Running: 4
 Paused: 0
 Stopped: 1
Images: 82
Server Version: 18.09.7
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67GiB
Name: k8s02v.zgb.shyc3.360es.cn
ID: 5WZX:FQYF:SLAG:WEPV:X5ZL:A7LL:26LK:OXMK:X6OY:XXEC:GXFO:J4HK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
[root@k8s02v sudoers.d]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              78                  3                   17.55GB             17.41GB (99%)
Containers          5                   4                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

@100cm
100cm commented Jan 6, 2020

any progress on this ?
same problem here.
3.10.0-957.5.1.el7.x86_64

@hansireit
hansireit commented Jun 30, 2020

I have the same problem.

# uname -r
3.10.0-1062.4.2.vz7.116.7
# docker info
Client:
 Debug Mode: false

Server:
 Containers: 42
  Running: 26
  Paused: 0
  Stopped: 16
 Images: 118
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1062.4.2.vz7.116.7
 Operating System: Debian GNU/Linux 9 (stretch)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 16GiB
 Name: 34234.p3.sfge.net
 ID: SEE5:X5V7:RHTN:2RKG:MZL7:7AXZ:6DVU:TG5Z:BOQD:WDC7:DZRK:PFD3
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: *******
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

When I try to run more than about 26 containers, I get:

Cannot start service <CONTAINER>: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"EOF\"": unknown

Is it possible that my VPS provider blocks the allocation of more containers?
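Quite possibly: the "vz7" string in that kernel version suggests an OpenVZ/Virtuozzo container, where the host can cap the number of processes inside the guest regardless of free memory. As a sketch (assuming the legacy beancounters interface is exposed, which it isn't on every setup), a nonzero failcnt next to the numproc row would confirm a provider-imposed limit is being hit:

```shell
#!/bin/sh
# Sketch: check for OpenVZ/Virtuozzo process limits from inside the guest.
# /proc/user_beancounters is not exposed on all setups, hence the guard.
if [ -r /proc/user_beancounters ]; then
    # A nonzero failcnt on the numproc row means the limit has been hit.
    grep -E 'failcnt|numproc' /proc/user_beancounters
else
    echo "/proc/user_beancounters not available; ask the provider about process limits"
fi
```

If the file isn't available, the provider's support or control panel is the only reliable way to confirm the cap.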
