
Docker 1.13: docker service/stack ps cannot show the service/task ports status #30232

Open
twang2218 opened this issue Jan 18, 2017 · 6 comments
Labels
area/cli area/networking area/swarm kind/enhancement Enhancements are not bugs or new features but can improve usability or performance. version/1.13

Comments

@twang2218

twang2218 commented Jan 18, 2017

Description

docker service create --publish can specify the published port mapping for a service. However, neither docker service ls nor docker service ps shows that information.

In 1.13, a new column, PORTS, was added to the docker service ps output; however, it is always empty.

Steps to reproduce the issue:

root@d1:~/stack-deploy# docker service create --name app1 -p 8001:80 nginx
aw1aubl9w8ki9dmw2bo6q6937
root@d1:~/stack-deploy# docker service ps app1
ID            NAME    IMAGE         NODE  DESIRED STATE  CURRENT STATE             ERROR  PORTS
9c17a7wonz55  app1.1  nginx:latest  d2    Running        Preparing 12 seconds ago

Update: the problem below was caused by mixing Docker 1.12 and 1.13 nodes in the Swarm cluster; see the comment below for details.

I also tried --publish mode=host, which was introduced in 1.13, but still nothing shows in the PORTS column:

root@d1:~/stack-deploy# docker service create --name app2 --publish mode=host,published=8002,target=80 nginx
7evmnm2gau4c5bprds1hna7ft
root@d1:~/stack-deploy# docker service ps app2
ID            NAME    IMAGE         NODE  DESIRED STATE  CURRENT STATE          ERROR  PORTS
2haoqqj8a12c  app2.1  nginx:latest  d2    Running        Running 7 seconds ago

Describe the results you received:

The PORTS section is empty.

Describe the results you expected:

There should be something like 80/tcp, 443/tcp or 8001 => 80/tcp, 443/tcp in that column.

And docker stack ps inherits the same problem, with an empty PORTS column:

root@d1:~/stack-deploy# docker stack ps lnmp -f desired-state=running
ID            NAME          IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
1x6qiieam21p  lnmp_mysql.1  mysql:5.7                  d1    Running        Running 31 minutes ago
7irrc6v9xnbo  lnmp_nginx.1  twang2218/lnmp-nginx:v1.2  d1    Running        Running 31 minutes ago
2bq2kjm6xacn  lnmp_php.1    twang2218/lnmp-php:v1.2    d1    Running        Running 31 minutes ago
edp0ed1k6u9w  lnmp_nginx.2  twang2218/lnmp-nginx:v1.2  d1    Running        Running 31 minutes ago
1hlmkgtpf1pa  lnmp_php.2    twang2218/lnmp-php:v1.2    d2    Running        Running 31 minutes ago
0xjjyu3tyewp  lnmp_php.3    twang2218/lnmp-php:v1.2    d2    Running        Running 31 minutes ago
e9lgn25kyepx  lnmp_php.4    twang2218/lnmp-php:v1.2    d1    Running        Running 31 minutes ago

Additional information you deem important (e.g. issue happens only occasionally):

After diving into the code, I found that the port information is printed at:

https://github.com/docker/docker/blob/master/cli/command/task/print.go#L142-L154

		fmt.Fprintf(
			out,
			psTaskItemFmt,
			id,
			indentedName,
			image,
			nodeValue,
			command.PrettyPrint(task.DesiredState),
			command.PrettyPrint(task.Status.State),
			strings.ToLower(units.HumanDuration(time.Since(task.Status.Timestamp))),
			taskErr,
			portStatus(task.Status.PortStatus),
		)
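For context, here is a minimal, self-contained sketch of what that portStatus helper effectively does (the PortConfig struct and formatPortStatus name here are simplified stand-ins for the real swarm API types, not the actual implementation): an empty PortStatus renders as an empty string, which is exactly the blank PORTS column above.

```go
package main

import (
	"fmt"
	"strings"
)

// PortConfig mirrors, in simplified form, a published-port entry; the real
// type lives in the docker/docker api/types/swarm package.
type PortConfig struct {
	Protocol      string
	TargetPort    uint32
	PublishedPort uint32
}

// formatPortStatus is a hypothetical re-implementation of the portStatus
// helper referenced above: it renders each published port the way
// `docker service ps` displays it, e.g. "*:8002->80/tcp".
func formatPortStatus(ports []PortConfig) string {
	if len(ports) == 0 {
		// An empty PortStatus yields an empty string, hence the blank column.
		return ""
	}
	out := make([]string, 0, len(ports))
	for _, p := range ports {
		out = append(out, fmt.Sprintf("*:%d->%d/%s", p.PublishedPort, p.TargetPort, p.Protocol))
	}
	return strings.Join(out, ",")
}

func main() {
	fmt.Println(formatPortStatus([]PortConfig{{Protocol: "tcp", TargetPort: 80, PublishedPort: 8002}}))
}
```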

I then inspected the task to see the value of task.Status.PortStatus:

$ docker inspect 9c17a7wonz55
[
    {
        "ID": "9c17a7wonz55ti6k6uy9eus7g",
        "Version": {
            "Index": 222
        },
        "CreatedAt": "2017-01-18T05:38:22.69818533Z",
        "UpdatedAt": "2017-01-18T05:38:37.380391831Z",
        "Spec": {
            "ContainerSpec": {
                "Image": "nginx:latest@sha256:33ff28a2763feccc1e1071a97960b7fef714d6e17e2d0ff573b74825d0049303"
            },
            "Resources": {
                "Limits": {},
                "Reservations": {}
            },
            "RestartPolicy": {
                "Condition": "any",
                "MaxAttempts": 0
            },
            "Placement": {},
            "ForceUpdate": 0
        },
        "ServiceID": "aw1aubl9w8ki9dmw2bo6q6937",
        "Slot": 1,
        "NodeID": "kcumt000ix5pldgrxfiq4yqu0",
        "Status": {
            "Timestamp": "2017-01-18T05:38:37.362853091Z",
            "State": "running",
            "Message": "started",
            "ContainerStatus": {
                "ContainerID": "3a42430ae3f06d7dc1cb74181c0158245ff5de17914b3111d0d1ca3de7c2485d",
                "PID": 23410
            },
            "PortStatus": {}
        },
        "DesiredState": "running",
        "NetworksAttachments": [
            {
                "Network": {
                    "ID": "to761yyjzg5sa62etld8is1sh",
                    "Version": {
                        "Index": 123
                    },
                    "CreatedAt": "2017-01-17T23:58:46.668572186Z",
                    "UpdatedAt": "2017-01-18T05:11:30.405150109Z",
                    "Spec": {
                        "Name": "ingress",
                        "Labels": {
                            "com.docker.swarm.internal": "true"
                        },
                        "DriverConfiguration": {},
                        "IPAMOptions": {
                            "Driver": {},
                            "Configs": [
                                {
                                    "Subnet": "10.255.0.0/16",
                                    "Gateway": "10.255.0.1"
                                }
                            ]
                        }
                    },
                    "DriverState": {
                        "Name": "overlay",
                        "Options": {
                            "com.docker.network.driver.overlay.vxlanid_list": "4096"
                        }
                    },
                    "IPAMOptions": {
                        "Driver": {
                            "Name": "default"
                        },
                        "Configs": [
                            {
                                "Subnet": "10.255.0.0/16",
                                "Gateway": "10.255.0.1"
                            }
                        ]
                    }
                },
                "Addresses": [
                    "10.255.0.8/16"
                ]
            }
        ]
    }
]

As shown above, it's empty:

        "Status": {
            "Timestamp": "2017-01-18T05:38:37.362853091Z",
            "State": "running",
            "Message": "started",
            "ContainerStatus": {
                "ContainerID": "3a42430ae3f06d7dc1cb74181c0158245ff5de17914b3111d0d1ca3de7c2485d",
                "PID": 23410
            },
            "PortStatus": {}
        },

Output of docker version:

 docker version
Client:
 Version:      1.13.0-rc7
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   48a9e53
 Built:        Fri Jan 13 06:52:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc7
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   48a9e53
 Built:        Fri Jan 13 06:52:01 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

root@d1:~/stack-deploy# docker info
Containers: 7
 Running: 5
 Paused: 0
 Stopped: 2
Images: 8
Server Version: 1.13.0-rc7
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 60
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: rlt51or3rdev9ku00bsclb3he
 Is Manager: true
 ClusterID: ilt7bnrmlxdialuitpmyovs1h
 Managers: 2
 Nodes: 2
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 138.197.213.116
 Manager Addresses:
  138.197.213.116:2377
  138.197.221.47:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-59-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 488.4 MiB
Name: d1
ID: JDBE:R26Z:SLGS:WIBN:S33Z:XLM7:HRW4:FEPV:4TU6:6ZJH:H5UY:YC5X
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
 provider=digitalocean
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

@jmzwcn
Contributor

jmzwcn commented Jan 18, 2017

Is it being dropped during parsing?

@twang2218
Author

For mode=host, I found the problem: the cluster mixed Docker 1.12 and 1.13 nodes, and the task was scheduled to a 1.12 node, which silently behaved as mode=ingress and left the PORTS column empty.

Retested on a pure 1.13 cluster: PORTS info appears for mode=host service tasks, but the column is still empty for mode=ingress.

mode=host:

$ docker service create --name myapp2 --publish mode=host,published=8002,target=80 nginx
$ docker service ps myapp2
ID            NAME      IMAGE         NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
pobogp37t1ba  myapp2.1  nginx:latest  d1    Running        Running 13 minutes ago         *:8002->80/tcp

mode=ingress:

$ docker service create --name myapp1 -p 8001:80 nginx
$ docker service ps myapp1
ID            NAME      IMAGE         NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
jd9zik85ymuu  myapp1.1  nginx:latest  d1    Running        Running 26 minutes ago

@thaJeztah
Member

This was by design, and the reason for this is that ports for individual tasks are not published in mode=ingress (the service is published, but individual tasks are not directly published). When mode=host, individual tasks are published, so the port is shown.

However, there has already been some discussion about this, because from a UX perspective it's confusing. So, if technically possible, I would not be against presenting this information. We need a design for this (both the API, and a presentation that allows users to distinguish between the "publish" modes).

@thaJeztah thaJeztah added area/cli kind/enhancement Enhancements are not bugs or new features but can improve usability or performance. labels Jan 19, 2017
@twang2218
Author

twang2218 commented Jan 19, 2017

Yes, it's confusing, especially for docker node ps and docker stack ps: the results mix tasks from different services using mode=ingress and mode=host, so some rows have port info and some don't.

I understand that with mode=ingress the port is published on the service, not on individual tasks. In that case, should we add a PORTS column to docker service ls? It should be safe to show PORTS info there for both mode=ingress and mode=host.
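In the meantime, since ingress ports are recorded on the service's Endpoint in the 1.13 API, they can be recovered from docker service inspect output. A minimal sketch below (the sample JSON is a trimmed, hypothetical inspect fragment standing in for a live daemon, and ingressPorts is an illustrative helper, not part of the Docker CLI):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// service mirrors just the slice of `docker service inspect` output we need:
// in the 1.13 API, ingress ports live on the service's Endpoint, not on tasks.
type service struct {
	Endpoint struct {
		Ports []struct {
			Protocol      string
			TargetPort    uint32
			PublishedPort uint32
		}
	}
}

// sample is a trimmed, hypothetical `docker service inspect myapp1` fragment.
const sample = `[{"Endpoint":{"Ports":[{"Protocol":"tcp","TargetPort":80,"PublishedPort":8001}]}}]`

// ingressPorts extracts the service-level published ports from inspect JSON
// and formats them the way `docker service ps` shows host-mode ports.
func ingressPorts(data []byte) ([]string, error) {
	var svcs []service
	if err := json.Unmarshal(data, &svcs); err != nil {
		return nil, err
	}
	var out []string
	for _, s := range svcs {
		for _, p := range s.Endpoint.Ports {
			out = append(out, fmt.Sprintf("*:%d->%d/%s", p.PublishedPort, p.TargetPort, p.Protocol))
		}
	}
	return out, nil
}

func main() {
	ports, err := ingressPorts([]byte(sample))
	if err != nil {
		panic(err)
	}
	fmt.Println(ports)
}
```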

@aluzzardi
Member

@twang2218 This looks very wrong.

We were supposed to show *:8002->80/tcp for ingress ports, and <NODE IP>:1234->80/tcp for host ports.

Furthermore, docker service ls was supposed to show ingress ports. Not sure what happened with the formatting there.

/cc @mavenugo @vieux @thaJeztah

I think this should be fixed in 1.13.x

@thaJeztah
Member

Still a bit in doubt whether <node ip>:1234->80 is correct (we were discussing situations where a node has multiple interfaces). I don't have a suggestion for a different presentation though, so I'm open to suggestions.
