
support for volumes_from docker-compose construct #190

Merged 1 commit into kubernetes:master on Oct 23, 2016

Conversation

@surajssd (Member) commented Oct 7, 2016

Now a user can provide volumes_from to share volumes from another service; the PVC created for that service will be shared with the service that specifies volumes_from.
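
As a minimal illustration (the service names and images below are made up for this example, not taken from the PR), a compose file like this is the kind of input handled here: the PVC generated for db's volume would also be mounted into the web pod because of volumes_from.

version: "2"
services:
  db:
    image: postgres
    volumes:
      - /var/lib/postgresql/data
  web:
    image: nginx
    volumes_from:
      - db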

@kadel (Member) commented Oct 14, 2016

PVCs are created with AccessMode "ReadWriteOnce", so they can't be mounted into multiple containers :-(

@surajssd (Member Author) commented:

@kadel so I tried the following in minikube; see if you can reproduce it:

Start a minikube VM and ssh into it

$ minikube start
$ minikube ssh
docker@minikube:~$ mkdir ~/data

And create a PV and PVC using the following file:

$ cat vols.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/home/docker/data"
---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

and create pods that share the same PVC:

$ cat pods.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: httpd
spec:
  containers:
  - image: docker.io/centos/httpd
    name: httpd
    volumeMounts:
    - mountPath: /var/www/html
      name: httpd-volume
  volumes:
  - name: httpd-volume
    persistentVolumeClaim:
      claimName: myclaim
---

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mynginx
      name: nginx-volume
  volumes:
  - name: nginx-volume
    persistentVolumeClaim:
      claimName: myclaim

Once everything has started:

$ kubectl exec -it nginx bash
root@nginx:/# cat > mynginx/index.html
this file is created from nginx container and will be served by httpd

Get the IP addresses of the pods:

$ kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
httpd     1/1       Running   0          14m       172.17.0.4   minikube
nginx     1/1       Running   0          14m       172.17.0.3   minikube

In the minikube VM I could see httpd serving the file created from the nginx container:

$ minikube ssh
docker@minikube:~$ curl 172.17.0.4
this file is created from nginx container and will be served by httpd

@kadel (Member) commented Oct 19, 2016

OK, I followed your instructions and it worked. But according to the docs it shouldn't have :-)

'ReadWriteOnce – the volume can be mounted as read-write by a single node'

Otherwise it doesn't make sense, or I don't get it :-(

@kadel (Member) commented Oct 19, 2016

Ohhh, I get it now (maybe 😉): it is by a single node, not a single container. This will work on a single-node cluster, but not on a multi-node one. I'll have to test this.

@kadel (Member) commented Oct 19, 2016

I tested this on a multi-node cluster, and if the pods are on different nodes it does not work :-(

▶ kubectl exec -it nginx bash        
root@nginx:/# echo "asdf" > /mynginx/index.html 
root@nginx:/# exit

▶ kubectl exec -it httpd bash
[root@httpd /]# ls /var/www/html/
[root@httpd /]# exit

@surajssd (Member Author) commented:

I have a 2-node, 1-master k8s cluster, plus one more machine outside the k8s cluster acting as an NFS server.

These are the pods I am running:

$ kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
httpd     1/1       Running   0          35m       10.246.93.2    kubernetes-node-2
httpd1    1/1       Running   0          4m        10.246.93.4    kubernetes-node-2
httpd2    1/1       Running   0          3m        10.246.93.7    kubernetes-node-2
httpd3    1/1       Running   0          3m        10.246.93.8    kubernetes-node-2
httpd4    1/1       Running   0          2m        10.246.48.3    kubernetes-node-1
httpd5    1/1       Running   0          1m        10.246.48.7    kubernetes-node-1

And here are the PV and PVC I have:

$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
vol1      5Gi        RWO           Retain          Bound     default/myclaim             45m

$ kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
myclaim   Bound     vol1      5Gi        RWO           42m

Now on node 1:

[vagrant@kubernetes-node-1 ~]$ curl 10.246.93.2
hey this should reflect everywhere
[vagrant@kubernetes-node-1 ~]$ curl 10.246.48.3
hey this should reflect everywhere
[vagrant@kubernetes-node-1 ~]$ curl 10.246.93.4
hey this should reflect everywhere
[vagrant@kubernetes-node-1 ~]$ curl 10.246.48.7
hey this should reflect everywhere

Now on node 2:

[vagrant@kubernetes-node-2 ~]$ curl 10.246.93.2
hey this should reflect everywhere
[vagrant@kubernetes-node-2 ~]$ 
[vagrant@kubernetes-node-2 ~]$ curl 10.246.48.3
hey this should reflect everywhere
[vagrant@kubernetes-node-2 ~]$ curl 10.246.93.4
hey this should reflect everywhere
[vagrant@kubernetes-node-2 ~]$ curl 10.246.48.7
hey this should reflect everywhere

The only difference is the PV: this time, instead of hostPath, it's NFS.

$ cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /shared/kubernetes/web
    server: 192.168.121.199

The PVC is the same as before, and the pod definitions are also the same as before.

I followed http://severalnines.com/blog/wordpress-application-clustering-using-kubernetes-haproxy-and-keepalived for the setup.

All the pods are also the same, just with different names.

@kadel (Member) commented Oct 20, 2016

OK, it works with NFS on multi-node. I think we can merge this so more people can test it, and we can improve it later on.

@kadel (Member) left a comment:

LGTM

@kadel added the lgtm label ("Looks good to me"; indicates that a PR is ready to be merged) on Oct 20, 2016
@kadel (Member) commented Oct 20, 2016

Maybe wait for @ngtuna, in case he has any feedback on this.

switch t := obj.(type) {
case *api.ReplicationController:
	svcName := t.ObjectMeta.Name
	for _, dependentSvc := range komposeObject.ServiceConfigs[svcName].VolumesFrom {
Contributor:

We should make this better by reducing the copied blocks of code.

@surajssd (Member Author):

I would appreciate help with this code repetition! I also asked about it once on Slack.

Contributor:

Right. We could probably use a common function that returns an api.PodTemplateSpec struct for all cases. Then you wouldn't have to repeat the for _, dependentSvc... loop. It's not much, but it would look a bit cleaner.

Contributor:

because the resources in all 4 cases contain an api.PodTemplateSpec in their structs

@surajssd (Member Author):

But here I am appending to the existing t.Spec.Template.Spec.Volumes and t.Spec.Template.Spec.Containers[0].VolumeMounts, so returning a common blank struct won't help, IMO.
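
For illustration only, a rough sketch of the kind of shared helper being discussed: if it takes a pointer to the embedded api.PodTemplateSpec, appending to Volumes and VolumeMounts still works. The function name, parameters, and claim/mount naming below are hypothetical, not kompose's actual code.

// Hypothetical helper: each generated resource type embeds an
// api.PodTemplateSpec, so the volumes_from wiring could be done once
// against a pointer to that template instead of per resource type.
func addVolumesFrom(template *api.PodTemplateSpec, volumesFrom []string) {
	for _, dependentSvc := range volumesFrom {
		claimName := dependentSvc + "-claim" // claim naming is illustrative only
		// Reference the PVC created for the dependent service.
		template.Spec.Volumes = append(template.Spec.Volumes, api.Volume{
			Name: claimName,
			VolumeSource: api.VolumeSource{
				PersistentVolumeClaim: &api.PersistentVolumeClaimVolumeSource{
					ClaimName: claimName,
				},
			},
		})
		// Mount it into the first container; the path is illustrative only.
		template.Spec.Containers[0].VolumeMounts = append(
			template.Spec.Containers[0].VolumeMounts,
			api.VolumeMount{Name: claimName, MountPath: "/" + dependentSvc},
		)
	}
}

Each case of the switch would then call it with a pointer to its own template and komposeObject.ServiceConfigs[svcName].VolumesFrom.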

Contributor:

OK, so I will check it and probably make a follow-up enhancement PR later. Let's get it merged. Thanks @surajssd 👍

@surajssd (Member Author) commented Oct 23, 2016

> OK, it works with NFS on multi-node. I think we can merge this so more people can test it, and we can improve it later on.

@kadel, we had this discussion about accessModes not being enforced; I found related information in the video here [1], which says that Kubernetes does a best match against accessModes and storage capacity. These are not used to enforce storage characteristics on the backend, and you can go over the space requested.

Also, at the end of the talk some questions about this are asked; see [2].

[0] Kubernetes Storage 101, KubeCon EU 2016: https://www.youtube.com/watch?v=ZqTHe6Xj0Ek
[1] https://youtu.be/ZqTHe6Xj0Ek?t=819
[2] https://youtu.be/ZqTHe6Xj0Ek?t=2051

@surajssd (Member Author) commented:

@ngtuna @kadel thanks for the reviews and the great points!

@surajssd surajssd merged commit b068c5c into kubernetes:master Oct 23, 2016