
support kompose down subcommand #113

Merged: 3 commits merged into kubernetes:master on Sep 2, 2016

Conversation

@ngtuna (Contributor) commented on Aug 17, 2016

Add a new subcommand, kompose down, as the counterpart of kompose up. There are two options:

$ kompose down --file <docker-compose>
$ kompose down --bundle <bundle-file>

It will delete the corresponding services and deployments of the converted application.

$ kompose down --all
Using flag --all/-a will delete all resources in the kubernetes cluster.
Are you sure to continue? (yes/no): yes

It will delete all resources (deployments, services, replication controllers, daemon sets) in the cluster (only in the default namespace for now; other namespaces will come in a follow-up PR).

Can you take a look, @janetkuo? I'm not sure whether I missed something when calling the Kubernetes APIs. For example, I set DeleteOptions = nil when deleting a deployment, and ListOptions is empty. These are temporary until we find suitable customized options.
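For illustration, here is a minimal sketch of the --all path with those temporary settings. Only the Deployments(...).Delete(name, nil) call is taken from the diff below; the downAll helper, the List signature, and the import set are assumptions based on the unversioned client of that era, not code from this PR.

package kompose

import (
	"github.com/Sirupsen/logrus"

	"k8s.io/kubernetes/pkg/api"
	client "k8s.io/kubernetes/pkg/client/unversioned"
)

// downAll (hypothetical helper) lists every deployment in the default
// namespace with an empty ListOptions and deletes each one with a nil
// DeleteOptions, i.e. the temporary settings mentioned above.
func downAll(c *client.Client) error {
	deployments, err := c.Deployments(api.NamespaceDefault).List(api.ListOptions{})
	if err != nil {
		return err
	}
	for _, d := range deployments.Items {
		if err := c.Deployments(api.NamespaceDefault).Delete(d.Name, nil); err != nil {
			return err
		}
		logrus.Infof("Successfully deleted deployment: %s", d.Name)
	}
	return nil
}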

@ngtuna (Contributor, Author) commented on Aug 17, 2016

Fixes #41

@kadel (Member) commented on Aug 17, 2016

Do we really need down --all? If someone wants to clean a whole namespace, it can be done via kubectl.

}
logrus.Infof("Successfully deleted service: %s", name)

err = client.Deployments(api.NamespaceDefault).Delete(name, nil)
@kadel (Member) commented on Aug 17, 2016

Just deleting the Deployment is not enough, because it will leave running pods behind.
First you need to scale the Deployment to 0, and then it is safe to delete the Deployment.
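A minimal sketch of that suggestion, assuming the internal extensions API and client signatures of that era (only the Delete(name, nil) call matches the diff above; the helper name is made up): get the Deployment, set its replicas to 0, update it, wait for the pods to go away, and only then delete it.

package kompose

import (
	"k8s.io/kubernetes/pkg/api"
	client "k8s.io/kubernetes/pkg/client/unversioned"
)

// scaleDownAndDelete (hypothetical helper) scales a Deployment to zero
// replicas before deleting it, so no pods are left behind.
func scaleDownAndDelete(c *client.Client, name string) error {
	d, err := c.Deployments(api.NamespaceDefault).Get(name)
	if err != nil {
		return err
	}
	d.Spec.Replicas = 0 // assumption: internal API with a plain int32 field
	if _, err := c.Deployments(api.NamespaceDefault).Update(d); err != nil {
		return err
	}
	// A real implementation would wait here until the pods and ReplicaSets
	// are actually gone (see the reaper discussion below).
	return c.Deployments(api.NamespaceDefault).Delete(name, nil)
}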

@kadel (Member) commented on Aug 17, 2016

Hmm, it seems we also need to somehow clean up ReplicaSets :-(
Now I remember: kubectl uses a Reaper to clean up all related objects.
You can see it here: https://github.com/kubernetes/kubernetes/blob/v1.3.5/pkg/kubectl/stop.go#L362

@janetkuo (Member) commented on Aug 17, 2016

Yeah, in the future k8s will support server-side cascading deletion. For now we can use reapers.

@ngtuna (Contributor, Author) commented:

OK thanks @janetkuo & @kadel. Let me check reapers and make a follow-up commit.

@ngtuna (Contributor, Author) commented:

Need confirmation, @janetkuo / @kadel. We can use:

  • ServiceReaper to delete services
  • DeploymentReaper to delete pods, RCs, ReplicaSets & Deployments
  • DaemonSetReaper to delete DaemonSets
  • JobReaper to delete Jobs.

Am I missing anything, or have I got something wrong?

A member commented:

You are right.

BTW: look at func ReaperFor(...); this might be useful.

Let me know if I can help you with that more.
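For example, a rough sketch of a ReaperFor-based deletion could look like this. The helper name and the 30-second timeout are assumptions; the ReaperFor and Stop signatures are taken from the kubectl package of that era as I understand them, not from this PR.

package kompose

import (
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/apis/extensions"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/kubectl"
)

// reapDeployment (hypothetical helper) lets kubectl's reaper scale the
// Deployment down and clean up its ReplicaSets and pods before the
// Deployment itself is deleted.
func reapDeployment(c *client.Client, name string) error {
	reaper, err := kubectl.ReaperFor(extensions.Kind("Deployment"), c)
	if err != nil {
		return err
	}
	// Arguments: namespace, name, timeout, grace period (nil means default).
	return reaper.Stop(api.NamespaceDefault, name, 30*time.Second, nil)
}

The same ReaperFor call should cover services, DaemonSets, and Jobs by passing the corresponding GroupKind (e.g. api.Kind("Service")).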

@ngtuna (Contributor, Author) commented:

I added a follow-up commit below. Can you check it?

@ngtuna (Contributor, Author) commented:

I'm not sure whether we should delete all jobs or not. I marked it as a //TODO.

@ngtuna (Contributor, Author) commented on Aug 23, 2016

Stuck at #117, where I can't use "k8s.io/kubernetes/pkg/kubectl/cmd" for printing resources in the format of kubectl get.

@kadel (Member) commented on Aug 24, 2016

> Stuck at #117 where I can't use "k8s.io/kubernetes/pkg/kubectl/cmd" for printing resources in format of kubectl get

#117 (comment)

@kadel (Member) commented on Aug 24, 2016

@ngtuna If it is just for printing, can we use something else for now?

@ngtuna (Contributor, Author) commented on Aug 24, 2016

@kadel yes, I'm also thinking about it.
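If a plain table is enough until #117 is sorted out, a standard-library-only sketch could avoid importing k8s.io/kubernetes/pkg/kubectl/cmd entirely. The printNameKind helper and its sample input below are made up for illustration, not what the PR ended up using.

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// printNameKind (hypothetical helper) prints a kubectl-get-like,
// tab-aligned NAME/KIND table using only the standard library.
func printNameKind(items map[string]string) {
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	fmt.Fprintln(w, "NAME\tKIND")
	for name, kind := range items {
		fmt.Fprintf(w, "%s\t%s\n", name, kind)
	}
	w.Flush()
}

func main() {
	// Example usage with made-up resource names.
	printNameKind(map[string]string{
		"frontend":     "Deployment",
		"frontend-svc": "Service",
	})
}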

@ngtuna ngtuna removed the in progress label Sep 2, 2016
@ngtuna ngtuna merged commit 0da484a into kubernetes:master Sep 2, 2016