
Add mock CSI driver deploy example #277

Merged
merged 1 commit into kubernetes-csi:master from csi-mock-deploy on Jul 31, 2020

Conversation

Contributor

@Jiawei0227 commented Jul 29, 2020

This commit adds an example folder which contains deploy instructions
for the full CSI mock driver, including csi-provisioner, csi-resizer,
csi-snapshotter and csi-node-driver-registrar.

This makes it easy to play around with the CSI mock driver and understand its
functionality.

Tested on GCP.

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind api-change
/kind bug
/kind cleanup
/kind design
/kind documentation
/kind failing-test
/kind feature
/kind flake
/kind example

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:
This is a demo deployment of the mock CSI driver with all the containers.

Does this PR introduce a user-facing change?:

Deployment manifests and examples are added for mock CSI driver under `mock/example`

@k8s-ci-robot
Contributor

@Jiawei0227: The label(s) kind/example cannot be applied, because the repository doesn't have them


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the release-note-none label (denotes a PR that doesn't merit a release note) on Jul 29, 2020
@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot added the cncf-cla: no label (indicates the PR's author has not signed the CNCF CLA) on Jul 29, 2020
@k8s-ci-robot
Contributor

Welcome @Jiawei0227!

It looks like this is your first PR to kubernetes-csi/csi-test 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-csi/csi-test has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @Jiawei0227. Thanks for your PR.

I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test (indicates a PR that requires an org member to verify it is safe to test) and size/L (denotes a PR that changes 100-499 lines, ignoring generated files) labels on Jul 29, 2020
@Jiawei0227
Contributor Author

I signed it

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) and removed the cncf-cla: no label on Jul 29, 2020
installed in the default namespace. Correspondingly, you can find a pod with prefix `csi-mockplugin`. In the meantime,
a `StorageClass` called `test-csi-mock` will be generated along with a `VolumeSnapshotClass` called `csi-mock-snapclass`.

Note that if you are using version prior to 1.17, [snapshot-controller](https://github.com/kubernetes-csi/external-snapshotter#usage) will require to be manual installed.
Collaborator

prior to 1.17, there was no snapshot controller. I think maybe a more accurate note is to say that the k8s distribution is supposed to install the snapshot-controller, but if it doesn't, then you'll have to manually install it separately.

Contributor Author

Done
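For orientation, the `csi-mock-snapclass` object mentioned in the README snippet above presumably resembles the following sketch (using the v1beta1 snapshot API that was current at the time; the driver name is an assumption, not taken from the manifest):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-mock-snapclass
driver: io.kubernetes.storage.mock   # assumed mock driver name, for illustration only
deletionPolicy: Delete
```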

```yaml
      serviceAccount: csi-mock
      containers:
      - name: csi-provisioner
        image: gcr.io/k8s-staging-csi/csi-provisioner:canary
```
Collaborator

We're switching to k8s.gcr.io as our image repo. Can you use the images available there? Here's the list of image tags currently available:

https://github.com/kubernetes/k8s.io/blob/master/k8s.gcr.io/images/k8s-staging-sig-storage/images.yaml

Contributor Author

Done

```yaml
        image: gcr.io/k8s-staging-csi/csi-provisioner:canary
        args:
        - "--csi-address=$(ADDRESS)"
        - "--leader-election"
```
Collaborator

We can probably disable leader election for all the sidecars since the mock driver is not resilient to restarts anyway.

Contributor Author

Done
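For illustration, the container spec after dropping the flag would look roughly like this (a sketch, not the exact committed manifest; the sidecar defaults to no leader election when the flag is omitted):

```yaml
      - name: csi-provisioner
        image: gcr.io/k8s-staging-csi/csi-provisioner:canary
        args:
        - "--csi-address=$(ADDRESS)"   # --leader-election removed; the default is off
```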

```yaml
          type: Directory

---
apiVersion: storage.k8s.io/v1
```
Collaborator

v1 was made available in 1.18. So we should document that this example requires 1.18+

Contributor Author

Done
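For context, `storage.k8s.io/v1` here refers to the CSIDriver API, which went GA in Kubernetes 1.18 — hence the 1.18+ requirement. A minimal sketch (the driver name is assumed for illustration):

```yaml
---
apiVersion: storage.k8s.io/v1        # requires Kubernetes 1.18+
kind: CSIDriver
metadata:
  name: io.kubernetes.storage.mock   # assumed mock driver name
spec:
  attachRequired: false              # csi-attacher is intentionally not deployed in this example
```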

```yaml
  name: example-pvc
spec:
  accessModes:
  - ReadWriteMany
```
Collaborator

The mock driver is closer to ReadWriteOnce. I'm not even sure it can handle data persisting across a pod restart, but it definitely doesn't support sharing data across multiple nodes.

Contributor Author

Done
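Sketched out, the corrected PVC would presumably read as follows (the storage size is illustrative, not taken from the manifest):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce                    # mock driver keeps state in one process; no cross-node sharing
  storageClassName: test-csi-mock
  resources:
    requests:
      storage: 1Gi                   # illustrative size
```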

@@ -0,0 +1,28 @@
# CSI Mock Driver Example
Collaborator

This may be better under the mock/ directory. We can point to it from the top level readme.

In the mock/readme.md, it would also be good to describe the high level limitations of the mock driver, such as:

- requires all the components to run on the same node
- only supports a single node
- does not persist data across restarts

Contributor Author

Done

# CSI Mock Driver Example

This folder contains an example manifest of deploying CSIDriver including `csi-driver-node-registrar`, `csi-provisioner`,
`csi-resizer` and `csi-snapshotter` onto a cluster. For testing purpose, `csi-attcher` is not included. Thus,
Collaborator

csi-attacher

Contributor Author

Done

```yaml
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: test-csi-mock
```
Collaborator

missing a storageclass yaml?

Contributor Author

The storageclass is included in deploy/csi-mock-driver-deployment.yaml
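For completeness, a `StorageClass` named `test-csi-mock` presumably resembles the following sketch (the provisioner value is assumed to be the mock driver's name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-csi-mock
provisioner: io.kubernetes.storage.mock   # assumed mock driver name
```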

Contributor Author

@Jiawei0227 left a comment

@msau42 Thanks for the quick review! Have addressed the comments.

mock/README.md Outdated
Limitation about this mock CSI Driver are:
- It only supports single node.
- It requires all the components to run on the same node, i.e. the pod that uses pv created by this CSI driver should be on the same node with the driver.
csi driver
Collaborator

this looks like an extra line

Contributor Author

Done

```yaml
      serviceAccount: csi-mock
      containers:
      - name: csi-provisioner
        image: gcr.io/k8s-staging-sig-storage/csi-provisioner:v1.6.0
```
Collaborator

The repo should be "k8s.gcr.io/sig-storage/csi-provisioner..." instead. k8s.gcr.io maps to gcr.io/k8s-artifacts-prod. The staging repo is for pre-release testing only, and its images are temporary and get deleted after 30 days or so.

Contributor Author

Done.
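Applied to the line above, the production image reference would read something like this (same tag, different repo; the exact tag is whatever the published image list shows):

```yaml
        image: k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0   # k8s.gcr.io maps to gcr.io/k8s-artifacts-prod
```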

```yaml
        args:
        - "--csi-address=$(ADDRESS)"
        env:
        - name: ADDRESS
```
Collaborator

I think since this env is hardcoded, we don't need to actually define an env variable for it, and can just use it directly in the argument.

Collaborator

ditto throughout

Contributor Author

Done
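A sketch of the simplification, assuming the socket path that `ADDRESS` pointed to is `/csi/csi.sock` (used here purely for illustration):

```yaml
        args:
        - "--csi-address=/csi/csi.sock"   # path inlined; no ADDRESS env var needed
```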

```yaml
              apiVersion: v1
              fieldPath: spec.nodeName
        securityContext:
          privileged: true
```
Collaborator

Normally most of these containers do not have to be privileged. Usually only the driver container has to be privileged in order to do the mount operation.

However, SELinux systems require privileged in order to access the CSI socket as a hostPath. Maybe add the same comment here, and make all the containers privileged.

Contributor Author

Done.
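A sketch of the resulting securityContext, with the rationale recorded as a manifest comment:

```yaml
        # Normally only the driver container needs privileged (for mount operations),
        # but SELinux systems also require it for sidecars to reach the CSI socket
        # via hostPath, so every container in this example runs privileged.
        securityContext:
          privileged: true
```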

```yaml
        args:
        - "--v=5"
        - "--csi-address=$(ADDRESS)"
        - "--leader-election=false"
```
Collaborator

I think we don't actually need to explicitly set this since the default is false

Contributor Author

Done

```yaml
      # deploy this pod only on where csi-mock-driver exists
      # This is because csi-mock-driver keeps all states in memory and the process that
      # provisioned the PV needs to be the same process that's mounting it
      affinity:
```
Collaborator

As a future improvement, if we add topology support to the mock driver, then we don't need this in the pod. The provisioned PV will have node affinity on it, and the pod will always get scheduled to the correct node.

Contributor Author

Yeah, that sounds awesome. I will make a comment on this as TODO then.
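For reference, the co-scheduling described in those comments is presumably expressed with pod affinity along these lines (a sketch; the label key and value are illustrative and must match whatever labels the driver pod actually carries):

```yaml
      # TODO: once the mock driver reports topology, the PV's node affinity
      # will handle scheduling and this pod affinity can be dropped.
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: csi-mockplugin          # illustrative label
            topologyKey: kubernetes.io/hostname
```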

Contributor Author

@Jiawei0227 left a comment

Done. Thanks for the review! I am kind of new to this, so I am curious: what's the difference between runAsUser: 0 vs. privileged: true? I looked it up online and could not find a clear explanation.


@msau42
Collaborator

msau42 commented Jul 30, 2020

/approve
lgtm, can you squash your commits into one?

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Jiawei0227, msau42

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jul 30, 2020
This commit adds an example folder which contains deploy instructions
for the full CSI mock driver, including csi-provisioner, csi-resizer,
csi-snapshotter and csi-node-driver-registrar.

This makes it easy to play around with the CSI mock driver and understand its
functionality.

Tested on GCP.

- Update the image registry to k8s.gcr.io/sig-storage/xxx
- Set privileged: true for all containers.
@msau42
Collaborator

msau42 commented Jul 30, 2020

Regarding your question on privileged vs running as uid 0, my basic understanding is that privileged enables a lot more than just the uid: all the host devices get mounted in the container, and there are also a number of Linux capabilities that get enabled: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged
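A small sketch contrasting the two settings (container names and image are illustrative):

```yaml
      containers:
      - name: root-uid-only              # runs as UID 0, but only inside the container's namespaces
        image: busybox                   # illustrative image
        securityContext:
          runAsUser: 0
      - name: fully-privileged           # host devices mounted, nearly all capabilities granted
        image: busybox
        securityContext:
          privileged: true
```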

@Jiawei0227
Contributor Author

Okay, thanks! Commits squashed.

@msau42
Collaborator

msau42 commented Jul 30, 2020

/lgtm

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Jul 30, 2020
@msau42
Collaborator

msau42 commented Jul 30, 2020

Can you also add a release note saying that deployment manifests and examples are added for the mock driver?

@k8s-ci-robot added the release-note label (denotes a PR that will be considered when it comes time to generate release notes) on Jul 30, 2020
@k8s-ci-robot removed the release-note-none label on Jul 30, 2020
@msau42
Collaborator

msau42 commented Jul 31, 2020

/ok-to-test

@k8s-ci-robot added the ok-to-test label (indicates a non-member PR verified by an org member that is safe to test) and removed the needs-ok-to-test label on Jul 31, 2020
@k8s-ci-robot merged commit 4b9faee into kubernetes-csi:master on Jul 31, 2020
@Jiawei0227 deleted the csi-mock-deploy branch on July 31, 2020 23:26