
Prequel changes to adding security context to the AppRepo cleanup jobs #6646

Merged
merged 10 commits into vmware-tanzu:main on Aug 21, 2023

Conversation

antgamdia (Contributor) commented:

Description of the change

While working on #6605, I noticed I had accumulated a bunch of unrelated changes. Instead of overloading that PR, I have extracted them into this one. The main changes are:

  • Apply the same fix in the Go Dockerfiles as we did for kubeapps-apis (avoid downloading linters if lint=false); it helps reduce local build times.
  • Remove some duplicated keys in the YAML files, as we already discussed.
  • Add some missing apprepo-controller args in the tests, mainly for completeness.
  • Fix some tests in the apprepo-controller, mainly swapping some misplaced `got`/`want` arguments.
  • Handle CronJob v1beta1 vs. v1 in a place where it was missing.
  • Properly pass the podSecurityContext and containerSecurityContext in the pod template (see the sketch after this list).
  • Add a missing copyright header.
  • Fix wrong values.yaml metadata (mainly ocicatalog/ociCatalog).
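As a rough illustration of the security-context item above, here is a minimal Go sketch of the kind of propagation involved (the helper name and signature are made up for this example; only the `k8s.io/api/core/v1` types are real):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// withSecurityContexts applies the AppRepo-level security contexts to a
// job's pod template. Hypothetical helper, not the controller's actual code.
func withSecurityContexts(tmpl corev1.PodTemplateSpec,
	podCtx *corev1.PodSecurityContext, ctrCtx *corev1.SecurityContext) corev1.PodTemplateSpec {
	// The pod-level context (runAsUser, fsGroup, ...) belongs on the pod spec.
	tmpl.Spec.SecurityContext = podCtx
	// The container-level context is not inherited from the pod level, so it
	// has to be set on every container explicitly.
	for i := range tmpl.Spec.Containers {
		tmpl.Spec.Containers[i].SecurityContext = ctrCtx
	}
	return tmpl
}
```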

Benefits

Get the aforementioned issues solved.

Possible drawbacks

N/A (not adding new logic here)

Applicable issues

N/A

Additional information

N/A

Signed-off-by: Antonio Gamez Diaz <agamez@vmware.com>
@netlify netlify bot commented Aug 17, 2023

Deploy Preview for kubeapps-dev canceled.

Built without sensitive environment variables

Latest commit: 2f00e80
Latest deploy log: https://app.netlify.com/sites/kubeapps-dev/deploys/64dde8ae76a44100086d13c7

@absoludity (Contributor) left a comment:

Excellent, thanks for all the improvements, Antonio!

@antgamdia antgamdia merged commit 74a9232 into vmware-tanzu:main Aug 21, 2023
40 checks passed
@antgamdia antgamdia deleted the 6545-add-seccontext-apprepo-1 branch August 21, 2023 06:52
antgamdia added a commit that referenced this pull request Aug 22, 2023
### Description of the change

Even though the sync jobs were given a security context (by means of each
AppRepo CRD), this information was not available for the cleanup jobs. This
is mainly because those jobs are spun up once a NotFound error is thrown
when fetching an AppRepo.
However, Kubernetes does have a native approach for dealing with these
scenarios: finalizers.

In #6605 we proposed a
simplistic workaround based on adding more params to the controller...
but as suggested in
#6605 (comment),
moving to finalizers is a better long-term solution.
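For context, a minimal sketch of the finalizer flow in Go (the names `appRepoFinalizer` and `reconcileFinalizer` are illustrative, not the controller's actual identifiers):

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative finalizer key; the real one would live in the controller.
const appRepoFinalizer = "apprepositories.kubeapps.com/cleanup"

// reconcileFinalizer keeps the finalizer registered while the AppRepo is
// alive, and runs the cleanup before releasing the object on deletion.
func reconcileFinalizer(obj metav1.Object, cleanup func() error) error {
	if obj.GetDeletionTimestamp().IsZero() {
		// Not being deleted: make sure our finalizer is present so the
		// object cannot vanish before we get a chance to clean up.
		if !containsString(obj.GetFinalizers(), appRepoFinalizer) {
			obj.SetFinalizers(append(obj.GetFinalizers(), appRepoFinalizer))
			// ...persist the update through the clientset...
		}
		return nil
	}
	// Deletion requested: the object (and its syncJobTemplate, security
	// contexts, etc.) is still readable here, so spawn the cleanup job now.
	if err := cleanup(); err != nil {
		return err // finalizer stays put; the delete is retried later
	}
	obj.SetFinalizers(removeString(obj.GetFinalizers(), appRepoFinalizer))
	// ...persist the update; Kubernetes then garbage-collects the object...
	return nil
}

func containsString(ss []string, s string) bool {
	for _, v := range ss {
		if v == s {
			return true
		}
	}
	return false
}

func removeString(ss []string, s string) []string {
	out := make([]string, 0, len(ss))
	for _, v := range ss {
		if v != s {
			out = append(out, v)
		}
	}
	return out
}
```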


### Benefits

Cleanup jobs are now handled within an existing AppRepo context, meaning we
have the whole syncJobTemplate available to be used (i.e., securityPolicies
and so on).

### Possible drawbacks

When dealing with finalizers in the past, I have often found it really
annoying when they get stuck and prevent the resource from being deleted. I
wonder if we should add some info to the FAQ on how to manually remove the
finalizers.
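(If we do add that FAQ entry, the usual escape hatch is a merge patch that clears the finalizers, along the lines of `kubectl patch apprepositories.kubeapps.com <name> -n <namespace> --type=merge -p '{"metadata":{"finalizers":null}}'` — the exact resource name here is illustrative.)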

Additionally, and this might be something important: for the AppRepo
controller to be able to `update` AppRepos in namespaces other than
kubeapps (to add the finalizer), it now needs extra RBAC. Before, we were
just granting `...-appprepos-read`, but now we would need to grant
`...-write` as well, and I'm not sure we really want to do so.
WDYT, @absoludity ?
Another idea is using an admission policy... but again, I'm not sure we
want to deal with that...

~(I haven't modified the RBAC yet in this PR)~ The RBAC changes have
finally been performed.
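As a sketch of the extra grant being discussed (the API group and verb list are assumptions based on the AppRepository CRD, not the chart's actual RBAC definitions):

```go
package sketch

import (
	rbacv1 "k8s.io/api/rbac/v1"
)

// Hypothetical rule: what the controller would need on AppRepositories in
// other namespaces so it can add/remove the finalizer. "update" (and
// possibly "patch") is the new requirement on top of the read verbs.
var appRepoFinalizerRule = rbacv1.PolicyRule{
	APIGroups: []string{"kubeapps.com"},
	Resources: []string{"apprepositories"},
	Verbs:     []string{"get", "list", "watch", "update"},
}
```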

### Applicable issues

- fixes #6545

### Additional information

This PR is based on top of
#6646, but the main change
to review is
6e70910.
The rest is mostly just moving code into separate files.

Also, I have been taking a look at `kubebuilder` to create a new
controller, based on `sigs.k8s.io/controller-runtime` rather than on the
workqueues we currently have. While it is pretty easy to get started
([see quickstart](https://book.kubebuilder.io/quick-start)), I think it
adds too much complexity (using kustomize, adding RBAC proxies, Prometheus
metrics, etc.).
I also quickly tried the k8s codegen scripts, but ran into some issues
with my setup... though perhaps that's still the best option.

IMO, at some point we should start thinking about moving towards a new
state-of-the-art k8s controller boilerplate.

---------

Signed-off-by: Antonio Gamez Diaz <agamez@vmware.com>