helm install cannot set tolerations #9453

Open
marslo opened this issue Sep 11, 2024 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


marslo commented Sep 11, 2024

What happened?

I'd like the kubernetes-dashboard-* pods to be able to run on the control-plane nodes even though those nodes are tainted.

This can be set up manually on every deployment under spec.template.spec.tolerations as below:

spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
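
(For context: on a kubeadm cluster the control-plane nodes normally carry the node-role.kubernetes.io/control-plane:NoSchedule taint, which is what the toleration above is meant to tolerate. The taints can be checked with, for example:)

$ kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{.spec.taints}}{{"\n"}}{{end}}'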

So I tried to install via helm with the parameters below, but no luck:

$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
               --namespace monitoring \
               --set spec.template.spec.tolerations[0].key=node-role.kubernetes.io/control-plane,spec.template.spec.tolerations[0].effect=NoSchedule

# and, without `.spec.template`:
$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
               --namespace monitoring \
               --set .spec.tolerations[0].key=node-role.kubernetes.io/control-plane,.spec.tolerations[0].effect=NoSchedule
  • result:

    # deployment
    $ kubectl -n monitoring get deploy -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{.spec.template.spec.tolerations}}{{"\n"}}{{end}}'
    kubernetes-dashboard-api	<no value>
    kubernetes-dashboard-auth	<no value>
    kubernetes-dashboard-kong	<no value>
    kubernetes-dashboard-metrics-scraper	<no value>
    kubernetes-dashboard-web	<no value>
    
    # pods
    $ kubectl -n monitoring get po -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{.spec.tolerations}}{{"\n"}}{{end}}'
    kubernetes-dashboard-api-57d7cfdc45-86rth	[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]]
    kubernetes-dashboard-auth-5c5745f8d9-k5rkc	[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]]
    kubernetes-dashboard-kong-7696bb8c88-xlxkn	[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]]
    kubernetes-dashboard-metrics-scraper-5485b64c47-w9bgr	[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]]
    kubernetes-dashboard-web-fccb7d557-4v9hw	[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]]
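
For reference, helm's --set keys refer to the chart's values (values.yaml), not to fields of the rendered Kubernetes manifests, so paths like spec.template.spec.tolerations[0].key have no effect unless the chart's templates actually read them. A values-file sketch of what I'd want to express is below; the key name app.tolerations is only an assumption and would have to match whatever the chart actually exposes:

    # values-tolerations.yaml (hypothetical key; check the chart's values.yaml for the real location)
    app:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule

    $ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
                   --namespace monitoring \
                   -f values-tolerations.yaml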

So I manually edited all 5 deployments and added the following content to each of them:

      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule

$ kubectl -n monitoring edit deploy kubernetes-dashboard-api
deployment.apps/kubernetes-dashboard-api edited
$ kubectl -n monitoring edit deploy kubernetes-dashboard-auth
deployment.apps/kubernetes-dashboard-auth edited
$ kubectl -n monitoring edit deploy kubernetes-dashboard-kong
deployment.apps/kubernetes-dashboard-kong edited
$ kubectl -n monitoring edit deploy kubernetes-dashboard-web
deployment.apps/kubernetes-dashboard-web edited
$ kubectl -n monitoring edit deploy kubernetes-dashboard-metrics-scraper
deployment.apps/kubernetes-dashboard-metrics-scraper edited

and it works well:

$ kubectl -n monitoring get deploy -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{.spec.template.spec.tolerations}}{{"\n"}}{{end}}'
kubernetes-dashboard-api	[map[effect:NoSchedule key:node-role.kubernetes.io/control-plane]]
kubernetes-dashboard-auth	[map[effect:NoSchedule key:node-role.kubernetes.io/control-plane]]
kubernetes-dashboard-kong	[map[effect:NoSchedule key:node-role.kubernetes.io/control-plane]]
kubernetes-dashboard-metrics-scraper	[map[effect:NoSchedule key:node-role.kubernetes.io/control-plane]]
kubernetes-dashboard-web	[map[effect:NoSchedule key:node-role.kubernetes.io/control-plane]]
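
(The same change can also be applied non-interactively; a minimal sketch using a JSON merge patch over the deployment names above:)

$ for d in api auth kong web metrics-scraper; do
    kubectl -n monitoring patch deploy "kubernetes-dashboard-${d}" --type merge \
      -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/control-plane","effect":"NoSchedule"}]}}}}'
  done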

Basic environment:

OS environment
$ cat /etc/os-release
NAME="Oracle Linux Server"
VERSION="8.10"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.10"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.10"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:10:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.10
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.10

$ lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch
Distributor ID:	OracleServer
Description:	Oracle Linux Server release 8.10
Release:	8.10
Codename:	n/a

$ uname -a
Linux dc5lt-ssdfw01 5.15.0-202.135.2.el8uek.x86_64 #2 SMP Fri Jan 5 16:12:57 PST 2024 x86_64 x86_64 x86_64 GNU/Linux
helm
$ helm version
version.BuildInfo{Version:"v3.15.4", GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4", GitTreeState:"clean", GoVersion:"go1.22.6"}

$ helm repo list | grep kubernetes-dashboard
kubernetes-dashboard	https://kubernetes.github.io/dashboard/

$ helm search repo kubernetes-dashboard
NAME                                     	CHART VERSION	APP VERSION	DESCRIPTION
kubernetes-dashboard/kubernetes-dashboard	7.5.0        	           	General-purpose web UI for Kubernetes clusters
kubernetes and crio
$ kubectl version
Client Version: v1.30.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.4

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.4", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"clean", BuildDate:"2024-08-14T19:02:46Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
  
$ crio -v
crio version 1.30.3
Version:        1.30.3
GitCommit:      8750e76e814ab80c40061f07402187d6b33ab72e
GitCommitDate:  2024-07-01T07:09:15Z
GitTreeState:   clean
BuildDate:      1970-01-01T00:00:00Z
GoVersion:      go1.22.0
Compiler:       gc
Platform:       linux/amd64
Linkmode:       static
BuildTags:
  static
  netgo
  osusergo
  exclude_graphdriver_btrfs
  exclude_graphdriver_devicemapper
  seccomp
  apparmor
  selinux
LDFlags:          unknown
SeccompEnabled:   true
AppArmorEnabled:  false

What did you expect to happen?

I expected the helm command line (helm install / helm upgrade --install) to support setting tolerations on the dashboard deployments.
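
For example, if the chart exposed a tolerations value, I would expect something like the following to work (the value name app.tolerations here is hypothetical; --set-json is available in this helm version):

$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
               --namespace monitoring \
               --set-json 'app.tolerations=[{"key":"node-role.kubernetes.io/control-plane","effect":"NoSchedule"}]'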

How can we reproduce it (as minimally and precisely as possible)?

  • install via:

    $ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
                   --namespace <namespace> \
                   --set spec.template.spec.tolerations[0].key=node-role.kubernetes.io/control-plane,spec.template.spec.tolerations[0].effect=NoSchedule
  • verify via:

    $ kubectl -n <namespace> get deploy -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{.spec.template.spec.tolerations}}{{"\n"}}{{end}}'

Anything else we need to know?

No response

What browsers are you seeing the problem on?

No response

Kubernetes Dashboard version

7.5.0

Kubernetes version

1.30.4

Dev environment

No response

marslo added the kind/bug label on Sep 11, 2024