
Installing Kubernetes on Debian 11

Scenario

Hostname                             IPv4            IPv6                     Data Center  ESXI
kube-ctrl-pl-01.juntotelecom.com.br  177.75.187.212  2804:694:4c00:4001::212  São Paulo    ESXI 03
kube-worker-02.juntotelecom.com.br   177.75.187.222  2804:694:4c00:4001::222  São Paulo    ESXI 03
kube-worker-01.juntotelecom.com.br   177.75.187.216  2804:694:4c00:4001::216  São Paulo    ESXI 02

Partitions

Capacity  Partition
2 G       /
8 G       /usr
1 G       /boot
2 G       /home
20 G      /var
1 G       swap

FIXME The system was installed from the Debian netinst image. During the installation, the only option selected was the SSH server.

FIXME During the installation, the user gean was created without administrative privileges.

Additional partitions

FIXME The additional partitions come from the storage.

Preparing the operating system (OS)

Since only the SSH option was selected during the installation, we need to install a few additional services - packages - to operate the OS.

Since the user created during the OS installation has no administrative privileges, we will start off as the root user.

Run on all servers

$ su -
# apt update
# apt install vim wget curl sudo accountsservice lvm2 open-vm-tools build-essential jq

Following security best practices, we will no longer use the root user from this point on. Therefore, we must grant administrative (sudo) privileges to the user that was created during the OS installation.

FIXME By default, SSH access for the root user is disabled.
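This default can be inspected in the sshd configuration; on Debian the directive ships commented out, which means the built-in default (prohibit-password, i.e. no password login for root) applies:

$ grep -E '^#?PermitRootLogin' /etc/ssh/sshd_config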

# getent passwd | grep gean
gean:x:1000:1000:Gean Martins,,,:/home/gean:/bin/bash
# getent group | grep gean
cdrom:x:24:gean
floppy:x:25:gean
audio:x:29:gean
dip:x:30:gean
video:x:44:gean
plugdev:x:46:gean
netdev:x:108:gean
gean:x:1000:
# usermod -aG sudo gean
# getent group | grep gean
cdrom:x:24:gean
floppy:x:25:gean
sudo:x:27:gean
audio:x:29:gean
dip:x:30:gean
video:x:44:gean
plugdev:x:46:gean
netdev:x:108:gean
gean:x:1000:
Log back in as gean so that the new sudo group membership takes effect, then register every cluster node in /etc/hosts:

$ cat <<EOF | sudo tee -a /etc/hosts
177.75.187.212	kube-ctrl-pl-01.juntotelecom.com.br	kube-ctrl-pl-01
177.75.187.216	kube-worker-01.juntotelecom.com.br	kube-worker-01
177.75.187.222	kube-worker-02.juntotelecom.com.br	kube-worker-02
2804:694:4c00:4001::212	kube-ctrl-pl-01.juntotelecom.com.br	kube-ctrl-pl-01
2804:694:4c00:4001::216	kube-worker-01.juntotelecom.com.br	kube-worker-01
2804:694:4c00:4001::222	kube-worker-02.juntotelecom.com.br	kube-worker-02
EOF
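
To confirm that the names resolve as expected, query them through the system resolver:

$ getent hosts kube-ctrl-pl-01
$ getent ahosts kube-worker-01.juntotelecom.com.br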

Run on the control plane

$ sudo hostnamectl set-hostname kube-ctrl-pl-01.juntotelecom.com.br

Run on worker 01

$ sudo hostnamectl set-hostname kube-worker-01.juntotelecom.com.br

Run on worker 02

$ sudo hostnamectl set-hostname kube-worker-02.juntotelecom.com.br
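
On each node, the new name can then be verified with:

$ hostnamectl status
$ hostname -f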

Additional disk

FIXME Disk reserved for the pods - containers.

On all servers

$ MOUNT_POINT=/var/lib/containers
$ DISK_DEVICE=/dev/sdb
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID}  ${MOUNT_POINT}    ext4    defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep containers
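
The same partition/format/mount sequence is repeated below for the workers' persistent-volume disk. As a minimal sketch (a hypothetical helper, assuming an empty disk, not part of the original procedure), the steps can be factored into a reusable function:

# Sketch: create one primary partition spanning the disk, format it as ext4
# and mount it persistently via /etc/fstab. Hypothetical helper for illustration.
setup_data_disk() {
  local disk="$1" mount_point="$2"
  echo -e "n\np\n1\n\n\nw" | sudo fdisk "${disk}"           # one primary partition
  sudo mkfs.ext4 "${disk}1"
  local uuid
  uuid=$(sudo blkid -o export "${disk}1" | grep '^UUID=')   # stable identifier for fstab
  sudo mkdir -p "${mount_point}"
  echo "${uuid}  ${mount_point}  ext4  defaults 1 2" | sudo tee -a /etc/fstab
  sudo mount "${mount_point}"
}

# Usage: setup_data_disk /dev/sdc /volumes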

Run only on the worker servers

FIXME Disk intended for the persistent volumes.

$ MOUNT_POINT=/volumes
$ DISK_DEVICE=/dev/sdc
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID}  ${MOUNT_POINT}    ext4    defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep volumes
$ sudo mkdir /volumes/kubernetes
$ sudo chmod 777 /volumes/kubernetes
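
The /volumes/kubernetes directory is prepared here for hostPath-backed persistent volumes (the permissive 777 mode lets pods running as any user write to it). Once the cluster is initialized further below, a PersistentVolume pointing at it could look like this minimal sketch (name and size are illustrative):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                  # illustrative name
spec:
  capacity:
    storage: 5Gi                    # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /volumes/kubernetes/pv-example
EOF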

Installing CRI-O

In this installation, CRI-O will be used as the container runtime.

FIXME As of Kubernetes v1.24, the dockershim is removed and Docker Engine is no longer supported as a container runtime, hence CRI-O here.

$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                253952  1 br_netfilter
$ lsmod | grep overlay
overlay               143360  0
$ sudo apt update 
$ sudo apt install gnupg2
$ OS=Debian_11
$ VERSION=1.23
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
100   389  100   389    0     0    455      0 --:--:-- --:--:-- --:--:--   454
100   390  100   390    0     0    366      0  0:00:01  0:00:01 --:--:--   366
100   391  100   391    0     0    307      0  0:00:01  0:00:01 --:--:--   307
100   392  100   392    0     0    264      0  0:00:01  0:00:01 --:--:--   264
100   393  100   393    0     0    232      0  0:00:01  0:00:01 --:--:--   232
100  1093  100  1093    0     0    575      0  0:00:01  0:00:01 --:--:--     0
OK
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
100  1093  100  1093    0     0   1272      0 --:--:-- --:--:-- --:--:--  1270
OK
$ sudo apt update
$ sudo apt install cri-o cri-o-runc
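
A quick sanity check that the packages were installed from the expected repository:

$ apt policy cri-o
$ crio --version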

Installing Kubernetes

$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
* Aplicando /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Aplicando /etc/sysctl.d/99-sysctl.conf ...
* Aplicando /etc/sysctl.d/kubernetes.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.default.forwarding = 1
net.ipv4.ip_forward = 1
* Aplicando /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Aplicando /etc/sysctl.conf ...
$ sudo swapoff -a
$ sudo cp -fp /etc/fstab{,.dist} 
$ sudo sed -i '/swap/d' /etc/fstab
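
To double-check that the prerequisites took effect (no active swap, bridge traffic visible to iptables, forwarding enabled):

$ swapon --show
$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables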
$ sudo apt install apt-transport-https ca-certificates
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
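
Holding the packages keeps routine apt upgrades from bumping the cluster components out of step. The hold can be inspected, and lifted when a deliberate upgrade is planned:

$ apt-mark showhold
$ sudo apt-mark unhold kubelet kubeadm kubectl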
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now
Created symlink /etc/systemd/system/cri-o.service → /lib/systemd/system/crio.service.
Created symlink /etc/systemd/system/multi-user.target.wants/crio.service → /lib/systemd/system/crio.service.
$ sudo systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-04-01 15:42:50 -03; 14s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 2846 (crio)
      Tasks: 12
     Memory: 18.1M
        CPU: 151ms
     CGroup: /system.slice/crio.service
             └─2846 /usr/bin/crio
 
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.743629948-03:00" level=info msg="Conmon does support the --sync option"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.743876926-03:00" level=info msg="No seccomp profile specified, using the internal default"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.743903366-03:00" level=info msg="Installing default AppArmor profile: crio-default"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.779871123-03:00" level=info msg="No blockio config file specified, blockio not configured"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.780014506-03:00" level=info msg="RDT not available in the host system"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.783287705-03:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.785269797-03:00" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.785303259-03:00" level=info msg="Updated default CNI network name to crio"
abr 01 15:42:50 kube-ctrl-pl-01 crio[2846]: time="2022-04-01 15:42:50.857778415-03:00" level=warning msg="Error encountered when checking whether cri-o should wipe images: version file /var/lib/crio/version n>
abr 01 15:42:50 kube-ctrl-pl-01 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
$ sudo systemctl enable kubelet --now

Configuring Kubernetes

Run on the master - control plane.

$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.23.5
[config/images] Pulled k8s.gcr.io/pause:3.6
[config/images] Pulled k8s.gcr.io/etcd:3.5.1-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
$ sudo kubeadm init
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 177.75.187.212]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.187.212 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.187.212 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.005419 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: b99bmp.irs1h9fogfqgrx6w
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 177.75.187.212:6443 --token b99bmp.irs1h9fogfqgrx6w \
        --discovery-token-ca-cert-hash sha256:25e95554c54d1041f3bf5c93f3ea5626b8ba2cb2ecd57facee0f4a1fda3d508d
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
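With the kubeconfig in place, access to the API server can be confirmed:

$ kubectl cluster-info
$ kubectl get nodes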
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
$ kubectl version --short
Client Version: v1.23.5
Server Version: v1.23.5
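
The node only reports Ready once the Calico CNI finishes rolling out; progress can be watched with:

$ kubectl -n kube-system rollout status daemonset/calico-node
$ kubectl -n kube-system get pods -l k8s-app=calico-node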

Adding a node

Run on each worker, as root, the join command printed by kubeadm init:

$ sudo kubeadm join 177.75.187.212:6443 --token b99bmp.irs1h9fogfqgrx6w --discovery-token-ca-cert-hash sha256:25e95554c54d1041f3bf5c93f3ea5626b8ba2cb2ecd57facee0f4a1fda3d508d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
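
The bootstrap token printed by kubeadm init expires after 24 hours. If a node needs to join later, generate a fresh join command on the control plane:

$ sudo kubeadm token create --print-join-command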

FIXME Because of the certificate exchange, the date and time must not differ between the servers.

If needed, run this command to set the server clock manually (see the NTP note below for a lasting fix):

$ sudo date +%T -s "10:49:00"
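
Rather than a one-off adjustment, the clocks can be kept synchronized with NTP (systemd-timesyncd is available as a package on Debian 11):

$ sudo apt install systemd-timesyncd
$ sudo timedatectl set-ntp true
$ timedatectl status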


Cluster status

$ kubectl get nodes
NAME                                  STATUS   ROLES                  AGE    VERSION
kube-ctrl-pl-01.juntotelecom.com.br   Ready    control-plane,master   123m   v1.23.5
kube-worker-01.juntotelecom.com.br    Ready    <none>                 109m   v1.23.5
kube-worker-02.juntotelecom.com.br    Ready    <none>                 6m5s   v1.23.5
$ kubectl get pod --all-namespaces -o wide
NAMESPACE         NAME                                                          READY   STATUS    RESTARTS   AGE    IP               NODE                                  NOMINATED NODE   READINESS GATES
kube-system       coredns-64897985d-p6m8h                                       1/1     Running   0          11m    10.85.0.2        kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system       coredns-64897985d-qdhk8                                       1/1     Running   0          11m    10.85.0.3        kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system       etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          11m    177.75.187.216   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system       kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          11m    177.75.187.216   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system       kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          11m    177.75.187.216   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system       kube-proxy-9b55n                                              1/1     Running   0          11m    177.75.187.216   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system       kube-proxy-kjnvs                                              1/1     Running   0          3m9s   172.28.129.10    kube-worker-01.juntotelecom.com.br    <none>           <none>
kube-system       kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          11m    177.75.187.216   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
tigera-operator   tigera-operator-b876f5799-4d9w7                               1/1     Running   0          8m4s   177.75.187.216   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
$ kubectl get all -n kube-system
NAME                                                              READY   STATUS    RESTARTS   AGE
pod/coredns-64897985d-p6m8h                                       1/1     Running   0          15m
pod/coredns-64897985d-qdhk8                                       1/1     Running   0          15m
pod/etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          15m
pod/kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          15m
pod/kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          15m
pod/kube-proxy-9b55n                                              1/1     Running   0          15m
pod/kube-proxy-kjnvs                                              1/1     Running   0          7m26s
pod/kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          15m
 
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   15m
 
NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-proxy   2         2         2       2            2           kubernetes.io/os=linux   15m
 
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           15m
 
NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-64897985d   2         2         2       15m
$ kubectl describe pod coredns-64897985d-p6m8h -n kube-system
Name:                 coredns-64897985d-p6m8h
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 kube-ctrl-pl-01.juntotelecom.com.br/177.75.187.216
Start Time:           Fri, 01 Apr 2022 16:25:37 -0300
Labels:               k8s-app=kube-dns
                      pod-template-hash=64897985d
Annotations:          <none>
Status:               Running
IP:                   10.85.0.2
IPs:
  IP:           10.85.0.2
  IP:           1100:200::2
Controlled By:  ReplicaSet/coredns-64897985d
Containers:
  coredns:
    Container ID:  cri-o://f30038d0752d6c82a93995b710cbf16b374543961a16ee9a001a217f072ab6e2
    Image:         k8s.gcr.io/coredns/coredns:v1.8.6
    Image ID:      k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Fri, 01 Apr 2022 16:25:40 -0300
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjrc2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-cjrc2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  13m   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled         13m   default-scheduler  Successfully assigned kube-system/coredns-64897985d-p6m8h to kube-ctrl-pl-01.juntotelecom.com.br
  Normal   Pulled            13m   kubelet            Container image "k8s.gcr.io/coredns/coredns:v1.8.6" already present on machine
  Normal   Created           13m   kubelet            Created container coredns
  Normal   Started           13m   kubelet            Started container coredns
  Warning  NodeNotReady      5m8s  node-controller    Node is not ready
$ kubectl describe node kube-worker-01.juntotelecom.com.br
Name:               kube-worker-01.juntotelecom.com.br
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kube-worker-01.juntotelecom.com.br
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 01 Apr 2022 16:33:23 -0300
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kube-worker-01.juntotelecom.com.br
  AcquireTime:     <unset>
  RenewTime:       Fri, 01 Apr 2022 16:41:49 -0300
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 01 Apr 2022 16:38:45 -0300   Fri, 01 Apr 2022 16:32:52 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 01 Apr 2022 16:38:45 -0300   Fri, 01 Apr 2022 16:32:52 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 01 Apr 2022 16:38:45 -0300   Fri, 01 Apr 2022 16:32:52 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 01 Apr 2022 16:38:45 -0300   Fri, 01 Apr 2022 16:33:18 -0300   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  172.28.129.10
  Hostname:    kube-worker-01.juntotelecom.com.br
Capacity:
  cpu:                4
  ephemeral-storage:  19007740Ki
  hugepages-2Mi:      0
  memory:             4025220Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  17517533155
  hugepages-2Mi:      0
  memory:             3922820Ki
  pods:               110
System Info:
  Machine ID:                 c55f2a80a0964f00a07669a5d33c893f
  System UUID:                564dec04-de42-ef82-5234-791adbf266fb
  Boot ID:                    24fb5069-f3b7-4adb-89fc-26156620c8e9
  Kernel Version:             5.10.0-13-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.23.2
  Kubelet Version:            v1.23.5
  Kube-Proxy Version:         v1.23.5
Non-terminated Pods:          (1 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kube-proxy-kjnvs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  Starting                 8m43s                  kube-proxy
  Normal  Starting                 9m30s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  9m30s (x2 over 9m30s)  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    9m30s (x2 over 9m30s)  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     9m30s (x2 over 9m30s)  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  9m15s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  9m15s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    9m15s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     9m15s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientPID
  Normal  Starting                 9m15s                  kubelet     Starting kubelet.
  Normal  NodeReady                9m4s                   kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeReady

Test deployment

$ kubectl create deploy nginx --image=nginx
$ kubectl get deploy -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
nginx   1/1     1            1           28s   nginx        nginx    app=nginx
$ kubectl describe deploy nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 01 Apr 2022 16:44:15 -0300
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-85b98978db (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  91s   deployment-controller  Scaled up replica set nginx-85b98978db to 1
$ kubectl describe pod nginx-85b98978db-w4724
Name:         nginx-85b98978db-w4724
Namespace:    default
Priority:     0
Node:         kube-worker-01.juntotelecom.com.br/172.28.129.10
Start Time:   Fri, 01 Apr 2022 16:43:43 -0300
Labels:       app=nginx
              pod-template-hash=85b98978db
Annotations:  <none>
Status:       Running
IP:           10.85.0.2
IPs:
  IP:           10.85.0.2
  IP:           1100:200::2
Controlled By:  ReplicaSet/nginx-85b98978db
Containers:
  nginx:
    Container ID:   cri-o://6fcfe8156a5dd429c1ac0cb376a68a51a25af01f70915e3fb2156fc289af8e10
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:2275af0f20d71b293916f1958f8497f987b8d8fd8113df54635f2a5915002bf1
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 01 Apr 2022 16:44:03 -0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29qwr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-29qwr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Pulling    2m41s  kubelet            Pulling image "nginx"
  Normal  Pulled     2m27s  kubelet            Successfully pulled image "nginx" in 14.896206994s
  Normal  Created    2m26s  kubelet            Created container nginx
  Normal  Started    2m26s  kubelet            Started container nginx
  Normal  Scheduled  2m14s  default-scheduler  Successfully assigned default/nginx-85b98978db-w4724 to kube-worker-01.juntotelecom.com.br
$ kubectl delete deploy nginx
deployment.apps "nginx" deleted
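
As a follow-up test, the deployment can be recreated and exposed through a NodePort service to verify that traffic reaches a pod from outside the cluster (a sketch; the node port is allocated dynamically, so substitute the value shown by kubectl get svc):

$ kubectl create deploy nginx --image=nginx
$ kubectl expose deploy nginx --port=80 --type=NodePort
$ kubectl get svc nginx
$ curl http://kube-worker-01.juntotelecom.com.br:<NodePort>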