Kubernetes Dual-Stack - Debian 11

Preparing the Operating System

Scenario

Hostname                             IPv4            IPv6                     Data Center  ESXi
kube-ctrl-pl-01.juntotelecom.com.br  177.75.187.212  2804:694:4c00:4001::212  São Paulo    ESXi 03
kube-worker-02.juntotelecom.com.br   177.75.187.222  2804:694:4c00:4001::222  São Paulo    ESXi 03
kube-worker-01.juntotelecom.com.br   177.75.187.216  2804:694:4c00:4001::216  São Paulo    ESXi 02

Partitions

Capacity  Partition
2 G       /
8 G       /usr
1 G       /boot
2 G       /home
20 G      /var
1 G       swap

FIXME Debian was installed from the netinst image. During installation, the only option selected was the SSH server.

FIXME During installation, the user gean was created without administrative privileges.

Additional partitions

  • /var/lib/containers: partition used by the container runtime - CRI-O - to store the pods; used on all servers;
  • /volumes: partition used for the persistent volumes - only on the worker servers.

FIXME The additional partitions are provisioned from the storage array.

$ cat <<EOF | sudo tee -a /etc/hosts
177.75.187.212	kube-ctrl-pl-01.juntotelecom.com.br	kube-ctrl-pl-01
177.75.187.216	kube-worker-01.juntotelecom.com.br	kube-worker-01
177.75.187.222	kube-worker-02.juntotelecom.com.br	kube-worker-02
2804:694:4c00:4001::212	kube-ctrl-pl-01.juntotelecom.com.br	kube-ctrl-pl-01
2804:694:4c00:4001::216	kube-worker-01.juntotelecom.com.br	kube-worker-01
2804:694:4c00:4001::222	kube-worker-02.juntotelecom.com.br	kube-worker-02
EOF
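
To confirm the new entries resolve, getent offers a quick optional check; ahosts lists the addresses of both families for a name:

$ getent hosts kube-ctrl-pl-01.juntotelecom.com.br
$ getent ahosts kube-worker-01.juntotelecom.com.br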

Run on the control plane

$ sudo hostnamectl set-hostname kube-ctrl-pl-01.juntotelecom.com.br

Run on worker 01

$ sudo hostnamectl set-hostname kube-worker-01.juntotelecom.com.br

Run on worker 02

$ sudo hostnamectl set-hostname kube-worker-02.juntotelecom.com.br
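
To verify the change on each node (an optional check, not part of the original run):

$ hostnamectl status
$ hostname -f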

Additional disk

FIXME Disk reserved for the pods (containers).

On all servers

$ MOUNT_POINT=/var/lib/containers
$ DISK_DEVICE=/dev/sdb
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID}  ${MOUNT_POINT}    ext4    defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep containers
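
Before rebooting, the new fstab entry can be validated (optional; findmnt --verify is available in the util-linux shipped with Debian 11):

$ sudo findmnt --verify
$ findmnt ${MOUNT_POINT}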

Run only on the worker servers

FIXME Disk dedicated to the persistent volumes.

$ MOUNT_POINT=/volumes
$ DISK_DEVICE=/dev/sdc
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID}  ${MOUNT_POINT}    ext4    defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep volumes
$ sudo mkdir /volumes/kubernetes
$ sudo chmod 777 /volumes/kubernetes

Installing CRI-O

In this installation, CRI-O is used as the container runtime.

FIXME As of Kubernetes 1.24, the dockershim is removed and Docker Engine is no longer supported as a container runtime.

$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                253952  1 br_netfilter
$ lsmod | grep overlay
overlay               143360  0
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward                     = 1
net.ipv6.conf.all.forwarding            = 1
net.ipv4.conf.all.forwarding            = 1
net.bridge.bridge-nf-call-iptables      = 1
net.bridge.bridge-nf-call-ip6tables     = 1
EOF
$ sudo sysctl --system
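
To confirm the forwarding and bridge parameters are now active (optional check):

$ sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables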
$ OS=Debian_11
$ VERSION=1.23
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
100   389  100   389    0     0    455      0 --:--:-- --:--:-- --:--:--   454
100   390  100   390    0     0    366      0  0:00:01  0:00:01 --:--:--   366
100   391  100   391    0     0    307      0  0:00:01  0:00:01 --:--:--   307
100   392  100   392    0     0    264      0  0:00:01  0:00:01 --:--:--   264
100   393  100   393    0     0    232      0  0:00:01  0:00:01 --:--:--   232
100  1093  100  1093    0     0    575      0  0:00:01  0:00:01 --:--:--     0
OK
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
100  1093  100  1093    0     0   1272      0 --:--:-- --:--:-- --:--:--  1270
OK
$ sudo apt update
$ sudo apt install cri-o cri-o-runc
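
The apt-key warnings above are expected: apt-key is deprecated on Debian 11. If redoing this setup, a keyring-file sketch along current Debian conventions (assumes gpg is installed and $OS is still set; the cri-o:$VERSION repository is handled analogously):

$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor | sudo tee /etc/apt/keyrings/libcontainers.gpg > /dev/null
$ echo "deb [signed-by=/etc/apt/keyrings/libcontainers.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list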

Installing Kubernetes

$ sudo swapoff -a
$ sudo cp -fp /etc/fstab{,.dist} 
$ sudo sed -i '/swap/d' /etc/fstab
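
To confirm swap is disabled now and will stay off after a reboot (optional check; both commands should print no swap entries):

$ swapon --show
$ grep swap /etc/fstab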
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now
$ sudo systemctl status crio
$ sudo systemctl enable kubelet --now
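
Until kubeadm init (or kubeadm join) runs, it is normal for the kubelet to restart in a crash loop every few seconds while it waits for instructions; its logs can be followed with:

$ sudo journalctl -u kubelet -f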

Configuring Kubernetes

Run on the master (control plane).

$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.23.5
[config/images] Pulled k8s.gcr.io/pause:3.6
[config/images] Pulled k8s.gcr.io/etcd:3.5.1-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
$ mkdir -p yamls/config
$ cd yamls/config/
kubeadm-config.yaml
# vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.85.0.0/16,1100:200::/64
  serviceSubnet: 10.96.0.0/16,fd00::/112
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "177.75.176.40"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.40,2804:694:3000:8000::40
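
Optionally, the configuration can first be exercised with kubeadm's dry-run mode, which prints what would be done without changing the host:

$ sudo kubeadm init --config=kubeadm-config.yaml --dry-run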
$ sudo kubeadm init --config=kubeadm-config.yaml
[sudo] password for gean:
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 177.75.176.40]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.176.40 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.176.40 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.503642 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: j1c9ug.y23gft8fg7jo8wdm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 177.75.176.40:6443 --token j1c9ug.y23gft8fg7jo8wdm \
        --discovery-token-ca-cert-hash sha256:8983c27fbb5196e187dce0ee20becc34cc68e195fabee116f70a35ed5af61644
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl version --short
Client Version: v1.23.5
Server Version: v1.23.5
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                                          READY   STATUS    RESTARTS   AGE     IP              NODE                                  NOMINATED NODE   READINESS GATES
kube-system   coredns-64897985d-jdkq7                                       1/1     Running   0          2m2s    10.85.0.2       kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   coredns-64897985d-r78hc                                       1/1     Running   0          2m2s    10.85.0.3       kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          2m16s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          2m9s    177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          2m16s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-proxy-rcsc2                                              1/1     Running   0          2m2s    177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          2m14s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
$ curl -L https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -o custom-resources.yaml
custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    - blockSize: 122
      cidr: fd00::/56
      encapsulation: None
      natOutgoing: Disabled
      nodeSelector: all()
 
---
 
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
$ kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

REF: https://projectcalico.docs.tigera.io/networking/change-block-size
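
The Tigera operator reports its progress through the tigerastatus resource; an optional check before inspecting the pods below is to wait until every component shows Available:

$ kubectl get tigerastatus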

$ kubectl get all -n calico-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/calico-kube-controllers-67f85d7449-f26dm   1/1     Running   0          2m15s
pod/calico-node-9m87k                          1/1     Running   0          2m15s
pod/calico-typha-85c8459578-tchw2              1/1     Running   0          2m16s
 
NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/calico-kube-controllers-metrics   ClusterIP   10.96.25.2      <none>        9094/TCP   79s
service/calico-typha                      ClusterIP   10.96.119.134   <none>        5473/TCP   2m16s
 
NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   2m16s
 
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           2m15s
deployment.apps/calico-typha              1/1     1            1           2m16s
 
NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-67f85d7449   1         1         1       2m15s
replicaset.apps/calico-typha-85c8459578              1         1         1       2m16s
$ kubectl get all -n calico-apiserver -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE                                  NOMINATED NODE   READINESS GATES
pod/calico-apiserver-664c669cd7-5rwbm   1/1     Running   0          50s   192.168.244.65   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
pod/calico-apiserver-664c669cd7-6nqlq   1/1     Running   0          50s   192.168.244.66   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
 
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/calico-api   ClusterIP   10.96.238.12   <none>        443/TCP   50s   apiserver=true
 
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES                               SELECTOR
deployment.apps/calico-apiserver   2/2     2            2           51s   calico-apiserver   docker.io/calico/apiserver:v3.22.1   apiserver=true
 
NAME                                          DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                               SELECTOR
replicaset.apps/calico-apiserver-664c669cd7   2         2         2       51s   calico-apiserver   docker.io/calico/apiserver:v3.22.1   apiserver=true,pod-template-hash=664c669cd7

Adding the nodes

$ sudo kubeadm join 177.75.176.40:6443 --token j1c9ug.y23gft8fg7jo8wdm \
        --discovery-token-ca-cert-hash sha256:8983c27fbb5196e187dce0ee20becc34cc68e195fabee116f70a35ed5af61644
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
$ kubectl get nodes -o wide
NAME                                  STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
kube-ctrl-pl-01.juntotelecom.com.br   Ready    control-plane,master   26m   v1.23.5   177.75.176.40   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
kube-worker-01.juntotelecom.com.br    Ready    <none>                 71s   v1.23.5   177.75.176.41   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
kube-worker-02.juntotelecom.com.br    Ready    <none>                 62s   v1.23.5   177.75.176.42   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
$ kubectl describe node kube-worker-01.juntotelecom.com.br
Name:               kube-worker-01.juntotelecom.com.br
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kube-worker-01.juntotelecom.com.br
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 08 Apr 2022 11:27:33 -0300
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kube-worker-01.juntotelecom.com.br
  AcquireTime:     <unset>
  RenewTime:       Fri, 08 Apr 2022 11:29:20 -0300
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 08 Apr 2022 11:28:49 -0300   Fri, 08 Apr 2022 11:27:33 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 08 Apr 2022 11:28:49 -0300   Fri, 08 Apr 2022 11:27:33 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 08 Apr 2022 11:28:49 -0300   Fri, 08 Apr 2022 11:27:33 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 08 Apr 2022 11:28:49 -0300   Fri, 08 Apr 2022 11:27:48 -0300   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  177.75.176.41
  Hostname:    kube-worker-01.juntotelecom.com.br
Capacity:
  cpu:                8
  ephemeral-storage:  31861548Ki
  hugepages-2Mi:      0
  memory:             8146916Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  29363602589
  hugepages-2Mi:      0
  memory:             8044516Ki
  pods:               110
System Info:
  Machine ID:                 1660f3019edb4b5daf5baa08a68881b2
  System UUID:                564d2094-7d42-2232-a67b-1a86eac3285e
  Boot ID:                    654ceee4-394c-466e-8d1d-a228b98035c8
  Kernel Version:             5.10.0-13-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.23.2
  Kubelet Version:            v1.23.5
  Kube-Proxy Version:         v1.23.5
PodCIDR:                      192.168.1.0/24
PodCIDRs:                     192.168.1.0/24,fd00:0:0:1::/64
Non-terminated Pods:          (3 in total)
  Namespace                   Name                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                             ------------  ----------  ---------------  -------------  ---
  calico-system               calico-node-pr2xf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
  calico-system               calico-typha-85c8459578-wpttp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
  kube-system                 kube-proxy-wszjz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                  From        Message
  ----    ------                   ----                 ----        -------
  Normal  Starting                 3s                   kube-proxy
  Normal  Starting                 113s                 kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  113s (x2 over 113s)  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    113s                 kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     113s                 kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  98s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  98s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    98s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     98s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeHasSufficientPID
  Normal  NodeReady                98s                  kubelet     Node kube-worker-01.juntotelecom.com.br status is now: NodeReady
  Normal  Starting                 98s                  kubelet     Starting kubelet.
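
A quicker way to confirm that each node was assigned both an IPv4 and an IPv6 pod CIDR, without the full describe output:

$ kubectl get node kube-worker-01.juntotelecom.com.br -o jsonpath='{.spec.podCIDRs}'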

Server status

$ cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "mtu": 0,
      "nodename_file_optional": false,
      "log_level": "Info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "ipam": { "type": "calico-ipam", "assign_ipv4" : "true", "assign_ipv6" : "true"},
      "container_settings": {
          "allow_ip_forwarding": false
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "k8s_api_root":"https://10.96.0.1:443",
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    },
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
  ]
}
$ kubectl cluster-info
Kubernetes control plane is running at https://177.75.176.40:6443
CoreDNS is running at https://177.75.176.40:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.     
Extract the public kubeconfig stored in the cluster-info ConfigMap (the same data joining nodes use for discovery):

$ kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/    //" | tee cluster-info.yaml
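
As a final dual-stack sanity check, the podIPs of any non-host-network pod should list one address of each family; a sketch using the calico-apiserver pods created earlier:

$ kubectl get pods -n calico-apiserver -o jsonpath='{.items[0].status.podIPs}'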