Kubernetes Installation
Preparing the Operating System
Scenario
| Hostname | IPv4 | IPv6 | Data Center | ESXI | VLAN |
|---|---|---|---|---|---|
| kube-ctrl-pl-01.juntotelecom.com.br | 177.75.176.40 | 2804:694:3000:8000::40 | Marabá | ESXI 01 | 270 |
| kube-worker-01.juntotelecom.com.br | 177.75.176.41 | 2804:694:3000:8000::41 | Marabá | ESXI 01 | 270 |
| kube-worker-02.juntotelecom.com.br | 177.75.176.42 | 2804:694:3000:8000::42 | Marabá | ESXI 01 | 270 |
- Node network: 177.75.176.32/27, 2804:694:3000:8000::/64
- Pod network: 10.244.0.0/16, fd00::/56
- Service network: 10.96.0.0/16, fd00:0:0:100::/112
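Before going further, it is worth confirming that each node already has its planned IPv4 and IPv6 addresses and an IPv6 default route. A quick check with iproute2, run on each node:

$ ip -br addr show
$ ip -6 route show default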
Additional partition
- /var/lib/containers: partition used by the container runtime (CRI-O) to store pod/container data. Used on all servers.
$ cat <<EOF | sudo tee -a /etc/hosts
177.75.176.40 kube-ctrl-pl-01.juntotelecom.com.br kube-ctrl-pl-01
177.75.176.41 kube-worker-01.juntotelecom.com.br kube-worker-01
177.75.176.42 kube-worker-02.juntotelecom.com.br kube-worker-02
2804:694:3000:8000::40 kube-ctrl-pl-01.juntotelecom.com.br kube-ctrl-pl-01
2804:694:3000:8000::41 kube-worker-01.juntotelecom.com.br kube-worker-01
2804:694:3000:8000::42 kube-worker-02.juntotelecom.com.br kube-worker-02
EOF
Run on the control plane
$ sudo hostnamectl set-hostname kube-ctrl-pl-01.juntotelecom.com.br
Run on worker 01
$ sudo hostnamectl set-hostname kube-worker-01.juntotelecom.com.br
Run on worker 02
$ sudo hostnamectl set-hostname kube-worker-02.juntotelecom.com.br
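To confirm the hostname and the /etc/hosts entries on each node (getent resolves through /etc/hosts, so this also validates the block added above):

$ hostname -f
$ getent hosts kube-ctrl-pl-01.juntotelecom.com.br kube-worker-01.juntotelecom.com.br kube-worker-02.juntotelecom.com.br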
Additional disk
Disk reserved for pods/containers.
On all servers
$ MOUNT_POINT=/var/lib/containers
$ DISK_DEVICE=/dev/sdb
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID} ${MOUNT_POINT} ext4 defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep containers
Installing CRI-O
In this installation, CRI-O is used as the container runtime.
As of Kubernetes 1.24, dockershim has been removed and Docker is no longer supported as a container runtime.
$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                253952  1 br_netfilter

$ lsmod | grep overlay
overlay               143360  0
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
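To confirm the values were applied (sysctl prints the current value of each key):

$ sysctl net.ipv4.conf.all.forwarding net.ipv6.conf.all.forwarding net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables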
$ OS=Debian_11
$ VERSION=1.23
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF

$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
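Since apt-key is deprecated (as the warnings above show), an alternative is to write the dearmored key straight into /etc/apt/trusted.gpg.d, which apt reads natively. A sketch, assuming gpg is installed:

$ curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/libcontainers.gpg > /dev/null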
$ sudo apt update
$ sudo apt install cri-o cri-o-runc
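To confirm the packages came from the Kubic repository and check the installed version:

$ apt-cache policy cri-o
$ crio --version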
Installing Kubernetes
$ sudo swapoff -a
$ sudo cp -fp /etc/fstab{,.dist}
$ sudo sed -i '/swap/d' /etc/fstab
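The kubelet requires swap to stay disabled. To verify nothing is still active (swapon prints nothing when no swap is enabled):

$ swapon --show
$ free -h | grep -i swap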
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
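To confirm the installed versions and that the three packages are pinned against upgrades:

$ kubeadm version -o short
$ kubectl version --client
$ apt-mark showhold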
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now
$ sudo systemctl status crio
$ sudo systemctl enable kubelet --now
Configuring Kubernetes
Run on the master (control plane).
$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.24.0
[config/images] Pulled k8s.gcr.io/pause:3.7
[config/images] Pulled k8s.gcr.io/etcd:3.5.3-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
$ mkdir -p yamls/config
$ cd yamls/config/
- kubeadm-config.yaml
# vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00::/56
  serviceSubnet: 10.96.0.0/16,fd00:0:0:100::/112
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "177.75.176.40"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.40,2804:694:3000:8000::40
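Optionally, kubeadm can exercise this configuration with --dry-run first, which validates the file and prints what would be done without changing the host:

$ sudo kubeadm init --config=kubeadm-config.yaml --dry-run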
$ sudo kubeadm init --config=kubeadm-config.yaml
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 177.75.176.40]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.176.40 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.176.40 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.525710 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9xtviv.hgg7hqw1v51l1bd4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 177.75.176.40:6443 --token 9xtviv.hgg7hqw1v51l1bd4 \
	--discovery-token-ca-cert-hash sha256:2eb6439778c1dd17ae6ded326fa0cd94a70943511224b2b092a31abaae55f20c
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                                          READY   STATUS    RESTARTS   AGE    IP              NODE                                  NOMINATED NODE   READINESS GATES
kube-system   coredns-6d4b75cb6d-hwsbh                                      1/1     Running   0          106s   10.85.0.2       kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   coredns-6d4b75cb6d-x67fg                                      1/1     Running   0          106s   10.85.0.3       kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          118s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          119s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          119s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-proxy-fkqj5                                              1/1     Running   0          107s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          119s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
Adding the workers (nodes)
$ sudo kubeadm join 177.75.176.40:6443 --token 9xtviv.hgg7hqw1v51l1bd4 \
	--discovery-token-ca-cert-hash sha256:2eb6439778c1dd17ae6ded326fa0cd94a70943511224b2b092a31abaae55f20c
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
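The bootstrap token expires (after 24 hours by default). If another worker is added later, a fresh join command can be generated on the control plane:

$ kubeadm token create --print-join-command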
$ kubectl get nodes -o wide
NAME                                  STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
kube-ctrl-pl-01.juntotelecom.com.br   Ready    control-plane   5m    v1.24.0   177.75.176.40   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-9-amd64    cri-o://1.23.2
kube-worker-01.juntotelecom.com.br    Ready    <none>          50s   v1.24.0   177.75.176.41   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
kube-worker-02.juntotelecom.com.br    Ready    <none>          34s   v1.24.0   177.75.176.42   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
Calico network
$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
$ curl -L https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -o custom-resources.yaml
- custom-resources.yaml
---
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: IPIP
      natOutgoing: Enabled
      nodeSelector: all()
    - blockSize: 122
      cidr: fd00::/56
      encapsulation: None
      natOutgoing: Enabled
      nodeSelector: all()

---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
$ kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
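To confirm that both address pools (IPv4 and IPv6) were created as declared in custom-resources.yaml, query the Calico CRD installed by the operator:

$ kubectl get ippools.crd.projectcalico.org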
$ kubectl get pod --all-namespaces -o wide
NAMESPACE          NAME                                                          READY   STATUS    RESTARTS   AGE     IP               NODE                                  NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-5f794db5-6bhps                               1/1     Running   0          44s     10.244.101.65    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-apiserver   calico-apiserver-5f794db5-pgjhs                               1/1     Running   0          44s     10.244.213.129   kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-kube-controllers-79798cc6ff-hxkk6                      1/1     Running   0          3m21s   10.85.0.2        kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-flqq6                                             1/1     Running   0          3m21s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-mhtpv                                             1/1     Running   0          3m21s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-s5jps                                             1/1     Running   0          3m21s   177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
calico-system      calico-typha-dc4d598d7-7lwfn                                  1/1     Running   0          3m18s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-typha-dc4d598d7-j9z7w                                  1/1     Running   0          3m22s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        coredns-6d4b75cb6d-hwsbh                                      1/1     Running   0          13m     10.85.0.2        kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        coredns-6d4b75cb6d-x67fg                                      1/1     Running   0          13m     10.85.0.3        kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-8m977                                              1/1     Running   0          9m29s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
kube-system        kube-proxy-fkqj5                                              1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-jd226                                              1/1     Running   0          9m13s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
tigera-operator    tigera-operator-8d54968b9-drbmg                               1/1     Running   0          7m19s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
After rebooting the servers, Calico assigned the IPs from the configured pools to the pods. (The 10.85.0.x addresses above come from CRI-O's default bridge CNI, which was in use before Calico took over.)
$ kubectl get pod --all-namespaces -o wide
NAMESPACE          NAME                                                          READY   STATUS    RESTARTS        AGE     IP               NODE                                  NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-5f794db5-6bhps                               1/1     Running   1               7m3s    10.244.101.66    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-apiserver   calico-apiserver-5f794db5-pgjhs                               1/1     Running   1               7m3s    10.244.213.131   kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-kube-controllers-79798cc6ff-hxkk6                      1/1     Running   1               9m40s   10.244.213.130   kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-flqq6                                             1/1     Running   1               9m40s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-mhtpv                                             1/1     Running   1               9m40s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-s5jps                                             1/1     Running   1               9m40s   177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
calico-system      calico-typha-dc4d598d7-7lwfn                                  1/1     Running   2 (2m41s ago)   9m37s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-typha-dc4d598d7-j9z7w                                  1/1     Running   2 (2m49s ago)   9m41s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        coredns-6d4b75cb6d-hwsbh                                      1/1     Running   1               19m     10.244.244.66    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        coredns-6d4b75cb6d-x67fg                                      1/1     Running   1               19m     10.244.244.65    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   1               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   1               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   1               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-8m977                                              1/1     Running   1               15m     177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
kube-system        kube-proxy-fkqj5                                              1/1     Running   1               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-jd226                                              1/1     Running   1               15m     177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   1               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
tigera-operator    tigera-operator-8d54968b9-drbmg                               1/1     Running   2 (2m27s ago)   13m     177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
Dual-stack services
$ kubectl get services --all-namespaces
NAMESPACE          NAME                              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
calico-apiserver   calico-api                        ClusterIP   10.96.160.94   <none>        443/TCP                  7m37s
calico-system      calico-kube-controllers-metrics   ClusterIP   10.96.96.246   <none>        9094/TCP                 9m14s
calico-system      calico-typha                      ClusterIP   10.96.88.251   <none>        5473/TCP                 10m
default            kubernetes                        ClusterIP   10.96.0.1      <none>        443/TCP                  20m
kube-system        kube-dns                          ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   20m
$ kubectl describe service kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.1
IPs:               10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         177.75.176.40:6443
Session Affinity:  None
Events:            <none>

$ kubectl describe service kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.10
IPs:               10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.244.65:53,10.244.244.66:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.244.65:53,10.244.244.66:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.244.244.65:9153,10.244.244.66:9153
Session Affinity:  None
Events:            <none>

$ kubectl describe service calico-typha -n calico-system
Name:              calico-typha
Namespace:         calico-system
Labels:            k8s-app=calico-typha
Annotations:       <none>
Selector:          k8s-app=calico-typha
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.88.251
IPs:               10.96.88.251
Port:              calico-typha  5473/TCP
TargetPort:        calico-typha/TCP
Endpoints:         177.75.176.41:5473,177.75.176.42:5473
Session Affinity:  None
Events:            <none>

$ kubectl describe service calico-kube-controllers-metrics -n calico-system
Name:              calico-kube-controllers-metrics
Namespace:         calico-system
Labels:            k8s-app=calico-kube-controllers
Annotations:       <none>
Selector:          k8s-app=calico-kube-controllers
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.96.246
IPs:               10.96.96.246
Port:              metrics-port  9094/TCP
TargetPort:        9094/TCP
Endpoints:         10.244.213.130:9094
Session Affinity:  None
Events:            <none>

$ kubectl describe service calico-api -n calico-apiserver
Name:              calico-api
Namespace:         calico-apiserver
Labels:            <none>
Annotations:       <none>
Selector:          apiserver=true
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.160.94
IPs:               10.96.160.94
Port:              apiserver  443/TCP
TargetPort:        5443/TCP
Endpoints:         10.244.101.66:5443,10.244.213.131:5443
Session Affinity:  None
Events:            <none>
Edit each of the services below and add the following under spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
$ kubectl edit service kubernetes
$ kubectl edit service kube-dns -n kube-system
$ kubectl edit service calico-api -n calico-apiserver
$ kubectl edit service calico-typha -n calico-system
$ kubectl edit service calico-kube-controllers-metrics -n calico-system
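A scriptable alternative to kubectl edit is kubectl patch with the same two fields; shown here for kube-dns as an example (the equivalent patch works for the other services):

$ kubectl patch service kube-dns -n kube-system -p '{"spec":{"ipFamilyPolicy":"PreferDualStack","ipFamilies":["IPv6","IPv4"]}}'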
$ kubectl describe service kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                10.96.0.1
IPs:               10.96.0.1,fd00:0:0:100::8dfe
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         177.75.176.40:6443
Session Affinity:  None
Events:            <none>

$ kubectl describe service kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                10.96.0.10
IPs:               10.96.0.10,fd00:0:0:100::c8ec
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.244.65:53,10.244.244.66:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.244.65:53,10.244.244.66:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.244.244.65:9153,10.244.244.66:9153
Session Affinity:  None
Events:            <none>

$ kubectl describe service calico-api -n calico-apiserver
Name:              calico-api
Namespace:         calico-apiserver
Labels:            <none>
Annotations:       <none>
Selector:          apiserver=true
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                10.96.160.94
IPs:               10.96.160.94,fd00:0:0:100::609d
Port:              apiserver  443/TCP
TargetPort:        5443/TCP
Endpoints:         10.244.101.66:5443,10.244.213.131:5443
Session Affinity:  None
Events:            <none>

$ kubectl describe service calico-typha -n calico-system
Name:              calico-typha
Namespace:         calico-system
Labels:            k8s-app=calico-typha
Annotations:       <none>
Selector:          k8s-app=calico-typha
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                10.96.88.251
IPs:               10.96.88.251,fd00:0:0:100::7b82
Port:              calico-typha  5473/TCP
TargetPort:        calico-typha/TCP
Endpoints:         177.75.176.41:5473,177.75.176.42:5473
Session Affinity:  None
Events:            <none>

$ kubectl describe service calico-kube-controllers-metrics -n calico-system
Name:              calico-kube-controllers-metrics
Namespace:         calico-system
Labels:            k8s-app=calico-kube-controllers
Annotations:       <none>
Selector:          k8s-app=calico-kube-controllers
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv4,IPv6
IP:                10.96.96.246
IPs:               10.96.96.246,fd00:0:0:100::9a45
Port:              metrics-port  9094/TCP
TargetPort:        9094/TCP
Endpoints:         10.244.213.130:9094
Session Affinity:  None
Events:            <none>
Connectivity test
$ kubectl run multitool --image=praqma/network-multitool
pod/multitool created
$ kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE                                 NOMINATED NODE   READINESS GATES
multitool   1/1     Running   0          51s   10.244.101.67   kube-worker-01.juntotelecom.com.br   <none>           <none>
$ kubectl exec -it multitool -- bash
bash-5.1# nslookup kubernetes
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
Name:   kubernetes.default.svc.cluster.local
Address: fd00:0:0:100::8dfe

bash-5.1# nslookup google.com
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
Name:   google.com
Address: 142.251.132.238
Name:   google.com
Address: 2800:3f0:4001:809::200e
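DNS resolution returns both address families. To also exercise IPv6 egress through the natOutgoing pool, a request can be forced over IPv6 from the same pod (assuming the multitool image ships curl):

bash-5.1# curl -6 -sI https://www.google.com | head -n 1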
$ kubectl delete pod multitool
pod "multitool" deleted
