====== Kubernetes Installation ======

==== Preparing the Operating System ====
[[initial_config_deb|Initial configuration]]

===== Scenario =====
^Hostname^IPv4^IPv6^Data Center^ESXI^VLAN^
|kube-ctrl-pl-01.juntotelecom.com.br|177.75.176.40|2804:694:3000:8000::40|Marabá|ESXI 01|270|
|kube-worker-01.juntotelecom.com.br|177.75.176.41|2804:694:3000:8000::41|Marabá|ESXI 01|270|
|kube-worker-02.juntotelecom.com.br|177.75.176.42|2804:694:3000:8000::42|Marabá|ESXI 01|270|

  * **Node network:** 177.75.176.32/27, 2804:694:3000:8000::/64
  * **Pod network:** 10.244.0.0/16, fd00::/56
  * **Service network:** 10.96.0.0/16, fd00:0:0:100::/112

=== Additional partition ===
  * **/var/lib/containers**: partition used by the container runtime (CRI-O) to store pod and container data. Created on all three servers.

Add the cluster hosts to ''/etc/hosts'' on all servers:
<code bash>
$ cat <<EOF | sudo tee -a /etc/hosts
177.75.176.40 kube-ctrl-pl-01.juntotelecom.com.br kube-ctrl-pl-01
177.75.176.41 kube-worker-01.juntotelecom.com.br kube-worker-01
177.75.176.42 kube-worker-02.juntotelecom.com.br kube-worker-02
2804:694:3000:8000::40 kube-ctrl-pl-01.juntotelecom.com.br kube-ctrl-pl-01
2804:694:3000:8000::41 kube-worker-01.juntotelecom.com.br kube-worker-01
2804:694:3000:8000::42 kube-worker-02.juntotelecom.com.br kube-worker-02
EOF
</code>

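A quick sanity check: ''getent hosts <name>'' should return the addresses above on every server. A minimal self-contained sketch of the expected IPv4 mapping (the data is inlined here so the snippet runs anywhere):
<code bash>
# Expected name -> IPv4 mapping, inlined for illustration; on a real node
# you would query `getent hosts <name>` instead of this here-string.
hosts_data='177.75.176.40 kube-ctrl-pl-01
177.75.176.41 kube-worker-01
177.75.176.42 kube-worker-02'
ip_of() { printf '%s\n' "$hosts_data" | awk -v h="$1" '$2 == h {print $1}'; }
ip_of kube-worker-01   # prints 177.75.176.41
</code>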
Run on the control plane:
<code bash>
$ sudo hostnamectl set-hostname kube-ctrl-pl-01.juntotelecom.com.br
</code>

Run on worker 01:
<code bash>
$ sudo hostnamectl set-hostname kube-worker-01.juntotelecom.com.br
</code>

Run on worker 02:
<code bash>
$ sudo hostnamectl set-hostname kube-worker-02.juntotelecom.com.br
</code>

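The FQDNs set above also become the Kubernetes node names (as the ''kubectl get nodes'' output later confirms); the short host name is simply the first DNS label. A trivial illustration of that relationship:
<code bash>
# The short name is everything before the first dot of the FQDN.
fqdn=kube-worker-01.juntotelecom.com.br
short=${fqdn%%.*}
echo "$short"   # prints kube-worker-01
</code>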
===== Additional disk =====
FIXME Disk reserved for pod and container storage.

=== On all servers ===
<code bash>
$ MOUNT_POINT=/var/lib/containers
$ DISK_DEVICE=/dev/sdb
</code>

Create a single primary partition spanning the whole disk (each piped answer maps to one ''fdisk'' prompt: new partition, primary, number 1, default first and last sectors, write):
<code bash>
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
</code>

<code bash>
$ sudo mkfs.ext4 ${DISK_DEVICE}1
</code>

<code bash>
$ UUID=$(sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID)
$ sudo mkdir -p ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
</code>

<code bash>
$ echo "${UUID}  ${MOUNT_POINT}    ext4    defaults 1 2" | sudo tee -a /etc/fstab
</code>

<code bash>
$ sudo mount ${MOUNT_POINT}
</code>

<code bash>
$ df -hT | grep containers
</code>

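The fstab entry works because ''blkid -o export'' already emits the ''UUID=...'' key/value form. A self-contained sketch of that extraction against sample output (the UUID below is made up):
<code bash>
# Sample `blkid -o export` output with a made-up UUID; grep keeps the UUID=
# line and drops PARTUUID, exactly as in the commands above.
blkid_out='DEVNAME=/dev/sdb1
UUID=0a1b2c3d-1111-2222-3333-444455556666
TYPE=ext4
PARTUUID=1a2b3c4d-01'
UUID=$(printf '%s\n' "$blkid_out" | grep UUID | grep -v PARTUUID)
echo "${UUID}  /var/lib/containers    ext4    defaults 1 2"
</code>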
===== Installing CRI-O =====
In this installation, CRI-O is used as the container runtime.

FIXME As of Kubernetes 1.24 the dockershim has been removed, so Docker is no longer supported as a container runtime.

<code bash>
$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
</code>

<code bash>
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
</code>

<code bash>
$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                253952  1 br_netfilter
</code>

<code bash>
$ lsmod | grep overlay
overlay               143360  0
</code>

<code bash>
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.conf.all.forwarding            = 1
net.ipv6.conf.all.forwarding            = 1
net.bridge.bridge-nf-call-iptables      = 1
net.bridge.bridge-nf-call-ip6tables     = 1
EOF
</code>

<code bash>
$ sudo sysctl --system
</code>

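To confirm the settings took effect, they can be read back directly from ''/proc'' (the ''net.bridge.*'' entries only exist while ''br_netfilter'' is loaded):
<code bash>
# Should print 1 after `sysctl --system`; the IPv6 twin lives at
# /proc/sys/net/ipv6/conf/all/forwarding.
cat /proc/sys/net/ipv4/conf/all/forwarding
</code>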
<code bash>
$ OS=Debian_11
$ VERSION=1.23
</code>

<code bash>
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
</code>

<code bash>
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
</code>

<code bash>
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
</code>

<code bash>
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
</code>

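As the warnings above note, ''apt-key'' is deprecated. A hedged modern alternative (the keyring path and file name below are our own choice, not an official one) is to dearmor the key into ''/etc/apt/keyrings'' and reference it with ''signed-by'' in the source list:
<code bash>
# Hypothetical apt-key-free variant of the steps above; the actual download
# is commented out because it needs network access and root.
KEYRING=/etc/apt/keyrings/libcontainers.gpg
# curl -fsSL "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/${OS}/Release.key" \
#   | gpg --dearmor | sudo tee "$KEYRING" >/dev/null
echo "deb [signed-by=${KEYRING}] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/${OS:-Debian_11}/ /"
</code>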
<code bash>
$ sudo apt update
$ sudo apt install cri-o cri-o-runc
</code>

===== Installing Kubernetes =====
Disable swap (the kubelet requires swap to be off) and remove it from ''/etc/fstab'' so the change survives reboots:
<code bash>
$ sudo swapoff -a
$ sudo cp -fp /etc/fstab{,.dist}
$ sudo sed -i '/swap/d' /etc/fstab
</code>

<code bash>
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
</code>

<code bash>
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
</code>

<code bash>
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
</code>

<code bash>
$ sudo systemctl daemon-reload
</code>

<code bash>
$ sudo systemctl enable crio --now
</code>

<code bash>
$ sudo systemctl status crio
</code>

<code bash>
$ sudo systemctl enable kubelet --now
</code>

===== Configuring Kubernetes =====
Run on the master (control plane).

<code bash>
$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.24.0
[config/images] Pulled k8s.gcr.io/pause:3.7
[config/images] Pulled k8s.gcr.io/etcd:3.5.3-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
</code>

<code bash>
$ mkdir -p yamls/config
$ cd yamls/config/
</code>

<code yaml kubeadm-config.yaml>
# vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00::/56
  serviceSubnet: 10.96.0.0/16,fd00:0:0:100::/112
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "177.75.176.40"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.40,2804:694:3000:8000::40
</code>

<code bash>
$ sudo kubeadm init --config=kubeadm-config.yaml
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 177.75.176.40]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.176.40 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-ctrl-pl-01.juntotelecom.com.br localhost] and IPs [177.75.176.40 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.525710 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-ctrl-pl-01.juntotelecom.com.br as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9xtviv.hgg7hqw1v51l1bd4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 177.75.176.40:6443 --token 9xtviv.hgg7hqw1v51l1bd4 \
        --discovery-token-ca-cert-hash sha256:2eb6439778c1dd17ae6ded326fa0cd94a70943511224b2b092a31abaae55f20c
</code>

<code bash>
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code>

<code bash>
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                                          READY   STATUS    RESTARTS   AGE    IP              NODE                                  NOMINATED NODE   READINESS GATES
kube-system   coredns-6d4b75cb6d-hwsbh                                      1/1     Running   0          106s   10.85.0.2       kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   coredns-6d4b75cb6d-x67fg                                      1/1     Running   0          106s   10.85.0.3       kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          118s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          119s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          119s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-proxy-fkqj5                                              1/1     Running   0          107s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system   kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          119s   177.75.176.40   kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
</code>

===== Adding the workers (nodes) =====
Run on each worker:
<code bash>
$ sudo kubeadm join 177.75.176.40:6443 --token 9xtviv.hgg7hqw1v51l1bd4 \
        --discovery-token-ca-cert-hash sha256:2eb6439778c1dd17ae6ded326fa0cd94a70943511224b2b092a31abaae55f20c
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
</code>

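The plain ''kubeadm join'' above registers each worker with a single (IPv4) node address. For dual-stack node addresses, the worker kubelet can instead be given both IPs through a join configuration; a hedged sketch for worker 01 (the file name is our own; the token and CA hash are the ones printed by ''kubeadm init''):
<code yaml kubeadm-join.yaml>
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "177.75.176.40:6443"
    token: "9xtviv.hgg7hqw1v51l1bd4"
    caCertHashes:
    - "sha256:2eb6439778c1dd17ae6ded326fa0cd94a70943511224b2b092a31abaae55f20c"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 177.75.176.41,2804:694:3000:8000::41
</code>

It would be applied with ''sudo kubeadm join --config kubeadm-join.yaml'' (the ''%%--config%%'' flag is mutually exclusive with the token flags).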
<code bash>
$ kubectl get nodes -o wide
NAME                                  STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
kube-ctrl-pl-01.juntotelecom.com.br   Ready    control-plane   5m    v1.24.0   177.75.176.40   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-9-amd64    cri-o://1.23.2
kube-worker-01.juntotelecom.com.br    Ready    <none>          50s   v1.24.0   177.75.176.41   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
kube-worker-02.juntotelecom.com.br    Ready    <none>          34s   v1.24.0   177.75.176.42   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
</code>

===== Calico network =====
<code bash>
$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
</code>

Download the operator's example custom resources and edit the ''ipPools'' to match the pod networks defined earlier:
<code bash>
$ curl -L https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -o custom-resources.yaml
</code>

<file yaml custom-resources.yaml>
---
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: IPIP
      natOutgoing: Enabled
      nodeSelector: all()
    - blockSize: 122
      cidr: fd00::/56
      encapsulation: None
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
</file>

<code bash>
$ kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
</code>

<code bash>
$ kubectl get pod --all-namespaces -o wide
NAMESPACE          NAME                                                          READY   STATUS    RESTARTS   AGE     IP               NODE                                  NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-5f794db5-6bhps                               1/1     Running   0          44s     10.244.101.65    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-apiserver   calico-apiserver-5f794db5-pgjhs                               1/1     Running   0          44s     10.244.213.129   kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-kube-controllers-79798cc6ff-hxkk6                      1/1     Running   0          3m21s   10.85.0.2        kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-flqq6                                             1/1     Running   0          3m21s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-mhtpv                                             1/1     Running   0          3m21s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-s5jps                                             1/1     Running   0          3m21s   177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
calico-system      calico-typha-dc4d598d7-7lwfn                                  1/1     Running   0          3m18s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-typha-dc4d598d7-j9z7w                                  1/1     Running   0          3m22s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        coredns-6d4b75cb6d-hwsbh                                      1/1     Running   0          13m     10.85.0.2        kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        coredns-6d4b75cb6d-x67fg                                      1/1     Running   0          13m     10.85.0.3        kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-8m977                                              1/1     Running   0          9m29s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
kube-system        kube-proxy-fkqj5                                              1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-jd226                                              1/1     Running   0          9m13s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0          13m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
tigera-operator    tigera-operator-8d54968b9-drbmg                               1/1     Running   0          7m19s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
</code>

FIXME After rebooting the servers, Calico assigned the IPs from the configured pools to the pods.

<code bash>
$ kubectl get pod --all-namespaces -o wide
NAMESPACE          NAME                                                          READY   STATUS    RESTARTS        AGE     IP               NODE                                  NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-5f794db5-6bhps                               1/1     Running   0               7m3s    10.244.101.66    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-apiserver   calico-apiserver-5f794db5-pgjhs                               1/1     Running   0               7m3s    10.244.213.131   kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-kube-controllers-79798cc6ff-hxkk6                      1/1     Running   0               9m40s   10.244.213.130   kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-flqq6                                             1/1     Running   0               9m40s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-mhtpv                                             1/1     Running   0               9m40s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
calico-system      calico-node-s5jps                                             1/1     Running   0               9m40s   177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
calico-system      calico-typha-dc4d598d7-7lwfn                                  1/1     Running   2 (2m41s ago)   9m37s   177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
calico-system      calico-typha-dc4d598d7-j9z7w                                  1/1     Running   2 (2m49s ago)   9m41s   177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        coredns-6d4b75cb6d-hwsbh                                      1/1     Running   0               19m     10.244.244.66    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        coredns-6d4b75cb6d-x67fg                                      1/1     Running   0               19m     10.244.244.65    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        etcd-kube-ctrl-pl-01.juntotelecom.com.br                      1/1     Running   0               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-apiserver-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-controller-manager-kube-ctrl-pl-01.juntotelecom.com.br   1/1     Running   0               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-8m977                                              1/1     Running   0               15m     177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
kube-system        kube-proxy-fkqj5                                              1/1     Running   0               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-jd226                                              1/1     Running   0               15m     177.75.176.42    kube-worker-02.juntotelecom.com.br    <none>           <none>
kube-system        kube-scheduler-kube-ctrl-pl-01.juntotelecom.com.br            1/1     Running   0               19m     177.75.176.40    kube-ctrl-pl-01.juntotelecom.com.br   <none>           <none>
tigera-operator    tigera-operator-8d54968b9-drbmg                               1/1     Running   2 (2m27s ago)   13m     177.75.176.41    kube-worker-01.juntotelecom.com.br    <none>           <none>
</code>

===== Dual-stack services =====
<code bash>
$ kubectl get services --all-namespaces
NAMESPACE          NAME                              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
calico-apiserver   calico-api                        ClusterIP   10.96.160.94   <none>        443/TCP                  7m37s
calico-system      calico-kube-controllers-metrics   ClusterIP   10.96.96.246   <none>        9094/TCP                 9m14s
calico-system      calico-typha                      ClusterIP   10.96.88.251   <none>        5473/TCP                 10m
default            kubernetes                        ClusterIP   10.96.0.1      <none>        443/TCP                  20m
kube-system        kube-dns                          ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   20m
</code>

<code bash>
$ kubectl describe service kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.1
IPs:               10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         177.75.176.40:6443
Session Affinity:  None
Events:            <none>
</code>

<code bash>
$ kubectl describe service kube-dns -n kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
Annotations:       prometheus.io/port: 9153
                   prometheus.io/scrape: true
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.10
IPs:               10.96.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.244.244.65:53,10.244.244.66:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.244.244.65:53,10.244.244.66:53
Port:              metrics  9153/TCP
TargetPort:        9153/TCP
Endpoints:         10.244.244.65:9153,10.244.244.66:9153
Session Affinity:  None
Events:            <none>
</code>

<code bash>
$ kubectl describe service calico-typha -n calico-system
Name:              calico-typha
Namespace:         calico-system
Labels:            k8s-app=calico-typha
Annotations:       <none>
Selector:          k8s-app=calico-typha
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.88.251
IPs:               10.96.88.251
Port:              calico-typha  5473/TCP
TargetPort:        calico-typha/TCP
Endpoints:         177.75.176.41:5473,177.75.176.42:5473
Session Affinity:  None
Events:            <none>
</code>

<code bash>
$ kubectl describe service calico-kube-controllers-metrics -n calico-system
Name:              calico-kube-controllers-metrics
Namespace:         calico-system
Labels:            k8s-app=calico-kube-controllers
Annotations:       <none>
Selector:          k8s-app=calico-kube-controllers
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.96.246
IPs:               10.96.96.246
Port:              metrics-port  9094/TCP
TargetPort:        9094/TCP
Endpoints:         10.244.213.130:9094
Session Affinity:  None
Events:            <none>
</code>

<code bash>
$ kubectl describe service calico-api -n calico-apiserver
Name:              calico-api
Namespace:         calico-apiserver
Labels:            <none>
Annotations:       <none>
Selector:          apiserver=true
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.160.94
IPs:               10.96.160.94
Port:              apiserver  443/TCP
TargetPort:        5443/TCP
Endpoints:         10.244.101.66:5443,10.244.213.131:5443
Session Affinity:  None
Events:            <none>
</code>

-**Edit each service and add the following to its ''spec'':**
-<code yaml>
-  ipFamilyPolicy: PreferDualStack
-  ipFamilies:
-  - IPv4
-  - IPv6
-</code>
- 
-<code bash> 
-$ kubectl edit service kubernetes 
-$ kubectl edit service kube-dns -n kube-system 
-$ kubectl edit service calico-api -n calico-apiserver 
-$ kubectl edit service calico-typha -n calico-system 
-$ kubectl edit service calico-kube-controllers-metrics -n calico-system 
-</code> 
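The same change can also be applied without opening an editor. This is a sketch, not from the original procedure: it assumes ''kubectl patch'' with a strategic merge patch against the five services listed above, and the guard simply skips the calls when ''kubectl'' is not on the PATH.

```shell
# Sketch: apply the dual-stack spec from the snippet above non-interactively.
# Service names and namespaces are the ones used in this cluster.
PATCH='{"spec":{"ipFamilyPolicy":"PreferDualStack","ipFamilies":["IPv4","IPv6"]}}'

if command -v kubectl >/dev/null 2>&1; then
  kubectl patch service kubernetes -p "$PATCH"
  kubectl patch service kube-dns -n kube-system -p "$PATCH"
  kubectl patch service calico-api -n calico-apiserver -p "$PATCH"
  kubectl patch service calico-typha -n calico-system -p "$PATCH"
  kubectl patch service calico-kube-controllers-metrics -n calico-system -p "$PATCH"
fi
```

Note that IPv4 is kept first in ''ipFamilies'': Kubernetes preserves the primary IP family of an existing Service, which matches the ''IPv4,IPv6'' order seen in the ''describe'' output below.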
- 
-<code bash> 
-$ kubectl describe service kubernetes 
-Name:              kubernetes 
-Namespace:         default 
-Labels:            component=apiserver 
-                   provider=kubernetes 
-Annotations:       <none> 
-Selector:          <none> 
-Type:              ClusterIP 
-IP Family Policy:  PreferDualStack 
-IP Families:       IPv4,IPv6 
-IP:                10.96.0.1 
-IPs:               10.96.0.1,fd00:0:0:100::8dfe 
-Port:              https  443/TCP 
-TargetPort:        6443/TCP 
-Endpoints:         177.75.176.40:6443 
-Session Affinity:  None 
-Events:            <none> 
-</code> 
- 
-<code bash> 
-$ kubectl describe service kube-dns -n kube-system 
-Name:              kube-dns 
-Namespace:         kube-system 
-Labels:            k8s-app=kube-dns 
-                   kubernetes.io/cluster-service=true 
-                   kubernetes.io/name=CoreDNS 
-Annotations:       prometheus.io/port: 9153 
-                   prometheus.io/scrape: true 
-Selector:          k8s-app=kube-dns 
-Type:              ClusterIP 
-IP Family Policy:  PreferDualStack 
-IP Families:       IPv4,IPv6 
-IP:                10.96.0.10 
-IPs:               10.96.0.10,fd00:0:0:100::c8ec 
-Port:              dns  53/UDP 
-TargetPort:        53/UDP 
-Endpoints:         10.244.244.65:53,10.244.244.66:53 
-Port:              dns-tcp  53/TCP 
-TargetPort:        53/TCP 
-Endpoints:         10.244.244.65:53,10.244.244.66:53 
-Port:              metrics  9153/TCP 
-TargetPort:        9153/TCP 
-Endpoints:         10.244.244.65:9153,10.244.244.66:9153 
-Session Affinity:  None 
-Events:            <none> 
-</code> 
- 
-<code bash> 
-$ kubectl describe service calico-api -n calico-apiserver 
-Name:              calico-api 
-Namespace:         calico-apiserver 
-Labels:            <none> 
-Annotations:       <none> 
-Selector:          apiserver=true 
-Type:              ClusterIP 
-IP Family Policy:  PreferDualStack 
-IP Families:       IPv4,IPv6 
-IP:                10.96.160.94 
-IPs:               10.96.160.94,fd00:0:0:100::609d 
-Port:              apiserver  443/TCP 
-TargetPort:        5443/TCP 
-Endpoints:         10.244.101.66:5443,10.244.213.131:5443 
-Session Affinity:  None 
-Events:            <none> 
-</code> 
- 
-<code bash> 
-$ kubectl describe service calico-typha -n calico-system 
-Name:              calico-typha 
-Namespace:         calico-system 
-Labels:            k8s-app=calico-typha 
-Annotations:       <none> 
-Selector:          k8s-app=calico-typha 
-Type:              ClusterIP 
-IP Family Policy:  PreferDualStack 
-IP Families:       IPv4,IPv6 
-IP:                10.96.88.251 
-IPs:               10.96.88.251,fd00:0:0:100::7b82 
-Port:              calico-typha  5473/TCP 
-TargetPort:        calico-typha/TCP 
-Endpoints:         177.75.176.41:5473,177.75.176.42:5473 
-Session Affinity:  None 
-Events:            <none> 
-</code> 
- 
-<code bash> 
-$ kubectl describe service calico-kube-controllers-metrics -n calico-system 
-Name:              calico-kube-controllers-metrics 
-Namespace:         calico-system 
-Labels:            k8s-app=calico-kube-controllers 
-Annotations:       <none> 
-Selector:          k8s-app=calico-kube-controllers 
-Type:              ClusterIP 
-IP Family Policy:  PreferDualStack 
-IP Families:       IPv4,IPv6 
-IP:                10.96.96.246 
-IPs:               10.96.96.246,fd00:0:0:100::9a45 
-Port:              metrics-port  9094/TCP 
-TargetPort:        9094/TCP 
-Endpoints:         10.244.213.130:9094 
-Session Affinity:  None 
-Events:            <none> 
-</code> 
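Instead of describing each service individually, one option (an addition here, not part of the original steps) is a single ''custom-columns'' listing of the relevant spec fields for every Service:

```shell
# Sketch: show the IP family settings of all Services in one table.
COLS='NAMESPACE:.metadata.namespace,NAME:.metadata.name,POLICY:.spec.ipFamilyPolicy,FAMILIES:.spec.ipFamilies,IPS:.spec.clusterIPs'

if command -v kubectl >/dev/null 2>&1; then
  kubectl get services -A -o custom-columns="$COLS"
fi
```

After the edits, each row should show ''PreferDualStack'' and two cluster IPs.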
-===== Connectivity test =====
-<code bash> 
-$ kubectl run multitool --image=praqma/network-multitool 
-pod/multitool created 
-</code> 
- 
-<code bash> 
-$ kubectl get pods -o wide 
-NAME        READY   STATUS    RESTARTS   AGE   IP              NODE                                 NOMINATED NODE   READINESS GATES 
-multitool   1/1     Running   0          51s   10.244.101.67   kube-worker-01.juntotelecom.com.br   <none>           <none> 
-</code> 
- 
-<code bash> 
-$ kubectl exec -it multitool -- bash 
-</code> 
- 
-<code bash> 
-bash-5.1# nslookup kubernetes 
-Server:         10.96.0.10 
-Address:        10.96.0.10#53 
- 
-Name:   kubernetes.default.svc.cluster.local 
-Address: 10.96.0.1 
-Name:   kubernetes.default.svc.cluster.local 
-Address: fd00:0:0:100::8dfe 
-</code> 
- 
-<code bash> 
-bash-5.1# nslookup google.com 
-Server:         10.96.0.10 
-Address:        10.96.0.10#53 
- 
-Non-authoritative answer: 
-Name:   google.com 
-Address: 142.251.132.238 
-Name:   google.com 
-Address: 2800:3f0:4001:809::200e 
-</code> 
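As an extra dual-stack check (a suggestion, not part of the original test), the IPv6 ClusterIP of the ''kubernetes'' Service seen in the ''nslookup'' output above can be probed directly from the pod; the ''fd00:0:0:100::8dfe'' address is the one this cluster assigned, so adjust it to yours.

```shell
# Sketch: probe the API service over its IPv6 ClusterIP from inside the pod.
API_V6='fd00:0:0:100::8dfe'

if command -v kubectl >/dev/null 2>&1; then
  # -g keeps curl from globbing the literal [] brackets; -k skips TLS
  # verification, since this is only a reachability probe.
  kubectl exec multitool -- curl -gks "https://[$API_V6]/version"
fi
```

Any HTTP response (even ''403 Forbidden'' for an anonymous request) confirms that the service is reachable over IPv6.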
- 
-<code bash> 
-$ kubectl delete pod multitool 
-pod "multitool" deleted 
-</code> 
kubernetes_install_debian.1753560544.txt.gz · Last modified: by wikiadm