Kubernetes BGP - Debian 11
Preparing the Operating System
Scenario
| Hostname | IPv4 | IPv6 | Data Center | ESXI | VLAN |
|---|---|---|---|---|---|
| kube-ctrl-pl.juntotelecom.com.br | 172.28.128.98 | 2804:694:4c00:4007::98 | São Paulo | ESXI 03 | 337 |
| kube-worker-01.juntotelecom.com.br | 172.28.128.99 | 2804:694:4c00:4007::99 | São Paulo | ESXI 03 | 337 |
| kube-worker-02.juntotelecom.com.br | 172.28.128.100 | 2804:694:4c00:4007::100 | São Paulo | ESXI 02 | 337 |
- Node network: 172.28.128.96/27, 2804:694:4c00:4007::/64
- Pod network: 10.244.0.0/16, fd00::/56
- Service network: 10.96.0.0/16, fd00:0:0:100::/112
- MetalLB network: 172.28.128.128/27, 2804:694:4c00:4008::/64
Additional partitions
- /var/lib/containers: partition used by the container runtime (CRI-O) to store pods and container images. Used on all servers;
- /volumes: partition used for persistent volumes; only on the worker servers.
The additional partitions are provisioned from the storage system.
$ cat <<EOF | sudo tee -a /etc/hosts
172.28.128.98 kube-ctrl-pl.juntotelecom.com.br kube-ctrl-pl
172.28.128.99 kube-worker-01.juntotelecom.com.br kube-worker-01
172.28.128.100 kube-worker-02.juntotelecom.com.br kube-worker-02
2804:694:4c00:4007::98 kube-ctrl-pl.juntotelecom.com.br kube-ctrl-pl
2804:694:4c00:4007::99 kube-worker-01.juntotelecom.com.br kube-worker-01
2804:694:4c00:4007::100 kube-worker-02.juntotelecom.com.br kube-worker-02
EOF
Run on the control plane
$ sudo hostnamectl set-hostname kube-ctrl-pl.juntotelecom.com.br
Run on worker 01
$ sudo hostnamectl set-hostname kube-worker-01.juntotelecom.com.br
Run on worker 02
$ sudo hostnamectl set-hostname kube-worker-02.juntotelecom.com.br
Additional disks
Disk reserved for pods and containers. Before partitioning, it helps to confirm the device names, as shown below.
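A quick sanity check; this scenario assumes the extra disks were attached as /dev/sdb and /dev/sdc, so confirm the names on your own VMs before running fdisk:
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
The still-unpartitioned disks should appear with TYPE disk and an empty MOUNTPOINT column.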
On all servers
$ MOUNT_POINT=/var/lib/containers
$ DISK_DEVICE=/dev/sdb
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID} ${MOUNT_POINT} ext4 defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep containers
Run only on the worker servers
Disk dedicated to persistent volumes.
$ MOUNT_POINT=/volumes
$ DISK_DEVICE=/dev/sdc
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
$ sudo mkfs.ext4 ${DISK_DEVICE}1
$ UUID=`sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID`
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID} ${MOUNT_POINT} ext4 defaults 1 2" | sudo tee -a /etc/fstab
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep volumes
$ sudo mkdir /volumes/kubernetes
$ sudo chmod 777 /volumes/kubernetes
Installing CRI-O
In this installation, CRI-O is used as the container runtime.
As of Kubernetes 1.24, Docker (dockershim) is no longer supported as a container runtime.
$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                253952  1 br_netfilter
$ lsmod | grep overlay
overlay               143360  0
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.forwarding = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
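To confirm the settings took effect, the keys can be read back individually; each should report the value 1:
$ sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables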
$ OS=Debian_11
$ VERSION=1.23
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
$ sudo apt update
$ sudo apt install cri-o cri-o-runc
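Before enabling the service, it may be worth confirming which CRI-O build apt resolved; a quick check:
$ apt-cache policy cri-o
$ crio --version
The version reported should match the $VERSION (1.23) repository configured above.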
Installing Kubernetes
$ sudo swapoff -a
$ sudo cp -fp /etc/fstab{,.dist}
$ sudo sed -i '/swap/d' /etc/fstab
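The kubelet refuses to start while swap is active, so it is worth double-checking that nothing is left over; swapon should print no output:
$ swapon --show
$ free -h | grep -i swap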
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now
$ sudo systemctl status crio
$ sudo systemctl enable kubelet --now
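On a node that has not yet been initialized or joined, the kubelet is expected to restart in a loop until kubeadm hands it a configuration; this is normal and can be watched with journalctl if needed:
$ sudo journalctl -u kubelet -f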
Configuring Kubernetes
Run on the master (control plane).
$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.23.5
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.23.5
[config/images] Pulled k8s.gcr.io/pause:3.6
[config/images] Pulled k8s.gcr.io/etcd:3.5.1-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
$ mkdir -p yamls/config
$ cd yamls/config/
- kubeadm-config.yaml
# vim kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00::/56
  serviceSubnet: 10.96.0.0/16,fd00:0:0:100::/112
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "172.28.128.98"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 172.28.128.98,2804:694:4c00:4007::98
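Optionally, the file can be exercised with kubeadm's dry-run mode before the real initialization; this prints what would be done without changing the host:
$ sudo kubeadm init --config=kubeadm-config.yaml --dry-run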
$ sudo kubeadm init --config=kubeadm-config.yaml
[sudo] password for suporte:
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-ctrl-pl.juntotelecom.com.br kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.28.128.98]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-ctrl-pl.juntotelecom.com.br localhost] and IPs [172.28.128.98 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-ctrl-pl.juntotelecom.com.br localhost] and IPs [172.28.128.98 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 32.505992 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-ctrl-pl.juntotelecom.com.br as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-ctrl-pl.juntotelecom.com.br as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ne2ehv.camq0i8xuhvx6rm3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.28.128.98:6443 --token ne2ehv.camq0i8xuhvx6rm3 \
        --discovery-token-ca-cert-hash sha256:1e32a3b2e4e3cf3d8e642fb8a89e58f725d238d20375663f46d1ff9c4f56f0d9
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
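With the kubeconfig in place, the API server can already be queried; at this point the node usually reports NotReady because no CNI plugin has been installed yet:
$ kubectl cluster-info
$ kubectl get nodes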
$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS   AGE     IP              NODE                               NOMINATED NODE   READINESS GATES
kube-system   coredns-64897985d-bsqqs                                     1/1     Running   0          2m33s   10.85.0.2       kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
kube-system   coredns-64897985d-mjg6l                                     1/1     Running   0          2m33s   10.85.0.3       kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
kube-system   etcd-kube-ctrl-pl.juntotelecom.com.br                       1/1     Running   0          2m42s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
kube-system   kube-apiserver-kube-ctrl-pl.juntotelecom.com.br             1/1     Running   0          2m49s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
kube-system   kube-controller-manager-kube-ctrl-pl.juntotelecom.com.br    1/1     Running   0          2m41s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
kube-system   kube-proxy-zjmnb                                            1/1     Running   0          2m33s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
kube-system   kube-scheduler-kube-ctrl-pl.juntotelecom.com.br             1/1     Running   0          2m48s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
Calico Network
$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
$ curl -L https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -o custom-resources.yaml
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.22.2/calicoctl-linux-amd64 -o calicoctl
$ chmod 755 calicoctl
$ sudo mv calicoctl /usr/local/bin/calicoctl
$ calicoctl --help
Usage:
  calicoctl [options] <command> [<args>...]

    create    Create a resource by file, directory or stdin.
    replace   Replace a resource by file, directory or stdin.
    apply     Apply a resource by file, directory or stdin.  This creates a resource
              if it does not exist, and replaces a resource if it does exists.
    patch     Patch a pre-exisiting resource in place.
    delete    Delete a resource identified by file, directory, stdin or resource type and name.
    get       Get a resource identified by file, directory, stdin or resource type and name.
    label     Add or update labels of resources.
    convert   Convert config files between different API versions.
    ipam      IP address management.
    node      Calico node management.
    version   Display the version of this binary.
    datastore Calico datastore management.

Options:
  -h --help                    Show this screen.
  -l --log-level=<level>       Set the log level (one of panic, fatal, error, warn, info, debug) [default: panic]
     --context=<context>       The name of the kubeconfig context to use.
     --allow-version-mismatch  Allow client and cluster versions mismatch.

Description:
  The calicoctl command line tool is used to manage Calico network and security policy,
  to view and manage endpoint configuration, and to manage a Calico node instance.

  See 'calicoctl <command> --help' to read about a specific subcommand.
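calicoctl also needs to know how to reach the Calico datastore. With the operator install the data lives in the Kubernetes API, so a minimal setup (assuming the kubeconfig created earlier) is to export two environment variables before running the commands below:
$ export DATASTORE_TYPE=kubernetes
$ export KUBECONFIG=$HOME/.kube/config
The same settings can be made permanent in /etc/calico/calicoctl.cfg instead.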
- custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: None
      natOutgoing: Disabled
      nodeSelector: all()
    - blockSize: 122
      cidr: fd00::/56
      encapsulation: None
      natOutgoing: Disabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.22/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
$ kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default unchanged
$ kubectl get pod -n calico-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE                               NOMINATED NODE   READINESS GATES
calico-kube-controllers-557cb7fd8b-5rhjr   1/1     Running   0          2m41s   10.85.0.4       kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
calico-node-q7ftw                          1/1     Running   0          2m41s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
calico-typha-99b998f9b-dbdh7               1/1     Running   0          2m41s   172.28.128.98   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
$ kubectl get pod -n calico-apiserver -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE                               NOMINATED NODE   READINESS GATES
calico-apiserver-79444bb87-27hmd   1/1     Running   0          75s   10.244.32.193   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
calico-apiserver-79444bb87-4vdbs   1/1     Running   0          75s   10.244.32.192   kube-ctrl-pl.juntotelecom.com.br   <none>           <none>
$ kubectl describe pod calico-kube-controllers-557cb7fd8b-5rhjr -n calico-system
Name:                 calico-kube-controllers-557cb7fd8b-5rhjr
Namespace:            calico-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 kube-ctrl-pl.juntotelecom.com.br/172.28.128.98
Start Time:           Wed, 27 Apr 2022 11:51:01 -0300
Labels:               k8s-app=calico-kube-controllers
                      pod-template-hash=557cb7fd8b
Annotations:          <none>
Status:               Running
IP:                   10.85.0.4
IPs:
  IP:           10.85.0.4
  IP:           1100:200::4
Controlled By:  ReplicaSet/calico-kube-controllers-557cb7fd8b
Containers:
  calico-kube-controllers:
    Container ID:   cri-o://14f0519230b9e7043d0478424c54a918ffc0c65cfec571cfb35791306540521b
    Image:          docker.io/calico/kube-controllers:v3.22.2
    Image ID:       docker.io/calico/kube-controllers@sha256:751f1a8ba0af09a0feb2ea296a76739f5c86b80b46f9b2b84888ffef51ca5099
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 27 Apr 2022 11:51:38 -0300
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/usr/bin/check-status -l] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:      exec [/usr/bin/check-status -r] delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBE_CONTROLLERS_CONFIG_NAME:  default
      DATASTORE_TYPE:                kubernetes
      ENABLED_CONTROLLERS:           node
      KUBERNETES_SERVICE_HOST:       10.96.0.1
      KUBERNETES_SERVICE_PORT:       443
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjckh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-sjckh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m37s  default-scheduler  Successfully assigned calico-system/calico-kube-controllers-557cb7fd8b-5rhjr to kube-ctrl-pl.juntotelecom.com.br
  Normal  Pulling    5m34s  kubelet            Pulling image "docker.io/calico/kube-controllers:v3.22.2"
  Normal  Pulled     5m     kubelet            Successfully pulled image "docker.io/calico/kube-controllers:v3.22.2" in 33.920776341s
  Normal  Created    5m     kubelet            Created container calico-kube-controllers
  Normal  Started    5m     kubelet            Started container calico-kube-controllers
Nodes
worker-01
- kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.28.128.98:6443
    token: "ne2ehv.camq0i8xuhvx6rm3"
    caCertHashes:
    - "sha256:1e32a3b2e4e3cf3d8e642fb8a89e58f725d238d20375663f46d1ff9c4f56f0d9"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 172.28.128.99,2804:694:4c00:4007::99
$ sudo kubeadm join --config=kubeadm-config.yaml
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
worker-02
- kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.28.128.98:6443
    token: "ne2ehv.camq0i8xuhvx6rm3"
    caCertHashes:
    - "sha256:1e32a3b2e4e3cf3d8e642fb8a89e58f725d238d20375663f46d1ff9c4f56f0d9"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 172.28.128.100,2804:694:4c00:4007::100
$ sudo kubeadm join --config=kubeadm-config.yaml
[sudo] password for suporte:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
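Note: the bootstrap token generated by kubeadm init expires after 24 hours by default. If a node needs to join later, a fresh token and the matching join command can be created on the control plane:
$ kubeadm token create --print-join-command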
$ kubectl get nodes -o wide
NAME                                 STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
kube-ctrl-pl.juntotelecom.com.br     Ready    control-plane,master   81m     v1.23.6   172.28.128.98    <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
kube-worker-01.juntotelecom.com.br   Ready    <none>                 5m5s    v1.23.6   172.28.128.99    <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
kube-worker-02.juntotelecom.com.br   Ready    <none>                 3m57s   v1.23.6   172.28.128.100   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-13-amd64   cri-o://1.23.2
$ kubectl get pod -A -o wide
NAMESPACE          NAME                                                        READY   STATUS    RESTARTS   AGE     IP               NODE                                 NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-79444bb87-27hmd                            1/1     Running   0          37m     10.244.32.193    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
calico-apiserver   calico-apiserver-79444bb87-4vdbs                            1/1     Running   0          37m     10.244.32.192    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
calico-system      calico-kube-controllers-557cb7fd8b-5rhjr                    1/1     Running   0          39m     10.85.0.4        kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
calico-system      calico-node-8vkml                                           1/1     Running   0          4m42s   172.28.128.100   kube-worker-02.juntotelecom.com.br   <none>           <none>
calico-system      calico-node-mz8g8                                           1/1     Running   0          5m50s   172.28.128.99    kube-worker-01.juntotelecom.com.br   <none>           <none>
calico-system      calico-node-q7ftw                                           1/1     Running   0          39m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
calico-system      calico-typha-99b998f9b-7tw98                                1/1     Running   0          4m36s   172.28.128.99    kube-worker-01.juntotelecom.com.br   <none>           <none>
calico-system      calico-typha-99b998f9b-dbdh7                                1/1     Running   0          39m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        coredns-64897985d-bsqqs                                     1/1     Running   0          81m     10.85.0.2        kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        coredns-64897985d-mjg6l                                     1/1     Running   0          81m     10.85.0.3        kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        etcd-kube-ctrl-pl.juntotelecom.com.br                       1/1     Running   0          82m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        kube-apiserver-kube-ctrl-pl.juntotelecom.com.br             1/1     Running   0          82m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        kube-controller-manager-kube-ctrl-pl.juntotelecom.com.br    1/1     Running   0          82m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        kube-proxy-6gv4c                                            1/1     Running   0          4m42s   172.28.128.100   kube-worker-02.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-gwc2n                                            1/1     Running   0          5m50s   172.28.128.99    kube-worker-01.juntotelecom.com.br   <none>           <none>
kube-system        kube-proxy-zjmnb                                            1/1     Running   0          81m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
kube-system        kube-scheduler-kube-ctrl-pl.juntotelecom.com.br             1/1     Running   0          82m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
tigera-operator    tigera-operator-75b96586c9-dxczb                            1/1     Running   0          77m     172.28.128.98    kube-ctrl-pl.juntotelecom.com.br     <none>           <none>
BGP
- BGPConfiguration.yaml
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: false
  serviceLoadBalancerIPs:
  - cidr: 172.28.128.128/27
  - cidr: 2804:694:4c00:4008::/64
$ calicoctl apply -f BGPConfiguration.yaml
Successfully applied 1 'BGPConfiguration' resource(s)
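The applied configuration can be read back to confirm that the node-to-node mesh is disabled and the service CIDRs will be advertised:
$ calicoctl get bgpconfig default -o yaml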
- ctrl-pl-bgp.yaml
---
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  name: kube-ctrl-pl.juntotelecom.com.br
spec:
  bgp:
    ipv4Address: 172.28.128.98/27
    ipv6Address: 2804:694:4c00:4007::98/64
    asNumber: 63551
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: kube-ctrl-pl110
spec:
  peerIP: 172.28.128.110
  asNumber: 63500
  node: kube-ctrl-pl.juntotelecom.com.br
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: kube-ctrl-pl111
spec:
  peerIP: 172.28.128.111
  asNumber: 63500
  node: kube-ctrl-pl.juntotelecom.com.br
$ calicoctl apply -f ctrl-pl-bgp.yaml
Successfully applied 3 resource(s)
- worker-01-bgp.yaml
---
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  name: kube-worker-01.juntotelecom.com.br
spec:
  bgp:
    ipv4Address: 172.28.128.99/27
    ipv6Address: 2804:694:4c00:4007::99/64
    asNumber: 63552
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: kube-worker-01110
spec:
  peerIP: 172.28.128.110
  asNumber: 63500
  node: kube-worker-01.juntotelecom.com.br
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: kube-worker-01111
spec:
  peerIP: 172.28.128.111
  asNumber: 63500
  node: kube-worker-01.juntotelecom.com.br
$ calicoctl apply -f worker-01-bgp.yaml
Successfully applied 3 resource(s)
- worker-02-bgp.yaml
---
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  name: kube-worker-02.juntotelecom.com.br
spec:
  bgp:
    ipv4Address: 172.28.128.100/27
    ipv6Address: 2804:694:4c00:4007::100/64
    asNumber: 63553
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: kube-worker-02110
spec:
  peerIP: 172.28.128.110
  asNumber: 63500
  node: kube-worker-02.juntotelecom.com.br
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: kube-worker-02111
spec:
  peerIP: 172.28.128.111
  asNumber: 63500
  node: kube-worker-02.juntotelecom.com.br
$ calicoctl apply -f worker-02-bgp.yaml
Successfully applied 3 resource(s)
$ calicoctl get bgpPeer
NAME                PEERIP           NODE                                 ASN
kube-ctrl-pl110     172.28.128.110   kube-ctrl-pl.juntotelecom.com.br     63500
kube-ctrl-pl111     172.28.128.111   kube-ctrl-pl.juntotelecom.com.br     63500
kube-worker-01110   172.28.128.110   kube-worker-01.juntotelecom.com.br   63500
kube-worker-01111   172.28.128.111   kube-worker-01.juntotelecom.com.br   63500
kube-worker-02110   172.28.128.110   kube-worker-02.juntotelecom.com.br   63500
kube-worker-02111   172.28.128.111   kube-worker-02.juntotelecom.com.br   63500
$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+---------------+-------+----------+--------------------------------+
|  PEER ADDRESS  |   PEER TYPE   | STATE |  SINCE   |              INFO              |
+----------------+---------------+-------+----------+--------------------------------+
| 172.28.128.110 | node specific | start | 19:05:07 | Active Socket: No route to     |
|                |               |       |          | host                           |
| 172.28.128.111 | node specific | start | 19:05:07 | Active Socket: No route to     |
|                |               |       |          | host                           |
+----------------+---------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.
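"Active Socket: No route to host" means BIRD cannot even open a TCP connection to the peer routers, so the sessions stay in the Active state; they should move to Established once 172.28.128.110 and 172.28.128.111 are reachable on VLAN 337 and configured to peer with each node's ASN. A basic reachability test from a node:
$ ping -c 3 172.28.128.110
$ ping -c 3 172.28.128.111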
