====== Dynamic volumes with NFS ======
Previous step: [[kubernetes_install_debian|Kubernetes installation]]
$ sudo apt install nfs-kernel-server nfs-common
===== Volume to be exported =====
$ MOUNT_POINT=/nfs
$ DISK_DEVICE=/dev/sdb
$ echo -e "n\np\n1\n\n\nw" | sudo fdisk ${DISK_DEVICE}
Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x7e42ac59.
Command (m for help): Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): Partition number (1-4, default 1): First sector (2048-268435455, default 2048): Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-268435455, default 268435455):
Created a new partition 1 of type "Linux" and of size 128 GiB.
Command (m for help): The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo mkfs.ext4 ${DISK_DEVICE}1
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 33554176 4k blocks and 8388608 inodes
Filesystem UUID: 348ddaa1-0b59-4def-81bc-2e737460013c
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done
$ UUID=$(sudo blkid -o export ${DISK_DEVICE}1 | grep UUID | grep -v PARTUUID)
$ sudo mkdir ${MOUNT_POINT}
$ sudo cp -p /etc/fstab{,.dist}
$ echo "${UUID} ${MOUNT_POINT} ext4 defaults 1 2" | sudo tee -a /etc/fstab
UUID=348ddaa1-0b59-4def-81bc-2e737460013c /nfs ext4 defaults 1 2
$ sudo mount ${MOUNT_POINT}
$ df -hT | grep nfs
/dev/sdb1 ext4 126G 24K 120G 1% /nfs
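Optionally, sanity-check the new fstab entry before trusting it across reboots; a quick check with util-linux tools (findmnt --verify is available in the util-linux 2.36 shipped with Debian 11):
$ sudo findmnt --verify
$ sudo umount ${MOUNT_POINT} && sudo mount -a && findmnt ${MOUNT_POINT}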
$ sudo mkdir /nfs/kubedata
$ sudo chown nobody:nogroup /nfs/kubedata
$ sudo chmod 0777 /nfs/kubedata
$ echo '/nfs/kubedata 177.75.176.32/27(rw,sync,no_subtree_check)' | sudo tee /etc/exports
/nfs/kubedata 177.75.176.32/27(rw,sync,no_subtree_check)
$ sudo systemctl restart nfs-kernel-server
$ sudo exportfs -s
/nfs/kubedata 177.75.176.32/27(rw,wdelay,root_squash,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
$ sudo exportfs -arv
exporting 177.75.176.32/27:/nfs/kubedata
===== Clients where the volumes will be mounted - Kube nodes =====
$ sudo apt install nfs-common
$ sudo showmount -e 177.75.176.43
Export list for 177.75.176.43:
/nfs/kubedata 177.75.176.32/27
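Optionally, test the export with a manual mount from one of the clients before involving Kubernetes (/mnt is used here only as a scratch mount point for the test):
$ sudo mount -t nfs 177.75.176.43:/nfs/kubedata /mnt
$ touch /mnt/test-$(hostname) && rm /mnt/test-$(hostname)
$ sudo umount /mnt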
===== Control Plane =====
$ git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
Cloning into 'nfs-subdir-external-provisioner'...
remote: Enumerating objects: 7248, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 7248 (delta 0), reused 1 (delta 0), pack-reused 7244
Receiving objects: 100% (7248/7248), 7.99 MiB | 2.13 MiB/s, done.
Resolving deltas: 100% (3900/3900), done.
$ cd nfs-subdir-external-provisioner/deploy/
$ ls
class.yaml deployment.yaml objects rbac.yaml test-claim.yaml test-pod.yaml
$ kubectl create ns nfs-system
namespace/nfs-system created
$ vim rbac.yaml
:%s/default/nfs-system/g
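The same edit can be done non-interactively; a sed equivalent of the substitution above, restricted to the namespace fields:
$ sed -i 's/namespace: default/namespace: nfs-system/g' rbac.yaml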
$ kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
$ vim class.yaml
:%s/false/true/g
$ kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
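For reference, after the substitution class.yaml should look roughly like this (the upstream file at the time of writing); archiveOnDelete: "true" makes the provisioner keep an archived copy of the volume's directory when a PVC is deleted, instead of removing it:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the PROVISIONER_NAME env var in deployment.yaml
parameters:
  archiveOnDelete: "true"
Next, edit deployment.yaml, adjusting the namespace and pointing NFS_SERVER and NFS_PATH at the NFS server: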
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 177.75.176.43
            - name: NFS_PATH
              value: /nfs/kubedata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 177.75.176.43
            path: /nfs/kubedata
$ kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
$ kubectl get all -n nfs-system
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-8686cbd686-dbrgn 1/1 Running 0 71s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nfs-client-provisioner 1/1 1 1 71s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nfs-client-provisioner-8686cbd686 1 1 1 71s
$ kubectl describe pod nfs-client-provisioner-8686cbd686-dbrgn -n nfs-system
Name: nfs-client-provisioner-8686cbd686-dbrgn
Namespace: nfs-system
Priority: 0
Node: kube-worker-01.juntotelecom.com.br/177.75.176.41
Start Time: Thu, 12 May 2022 18:16:25 -0300
Labels: app=nfs-client-provisioner
pod-template-hash=8686cbd686
Annotations: cni.projectcalico.org/containerID: e7c519c1567b8b149a46b8c72060989ea32cc5aa8360f388a1c8bd247d7f892f
cni.projectcalico.org/podIP: 10.244.101.70/32
cni.projectcalico.org/podIPs: 10.244.101.70/32,fd00::33:2603:770f:59b3:71c5/128
Status: Running
IP: 10.244.101.70
IPs:
IP: 10.244.101.70
IP: fd00::33:2603:770f:59b3:71c5
Controlled By: ReplicaSet/nfs-client-provisioner-8686cbd686
Containers:
nfs-client-provisioner:
Container ID: cri-o://db4e6e624eec0e8250644e2fb2a78ef6a8a6e8a09a866a9509dc94a135c2c1de
Image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Image ID: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:374f80dde8bbd498b1083348dd076b8d8d9f9b35386a793f102d5deebe593626
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 12 May 2022 18:16:43 -0300
Ready: True
Restart Count: 0
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: 177.75.176.43
NFS_PATH: /nfs/kubedata
Mounts:
/persistentvolumes from nfs-client-root (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mwt8r (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 177.75.176.43
Path: /nfs/kubedata
ReadOnly: false
kube-api-access-mwt8r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m24s default-scheduler Successfully assigned nfs-system/nfs-client-provisioner-8686cbd686-dbrgn to kube-worker-01.juntotelecom.com.br
Normal Pulling 2m22s kubelet Pulling image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
Normal Pulled 2m6s kubelet Successfully pulled image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" in 16.200017039s
Normal Created 2m6s kubelet Created container nfs-client-provisioner
Normal Started 2m6s kubelet Started container nfs-client-provisioner
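If the pod does not reach Running, or provisioning fails later on, the provisioner's logs are the first place to look; they record leader election and every provisioning/deletion request:
$ kubectl logs -n nfs-system deploy/nfs-client-provisioner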
===== Test =====
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-523aeaea-f637-4ab5-90d4-2b311586a814 1Mi RWX Delete Bound default/test-claim nfs-client 24s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-claim Bound pvc-523aeaea-f637-4ab5-90d4-2b311586a814 1Mi RWX nfs-client 24s
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
$ kubectl apply -f test-pod.yaml
pod/test-pod created
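The test pod only touches a file on the volume and exits, so once it has run its STATUS should show Completed:
$ kubectl get pod test-pod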
**On the NFS server:**
$ ls /nfs/kubedata/default-test-claim-pvc-523aeaea-f637-4ab5-90d4-2b311586a814/SUCCESS
/nfs/kubedata/default-test-claim-pvc-523aeaea-f637-4ab5-90d4-2b311586a814/SUCCESS
$ kubectl delete -f test-pod.yaml -f test-claim.yaml
pod "test-pod" deleted
persistentvolumeclaim "test-claim" deleted