1. Set up the NFS server

The NFS server setup follows the two articles K8s持久化存储【使用NFS】 and CentOS 7 下 yum 安装和配置 NFS; note that the client-side part of the installation has to be done on every node of the K8s cluster.
In this article the NFS server's IP is 192.168.111.103 and the shared directory is /data. Accordingly, the line added to /etc/exports is
/data/ 192.168.111.0/24(insecure,rw,sync,no_root_squash,no_all_squash)
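
For completeness, a minimal sketch of the CentOS 7 commands this amounts to (the package and service names are the stock CentOS 7 ones; adjust the export line to your own network):

# On the NFS server (192.168.111.103)
yum install -y nfs-utils rpcbind
mkdir -p /data
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
exportfs -rav                       # re-export /etc/exports and list the result

# On every K8s node (NFS client side)
yum install -y nfs-utils
showmount -e 192.168.111.103        # should list /data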

The rest of this article mainly draws on 程立笔记 and the Chinese edition of Kubernetes in Action.

2. Create the RBAC authorization

Create a ServiceAccount for the provisioner and grant it the required permissions through RBAC.

# rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
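
The manifest can then be applied with kubectl; a quick sanity check (not part of the original article) that the three objects exist:

kubectl apply -f rbac.yaml
kubectl -n kube-system get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner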

3. Create the StorageClass

# storageclass-nfs.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-provisioner-01
parameters:
  archiveOnDelete: "false"
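
Note that the provisioner field must match the PROVISIONER_NAME environment variable of the provisioner Deployment in the next step (nfs-provisioner-01 in both places). Assuming the file name above, apply and verify:

kubectl apply -f storageclass-nfs.yaml
kubectl get storageclass managed-nfs-storage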

4. Create the provisioner

# nfs-client-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner-01
  # replace with namespace where provisioner is deployed
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: sunjqv/nfs-subdir-external-provisioner:v4.0.0
          # image: jmgao1983/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01
            - name: NFS_SERVER
              value: 192.168.111.103
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.111.103
            path: /data
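
Apply the Deployment and check that the provisioner pod starts cleanly; its logs should show the controller acquiring its lease and starting (a quick check, assuming the file name above):

kubectl apply -f nfs-client-provisioner.yaml
kubectl -n kube-system get pods -l app=nfs-provisioner-01
kubectl -n kube-system logs deploy/nfs-provisioner-01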

With the provisioner image configured at spec.template.spec.containers.image in many articles online, dynamic PV provisioning no longer works on Kubernetes 1.20.0 and later. The problem shows up as follows:

[root@master k8sfiles]# kubectl describe pvc data-kubia-0
Name:          data-kubia-0
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:
Labels:        app=kubia
Annotations:   volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       kubia-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  6s (x10 over 2m2s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator

# check the logs of the corresponding pod

[root@master k8sfiles]# kubectl logs -f nfs-client-provisioner-9547dc76b-6g4wj
I0819 01:26:01.017938 1 leaderelection.go:185] attempting to acquire leader lease default/fuseim.pri-ifs...
E0819 01:26:18.445583 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fuseim.pri-ifs", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c55af7a2-b250-4c45-9cb2-19aa95610dbf", ResourceVersion:"1083335", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764873777, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-9547dc76b-6g4wj_69af896c-008c-11ec-b671-620ce41148af\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-19T01:26:18Z\",\"renewTime\":\"2021-08-19T01:26:18Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-9547dc76b-6g4wj_69af896c-008c-11ec-b671-620ce41148af became leader'
I0819 01:26:18.445645 1 leaderelection.go:194] successfully acquired lease default/fuseim.pri-ifs
I0819 01:26:18.445737 1 controller.go:631] Starting provisioner controller fuseim.pri/ifs_nfs-client-provisioner-9547dc76b-6g4wj_69af896c-008c-11ec-b671-620ce41148af!
I0819 01:26:18.546716 1 controller.go:680] Started provisioner controller fuseim.pri/ifs_nfs-client-provisioner-9547dc76b-6g4wj_69af896c-008c-11ec-b671-620ce41148af!
I0819 01:39:26.997185 1 controller.go:987] provision "default/data-kubia-0" class "managed-nfs-storage": started
E0819 01:39:27.006858 1 controller.go:1004] provision "default/data-kubia-0" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I0819 01:41:18.449551 1 controller.go:987] provision "default/data-kubia-0" class "managed-nfs-storage": started
E0819 01:41:18.452684 1 controller.go:1004] provision "default/data-kubia-0" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

In other words, the PVC keeps "waiting for a volume to be created", and the underlying error is "selfLink was empty, can't make reference". Some searching shows that selfLink has been disabled by default since Kubernetes v1.20.0, and that a newer provisioner image is available: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0. Since gcr.io is awkward to reach from mainland China, the YAML above replaces it with a Docker Hub mirror: sunjqv/nfs-subdir-external-provisioner:v4.0.0.
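
If you would rather keep the upstream gcr.io image name in the manifest, one workaround (assuming Docker is the container runtime on the nodes and that the mirror matches the upstream tag) is to pull the mirror and retag it locally on each node:

docker pull sunjqv/nfs-subdir-external-provisioner:v4.0.0
docker tag sunjqv/nfs-subdir-external-provisioner:v4.0.0 \
    gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0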

5. Create the StatefulSet

Before deploying the StatefulSet, create a headless Service that gives the stateful pods their network identity.

# kubia-svc-headless.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia
  labels:
    app: kubia
spec:
  ports:
    - port: 80
      name: http
  clusterIP: None
  selector:
    app: kubia
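
Apply it and confirm that the Service really is headless, i.e. CLUSTER-IP shows None:

kubectl apply -f kubia-svc-headless.yaml
kubectl get svc kubia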

Finally, create the StatefulSet.

# kubia-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kubia
spec:
  serviceName: kubia
  replicas: 2
  selector:
    matchLabels:
      app: kubia # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: kubia
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccount: nfs-client-provisioner
      containers:
        - name: kubia
          image: luksa/kubia-pet
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: data
              mountPath: /var/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: managed-nfs-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Mi
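
Apply the manifest (file name assumed from the comment above). Through the headless Service each pod then also gets a stable DNS name of the form <pod>.kubia.default.svc.cluster.local, e.g. kubia-0.kubia.default.svc.cluster.local:

kubectl apply -f kubia-statefulset.yaml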

The two pods are created one after the other:

[root@master k8sfiles]# kubectl get po -w
NAME                                      READY   STATUS              RESTARTS   AGE
kubia-0                                   0/1     ContainerCreating   0          53s
nfs-client-provisioner-774645ccfd-p9l8w   1/1     Running             0          77s
kubia-0                                   1/1     Running             0          71s
kubia-1                                   0/1     ContainerCreating   0          15s
kubia-1                                   1/1     Running             0          39s

The PVCs and PVs are in the Bound state:

[root@master k8sfiles]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
data-kubia-0   Bound    pvc-25b1d7ef-0b77-45b4-beda-5478628f64c8   10Mi       RWO            managed-nfs-storage   47m
data-kubia-1   Bound    pvc-3405016b-5f57-499c-b418-3b8438e94551   10Mi       RWO            managed-nfs-storage   46m
[root@master k8sfiles]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS          REASON   AGE
pvc-25b1d7ef-0b77-45b4-beda-5478628f64c8   10Mi       RWO            Delete           Bound    default/data-kubia-0   managed-nfs-storage            47m
pvc-3405016b-5f57-499c-b418-3b8438e94551   10Mi       RWO            Delete           Bound    default/data-kubia-1   managed-nfs-storage            47m
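
On the NFS server, each provisioned volume shows up as a subdirectory of /data named <namespace>-<pvcName>-<pvName> by this provisioner, so a quick check on 192.168.111.103 should look roughly like this:

[root@nfs ~]# ls /data
default-data-kubia-0-pvc-25b1d7ef-0b77-45b4-beda-5478628f64c8
default-data-kubia-1-pvc-3405016b-5f57-499c-b418-3b8438e94551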