[k8s] Using RBD storage in Kubernetes via CSI

张三 2022-03-09 18:56:22

Description

ceph-csi extends Kubernetes' volume-management capabilities to various storage types, integrating the operations of a third-party Ceph backend with the Kubernetes storage system. Through ceph-csi, Ceph RBD block devices are provisioned dynamically to back Kubernetes persistent storage and are mapped into pods as block devices for persisting data. Ceph stores the data written to these block devices replicated across multiple OSDs, giving pod data higher reliability.

Deployment environment

OS: CentOS Linux release 7.9.2009 (Core)
Kubectl version: v1.20.2
Ceph version: 14.2.2

For setting up this environment, see my earlier posts on containerized Ceph deployment with Kolla and automated Kubernetes deployment with Ansible.

Configure Ceph

# Create a pool named kubernetes.

Adjust pg_num and pgp_num to fit your environment.

$ ceph osd pool create kubernetes 64 64
$ ceph osd pool application enable kubernetes rbd
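The pg sizing note above can be turned into a quick calculation. A common rule of thumb (an assumption here, not something the original post specifies) is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two:

```shell
#!/bin/sh
# Hypothetical pg_num sizing helper: ~100 PGs per OSD divided by the
# replica count, rounded up to the next power of two.
pg_count() {
  osds=$1; replicas=$2
  target=$(( osds * 100 / replicas ))
  pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

pg_count 3 3    # small lab cluster -> 128
pg_count 12 3   # larger cluster    -> 512
```

Treat the result as a starting point only; the right value depends on how many pools share the OSDs.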

# Initialize the newly created pool for RBD before using it.

$ rbd pool init kubernetes

# Create a new user, kubernetes, for CSI to access the kubernetes pool. Run the following command and record the generated key.

$ ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd'
[client.kubernetes]
	key = AQDStCFiN0JMFxAAK8EHEnEIRIN+SbACY0T2lw==

The values userID=kubernetes and userKey=AQDStCFiN0JMFxAAK8EHEnEIRIN+SbACY0T2lw== will be used below when configuring the cephx Secret object.
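For reference, the Secret configured later passes this key via stringData, and the API server stores it base64-encoded under .data. The encoding can be reproduced locally (purely illustrative):

```shell
#!/bin/sh
# What `stringData.userKey` becomes under `.data.userKey` in the stored Secret.
key='AQDStCFiN0JMFxAAK8EHEnEIRIN+SbACY0T2lw=='
encoded=$(printf '%s' "$key" | base64)
echo "$encoded"
# Decoding round-trips back to the original cephx key.
printf '%s' "$encoded" | base64 -d
```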

# CSI needs a ConfigMap object stored in Kubernetes that defines the Ceph monitor addresses and the fsid of the Ceph cluster. Collect the cluster's unique fsid and its monitor addresses.

$ ceph mon dump
...
fsid 4a9e463a-4853-4237-a5c5-9ae9d25bacda
0: [v2:172.20.163.52:3300/0,v1:172.20.163.52:6789/0] mon.172.20.163.52

Note: ceph-csi currently supports only the legacy v1 monitor protocol.
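The two values needed for the ConfigMap can be pulled out of saved `ceph mon dump` output with standard text tools. A minimal sketch run against the captured sample above (note it picks the v1 endpoint on port 6789, per the protocol limitation just mentioned):

```shell
#!/bin/sh
# Parse the fsid and the v1 monitor endpoint out of saved `ceph mon dump` output.
dump='fsid 4a9e463a-4853-4237-a5c5-9ae9d25bacda
0: [v2:172.20.163.52:3300/0,v1:172.20.163.52:6789/0] mon.172.20.163.52'

fsid=$(printf '%s\n' "$dump" | awk '/^fsid/ {print $2}')
# The v1 endpoint (port 6789) is the one ceph-csi can use.
mon=$(printf '%s\n' "$dump" | sed -n 's/.*v1:\([0-9.]*:[0-9]*\)\/.*/\1/p')

echo "$fsid"   # 4a9e463a-4853-4237-a5c5-9ae9d25bacda
echo "$mon"    # 172.20.163.52:6789
```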

Configure the ceph-csi ConfigMaps

# Create the ConfigMap object, setting "clusterID" to the fsid and "monitors" to the monitor address(es).

$ cat > csi-config-map.yaml << 'EOF'
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "4a9e463a-4853-4237-a5c5-9ae9d25bacda",
        "monitors": [
          "172.20.163.52:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
EOF
$ kubectl apply -f csi-config-map.yaml
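Because config.json is embedded in the manifest as an opaque string, a typo in it only surfaces at runtime. It can be sanity-checked locally before applying; this sketch assumes python3 is available on the host:

```shell
#!/bin/sh
# Write the embedded config.json out separately and check that it parses.
cat > /tmp/config.json << 'JSON'
[
  {
    "clusterID": "4a9e463a-4853-4237-a5c5-9ae9d25bacda",
    "monitors": [
      "172.20.163.52:6789"
    ]
  }
]
JSON

# json.tool exits non-zero on invalid JSON.
python3 -m json.tool /tmp/config.json > /dev/null && echo "config.json OK"
```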

# Newer versions of CSI also require an additional ConfigMap defining key management service (KMS) provider details; an empty configuration is sufficient.

$ cat > csi-kms-config-map.yaml << 'EOF'
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
EOF
$ kubectl apply -f csi-kms-config-map.yaml

# Inspect the ceph.conf file.

$ docker exec ceph_mon cat /etc/ceph/ceph.conf
[global]
log file = /var/log/kolla-ceph/$cluster-$name.log
log to syslog = false
err to syslog = false
log to stderr = false
err to stderr = false
fsid = 4a9e463a-4853-4237-a5c5-9ae9d25bacda
mon initial members = 172.20.163.52
mon host = 172.20.163.52
mon addr = 172.20.163.52:6789
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 1
osd pool default min size = 1
setuser match path = /var/lib/ceph/$type/$cluster-$id
osd crush update on start = false

# Create a ConfigMap object defining the Ceph configuration to be added to the ceph.conf file inside the CSI containers. Substitute the content of your own ceph.conf below.

$ cat > ceph-config-map.yaml << 'EOF'
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    log file = /var/log/kolla-ceph/$cluster-$name.log
    log to syslog = false
    err to syslog = false
    log to stderr = false
    err to stderr = false
    fsid = 4a9e463a-4853-4237-a5c5-9ae9d25bacda
    mon initial members = 172.20.163.52
    mon host = 172.20.163.52
    mon addr = 172.20.163.52:6789
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    osd pool default size = 1
    osd pool default min size = 1
    setuser match path = /var/lib/ceph/$type/$cluster-$id
    osd crush update on start = false
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
EOF
$ kubectl apply -f ceph-config-map.yaml
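One pitfall with this heredoc: the ceph.conf body contains $cluster, $name, $type and $id, which the shell will expand (to empty strings) unless the heredoc delimiter is quoted as 'EOF'. A quick self-contained demonstration of the difference:

```shell
#!/bin/sh
cluster=mycluster
# Unquoted delimiter: the shell expands $cluster inside the heredoc body.
unquoted=$(cat << EOF
log file = /var/log/$cluster.log
EOF
)
# Quoted delimiter: the body is taken literally, as Ceph expects.
quoted=$(cat << 'EOF'
log file = /var/log/$cluster.log
EOF
)
echo "$unquoted"   # log file = /var/log/mycluster.log
echo "$quoted"     # log file = /var/log/$cluster.log
```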
Configure the ceph-csi cephx Secret

# Create the Secret object; CSI needs cephx credentials to communicate with the Ceph cluster.

$ cat > csi-rbd-secret.yaml << 'EOF'
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQDStCFiN0JMFxAAK8EHEnEIRIN+SbACY0T2lw==
EOF
$ kubectl apply -f csi-rbd-secret.yaml
Deploy the ceph-csi plugins

# Create the required ServiceAccount and RBAC ClusterRole/ClusterRoleBinding Kubernetes objects.

$ cat > csi-provisioner-rbac.yaml << 'EOF'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f csi-provisioner-rbac.yaml
$ cat > csi-nodeplugin-rbac.yaml << 'EOF'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f csi-nodeplugin-rbac.yaml

# Create the required ceph-csi containers.

$ cat > csi-rbdplugin-provisioner.yaml << 'EOF'
---
kind: Service
apiVersion: v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: default
  labels:
    app: csi-metrics
spec:
  selector:
    app: csi-rbdplugin-provisioner
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8681
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-rbdplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-rbdplugin-provisioner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - csi-rbdplugin-provisioner
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: rbd-csi-provisioner
      priorityClassName: system-cluster-critical
      hostNetwork: true
      containers:
        - name: csi-provisioner
          image: antidebug/csi-provisioner:v3.0.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--retry-interval-start=500ms"
            - "--leader-election=true"
            #  set it to true to use topology based provisioning
            - "--feature-gates=Topology=false"
            # if fstype is not specified in storageclass, ext4 is default
            - "--default-fstype=ext4"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: antidebug/csi-snapshotter:v4.1.1
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--leader-election=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: antidebug/csi-attacher:v3.2.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election=true"
            - "--retry-interval-start=500ms"
          env:
            - name: ADDRESS
              value: /csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: antidebug/csi-resizer:v1.2.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--leader-election"
            - "--retry-interval-start=500ms"
            - "--handle-volume-inuse-error=false"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-rbdplugin
          # for stable functionality replace canary with latest release version
          image: quay.io/cephcsi/cephcsi:v3.5.1
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--controllerserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--pidlimit=-1"
            - "--rbdhardmaxclonedepth=8"
            - "--rbdsoftmaxclonedepth=4"
            - "--enableprofiling=false"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-config
              mountPath: /etc/ceph/
        - name: csi-rbdplugin-controller
          # for stable functionality replace canary with latest release version
          image: quay.io/cephcsi/cephcsi:v3.5.1
          args:
            - "--type=controller"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--drivernamespace=$(DRIVER_NAMESPACE)"
          env:
            - name: DRIVER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-config
              mountPath: /etc/ceph/
        - name: liveness-prometheus
          image: quay.io/cephcsi/cephcsi:v3.5.1
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8681"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: socket-dir
          emptyDir: {
            medium: "Memory"
          }
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: ceph-csi-encryption-kms-config
          configMap:
            name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
EOF
$ kubectl apply -f csi-rbdplugin-provisioner.yaml
$ cat > csi-rbdplugin.yaml << 'EOF'
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin
  # replace with non-default namespace name
  namespace: default
spec:
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      labels:
        app: csi-rbdplugin
    spec:
      serviceAccountName: rbd-csi-nodeplugin
      hostNetwork: true
      hostPID: true
      priorityClassName: system-node-critical
      # to use e.g. Rook orchestrated cluster, and mons' FQDN is
      # resolved through k8s service, set dns policy to cluster first
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: driver-registrar
          # This is necessary only for systems with SELinux, where
          # non-privileged sidecar containers cannot access unix domain socket
          # created by privileged CSI driver container.
          securityContext:
            privileged: true
          image: antidebug/csi-node-driver-registrar:v2.2.0
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-rbdplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          # for stable functionality replace canary with latest release version
          image: quay.io/cephcsi/cephcsi:v3.5.1
          args:
            - "--nodeid=$(NODE_ID)"
            - "--pluginpath=/var/lib/kubelet/plugins"
            - "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/"
            - "--type=rbd"
            - "--nodeserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--enableprofiling=false"
            # If topology based provisioning is desired, configure required
            # node labels representing the nodes topology domain
            # and pass the label names below, for CSI to consume and advertise
            # its equivalent topology domain
            # - "--domainlabels=failure-domain/region,failure-domain/zone"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /run/mount
              name: host-mount
            - mountPath: /etc/selinux
              name: etc-selinux
              readOnly: true
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: "Bidirectional"
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-logdir
              mountPath: /var/log/ceph
            - name: ceph-config
              mountPath: /etc/ceph/
        - name: liveness-prometheus
          securityContext:
            privileged: true
          image: quay.io/cephcsi/cephcsi:v3.5.1
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: ceph-logdir
          hostPath:
            path: /var/log/ceph
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: etc-selinux
          hostPath:
            path: /etc/selinux
        - name: host-mount
          hostPath:
            path: /run/mount
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: ceph-csi-encryption-kms-config
          configMap:
            name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-rbdplugin
  # replace with non-default namespace name
  namespace: default
  labels:
    app: csi-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680
  selector:
    app: csi-rbdplugin
EOF
$ kubectl apply -f csi-rbdplugin.yaml

The csi-rbdplugin-provisioner manifest here differs from the upstream version in one respect: the pods use hostNetwork: true.

Using the Ceph block device

# Create a StorageClass
A Kubernetes StorageClass defines a class of storage. Multiple StorageClass objects can be created to map to different quality-of-service levels (e.g. NVMe vs. HDD-backed pools) and features. For example, to create a StorageClass that maps to the kubernetes pool created above, make sure "clusterID" matches your Ceph cluster's fsid.

$ cat > csi-rbd-sc.yaml << 'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 4a9e463a-4853-4237-a5c5-9ae9d25bacda
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
$ kubectl apply -f csi-rbd-sc.yaml

# Create a PVC
Create a PersistentVolumeClaim (here a raw block volume, volumeMode: Block) using the StorageClass created above.

$ cat > raw-block-pvc.yaml << 'EOF'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f raw-block-pvc.yaml

# Check the PVC; a status of Bound means it was provisioned successfully.

$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
raw-block-pvc   Bound    pvc-d57ba3b8-c916-4182-966f-eb8680955cb7   2Gi        RWO            csi-rbd-sc     132m
