Connecting K8S to Ceph RBD via ceph-csi-rbd

https://github.com/ceph/ceph-csi

Everything below can be found on the ceph-csi project site, which also provides example templates.

https://github.com/ceph/ceph-csi/tree/devel/examples

Connecting K8S to Ceph RBD via ceph-csi-rbd

Prerequisites

You need a K8S cluster and a Ceph cluster, with helm installed on the K8S control node.
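
As a quick sanity check, the following standard commands confirm those pieces are in place (output will differ per environment):

kubectl get nodes   # run on the K8S control node
helm version        # helm is available
ceph -s             # run on a Ceph admin node; cluster should be healthy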

Configure the Ceph Cluster

Create a User

[root@ceph-1 ~]# ceph auth add client.kubernetes mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'

Full permissions are granted here purely for convenience.
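
For anything beyond a lab, a tighter capability set is preferable. A sketch along the lines of what the Ceph/ceph-csi documentation suggests for RBD, assuming the rbd-pool created below (adjust the pool name to your environment):

ceph auth get-or-create client.kubernetes \
  mon 'profile rbd' \
  osd 'profile rbd pool=rbd-pool' \
  mgr 'profile rbd pool=rbd-pool'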

Create an RBD Pool

[root@ceph-1 ~]# ceph osd pool create rbd-pool
pool 'rbd-pool' created
[root@ceph-1 ~]# ceph osd pool application enable rbd-pool rbd
enabled application 'rbd' on pool 'rbd-pool'
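
Optionally, the upstream Ceph docs also initialize a new RBD pool with rbd pool init before use:

rbd pool init rbd-pool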

Get the Cluster and User Information

[root@ceph-1 ~]# ceph auth get client.kubernetes
[client.kubernetes]
        key = AQCxDqNnTy4KDhAAk7K40yR1LaubuU4SD/c/GQ==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[root@ceph-1 ~]# ceph mon dump
epoch 4
fsid 4702af42-e30e-11ef-b2c6-bc241143359c
last_changed 2025-02-04T16:00:17.383917+0000
created 2025-02-04T15:39:58.851613+0000
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:192.168.50.91:3300/0,v1:192.168.50.91:6789/0] mon.ceph-1
1: [v2:192.168.50.92:3300/0,v1:192.168.50.92:6789/0] mon.ceph-2
2: [v2:192.168.50.93:3300/0,v1:192.168.50.93:6789/0] mon.ceph-3
3: [v2:192.168.50.94:3300/0,v1:192.168.50.94:6789/0] mon.ceph-4
dumped monmap epoch 4
[root@ceph-1 ~]# cat /etc/ceph/ceph.conf
# minimal ceph.conf for 4702af42-e30e-11ef-b2c6-bc241143359c
[global]
        fsid = 4702af42-e30e-11ef-b2c6-bc241143359c
        mon_host = [v2:192.168.50.91:3300/0,v1:192.168.50.91:6789/0] [v2:192.168.50.92:3300/0,v1:192.168.50.92:6789/0] [v2:192.168.50.93:3300/0,v1:192.168.50.93:6789/0] [v2:192.168.50.94:3300/0,v1:192.168.50.94:6789/0]

Cluster ID: 4702af42-e30e-11ef-b2c6-bc241143359c

CSI user name: kubernetes

CSI user key: AQCxDqNnTy4KDhAAk7K40yR1LaubuU4SD/c/GQ==

Cluster mon addresses: 192.168.50.91:6789, 192.168.50.92:6789, 192.168.50.93:6789, 192.168.50.94:6789

Configure K8S

Add the Helm Repo

[root@base-k8s-master-1 rbd]# helm repo add ceph-csi https://ceph.github.io/csi-charts

If you want to tweak settings up front, you can export the chart's default values with helm show values ceph-csi/ceph-csi-rbd > ceph-csi-rbd-values.yml.

[root@base-k8s-master-1 rbd]# helm show values ceph-csi/ceph-csi-rbd > ceph-csi-rbd-values.yml
[root@base-k8s-master-1 rbd]# grep replicaCount ceph-csi-rbd-values.yml
  replicaCount: 2

Because I only have two Worker nodes and the provisioner defaults to 3 pods, I changed it to 2 (see the excerpt below).
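
For reference, this is roughly the edited portion of ceph-csi-rbd-values.yml (the key path is taken from the chart's default values file and may shift between chart versions):

provisioner:
  # default is 3; reduced to match the two available worker nodes
  replicaCount: 2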

Deploy ceph-csi-rbd

[root@base-k8s-master-1 rbd]# kubectl create namespace "ceph-csi-rbd"
namespace/ceph-csi-rbd created
[root@base-k8s-master-1 rbd]# helm install --values ceph-csi-rbd-values.yml -n ceph-csi-rbd ceph-csi-rbd ceph-csi/ceph-csi-rbd
NAME: ceph-csi-rbd
LAST DEPLOYED: Wed Feb  5 16:57:37 2025
NAMESPACE: ceph-csi-rbd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Examples on how to configure a storage class and start using the driver are here:
https://github.com/ceph/ceph-csi/tree/v3.13.0/examples/rbd
[root@base-k8s-master-1 rbd]# kubectl get pod -n ceph-csi-rbd
NAME                                        READY   STATUS    RESTARTS   AGE
ceph-csi-rbd-nodeplugin-lwtgc               3/3     Running   0          29s
ceph-csi-rbd-nodeplugin-p2pqz               3/3     Running   0          29s
ceph-csi-rbd-provisioner-75944bc95c-64xvp   7/7     Running   0          29s
ceph-csi-rbd-provisioner-75944bc95c-dpfrw   7/7     Running   0          29s
[root@base-k8s-master-1 rbd]# kubectl get cm -n ceph-csi-rbd
NAME                             DATA   AGE
ceph-config                      2      7m29s
ceph-csi-config                  2      7m29s
ceph-csi-encryption-kms-config   1      7m29s
kube-root-ca.crt                 1      134m
[root@base-k8s-master-1 rbd]# kubectl get secret -n ceph-csi-rbd
NAME                                 TYPE                 DATA   AGE
csi-rbd-secret                       Opaque               3      46s
sh.helm.release.v1.ceph-csi-rbd.v1   helm.sh/release.v1   1      7m34s

Configure ceph-csi-rbd

Two things need to be configured: modify the ConfigMap ceph-csi-config, and create a Secret containing the kubernetes user from Ceph.

Configure ceph-csi-config

Dump the ceph-csi-config ConfigMap to a file:

[root@base-k8s-master-1 rbd]# kubectl get cm -n ceph-csi-rbd ceph-csi-config -o yaml > ceph-csi-config.yml

Edit ceph-csi-config so that its contents look like this:

apiVersion: v1
data:
  cluster-mapping.json: '[]'
  config.json: |-
    [
      {
        "clusterID": "4702af42-e30e-11ef-b2c6-bc241143359c",
        "monitors": [
          "192.168.50.91:6789",
          "192.168.50.92:6789",
          "192.168.50.93:6789",
          "192.168.50.94:6789"
        ]
      }
    ]
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ceph-csi-rbd
    meta.helm.sh/release-namespace: ceph-csi-rbd
  labels:
    app: ceph-csi-rbd
    app.kubernetes.io/managed-by: Helm
    chart: ceph-csi-rbd-3.13.0
    component: nodeplugin
    heritage: Helm
    release: ceph-csi-rbd
  name: ceph-csi-config
  namespace: ceph-csi-rbd

Re-apply the ConfigMap

[root@base-k8s-master-1 rbd]# kubectl apply -f ceph-csi-config.yml
Warning: resource configmaps/ceph-csi-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/ceph-csi-config configured

Create a Secret Containing the Ceph User

The Secret looks like this:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  userID: kubernetes
  userKey: AQCxDqNnTy4KDhAAk7K40yR1LaubuU4SD/c/GQ==
  encryptionPassphrase: test_passphrase

Create the Secret:

[root@base-k8s-master-1 rbd]# kubectl apply -f csi-rbd-secret.yaml
secret/csi-rbd-secret created

Restart the Provisioner

[root@base-k8s-master-1 rbd]# kubectl rollout restart deployment -n ceph-csi-rbd ceph-csi-rbd-provisioner
deployment.apps/ceph-csi-rbd-provisioner restarted
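
Create a StorageClass

The PVC test below references a StorageClass named csi-rbd-sc, which this walkthrough does not otherwise show. A minimal sketch based on the upstream examples/rbd/storageclass.yaml, filled in with the clusterID, pool, and Secret used above (the filename storageclass.yml is illustrative):

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 4702af42-e30e-11ef-b2c6-bc241143359c
  pool: rbd-pool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
reclaimPolicy: Delete
allowVolumeExpansion: true

Apply it with kubectl apply -f storageclass.yml before creating the PVC.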

Create a PVC to Test

PVC file content:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  labels:
    group: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc

Create and Test

[root@base-k8s-master-1 rbd]# kubectl apply -f pvc.yml
persistentvolumeclaim/rbd-pvc created
[root@base-k8s-master-1 rbd]# kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
rbd-pvc   Bound    pvc-be885590-569b-4a08-af09-46657eb024d1   1Gi        RWO            csi-rbd-sc     <unset>                 24s
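
Optionally, mount the PVC into a pod to confirm the volume attaches and gets formatted. A sketch modeled on the upstream examples/rbd/pod.yaml (pod and volume names are illustrative):

---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false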

Take a look at the Ceph cluster side as well

[root@ceph-1 ~]# rbd ls rbd-pool
csi-vol-9d650a5f-28ed-4b5a-bf50-ffadb212ced7
[root@ceph-1 ~]# rbd info rbd-pool/csi-vol-9d650a5f-28ed-4b5a-bf50-ffadb212ced7
rbd image 'csi-vol-9d650a5f-28ed-4b5a-bf50-ffadb212ced7':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: b207b2e8e3e1
        block_name_prefix: rbd_data.b207b2e8e3e1
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Wed Feb  5 17:08:03 2025
        access_timestamp: Wed Feb  5 17:08:03 2025
        modify_timestamp: Wed Feb  5 17:08:03 2025
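
To clean up the test, deleting the PVC should also remove the backing RBD image, since the StorageClass sketch above uses reclaimPolicy: Delete:

kubectl delete pvc rbd-pvc     # on the K8S side
rbd ls rbd-pool                # on the Ceph side; the csi-vol image should disappear shortly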