docs: add docs for cephfs #445

Merged
unkinben merged 1 commit from benvin/cephcsi_docs into develop 2026-02-03 19:56:15 +11:00


@ -28,6 +28,98 @@ Always refer back to the official documentation at https://docs.ceph.com/en/late
sudo ceph fs set mediafs max_mds 2
```
## managing cephfs with subvolumes
Create erasure-code profiles. The `k` and `m` values are the number of data chunks (k) and coding (parity) chunks (m), analogous to the data and parity disks in RAID 5, RAID 6, etc.

```
sudo ceph osd erasure-code-profile set ec_6_2 k=6 m=2
sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
```
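Optionally, list the profiles and print each one back to confirm they were created as intended (a read-only check, not required for the steps below):

```
sudo ceph osd erasure-code-profile ls
sudo ceph osd erasure-code-profile get ec_6_2
sudo ceph osd erasure-code-profile get ec_4_1
```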
Create data pools from the erasure-code profiles, then set `allow_ec_overwrites` (required before an erasure-coded pool can be used as a CephFS data pool) and mark the pools as bulk:

```
sudo ceph osd pool create cephfs_data_ssd_ec_6_2 erasure ec_6_2
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 bulk true

sudo ceph osd pool create cephfs_data_ssd_ec_4_1 erasure ec_4_1
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 bulk true
```
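Optionally, read the options back to confirm they were applied:

```
sudo ceph osd pool get cephfs_data_ssd_ec_6_2 allow_ec_overwrites
sudo ceph osd pool get cephfs_data_ssd_ec_4_1 allow_ec_overwrites
sudo ceph osd pool ls detail
```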
Add the pools to the `cephfs` file system:

```
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_6_2
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_4_1
```
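Optionally, confirm the file system now lists both new pools as data pools:

```
sudo ceph fs ls
sudo ceph fs status cephfs
```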
Create a subvolume group for each new data pool:

```
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_6_2 --pool_layout cephfs_data_ssd_ec_6_2
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_4_1 --pool_layout cephfs_data_ssd_ec_4_1
```
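Optionally, list the groups and their paths under `/volumes`; these are the paths the client capabilities below refer to:

```
sudo ceph fs subvolumegroup ls cephfs
sudo ceph fs subvolumegroup getpath cephfs csi_ssd_ec_6_2
sudo ceph fs subvolumegroup getpath cephfs csi_ssd_ec_4_1
```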
All together:

```
sudo ceph osd erasure-code-profile set ec_6_2 k=6 m=2
sudo ceph osd pool create cephfs_data_ssd_ec_6_2 erasure ec_6_2
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 bulk true
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_6_2
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_6_2 --pool_layout cephfs_data_ssd_ec_6_2

sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
sudo ceph osd pool create cephfs_data_ssd_ec_4_1 erasure ec_4_1
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 bulk true
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_4_1
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_4_1 --pool_layout cephfs_data_ssd_ec_4_1
```
Create a key with access to the new subvolume groups. Check if the user already exists first:

```
sudo ceph auth get client.kubernetes-cephfs
```
If it doesn't exist, create it:

```
sudo ceph auth get-or-create client.kubernetes-cephfs \
    mgr 'allow rw' \
    osd 'allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs' \
    mds 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_6_2, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_4_1' \
    mon 'allow r fsname=cephfs'
```
If it does, use `sudo ceph auth caps client.kubernetes-cephfs ...` instead to update existing capabilities.
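`ceph auth caps` replaces the whole capability set rather than appending to it, so repeat the existing caps along with the new group paths. A minimal sketch, reusing the same capabilities as the create command above:

```
sudo ceph auth caps client.kubernetes-cephfs \
    mgr 'allow rw' \
    osd 'allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs' \
    mds 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_6_2, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_4_1' \
    mon 'allow r fsname=cephfs'
```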
## removing a cephfs subvolumegroup from cephfs
This will clean up the subvolume group, and its subvolumes if any exist, then remove the pool.
Check the existing subvolume groups first, then list the subvolumes in the group you want to remove:

```
sudo ceph fs subvolumegroup ls cephfs
sudo ceph fs subvolume ls cephfs --group_name csi_raid6
```
If subvolumes exist, remove them one by one (or script it, as in the sketch below):

```
sudo ceph fs subvolume rm cephfs <subvol_name> --group_name csi_raid6
```
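The `ls` output is JSON, so the removal can also be scripted. A minimal sketch, assuming `jq` is installed and no subvolume still has snapshots:

```
# remove every subvolume in the csi_raid6 group (fails if snapshots remain)
for sv in $(sudo ceph fs subvolume ls cephfs --group_name csi_raid6 --format json | jq -r '.[].name'); do
    sudo ceph fs subvolume rm cephfs "$sv" --group_name csi_raid6
done
```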
If a subvolume has snapshots, remove the snapshots first:

```
sudo ceph fs subvolume snapshot ls cephfs <subvol_name> --group_name csi_raid6
sudo ceph fs subvolume snapshot rm cephfs <subvol_name> <snap_name> --group_name csi_raid6
```
Once the group is empty, remove it:

```
sudo ceph fs subvolumegroup rm cephfs csi_raid6
```
If it complains that the group is not empty, go back: there is still a subvolume or snapshot left.
If the pool was added with `ceph fs add_data_pool`, undo that with `rm_data_pool`:

```
sudo ceph fs rm_data_pool cephfs cephfs_data_csi_raid6
```
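Optionally, confirm the pool no longer appears in the file system's data pool list:

```
sudo ceph fs ls
```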
After the pool is detached from CephFS, you can delete it:

```
sudo ceph osd pool rm cephfs_data_csi_raid6 cephfs_data_csi_raid6 --yes-i-really-really-mean-it
```
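By default Ceph refuses to delete pools. If the command above is rejected, pool deletion has to be enabled first (and is best switched back off afterwards):

```
sudo ceph config set mon mon_allow_pool_delete true
sudo ceph osd pool rm cephfs_data_csi_raid6 cephfs_data_csi_raid6 --yes-i-really-really-mean-it
sudo ceph config set mon mon_allow_pool_delete false
```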
## creating authentication tokens
- This will create a client keyring named `media`