# managing ceph

Always refer back to the official documentation at https://docs.ceph.com/en/latest
## adding a new cephfs

- create an erasure code profile, which lets you customise the redundancy level (analogous to RAID):
  - raid5 with 3 disks? k=2,m=1
  - raid5 with 6 disks? k=5,m=1
  - raid6 with 4 disks? k=2,m=2, etc.
- create an osd pool for data using the custom profile
- create an osd pool for metadata using the default replicated profile
- enable ec_overwrites on the data pool (required for cephfs on erasure-coded pools)
- create the cephfs volume from the data/metadata pools
- set cephfs settings:
  - set the number of active metadata servers (max_mds)
  - flag the data pool as holding bulk data
  - enable fast mds failover with standby-replay
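The k/m choices above can be sanity-checked with a small helper: an erasure-coded pool stores k data chunks plus m coding chunks, so the usable fraction of raw capacity is k/(k+m). The function name here is just an illustration, not a ceph command.

```shell
# usable capacity (as a whole-number percentage) for a k/m erasure profile
ec_efficiency() {
  k=$1; m=$2
  # k data chunks out of k+m total chunks survive as usable space
  echo $(( 100 * k / (k + m) ))
}

ec_efficiency 2 1   # raid5-style, 3 disks -> 66
ec_efficiency 5 1   # raid5-style, 6 disks -> 83
ec_efficiency 4 1   # the ec_4_1 profile below -> 80
ec_efficiency 2 2   # raid6-style, 4 disks -> 50
```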
```shell
sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
sudo ceph osd pool create media_data 128 erasure ec_4_1
sudo ceph osd pool create media_metadata 32 replicated_rule
sudo ceph osd pool set media_data allow_ec_overwrites true
sudo ceph osd pool set media_data bulk true
sudo ceph fs new mediafs media_metadata media_data --force
sudo ceph fs set mediafs allow_standby_replay true
sudo ceph fs set mediafs max_mds 2
```
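To confirm the volume came up as intended, the standard status commands can be used (`mediafs` being the name chosen above); these need a running cluster, so no output is shown here:

```shell
# show active and standby-replay mds daemons plus pool usage for the new fs
sudo ceph fs status mediafs
# dump the full fs settings, including max_mds and allow_standby_replay
sudo ceph fs get mediafs
```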
## creating authentication tokens

- this will create a client keyring named media
- this client will have the following capabilities:
  - mon: read
  - mds:
    - read /
    - read/write /media
    - read/write /common
  - osd: read/write to the media_data pool

```shell
sudo ceph auth get-or-create client.media \
    mon 'allow r' \
    mds 'allow r path=/, allow rw path=/media, allow rw path=/common' \
    osd 'allow rw pool=media_data'
```
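As a usage sketch, the new keyring can then be used to mount one of the granted subtrees with the kernel client. The mount point is an assumption here, and the `fs=` option needs a reasonably recent kernel (older ones use `mds_namespace=`):

```shell
# mount the /media subtree as client.media; the kernel client picks up the
# key from /etc/ceph/ceph.client.media.keyring by default
sudo mkdir -p /mnt/media
sudo mount -t ceph :/media /mnt/media -o name=media,fs=mediafs
```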
## list the authentication tokens and permissions

```shell
sudo ceph auth ls
```
## change the capabilities of a token

This will overwrite (not append to) the current capabilities of the given client:

```shell
sudo ceph auth caps client.media \
    mon 'allow r' \
    mds 'allow rw path=/' \
    osd 'allow rw pool=media_data'
```
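To confirm the new capabilities took effect, the updated keyring entry can be printed back (again, this needs a live cluster):

```shell
# print the key and current caps for a single client
sudo ceph auth get client.media
```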