Storage pool type: cephfs
CephFS implements a POSIX-compliant filesystem, using a Ceph storage cluster to store its data. As CephFS builds upon Ceph, it shares most of its properties. This includes redundancy, scalability, self-healing, and high availability.
Tip: {pve} can manage Ceph setups, which makes configuring a CephFS storage easier. As modern hardware offers a lot of processing power and RAM, running storage services and VMs on the same node is possible without a significant performance impact.
To use the CephFS storage plugin, you must replace the stock Debian Ceph client by adding our Ceph repository. Once added, run apt update, followed by apt dist-upgrade, to get the newest packages.
Warning: Please ensure that there are no other Ceph repositories configured. Otherwise the installation will fail or there will be mixed package versions on the node, leading to unexpected behavior.
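As a sketch, the repository is added as a plain APT sources entry. The release names below (ceph-quincy, bookworm, no-subscription) are assumptions for illustration; pick the Ceph release and Debian codename matching your cluster:

```
# /etc/apt/sources.list.d/ceph.list  (hypothetical; adjust release names to your setup)
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
```

After creating the file, the apt update and apt dist-upgrade steps above pull in the Proxmox-provided Ceph client packages.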
This backend supports the common storage properties nodes, disable, content, as well as the following cephfs-specific properties:

monhost
    List of monitor daemon addresses. Optional, only needed if Ceph is not running on the PVE cluster.

path
    The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.

username
    Ceph user id. Optional, only needed if Ceph is not running on the PVE cluster, where it defaults to admin.

subdir
    CephFS subdirectory to mount. Optional, defaults to /.

fuse
    Access CephFS through FUSE, instead of the kernel client. Optional, defaults to 0.
Configuration example (/etc/pve/storage.cfg)

cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
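The example above mounts the root of the filesystem. As a hypothetical variant, the optional subdir and fuse properties could be combined like this; the storage name and subdirectory are made up for illustration:

```
cephfs: cephfs-fuse
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        subdir /backup-dir
        fuse 1
        content backup
        username admin
```

This mounts only the /backup-dir subdirectory of the CephFS, through FUSE instead of the kernel client.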
Note: Don't forget to set up the client's secret key file, if cephx was not disabled.
If you use cephx authentication, which is enabled by default, you need to copy the secret from your external Ceph cluster to a Proxmox VE host.

Create the directory /etc/pve/priv/ceph with

mkdir /etc/pve/priv/ceph

Then copy the secret

scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
The secret must be renamed to match your <STORAGE_ID>. Copying the secret generally requires root privileges. The file must only contain the secret key itself, as opposed to the rbd backend, which also contains a [client.userid] section.
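To illustrate that difference, a quick check can tell a bare secret apart from an rbd-style keyring. Everything here is a made-up example: the key is fake, and a real one would come from ceph auth get-key as described below.

```shell
# Hypothetical check: a CephFS secret file must hold only the bare key,
# while an rbd keyring starts with a [client.<userid>] section.
secret_file=$(mktemp)

# fake key for illustration only; a real one comes from `ceph auth get-key`
printf 'AQBexamplekeyexamplekeyexamplekey==\n' > "$secret_file"

if grep -q '^\[client\.' "$secret_file"; then
    fmt="keyring"       # strip it down to the bare key before use
else
    fmt="bare-secret"   # suitable for /etc/pve/priv/ceph/<STORAGE_ID>.secret
fi
echo "$fmt"

rm -f "$secret_file"
```

If the check reports a keyring, copy only the value of the key line into the secret file.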
A secret can be received from the Ceph cluster (as Ceph admin) by issuing the command below, where userid is the client ID that has been configured to access the cluster. For further information on Ceph user management, see the Ceph docs [1].

ceph auth get-key client.userid > cephfs.secret
If Ceph is installed locally on the PVE cluster, that is, if it was set up using pveceph, this is done automatically.
The cephfs backend is a POSIX-compliant filesystem on top of a Ceph cluster.

Storage features for backend cephfs:

| Content types | Image formats | Shared | Snapshots | Clones |
|---|---|---|---|---|
| | | yes | yes[1] | no |

[1] While no known bugs exist, snapshots are not yet guaranteed to be stable, as they lack sufficient testing.