Below is a step-by-step guide to installing Ceph on three Rocky Linux nodes using Cephadm. Prerequisites:
- Three Rocky Linux nodes.
- Root or sudo access to all nodes.
- Networking setup to allow all nodes to communicate with each other.
- Time synchronization (e.g., via NTP).
- SSH access configured between the nodes.
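For node-to-node communication, one simple approach is identical /etc/hosts entries on every node. The names and addresses below are placeholders based on this guide's example network; substitute your own:
cat >> /etc/hosts <<EOF
192.168.56.7 node1
192.168.56.8 node2
192.168.56.9 node3
EOF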
- Update all packages and reboot if necessary:
sudo dnf update -y
- Install the necessary packages:
sudo dnf install -y chrony lvm2 podman python3.9
sudo systemctl enable --now chronyd
- Set up passwordless SSH access:
ssh-keygen -t rsa
ssh-copy-id <user>@<node1>
ssh-copy-id <user>@<node2>
ssh-copy-id <user>@<node3>
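To confirm passwordless access actually works before continuing, a quick loop like this (using the same placeholders) should print each hostname without prompting for a password:
for node in <node1> <node2> <node3>; do
  ssh <user>@$node hostname
done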
- Stop and disable the firewall:
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
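Disabling the firewall entirely is the simplest option for a lab cluster. If you would rather keep firewalld running, an alternative sketch is to open only the Ceph services (firewalld ships ceph and ceph-mon service definitions) plus the dashboard port:
sudo firewall-cmd --permanent --add-service=ceph-mon
sudo firewall-cmd --permanent --add-service=ceph
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload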
- Configure time synchronization:
cat >> /etc/chrony.conf <<EOF
allow 192.168.0.0/16
server 192.168.56.7 iburst  # replace this IP address with your node's IP address
EOF
systemctl restart chronyd
systemctl status chronyd
chronyc sources
timedatectl status
timedatectl set-timezone Asia/Tehran  # replace with your own timezone
timedatectl status
date
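To verify the node is actually synchronized rather than merely running chronyd:
chronyc tracking   # "Leap status : Normal" and a small offset indicate a healthy sync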
- Download Cephadm and make it executable:
CEPH_RELEASE=18.2.4  # replace this with the active release
curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
chmod +x cephadm
- Add the Ceph repository and install Cephadm system-wide:
sudo ./cephadm add-repo --release reef  # replace this with the active release
sudo ./cephadm install
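A quick sanity check that the installation landed on the PATH:
which cephadm
cephadm version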
- Bootstrap the cluster:
sudo ./cephadm bootstrap --mon-ip <IP_OF_FIRST_NODE>
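As a concrete illustration, reusing the 192.168.56.7 address from the chrony step (substitute your first node's real IP):
sudo ./cephadm bootstrap --mon-ip 192.168.56.7
The bootstrap output includes the dashboard URL and an initial admin password; save them. The sudo ceph ... commands in later steps also assume the ceph CLI is present on the host, which you can get with sudo ./cephadm install ceph-common or by prefixing each command with cephadm shell --.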
- Copy the cluster's SSH key to the other nodes:
sudo cephadm shell -- ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@<node2>
ssh-copy-id -f -i ~/ceph.pub root@<node3>
- Add the nodes:
sudo ceph orch host add <node2> <IP_OF_NODE2>
sudo ceph orch host add <node3> <IP_OF_NODE3>
- Verify hosts are added:
sudo ceph orch host ls
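Optionally, label the new hosts with the special _admin label so cephadm distributes the admin keyring and ceph.conf to them as well, letting you run ceph commands from any node:
sudo ceph orch host label add <node2> _admin
sudo ceph orch host label add <node3> _admin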
- Deploy additional monitors (if needed). Note that ceph orch apply is declarative: it replaces the whole monitor placement, so list every host that should run a monitor, including the first node:
sudo ceph orch apply mon --placement="node1,node2,node3"
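To confirm where the monitor daemons ended up:
sudo ceph orch ps --daemon-type mon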
- Deploy manager daemons:
sudo ceph orch apply mgr --placement="node1,node2,node3"
- Identify disks to use for OSDs:
lsblk
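Cephadm can also list the devices it considers eligible for OSDs (empty, unpartitioned, no filesystem), which is often more reliable than eyeballing lsblk output:
sudo ceph orch device ls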
- Create OSDs (replace /dev/sdX, /dev/sdY, and /dev/sdZ with the actual disks):
sudo ceph orch daemon add osd <node1>:/dev/sdX
sudo ceph orch daemon add osd <node2>:/dev/sdY
sudo ceph orch daemon add osd <node3>:/dev/sdZ
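If you simply want an OSD on every empty, unused disk, cephadm offers a declarative shortcut; use it with care, since it claims any eligible device on current and future hosts:
sudo ceph orch apply osd --all-available-devices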
- Check the status of the cluster:
sudo ceph -s
- Deploy Metadata Server (MDS) daemons for CephFS (replace fs_name with your filesystem's name):
sudo ceph orch apply mds fs_name --placement="node1,node2"
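Deploying MDS daemons alone does not create a filesystem. A minimal sketch of creating one to match the fs_name placeholder above, assuming default pool settings:
sudo ceph osd pool create cephfs_metadata
sudo ceph osd pool create cephfs_data
sudo ceph fs new fs_name cephfs_metadata cephfs_data
Alternatively, sudo ceph fs volume create fs_name creates the pools and the MDS service in one step.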
- Deploy RGW (RADOS Gateway) for object storage (rgw_name is the service name; choose your own):
sudo ceph orch apply rgw rgw_name --placement="node1,node2,node3"
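To exercise the gateway over S3 you will need a user; radosgw-admin creates one and prints its access and secret keys (the uid and display name here are arbitrary placeholders):
sudo radosgw-admin user create --uid=testuser --display-name="Test User"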
- Retrieve the URL and admin credentials for the Ceph Dashboard:
ceph mgr services
ceph dashboard create-self-signed-cert
echo -n "admin" > /tmp/dashboard-password  # use a strong password instead of "admin"
ceph dashboard set-login-credentials admin -i /tmp/dashboard-password
rm /tmp/dashboard-password
- Access the dashboard using the provided URL.
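If you no longer have the bootstrap output, ceph mgr services prints the active URL; by default cephadm serves the dashboard over HTTPS on port 8443 of the active manager's host, e.g. https://<node1>:8443.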
- Regularly check the cluster status with ceph -s.
- For advanced configurations, refer to the official Ceph documentation.
- Monitor logs and health alerts to maintain cluster integrity.
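A few standard ceph CLI commands that help here (a starting point, not an exhaustive list):
sudo ceph health detail   # explain any HEALTH_WARN / HEALTH_ERR conditions
sudo ceph -w              # stream cluster events as they happen
sudo ceph crash ls        # list recorded daemon crashes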
By following these steps, you should have a functioning Ceph cluster on your Rocky Linux nodes.