
A Guide on Creating and Exposing CephFS as NFS

In a previous article, we looked at how to set up Rook Ceph in our Kubernetes cluster.

In this article, I’m going to show how we can create a CephFS shared file system and then expose it as an NFS share using NFS Ganesha.

1. First, we’ll define a StorageClass for CephFS, which will allow us to dynamically create persistent volumes.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com  # Update provisioner prefix if needed
parameters:
  clusterID: rook-ceph                    # Namespace where the Rook cluster is running
  fsName: myfs                               # CephFS filesystem name for volume creation
  pool: myfs-replicated                      # Data pool (Rook names it <fsName>-<data pool name>)
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete

Note: Ensure the rook-ceph namespace matches the Rook operator namespace.

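To apply and verify the StorageClass, a quick check could look like this (the file name below is just an example):

kubectl apply -f storageclass-cephfs.yaml
kubectl get storageclass rook-cephfs
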
2. Next, define the CephFilesystem resource, which includes the Ceph metadata and data pools.

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true

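Once the CephFilesystem is applied, the Rook operator should spin up MDS pods for myfs. A rough way to verify this, assuming the default Rook labels and the rook-ceph-tools deployment from the previous article, would be:

# MDS pods for the new file system should be running
kubectl -n rook-ceph get pods -l app=rook-ceph-mds
# The file system should also be listed from within the toolbox
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs ls
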
3. We can now define a PersistentVolumeClaim (PVC) for our CephFS storage. In this example, we skip creating a PersistentVolume and directly request storage using the PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: rook-cephfs

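Because the StorageClass provisions volumes dynamically, the PVC should bind on its own shortly after being applied. For example (file name assumed):

kubectl apply -f nfs-pvc.yaml
kubectl get pvc nfs-pvc   # STATUS should show Bound
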
4. Although we’re using a PVC directly, here’s an example of a standalone PersistentVolume configuration for completeness.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.43.162.232  # Replace with your server IP
    path: /test

5. We’ll now deploy an Nginx application using our CephFS PVC. This setup will allow the shared file system to be mounted to multiple containers.

Let’s call the first deployment nginx-deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: nfs-volume
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-pvc

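The deployment can be applied and checked in the usual way, for example:

kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx-deployment
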
Since CephFS is a shared file system, we should be able to mount the same PVC into multiple pods.

Let’s mount the same PVC in a second deployment, which we’ll name nginx-deployment-secondary.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-secondary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nfs-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-pvc

With this, you can see that the same volume is mounted in two separate pods: we now have a CephFS file system shared across our pods.

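One way to confirm the volume really is shared is to write a file through one deployment and read it back through the other. A minimal sketch, assuming both deployments are up:

# Write a file via the first deployment
kubectl exec deploy/nginx-deployment -- sh -c 'echo "hello from CephFS" > /usr/share/nginx/html/index.html'
# Read the same file back via the second deployment
kubectl exec deploy/nginx-deployment-secondary -- cat /usr/share/nginx/html/index.html
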
6. Install nfs-utils on the VM that will mount the export (assuming a Red Hat/Fedora environment):

sudo yum install -y nfs-utils

7. We can check our available services under the rook-ceph namespace.

kubectl get svc -n rook-ceph

Once we verify the CephFS-related service names, we can move on to exporting the file system over NFS.

8. Log in to the rook-ceph-tools pod.

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

Execute the following commands within the rook-ceph-tools pod.

ceph mgr module enable rook
ceph mgr module enable nfs
ceph orch set backend rook

The first two modules may already be enabled. The ceph orch set backend rook command is required for the next steps.
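
If you want to double-check the module and orchestrator state before creating exports, something like the following (still inside the toolbox) should work:

ceph mgr module ls | grep -E 'nfs|rook'   # check that nfs and rook appear among the enabled modules
ceph orch status                          # Backend should show rook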

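Note that the export below targets an NFS cluster named my-nfs. This article assumes such an NFS Ganesha cluster already exists; if it does not, one way to create it with Rook is via a CephNFS resource, roughly like the following (applied from outside the toolbox):

kubectl apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1
EOF
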
Next, we can create the NFS export as below and verify its details.

ceph nfs export create cephfs my-nfs /test myfs
ceph nfs export ls my-nfs
ceph nfs export info my-nfs /test

Here, this command exports the existing CephFS file system myfs (the fsName we used in the StorageClass in step 1 and the CephFilesystem in step 2) on the NFS cluster my-nfs, under the pseudo path /test.

After executing the above, we can exit the rook-ceph-tools pod.

To view the running NFS service, check the available services again. Compared to the services shown in step 7, there should now be a new NFS service running.

kubectl get svc -n rook-ceph

9. The final step is to mount the export on the VM and add an /etc/fstab entry so it persists across reboots.

Below is the syntax:

mount -t nfs4 -o proto=tcp <nfs-service-address>:/<export-path> <mount-location>

An example would be:

mount -t nfs 10.43.205.7:/ /mnt/nfs/

Here we assume 10.43.205.7 is the ClusterIP of the NFS service.
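
The address to use is the ClusterIP of the new NFS service. Assuming Rook names it rook-ceph-nfs-my-nfs-a (the exact name may differ in your cluster), it could be looked up with:

kubectl -n rook-ceph get svc rook-ceph-nfs-my-nfs-a -o jsonpath='{.spec.clusterIP}'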

An example /etc/fstab entry is as follows:

10.43.205.7:/ /mnt/nfs/ nfs defaults 0 0

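With the fstab entry in place, the mount can be verified with standard tools, for example:

sudo mount -a          # mount everything listed in /etc/fstab
df -hT /mnt/nfs        # the mount should show up with type nfs4
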
That’s it. In this article, we looked at how to set up a CephFS file system with Rook Ceph, expose it as an NFS export via NFS Ganesha, and mount it on a VM.

Until next time, see you 👋

References:

https://rook.io/docs/rook/latest-release/Storage-Configuration/NFS/nfs/#creating-exports
