
How to understand "When adding a new driver make sure to use the same values in user and kernel space"? #6

Open
zhenshuitieniu opened this issue Apr 6, 2022 · 1 comment

Comments


zhenshuitieniu commented Apr 6, 2022

Does this mean that, once nvidia-fs.ko supports the new file system and the new-fs-client.ko supports the RDMA- and GDS-related API calls, support still needs to be added at the CUDA library level?

// These are the same key values as the ones in user space.
// When adding a new driver make sure to use the same values in user and kernel space.
#define NVFS_PROC_MOD_NVME_KEY "nvme"
#define NVFS_PROC_MOD_NVME_RDMA_KEY "nvme_rdma"
#define NVFS_PROC_MOD_SCSI_KEY "scsi_mod"
#define NVFS_PROC_MOD_SCALEFLUX_CSD_KEY "sfxvdriver"
#define NVFS_PROC_MOD_NVMESH_KEY "nvmeib_common"
#define NVFS_PROC_MOD_DDN_LUSTRE_KEY "lnet"
#define NVFS_PROC_MOD_GPFS_KEY "mmfslinux"
#define NVFS_PROC_MOD_NFS_KEY "rpcrdma"
#define NVFS_PROC_MOD_WEKAFS_KEY "wekafsio"

Take NFS as an example.
First, add a symbol entry to the modules_list table in nvidia-fs.ko:
struct module_entry modules_list[] = // nvfs symbol table
{
    {
        1,
        0,
        NVFS_PROC_MOD_NFS_KEY,
        0,
        "rpcrdma_register_nvfs_dma_ops",
        0,
        "rpcrdma_unregister_nvfs_dma_ops",
        0,
        &nvfs_dev_dma_rw_ops
    },
Second, implement rpcrdma_register_nvfs_dma_ops (and the matching unregister hook) in the NFS driver.
In addition to the above two steps, do I need any special settings or support in the CUDA library?

@fredfany

Hi, we are also facing a similar issue. I was wondering if you have had success adding new distributed filesystems using the above method. Also, if we want to add a new file system that uses GDS, do we only need to adapt it in nvidia-fs, or do we have to support it at the CUDA library level as well? Thanks in advance!
