# Available Infrastructure for the ChRIS Project
We need to provide two deployments of CUBE:
- private: BCH-only, PHI (protected health information) allowed
- public: anyone can use
To deploy CUBE, we are concerned with three components:
- compute resource
- storage
- location of CUBE (backend)
For simplicity, in this article we will not consider where the compute resource is located, since it can typically be colocated with CUBE.
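To make the decision space concrete, here is a minimal sketch in Python of how the candidate hosts could be filtered for the private (PHI-allowed) vs. public deployment. The constraint flags for the BCH and NERC hosts are illustrative assumptions distilled from the tables below; only bch-mghpcc's ("no PHI; not exposed to public") are stated explicitly.

```python
# Sketch: filter candidate hosts by the two deployments' constraints.
# Flags are illustrative assumptions, not authoritative facts.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    phi_allowed: bool    # may store protected health information (assumed)
    public_facing: bool  # reachable from outside the BCH network (assumed)

CANDIDATES = [
    Host("fnndsc.childrens.harvard.edu", phi_allowed=True,  public_facing=False),
    Host("cube-next.tch.harvard.edu",    phi_allowed=True,  public_facing=False),
    Host("FNNDSC Galena",                phi_allowed=True,  public_facing=False),
    Host("bch-mghpcc",                   phi_allowed=False, public_facing=False),
    Host("NERC OpenStack Compute",       phi_allowed=False, public_facing=True),
    Host("NERC OpenShift",               phi_allowed=False, public_facing=True),
]

# private CUBE must be allowed to hold PHI; public CUBE must be reachable by anyone
print("private:", [h.name for h in CANDIDATES if h.phi_allowed])
print("public: ", [h.name for h in CANDIDATES if h.public_facing])
```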
Name | Owner | Physical Location | Advantages | Disadvantages | Work Needed |
---|---|---|---|---|---|
fnndsc.childrens.harvard.edu | BCH Research Computing (RC) | BCH | easy | low resources | none |
cube-next.tch.harvard.edu | FNNDSC | Landmark Center | easy | slow | none |
FNNDSC Galena | FNNDSC | Landmark Center | physical access, root access | nodes vary greatly in capacity; there are only a few "good" nodes, which are multi-purpose | run CUBE on k8s |
bch-mghpcc | MGHPCC | MGHPCC at Holyoke | 50TB of free space | must use SLURM, Singularity, NFS; no PHI; not exposed to public | run CUBE using Singularity |
NERC OpenStack Compute | NERC | MGHPCC at Holyoke | outside of BCH network | ask NERC about quotas; must manage VMs ourselves | good-to-go |
NERC OpenStack Storage | NERC | MGHPCC at Holyoke | outside of BCH network; professionally operated Swift storage is performant and durable | ask NERC about quotas | good-to-go |
NERC OpenShift | NERC | MGHPCC at Holyoke | outside of BCH network; Kubernetes | ask NERC about quotas | run CUBE on k8s |
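CUBE has used OpenStack Swift as a storage backend, so a connectivity smoke test against the NERC Swift service could look like the sketch below. It uses the real python-swiftclient library, but the auth URL, project, and credentials are placeholders, not actual NERC values.

```python
# Sketch: verify we can reach a Swift endpoint and round-trip an object.
# Requires python-swiftclient (pip install python-swiftclient).
# All endpoint/credential values are placeholders, not real NERC settings.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://keystone.example.nerc.org:5000/v3",  # placeholder
    user="cube-svc",                                      # placeholder
    key="secret",                                         # placeholder
    auth_version="3",
    os_options={
        "project_name": "fnndsc",          # placeholder
        "user_domain_name": "Default",
        "project_domain_name": "Default",
    },
)

conn.put_container("cube-files")
conn.put_object("cube-files", "smoke-test.txt", contents=b"hello from CUBE")
_, body = conn.get_object("cube-files", "smoke-test.txt")
assert body == b"hello from CUBE"
print("Swift round trip OK")
```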
CUBE Location | Storage Location | CUBE<-->Storage Connection | Size | Storage Reliability | Performance |
---|---|---|---|---|---|
fnndsc.childrens.harvard.edu | fnndsc.childrens.harvard.edu | host | 200GB | ⭐ | ⭐⭐⭐ |
cube-next.tch.harvard.edu | cube-next.tch.harvard.edu | host | 1TB | ⭐ | ⭐ |
FNNDSC Galena | BCH rc-nfs | in-network | 10TB | ⭐⭐⭐ | ⭐⭐⭐ |
NERC OpenShift | bch-mghpcc | WWW SSH tunnel? [1] | 50TB | ⭐⭐ [2] | ⭐ |
bch-mghpcc? [1] | bch-mghpcc | in-network | 50TB | ⭐⭐ [2] | ⭐⭐⭐ |
NERC OpenStack Compute | NERC OpenStack Swift | in-network | 2TB? | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
NERC OpenShift | NERC OpenShift NooBaa? [3] | in-network | 100GB? | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
NERC OpenShift | NERC OpenShift Volume | in-network | 100GB? | ⭐⭐⭐ | ⭐⭐⭐ |
NERC OpenShift | NERC OpenStack Swift | WWW | 2TB? | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
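The "WWW SSH tunnel" connection above refers to forwarding the storage server's NFS port over SSH when CUBE runs outside the bch-mghpcc network. A minimal sketch of such a tunnel follows; the hostnames and user are hypothetical placeholders, since the real bch-mghpcc endpoints are not published here.

```python
# Sketch: forward the NFS port (2049) from bch-mghpcc to localhost over SSH,
# so a CUBE instance running elsewhere could mount the share via the tunnel.
# Hostnames and user are hypothetical placeholders.
import subprocess

tunnel = subprocess.Popen([
    "ssh", "-N",                                 # no remote command, just forwarding
    "-L", "2049:nfs.bch-mghpcc.internal:2049",   # local:remote port forward
    "cube@login.bch-mghpcc.example.org",
])
# ...mount the share via localhost:2049 here. Every byte of data crosses the
# WWW through this single encrypted stream, which is the suspected bottleneck.
tunnel.terminate()
```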
Open questions, referenced as [1]–[3] in the table above:
1. May we open ports on bch-mghpcc? May they be public-facing, or BCH-only?
2. What is the storage deployment and policy of bch-mghpcc? (RAID? NFS? Backup frequency? Backup accessibility?)
3. Does NERC OpenShift support NooBaa object storage?
- If bch-mghpcc allows us to open ports publicly (or to reverse-proxy), and we can get it to work without an SSH-tunneling bottleneck, then it is our best choice for the public CUBE because of the amount of free storage.
- If NERC OpenShift offers NooBaa object storage and is generous with quotas, it would be our best choice for the public CUBE because of Kubernetes and efficiency (see the sketch below).
- If NERC OpenStack is generous with its quotas, it would also be a good place to host our public CUBE.
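NooBaa exposes an S3-compatible API, so question [3] could be settled with a quick probe like the one below. It uses the real boto3 library, but the endpoint, bucket name, and credentials are hypothetical placeholders.

```python
# Sketch: probe an S3-compatible endpoint (NooBaa serves the S3 API) to see
# whether object storage is available to us on NERC OpenShift.
# Endpoint and credentials are hypothetical placeholders.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.apps.nerc.example.org",  # placeholder
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)

try:
    s3.create_bucket(Bucket="cube-smoke-test")
    s3.put_object(Bucket="cube-smoke-test", Key="hello", Body=b"hi")
    print("NooBaa/S3 object storage works")
except (ClientError, EndpointConnectionError) as e:
    print("no usable S3 endpoint:", e)
```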