DO NOT MERGE: this is a work-in-progress for the post F&A rate change updates to the website #246

Draft · wants to merge 2 commits into base: main
119 changes: 49 additions & 70 deletions rates.md → accounts.md
@@ -1,25 +1,21 @@
---
title: Rates
title: Accounts

tagTitle: Rates - Center for Computation and Visualization
tagDescription: Check rates for advanced research computing that require extra resources.
tagTitle: Accounts - Center for Computation and Visualization
tagDescription: Check account types for advanced research computing that requires extra resources.

date: 2020-08-05T18:04:51.000+00:00
lead: We provide services with limited resources at no cost to all
members affiliated with Brown. For advanced computing that requires extra resources,
we charge a monthly fee. See below the rates for FY24.

members affiliated with Brown. See below the account tiers and limits for FY24.
---

# High Performance Computing Cluster (Oscar)

> The number and size of jobs allowed on Oscar vary with both partition and type of user account. The following partitions are available to all Oscar users:

* Batch - General Purpose Computing
* GPU - GPU Nodes
* BigMem - Large Memory Nodes


- Batch - General Purpose Computing
- GPU - GPU Nodes
- BigMem - Large Memory Nodes

<div>

@@ -32,105 +28,90 @@ lead: We provide services with limited resources at no cost to all
<th>Memory (GB)</th>
<th>GPU</th>
<th>Max Walltime* (Hours)</th>
<th>Cost per Month</th>
<th>GPU Type</th>
</tr>
</thead>
<tbody>
<tr>
<td width="25%"> Exploratory <br> <!-- Make it Single Line. -->
<td width="25%"> Basic <br> <!-- Make it Single Line. -->
<td> <br> batch <br> gpu <br> bigmem</td>
<td> <br> 32 <br> 12 <br> 32 <br> </td>
<td> <br> 246 <br> 192 <br> 752 </td>
<td> <br> 64 <br> 12 <br> 32 <br> </td>
<td> <br> 492 <br> 192 <br> 752 </td>
<td> <br> None <br> 2 Std. <br> None </td>
<td> 48</td>
<td> $0</td>
</tr>
<tr>
<td>HPC Priority</td>
<td>batch</td>
<td>208</td>
<td>1,500</td>
<td>2 Std.</td>
<td>96</td>
<td>$67</td>
<td> A2, Titan RTX, QuadroRTX, RTX3090, A5000, A5500 </td>
</tr>
<tr>
<td>HPC Priority+</td>
<td>Priority CPU</td>
<td>batch</td>
<td>416</td>
<td>3,000</td>
<td>2 Std.</td>
<td>N/A</td>
<td>96</td>
<td>$133</td>
<td>N/A</td>
</tr>
<tr>
<td>Standard GPU Priority</td>
<td>gpu</td>
<td>Priority GPU</td>
<td>batch</td>
<td>24</td>
<td>192</td>
<td>4 Std.</td>
<td>96</td>
<td>$67</td>
</tr>
<tr>
<td>Standard GPU Priority+</td>
<td>gpu</td>
<td>48</td>
<td>384</td>
<td>8 Std.</td>
<td>96</td>
<td>$133</td>
<td>A2, Titan RTX, QuadroRTX, RTX3090, A5000, A5500</td>
</tr>
<tr>
<td>High End GPU Priority</td>
<td>Priority High-End GPU</td>
<td>gpu-he</td>
<td>24</td>
<td>256</td>
<td>4 high-end</td>
<td>96</td>
<td>$133</td>
<td>V100, A40, A6000</td>
</tr>
<tr>
<td>Large Memory Priority</td>
<td>Priority Bigmem</td>
<td>bigmem</td>
<td>32</td>
<td>2TB</td>
<td>-</td>
<td>96</td>
<td>$33</td>
<td>N/A</td>
</tr>
</tbody>
</table>

</div>

* Note, these values are subject to periodic review and changes
* Each account is assigned 100G Home, 512G Scratch (purged every 30 days).
* Priority accounts and Exploratory accounts associated with a PI get a data directory.
* Exploratory accounts without a PI have no data directory provided.
* Priority accounts have a higher Quality-of-Service (QOS) i.e. priority accounts will have faster job start times.
* The maximum number of cores and duration may change based on cluster utilization.
* HPC Priority account has a Quality-of-Service (QOS) allowing up to 208 cores, 1TB memory, and a total per-job limit of 1,198,080 core-minutes. This allows a 208-core job to run for 96 hours, a 104-core job to run for 192 hours, or 208 1-core jobs to run for 96 hours.
* Exploratory account has a Quality-of-Service (QOS) allowing up to 2 GPUs and a total of 5760 GPU-minutes. This allows a 2 GPU job to run for 48 hours or 1 GPU job to run for 96 hours.
* GPU Definitions:
* Std - QuadroRTX or lower
* High End - Tesla V100
* For more technical details, please see this [link.](https://docs.ccv.brown.edu/oscar/system-overview)
- All Priority accounts require PI approval.

- If you need resources beyond these limits, please submit a project proposal (fill out this Google Form); it is subject to approval from the RCAC.

- Note: these values are subject to periodic review and change.
- Each account is assigned 100 GB of Home and 512 GB of Scratch storage (Scratch is purged every 30 days).
- Basic accounts without a PI have no data directory provided.
- The various priority accounts have a higher Quality-of-Service (QOS); thus, priority accounts will have faster job start times than basic accounts.
- The maximum number of cores and duration may change based on cluster utilization.
- A Priority CPU account has a Quality-of-Service (QOS) allowing up to 208 cores, 1 TB of memory, and a total per-job limit of 1,198,080 core-minutes. This allows a 208-core job to run for 96 hours, a 104-core job to run for 192 hours, or 208 one-core jobs to run for 96 hours each (see the sketch after this list).
- A Basic account has a Quality-of-Service (QOS) allowing up to 2 GPUs and a total of 5,760 GPU-minutes. This allows a 2-GPU job to run for 48 hours or a 1-GPU job to run for 96 hours.
- GPU Definitions:
- Std: QuadroRTX or lower
- High End: Tesla V100
- For more technical details, please see the [Oscar system overview](https://docs.ccv.brown.edu/oscar/system-overview).
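
The core-minute and GPU-minute budgets above are plain arithmetic (cores × hours × 60). Here is a minimal sketch in Python that encodes them; the constant and function names are ours for illustration and are not part of any CCV or Slurm tooling:

```python
# Illustrative sketch of the per-job QOS budgets described above.
# These names are hypothetical, not part of CCV or Slurm tooling.

CORE_MINUTE_BUDGET = 208 * 96 * 60  # Priority CPU: 208 cores x 96 hours = 1,198,080 core-minutes
GPU_MINUTE_BUDGET = 2 * 48 * 60     # Basic: 2 GPUs x 48 hours = 5,760 GPU-minutes

def fits_core_budget(cores: int, hours: float) -> bool:
    """True if a single job stays within the Priority CPU core-minute budget."""
    return cores * hours * 60 <= CORE_MINUTE_BUDGET

def fits_gpu_budget(gpus: int, hours: float) -> bool:
    """True if a single job stays within the Basic GPU-minute budget."""
    return gpus * hours * 60 <= GPU_MINUTE_BUDGET

# The examples from the notes above:
assert fits_core_budget(208, 96)   # 208-core job for 96 hours
assert fits_core_budget(104, 192)  # 104-core job for 192 hours
assert fits_gpu_budget(2, 48)      # 2-GPU job for 48 hours
assert fits_gpu_budget(1, 96)      # 1-GPU job for 96 hours
```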

# Condo Purchase

> An investigator may choose to purchase a condo to support their high performance computing needs.

## Benefits of condo ownership

* **5 year lifecycle** - condo resources will be available for a duration of 5 years.
* **Access to more cpu cores than purchased** - condo will have access to 1.25 times the number of cpu cores purchased for the first 3 years of its lifecycle. For the remaining 2 years, condo will have access to the same number of cpu cores purchased.
* **Support** - CCV staff will install, upgrade and maintain condo hardware throughout its lifecycle.
* **High job priority** - Jobs submitted to a condo have the highest priority to schedule.
- **5-year lifecycle** - condo resources will be available for a duration of 5 years.
- **Access to more CPU cores than purchased** - the condo will have access to 1.25 times the number of CPU cores purchased for the first 3 years of its lifecycle. For the remaining 2 years, the condo will have access to the same number of CPU cores purchased (see the sketch after this list).
- **Support** - CCV staff will install, upgrade, and maintain condo hardware throughout its lifecycle.
- **High job priority** - jobs submitted to a condo have the highest scheduling priority.
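
A minimal sketch of the core-access schedule described above (the function name and year numbering are illustrative assumptions, not CCV tooling):

```python
def condo_core_access(purchased_cores: int, year: int) -> int:
    """Cores a condo can access in a given year of its 5-year lifecycle.

    Years 1-3: 1.25x the purchased cores; years 4-5: exactly the purchased cores.
    """
    if not 1 <= year <= 5:
        raise ValueError("condo lifecycle is 5 years")
    return int(purchased_cores * 1.25) if year <= 3 else purchased_cores

# Example: a 128-core condo can use 160 cores in years 1-3 and 128 in years 4-5.
assert condo_core_access(128, 2) == 160
assert condo_core_access(128, 5) == 128
```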

## How to get started

* Contact [[email protected]](mailto:[email protected]) to discuss your needs and review purchase options
- Contact [[email protected]](mailto:[email protected]) to discuss your needs and review purchase options

# Staff Services

@@ -163,15 +144,13 @@ lead: We provide services with limited resources at no cost to all
</table>
</div>

# Research Data Storage

* 1TB per Brown Faculty Member - Free
* 10TB per awarded Grant at the request of the Brown PI - an active grant account number will be required to provide this allocation and the data will be migrated to archive storage at the end of the grant.
* Additional Storage Allocation
* Rdata: $8.3 / Terabyte / Month ($100 / Terabyte / Year)
* Stronghold Storage: $8.3 / Terabyte / Month ($100 / Terabyte / Year)
* Campus File Storage (replicated): $8.3 / Terabyte / Month ($100 / Terabyte / Year)
* Campus File Storage (non-replicated): $4.2 / Terabyte / Month ($50 / Terabyte / Year)
# Research Data Storage Allocations

- 5 TB per Brown Faculty Member
- 10 TB per awarded Grant at the request of the Brown PI
  - An active grant account number will be required to provide this allocation, and the data will be migrated to archive storage at the end of the grant.
- Additional Storage Allocation
  - If faculty require additional storage beyond the 5 TB per PI and 10 TB per active grant, the request should be submitted to the Research Computing Advisory Committee (RCAC) sub-committee for allocations and exceptions.

## Pooled Storage Allocations

17 changes: 8 additions & 9 deletions meta/main/services.yml
@@ -1,12 +1,11 @@
title: Services
description: |
CCV provides computing infrastructure, consulting, and support to the Brown
Community. We have HPC specialists, data scientists, and research software engineers
available to work with researchers. We frequently partner with researchers on
projects that may span weeks, months, or years. These partnerships can in some
cases involve a researcher using grant funds to support one of our data scientists
or research software engineers.
CCV provides computing infrastructure, consulting, and support to the Brown
Community. We have HPC specialists, data scientists, and research software engineers
available to work with researchers. We frequently partner with researchers on
projects that may span weeks, months, or years. These partnerships can in some
cases involve a researcher using grant funds to support one of our data scientists
or research software engineers.
call-for-action:
text: See our rates
href: /rates

text: See our account tiers
href: /accounts