Add dividing line instead of <br>
mjaehn committed Jun 21, 2024
1 parent 0b57dfd commit ac2fc2b
Showing 1 changed file with 7 additions and 7 deletions.
events/icon_meetings/2024-2.md (14 changes: 7 additions & 7 deletions)
@@ -118,7 +118,7 @@ ML asks how WS's benchmarking will be able to be used.

WS responds: It can be used, but not in exactly the same way.

-<br>
+---

ML asks about achieving higher resolution with less use of the power cabinet.

@@ -128,7 +128,7 @@ DL asks if WS can correlate the different steps. There are various steps in R2B7

WS explains that initialization is a key step, and in the first timestep, reading boundary conditions takes a significant amount of time. This process is much quicker for lower resolutions.

-<br>
+---

DL remarks that it would be valuable to have these results documented somewhere on the wiki. He also speculates that WS has many tricks for conducting benchmarks and asks if WS could share them.

@@ -138,7 +138,7 @@ DL then mentions he's working on a recommended configuration. He asks if not usi

WS clarifies that it doesn't. Personally, WS disagrees with using just-fit mode and suggests running on the best-fit configuration instead. He points out that the machine is so powerful that it will still yield a good time-to-solution.

-<br>
+---

FB mentions she is writing a proposal using ICON-CLM, which is still in the development phase for GPU porting. They tested it on Daint because it doesn't run on Alps. Maria Grazia mentioned that scaling on Daint will likely be rejected.

@@ -154,19 +154,19 @@ ML suggests that benchmarking for ICON-NWP could also be useful for ICON-CLM. Th

WS agrees to share the namelist and mentions that he can also provide access to weak scalability tests for certain people.

-<br>
+---

DB asks how much faster a given configuration runs on Alps compared to Daint and whether one can extrapolate from that comparison.

WS responds that if you run a GPU-to-GPU configuration, Alps is roughly a factor of 9 faster. However, this approach wouldn't fully utilize the memory, so you would use fewer nodes.

-<br>
+---

JC asks about the power cap per cabinet.

WS corrects that it's actually per GPU. In terms of energy-to-solution, you can make it run faster, but not in a linear way. The Grace CPU is very powerful and shares memory with the GPU. Some people may want to run components on the CPU, but this gives priority to the CPU, potentially slowing down the GPU. WS notes they need to caution users because running large components on the CPU might not perform well. Ocean-atmosphere coupling is an example of such a component. CSCS is advocating for a GPU port to address these issues.

-<br>
+---

ML asks how reducing the cap would affect scaling.

@@ -176,7 +176,7 @@ ML queries if the power cap could vary across different clusters.

WS replies that he is unsure but can inquire about it.

-<br>
+---

DF mentions that all tests were conducted within one cabinet and asks if it's expected for individual jobs to run within a single cabinet.

