
Compile tentative application case checklist #6

Open

uekerman opened this issue Mar 28, 2024 · 13 comments

@uekerman (Member) commented Mar 28, 2024

To standardize application cases, make them reproducible, and make them easier to test automatically, we compile a checklist of quality criteria similar to the adapter standardization. We again want to list all "preCICE checked" application cases on the preCICE website. The cases themselves, however, do not have to be hosted under the preCICE organization. Nor do we want to provide DOIs ourselves. Instead, users could host their cases at any data repository (e.g., Zenodo or DaRUS) and get a DOI from there. The review process could be started by opening a pull request to the website. The checklist could contain manual and automatic checks.

Potential criteria could include:

  • DOI of case
  • contact information
  • which versions of solvers, adapters, and preCICE the case is compatible with
  • preCICE configuration passes the offline check developed in work package WP1.1 and is
    formatted according to a defined standard (automatically checked; see the sketch after this list)
    • could also include a fixed order of elements in the configuration
  • folder structure follows defined standard
  • run scripts following a defined standard
  • if results are included, exact information on how to reproduce them
  • Docker recipe to run case
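
As a first impression of the automatic part, a minimal sketch of such a check (e.g., for CI). It assumes the offline check is exposed through the precice-tools check command shipped with preCICE v3; the formatting step is only a hypothetical placeholder for the WP1.1 tool:

```bash
#!/usr/bin/env bash
# Minimal sketch of an automated configuration check.
# "precice-tools check" ships with preCICE v3; the formatter below is a
# hypothetical placeholder for the WP1.1 tool.
set -e
precice-tools check precice-config.xml        # offline validity check
# format-precice-config precice-config.xml    # hypothetical WP1.1 formatter
echo "Configuration check passed."
```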
@uekerman (Member, Author) commented Jun 25, 2024

Outcome of the first discussion with @MakisH. Further discussion is planned for the coding days.

Main idea

We distinguish between metadata and best practices. The latter has three levels: bronze, silver, and gold. We list on the website every application case that has all required metadata and reaches at least bronze level.

Metadata

The usual things. An incomplete list:

  • title of the case
  • authors
  • contact information
  • which solvers are involved (we probably need a hierarchy here: openfoam, openfoam:com, openfoam:com:v2312)
  • used preCICE version
  • versions of dependencies that are known to work
  • doi (optional)
  • dev repo url (optional)
  • tick what features are used in the case from a curated list (quasi-Newton, GPU, ...)
  • license

Best practices

Each level has a main theme to simplify orientation. The lists are not complete yet, but should rather give some first impression.

Bronze

Findable and Accessible, i.e., the case is working and documented. Moreover, contributing the case should not require installing any additional tools (e.g. config visualizer). This is the bare minimum. We need a low entry barrier.

  • Solvers are available (FOSS or commercial license) or part of the case.
  • There is a README with:
    • physics: explains the setup, boundary conditions, figure with domains
    • how to run
    • dependencies: at least one version of each with which everything works
  • preCICE configuration passes offline check (in review)
  • run script? (see the sketch below)
  • ...
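
A minimal sketch of what a bronze-level run script could look like; the solver command is a placeholder, and nothing here is a fixed standard yet:

```bash
#!/usr/bin/env bash
# run.sh -- minimal sketch; "mySolver" is a placeholder for the actual
# solver command of this participant.
set -e -u
cd "$(dirname "$0")"          # allow calling the script from anywhere
mySolver precice-config.xml   # start this participant's solver
```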

Silver

Interoperable, i.e., the case follows the standards. It plays well with other cases from the community and feels like part of the preCICE ecosystem.

  • configuration formatted according to defined standard (automatically checked)
    • could also include a fixed order of elements in the configuration
  • folder structure follows defined standard, incl. precice-config.xml as name
  • run and clean scripts following a defined standard (see the sketch below)
  • .gitignore
  • README with standardized sections, incl. visualization of configuration
  • follow naming scheme
  • ...
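
Complementing the run script sketched under bronze, a clean script could look as follows; which artifacts to remove is case-specific, so the list below is an assumption:

```bash
#!/usr/bin/env bash
# clean.sh -- minimal sketch; the artifact list is an assumption.
set -e -u
cd "$(dirname "$0")"
rm -rfv ./precice-run/   # runtime directory created by preCICE
rm -fv ./*.log           # watchpoint/convergence logs (assumption)
```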

Gold

Reusable, Reproducible, and Integrated. We can integrate the case into our development workflows and make sure that we will not break it. Other users can easily reuse and extend the case for their own needs, including adding further solvers.

  • Docker recipe such that the system tests can run the case (see the sketch below)
  • visualization scripts or documentation
  • include or link results, with exact dependency versions
  • exports compatible with ASTE and the system tests
  • if possible, include a coarse variant (runnable on a laptop)
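
As a rough sketch, running a case through such a Docker recipe could look as follows; the image name and mount point are placeholders, and the actual system-test interface is still to be defined:

```bash
#!/usr/bin/env bash
# Sketch only: build the case image from its Docker recipe and run it.
set -e
docker build -t precice-case-example .              # image name is a placeholder
docker run --rm -v "$(pwd)/results:/case/results" \
  precice-case-example ./run.sh                     # mount point is a placeholder
```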

Other checklists

@BenjaminRodenberg (Member) commented

For my taste, the topics of reproducibility and reference results come a bit late, but I also understand the argument of accessibility and a low entry barrier. However, at the moment it is theoretically possible to submit a case that just produces colorful pictures and still reach the silver level.

Another point: Does this list only refer to new application cases, e.g., perpendicular-flap, or do we also consider extensions like perpendicular-flap/solid-fenicsx? I'm thinking about the situation that an expert using FEniCSx wants to contribute a FEniCSx version of the case to the existing tutorial. We would still like to keep the entry barrier low, but we would also like to have results consistent with the other flavors of the case. I imagine that in this situation the criteria mentioned under bronze and silver are low-hanging fruit for us, while the gold-level criteria are the main points where we would rely on the external knowledge.

@MakisH (Member) commented Jun 27, 2024

> However, at the moment it is theoretically possible to submit a case that just produces colorful pictures and still reach the silver level.

This is also true for many of our tutorials.

> Another point: Does this list only refer to new application cases, e.g., perpendicular-flap, or do we also consider extensions like perpendicular-flap/solid-fenicsx?

We need to distinguish here between tutorials and application cases in general. Our tutorials are application cases which are also integrated into our development workflows and have some additional restrictions (e.g., showcasing some feature, not taking too long to run, being rather simplistic, etc.).

If one wants to contribute a tutorial, they should still follow the tutorial contributing guidelines. After we have these checklists published, the guidelines will refer to these checklists (requiring gold level) and have a few additional requirements.

Good point about contributing a subcase. If it is really an extension of one of our tutorials, it should be integrated into the tutorials, with the same procedure as now. If we cannot invest the resources to make it gold, it can still be published as an application case hosted elsewhere, or contributed to the non-reviewed community projects forum section.

@uekerman (Member, Author) commented Jul 2, 2024

Beyond the tutorials, we should consider application cases that are published alongside papers. Example:

@MakisH (Member) commented Jul 4, 2024

I am wondering if we also need a level below bronze (with significantly reduced visibility): something where everyone can link to any already published case, just by providing the metadata. This could also allow us to list in our collection some older cases that nobody can invest the effort to update to fit our requirements.

Such a level could be called "Gray", which would essentially mean "not (yet) reviewed".

At the same time, we need additional ways to incentivize going to higher levels. Maybe we integrate Gold cases into some test suites, we mention only new Silver and Gold cases in the workshops, or we even have some kind of community (financial) award for the best new Gold-level case at the workshops.

@uekerman (Member, Author) commented

Results of the internal mini world café.

Metadata

  • build config
  • either DOI or URL (at least one of the two is mandatory)
  • short description
  • related publication (optional)

Bronze

  • how to run should include pre-processing (if necessary)
  • some clean script (one should be able to run the case twice)
  • better clarify "additional tools"
  • (config format should not be a mess)
  • rough estimate of the runtime (an example)
  • docs on how the case was created (geometries, meshes, ...)
  • some pointer to check the expected outcome (e.g. watchpoint, ParaView screenshot, complete results; see the sketch below)
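
A sketch of what such a pointer could look like in practice, assuming the case defines a watch-point and ships a reference log; the file names and the use of numdiff are assumptions:

```bash
#!/usr/bin/env bash
# Sketch only: check the expected outcome against a committed reference.
# The watchpoint file name follows from the participant and watch-point
# names in precice-config.xml (assumption).
set -e
./run.sh
numdiff -a 1e-6 precice-Solid-watchpoint-point1.log \
  reference/precice-Solid-watchpoint-point1.log \
  && echo "Watchpoint matches the reference within tolerance."
```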

Silver

  • .gitignore only if Git repo
  • follow naming conventions for data and meshes
  • make DOI mandatory?
  • no hacking of preCICE allowed (e.g., using vertices that are not actual locations in space)

Gold

  • instead of Docker, a Spack lock file could also be enough; one can generate a Docker image from that (see the sketch below)
  • what about alternatives to Docker, e.g. Singularity?
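
A sketch of the Spack route, assuming the case ships a spack.yaml describing its dependencies; the image name is a placeholder:

```bash
#!/usr/bin/env bash
# Sketch only: derive a Docker recipe from a Spack environment.
# Assumes a spack.yaml with the case's dependencies in this folder.
set -e
spack -e . concretize --force    # pin exact versions into spack.lock
spack containerize > Dockerfile  # render a container recipe from spack.yaml
docker build -t case-image .     # "case-image" is a placeholder name
```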

Open problems

  • what should we do with system-specific config options (e.g. network="inf0" in m2n)?
  • how to treat multiple cases in one data set or a parameter study
  • better explain when users should submit to us (within their paper / data set submission and review cycle)
  • how to update a case? when to create a new case instead? how to treat similar cases (e.g. from different publications)?

@precice-bot commented

This issue has been mentioned on preCICE Forum on Discourse. There might be relevant details there:

https://precice.discourse.group/t/shape-the-future-of-the-precice-ecosystem-the-preeco-project/2019/1

@uekerman (Member, Author) commented

About the open problem ...

> how to treat multiple cases in one data set or a parameter study

My current suggestion would be ...

  • to allow multiple subfolders, with a standard-conforming case in each subfolder.
  • to forbid parameter studies that use different preCICE configuration files (e.g. precice-config-IQN5.xml). Instead, we should provide examples of how to easily change things within one (templated) file (e.g. jinja2; see the sketch below).
  • Any parameter study (if it is scripted and not only documented) should use an outer script which repeatedly calls run.sh and clean.sh.
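
A sketch of such an outer script, assuming a template precice-config.xml.j2 with an {{ acceleration }} variable and the jinja2-cli tool; all names are placeholders, not a fixed standard:

```bash
#!/usr/bin/env bash
# Sketch only: outer parameter-study loop around run.sh and clean.sh.
# Assumes "pip install jinja2-cli" and a template precice-config.xml.j2.
set -e
for acceleration in IQN-ILS IQN-IMVJ; do
  # substitute {{ acceleration }} in the templated configuration
  jinja2 -D acceleration="${acceleration}" precice-config.xml.j2 > precice-config.xml
  ./run.sh
  # keep per-run results before cleaning (log name is an assumption)
  mv precice-Fluid-convergence.log "convergence-${acceleration}.log"
  ./clean.sh
done
```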

@uekerman
Copy link
Member Author

If any code still needs compilation, should we define where source files should go?

Could be per data set (if required for multiple experiments):

- src
- experiment1
  - fluid
  - solid 
- experiment2

Or per experiment (if required for multiple participants):

- src
- fluid
- solid

Or per participant:

- fluid
  - src
- solid

@MakisH (Member) commented Jul 31, 2024

In our contributing guidelines, we use solver-<code> as the name, and we already specify in another criterion that adapters should be configurable. But, since these are not stand-alone adapters, we should probably clarify the naming scheme here as well.

Is there any reason to keep the src/? What else could go there? Where should the helper scripts go?

@uekerman (Member, Author) commented

I stumbled upon this question while checking the ICIAM muscle data set as a test run: https://doi.org/10.18419/darus-4228
There, src contains multiple different OpenDiHu solvers.

Do our tutorial contribution guidelines run into problems if we have multiple participants using the same framework, but with different implementations? Take the partitioned heat conduction as an example: what if the Dirichlet and the Neumann implementations in OpenFOAM had nothing in common?

https://github.com/precice/tutorials/tree/develop/partitioned-heat-conduction

@MakisH (Member) commented Aug 29, 2024

> Do our tutorial contribution guidelines run into problems if we have multiple participants using the same framework, but with different implementations?

We already have an example of two different OpenFOAM-based solvers in the partitioned-pipe: https://github.com/precice/tutorials/tree/develop/partitioned-pipe

We could be more specific and name every tutorial case with the exact name of the solver (e.g., fluid-pimplefoam instead of fluid-openfoam), but so far there has been no particular reason to do so, as we only had one solver per framework, and the framework name is typically more recognizable. Our guidelines already specify the solver name, not the framework name.

FYI, I have not downloaded the muscle data set yet (too large for my current connection), but I assume this is the same situation, without clear names for the different solvers.

@MakisH (Member) commented Sep 9, 2024

After discussing with @uekerman on how to handle bundles of application cases, some notes:

  • We distinguish between cases and experiments. Several experiments can be grouped under one case entry in our list.
  • Two significantly different cases (e.g., a CHT case and an FSI case) would better be listed as two different cases, even if they are published in the same paper:
    • Having the cases as different entries helps with categorization and tagging (is this an FSI case? a CHT case?)
    • Having the cases as different entries also helps listing cases with different maturity separately, with different conformance levels
    • Multiple entries pointing to the same DOI is fine

We also need a clear policy and a mechanism to handle cases that are great to have listed but do not yet fulfill all the standards. We probably already handle this with the bronze and potentially gray levels.
