Compile tentative application case checklist #6
Outcome of the first discussion with @MakisH. Further discussion is planned for the coding days.

Main idea

We distinguish between metadata and best practices. The latter has three levels: bronze, silver, and gold. We list on the website every application case that has all required metadata and at least bronze level.

Metadata

The usual things. An incomplete list:
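For concreteness, a minimal sketch of what such a metadata record could look like; every field name here is an illustrative assumption, not a settled schema:

```yaml
# Hypothetical metadata record for an application case.
# All field names and values are illustrative assumptions.
name: flow-over-heated-plate
authors:
  - Jane Doe
license: LGPL-3.0
precice-version: "3.1"
participants:
  - fluid-openfoam
  - solid-fenics
source: https://github.com/example/flow-over-heated-plate  # placeholder URL
```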
Best practices

Each level has a main theme to simplify orientation. The lists are not complete yet, but should give a first impression.

Bronze

Findable and Accessible, i.e., the case is working and documented. Moreover, contributing the case should not require installing any additional tools (e.g., a config visualizer). This is the bare minimum: we need a low entry barrier.
Silver

Interoperable, i.e., the case follows the standards. It plays well with other cases from the community and feels like part of the preCICE ecosystem.
Gold

Reusable, Reproducible, and Integrated. We can integrate the case into our development workflows and make sure that we will not break it. Other users can easily reuse and extend the case for their own needs, including adding further solvers.
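To make the integration idea concrete: a minimal sketch of a smoke-test entry point that a CI workflow could call, assuming the per-participant run.sh convention of the preCICE tutorials. The directory names are placeholders:

```sh
#!/usr/bin/env sh
# Hypothetical smoke test for a gold-level case.
# Assumes one directory per participant, each with its own run.sh
# (directory names below are placeholders).
set -e
(cd fluid-openfoam && ./run.sh) &
fluid_pid=$!
(cd solid-solver && ./run.sh)
wait "$fluid_pid"
```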
Other checklists
---
For my taste, the topic of reproducibility and reference results comes a bit late, but I also understand the argument of accessibility and a low entry barrier. However, at the moment it is theoretically possible to submit a case that just produces colorful pictures and still reach the silver level. Another point: does this list only refer to new application cases, e.g. ...?

---
> These are also many of our tutorials.

We need to distinguish here between tutorials and application cases. If one wants to contribute a tutorial, they should still follow the tutorial contributing guidelines. After we have these checklists published, the guidelines will refer to these checklists (requiring gold level) and have a few additional requirements.

Good point about contributing a subcase. If it is really an extension of our tutorials, it should be integrated into the tutorials, with the same procedure as now. If we cannot invest the resources to make it gold, it can still be published as an application case hosted elsewhere, or contributed to the non-reviewed community projects forum section.

---
Beyond the tutorials, we should consider application cases that are published alongside papers. Example: ...

---
I am wondering if we also need a level below bronze (with significantly reduced visibility): something where everyone can link to any already published case, just by providing the metadata. This could allow us to also list in our collection some older cases that nobody can invest the effort in updating to fit our requirements. Such a level could be called "Gray", which would essentially mean "not (yet) reviewed".

At the same time, we need additional ways to incentivize going to higher levels. Maybe we integrate Gold cases into some test suites, we only mention new Silver and Gold cases in the workshops, or we even have some kind of community (financial) award for the best new Gold-level case in the workshops.

---
Results of the internal mini world café.

Metadata
Bronze
Silver
Gold
Open problems
---
This issue has been mentioned on the preCICE Forum on Discourse. There might be relevant details there:

---
About the open problem ...
My current suggestion would be ...
---
If any code still needs compilation, should we define where source files should go? Could be per data set (if required for multiple experiments):
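A hypothetical layout for this option; all directory names are placeholders:

```text
dataset/
├── src/            # sources compiled once, shared by all experiments
├── experiment-1/
└── experiment-2/
```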
Or per experiment (if required for multiple participants):
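Again hypothetically:

```text
experiment/
├── src/            # sources shared by all participants of this experiment
├── participant-a/
└── participant-b/
```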
Or per participant:
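Also just a sketch:

```text
experiment/
├── participant-a/
│   └── src/        # sources specific to this participant
└── participant-b/
    └── src/
```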
---
In our contributing guidelines, we have ... Is there any reason to keep the ...?

---
I stumbled over this question while checking the ICIAM muscle data set as a test run: https://doi.org/10.18419/darus-4228

Do our tutorial contribution guidelines run into problems if we have multiple participants using the same framework, but with different implementations? Take the partitioned heat conduction as an example: what if the Dirichlet and the Neumann implementations in OpenFOAM had nothing in common? https://github.com/precice/tutorials/tree/develop/partitioned-heat-conduction

---
We already have an example in the partitioned-pipe of two different OpenFOAM-based solvers: https://github.com/precice/tutorials/tree/develop/partitioned-pipe

We could be more specific and name every tutorial case with the exact name of the solver (e.g., ...).

FYI, I have not downloaded the muscle data set yet (too large for my current connection), but I assume this is the same situation, without clear names for the different solvers.
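For illustration, such solver-specific naming could look like the following; the names are assumptions in the spirit of the partitioned-pipe layout, not a fixed convention:

```text
partitioned-pipe/
├── fluid1-openfoam-pimplefoam/
└── fluid2-openfoam-sonicliquidfoam/
```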
---

After discussing with @uekerman how to handle bundles of application cases, some notes:

We also need a clear policy and a mechanism to handle cases that are great to have listed, but do not yet fulfill all the standards. We probably already handle this with the bronze and potentially gray levels.

---
Potential criteria could include:

- ... according to a defined standard (automatically checked)