Code
from IPython.display import display, Markdown
import pandas as pd

# Criteria and their definitions
criteria = {
    'archive': 'Stored in a permanent archive that is publicly and openly accessible',
    'id': 'Has a persistent identifier',
    'license': 'Includes an open license',
    'relevant': '''Artefacts are relevant to and contribute to the article's results''',
    'complete': 'Complete set of materials shared (as would be needed to fully reproduce article)',
    'structure': 'Artefacts are well structured/organised (e.g. to the extent that reuse and repurposing is facilitated, adhering to norms and standards of research community)',
    'documentation_sufficient': 'Artefacts are sufficiently documented (i.e. to understand how it works, to enable it to be run, including package versions)',
    'documentation_careful': 'Artefacts are carefully documented (more than sufficient - i.e. to the extent that reuse and repurposing is facilitated - e.g. changing parameters, reusing for own purpose)',
    # This criterion is kept separate from documentation_careful, as it specifically requires a README file
    'documentation_readme': 'Artefacts are clearly documented and accompanied by a README file with step-by-step instructions on how to reproduce results in the manuscript',
    'execute': 'Scripts can be successfully executed',
    'regenerated': "Independent party regenerated results using the author's research artefacts",
    'hour': 'Reproduced within approximately one hour (excluding compute time)',
}

# Evaluation for this study (1 = criterion met, 0 = not met)
eval = pd.Series({
    'archive': 0,
    'id': 0,
    'license': 1,
    'relevant': 1,
    'complete': 0,
    'structure': 0,
    'documentation_sufficient': 0,
    'documentation_careful': 0,
    'documentation_readme': 0,
    'execute': 1,
    'regenerated': 0,
    'hour': 0,
})

# Get list of whether each criterion was met (1/0) overall
eval_list = list(eval)


# Define function for creating the markdown formatted list of criteria met
def create_criteria_list(criteria_dict):
    '''
    Creates a string which contains a Markdown formatted list with icons to
    indicate whether each criterion was met

    Parameters:
    -----------
    criteria_dict : dict
        Dictionary where keys are the criteria (variable names) and values are
        their full text descriptions

    Returns:
    --------
    formatted_list : string
        Markdown formatted list
    '''
    callout_icon = {True: '✅',
                    False: '❌'}
    # Create list with one Markdown bullet per criterion: icon then description
    formatted_list = ''.join([
        '* ' +
        callout_icon[eval[key]] +  # Icon based on whether it met criteria
        ' ' +
        value +  # Full text description of criteria
        '\n' for key, value in criteria_dict.items()])
    return formatted_list


# Define groups of criteria
criteria_share_how = ['archive', 'id', 'license']
criteria_share_what = ['relevant', 'complete']
criteria_doc_struc = ['structure', 'documentation_sufficient',
                      'documentation_careful', 'documentation_readme']
criteria_run = ['execute', 'regenerated', 'hour']

# Create text section
display(Markdown(f'''
To assess whether the author's materials met the requirements of each badge, a list of criteria was produced. Between each badge (and between categories of badge), there is often a lot of overlap in criteria.

This study met **{sum(eval_list)} of the {len(eval_list)}** unique criteria items. These were as follows:

Criteria related to how artefacts are shared -

{create_criteria_list({k: criteria[k] for k in criteria_share_how})}

Criteria related to what artefacts are shared -

{create_criteria_list({k: criteria[k] for k in criteria_share_what})}

Criteria related to the structure and documentation of the artefacts -

{create_criteria_list({k: criteria[k] for k in criteria_doc_struc})}

Criteria related to running and reproducing results -

{create_criteria_list({k: criteria[k] for k in criteria_run})}
'''))
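A quick illustration, not part of the original cell: assuming the cell above has been run (so criteria, eval and create_criteria_list are defined), calling the helper on a two-item subset shows the Markdown string it builds -

print(create_criteria_list({k: criteria[k] for k in ['license', 'archive']}))
# Prints (license was met, archive was not):
# * ✅ Includes an open license
# * ❌ Stored in a permanent archive that is publicly and openly accessible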
To assess whether the author’s materials met the requirements of each badge, a list of criteria was produced. Between each badge (and between categories of badge), there is often a lot of overlap in criteria.
This study met 3 of the 12 unique criteria items. These were as follows:
Criteria related to how artefacts are shared -
- ❌ Stored in a permanent archive that is publicly and openly accessible
- ❌ Has a persistent identifier
- ✅ Includes an open license
Criteria related to what artefacts are shared -
- ✅ Artefacts are relevant to and contribute to the article’s results
- ❌ Complete set of materials shared (as would be needed to fully reproduce article)
Criteria related to the structure and documentation of the artefacts -
- ❌ Artefacts are well structured/organised (e.g. to the extent that reuse and repurposing is facilitated, adhering to norms and standards of research community)
- ❌ Artefacts are sufficiently documented (i.e. to understand how it works, to enable it to be run, including package versions)
- ❌ Artefacts are carefully documented (more than sufficient - i.e. to the extent that reuse and repurposing is facilitated - e.g. changing parameters, reusing for own purpose)
- ❌ Artefacts are clearly documented and accompanied by a README file with step-by-step instructions on how to reproduce results in the manuscript
Criteria related to running and reproducing results -
- ✅ Scripts can be successfully executed
- ❌ Independent party regenerated results using the author’s research artefacts
- ❌ Reproduced within approximately one hour (excluding compute time)
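As a possible extension (a sketch only, assuming the code cell above has been run so that eval and the four group lists exist; group_labels is introduced here purely for illustration), the same data can be tallied per group rather than overall -

# Illustrative per-group summary; reuses eval and the group lists from the cell above
group_labels = {
    'How artefacts are shared': criteria_share_how,
    'What artefacts are shared': criteria_share_what,
    'Structure and documentation': criteria_doc_struc,
    'Running and reproducing results': criteria_run,
}
for label, keys in group_labels.items():
    met = int(sum(eval[k] for k in keys))  # count of criteria met in this group
    print(f'{label}: {met} of {len(keys)} criteria met')

With the evaluation recorded above, this would report 1 of 3, 1 of 2, 0 of 4 and 1 of 3 criteria met respectively.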