
Commit

Merge pull request #15 from dfe-analytical-services/peer-review-checklist

Updates to peer review guidance
jen-machin authored Oct 19, 2023
2 parents a2be1f0 + 9ccc39b commit 31cdd70
Showing 2 changed files with 34 additions and 16 deletions.
30 changes: 24 additions & 6 deletions RAP/rap-statistics.qmd
@@ -570,24 +570,42 @@ If you ever find yourself writing HTML, or creating it through RMarkdown, you ca
## Peer reviewing code


Peer review is an important element of quality assuring our work. We often do it without realising, by bouncing ideas off one another and by getting others to 'idiot check' our work. When writing code, getting our work formally peer reviewed is particularly important for ensuring its quality and value. The [Duck Book](https://best-practice-and-impact.github.io/qa-of-code-guidance/peer_review.html) contains detailed guidance on peer review, but we have summarised some of the information here for you as well.

Before submitting code for peer review, the author should ensure that all code files are clean and appropriately commented, and that larger projects are held in a repo with an appropriate [README](#writing-a-readme-file) file.
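
As a rough illustration of that tidy-up (a sketch, not a mandated workflow), the styler and lintr packages can automate much of it:

```r
# A minimal sketch, assuming your scripts live in an R/ folder.
# install.packages(c("styler", "lintr"))

# Restyle scripts to a consistent (tidyverse-style) layout
styler::style_dir("R")

# Flag common issues such as long lines or inconsistent naming
lintr::lint_dir("R")
```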

When peer reviewing code, you should consider the following questions -

* Do you understand what the code does? If not, is there supporting documentation or code comments that allow you to understand it?
* Does the code do what the author intended?
* Have any dependencies (either on separate pieces of code, data files, or packages) been documented?
* Are there any tests or checks that could be added to the code to give greater confidence that it is doing what it is intended to do?
* Are there comments explaining why any decisions have been made?
* Is the code written and structured sensibly?
* Are there any ways to make the code more efficient (either in number of lines or raw speed)? Is there duplication that could be simplified using functions? (see the sketch after this list)
* Does the code follow best practice for styling and structure?
* Are there any other teams/bits of code you're aware of that do similar things and would be useful to point the authors towards?
* At the end of the review, was there any information you needed to ask about that should be made more apparent in the code or documentation?
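
For instance (a purely illustrative sketch, with made-up column names), duplicated wrangling steps can often be collapsed into a single function:

```r
# Hypothetical data, for illustration only
data <- data.frame(
  year  = c(2022, 2022, 2023, 2023),
  score = c(10, 12, 14, 16)
)

# Duplicated code like this...
# mean_2022 <- mean(data$score[data$year == 2022], na.rm = TRUE)
# mean_2023 <- mean(data$score[data$year == 2023], na.rm = TRUE)

# ...can be simplified with one function, called with different inputs
mean_score_for_year <- function(df, target_year) {
  mean(df$score[df$year == target_year], na.rm = TRUE)
}

mean_2022 <- mean_score_for_year(data, 2022)
mean_2023 <- mean_score_for_year(data, 2023)
```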

Depending on your access, you may or may not be able to run the code yourself, but there should be enough information within the code and documentation to respond to the questions above. If you are able to run the code, you could also check -

* Does the code run without errors? If warnings are displayed, are they explained?
* If the project has unit/integration tests, do they pass? (a sketch of running these follows the list)
* Can you replicate previous output using the same code and input data?
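
If the project uses testthat, for example, running the whole suite might look like this (a sketch; the exact layout varies by project):

```r
# A minimal sketch, assuming the standard testthat layout (tests/testthat/).
# install.packages("testthat")
testthat::test_dir("tests/testthat")
```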

If you would like a more thorough list of questions to follow, then the Duck Book has checklists available for three levels of peer review, based on risk:

* [Lower](https://best-practice-and-impact.github.io/qa-of-code-guidance/checklist_lower.html)
* [Moderate](https://best-practice-and-impact.github.io/qa-of-code-guidance/checklist_moderate.html)
* [Higher](https://best-practice-and-impact.github.io/qa-of-code-guidance/checklist_higher.html)

If you're unfamiliar with giving feedback on someone's code, it can be daunting at first. Feedback should always be constructive and practical. We recommend using the CEDAR model to structure your comments (a worked example follows the list):

* Context - describe the issue and the potential impact
* Examples - give specific examples of when and where the issue has been present (specifying the line numbers of the code where the issue can be found can be useful here)
* Diagnosis - use the example to discuss why this approach was taken, what could have been done differently and why alternatives could be an improvement
* Actions - ask the person receiving feedback to suggest actions that they could follow to avoid this issue in future
* Review - if you have time, revisit the discussion to look for progress following on from the feedback
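
For example, a CEDAR-style review comment (with hypothetical file and line references) might read:

> The summary tables are built with near-identical code in three places, which makes the script harder to maintain (Context). See, for example, lines 40-55 and 80-95 of analysis.R (Examples). A single function taking the year as an argument would remove the repetition (Diagnosis). How might you refactor this to avoid the duplication in future? (Actions) Happy to take another look once it's updated (Review).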

---

20 changes: 10 additions & 10 deletions writing-visualising/dashboards.qmd
@@ -187,7 +187,7 @@ At a minimum you should be requesting feedback from users via a survey hosted on

![](../images/betaBanner.png)

Google Analytics is a free service that collects information on who visits your webpage and how they interact with it. You can set up basic Google Analytics for your published dashboard in a few simple steps, outlined in this article: [Add Google Analytics to a Shiny app](https://shiny.rstudio.com/articles/google-analytics.html); a more complex example is in this [example file](https://github.com/dfe-analytical-services/exclusion-statistics/blob/master/www/google-analytics.js). If you're planning to publish a dashboard, or to set up Google Analytics for a published dashboard, please contact the [Statistics Development Team](mailto:[email protected]).
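
As a rough sketch of the pattern in the linked article (the file name is an assumption), the tracking snippet is saved alongside the app and injected into the page head:

```r
# A minimal sketch, assuming the tracking snippet is saved as
# google-analytics.js in the app directory, as in the linked example.
library(shiny)

ui <- fluidPage(
  # Inject the analytics script into the page <head>
  tags$head(includeScript("google-analytics.js")),
  titlePanel("Example dashboard")
)

server <- function(input, output, session) {}

shinyApp(ui, server)
```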

---

@@ -199,7 +199,7 @@ Peer review is a quality assurance activity, where an analyst other than the ori
- Dashboards must always be peer reviewed within the team in which they are created.
- Dashboards should also be peer reviewed by analysts outside of the team's subject area.

The Central Statistics Unit has a number of analysts experienced with R Shiny dashboards who are happy to review any dashboards created; contact the [Statistics Development Team](mailto:[email protected]) if you're interested in this. For more guidance on how to peer review, see the [peer review section of the Duck Book](https://best-practice-and-impact.github.io/qa-of-code-guidance/peer_review.html).

---

@@ -241,7 +241,7 @@ You will need:
* Approval from your DD
* If the data underlying the dashboard is currently unpublished, you will need to create dummy data to use in GitHub until the data becomes published (see the [dummy data guidance](#dummy-data) section; a rough sketch follows below).
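
As a purely illustrative sketch (the column names and values here are made up), dummy data can be generated to mirror the structure of the real file:

```r
# Hypothetical example: create a dummy dataset with the same columns
# as the unpublished file, but with randomised values.
set.seed(2023)

dummy_data <- data.frame(
  region      = sample(c("North", "South", "East", "West"), 100, replace = TRUE),
  year        = sample(2019:2023, 100, replace = TRUE),
  pupil_count = sample(50:500, 100, replace = TRUE)
)

# Save wherever the app expects its input data
write.csv(dummy_data, "dummy_data.csv", row.names = FALSE)
```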

To set up a new app, send the above to the [Statistics Development Team](mailto:[email protected]). If your code is not yet hosted in the dfe-analytical-services area, you can request for the repository to be moved at the same time as sending approvals.

---

@@ -274,7 +274,7 @@ Adding the file name alone will ensure it is ignored no matter where in the proj

This [.gitignore guidance page](https://linuxize.com/post/gitignore-ignoring-files-in-git/) has great guidance on how you can use wildcards to capture all the files you might want to ignore.
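
For example (the file and folder names here are hypothetical), a few common wildcard patterns:

```
# Ignore a named file wherever it appears in the project
unpublished_data.csv

# Ignore every file in a specific folder
data/secure/*

# Ignore all files with a given extension
*.xlsx
```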

If you have any questions on this process, please do contact us at the [Statistics Development Team](mailto:[email protected]).

---

@@ -313,7 +313,7 @@ This checklist outlines the standard procedure for teams who are wishing to prod
Getting set up:

* Create an account on [GitHub](https://github.com/)
* Ask the [Statistics Development Team](mailto:[email protected]) to create a repository for you in the [DfE analytical services area](https://github.com/dfe-analytical-services), providing the name of the dashboard and the GitHub accounts of anyone who will be contributing to the code. You should aim to have two analysts working on the code development and a line manager for review purposes. Further colleagues with review responsibilities (policy colleagues, G6 and above, etc.) can be given access to a demo site, rather than the repository (see guidance for this below in 'Setting up a development/demo dashboard area').
* Clone the repo to your device so you can develop your code. Open the repo page in GitHub, click the green 'Code' button, and copy the URL it provides. Then open RStudio on your device, click File > New Project > Version Control > Git, paste the repository URL you copied from GitHub, give your local project area a name, and choose where to save it (e.g. on your computer's C:\ drive, outside of the OneDrive-synced folders). A console-based alternative is sketched below.
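
If you prefer to work from the R console, the usethis package offers an equivalent (the repository name here is a placeholder):

```r
# A sketch using usethis; the repo name is hypothetical.
# install.packages("usethis")
usethis::create_from_github(
  "dfe-analytical-services/my-dashboard",
  destdir = "C:/repos" # outside of OneDrive-synced folders
)
```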

Once you're set up, there are certain parts of the code you need to update:
@@ -334,7 +334,7 @@ Setting up a development/demo dashboard area:

* While developing your dashboard, you may want a private, demo version to share with policy or senior colleagues for review and feedback. This version must use either published data or dummy data and cannot use unpublished data, since this cannot be uploaded to GitHub until the day of publication (see our [dummy data guidance](#dummy-data) for public dashboards).
* Ensure that, prior to contacting the statistics development team, you have updated all of the URLs and other items listed above.
* You must contact the [Statistics Development Team](mailto:[email protected]) to add the shinyapps.io secret and token to your GitHub repo, thereby enabling the app to be hosted on shinyapps.io. Once this is done you will have a browser link you can use to access the dashboard. We can make this private, such that there is a list of approved viewers who must log in to view the dashboard - please provide the email addresses of any colleagues who you wish to have access to the private version during development.


You must have done the following before a dashboard can be published (the statistics development team must review and approve that these have been met):
@@ -356,7 +356,7 @@ You must have done the following before a dashboard can be published (the statis

We expect all dashboards to follow a minimum set of standards to ensure coherence between our products and a minimum standard of quality for our end users.

These standards are constantly evolving, and all feedback and contributions are welcome; contact us at the [Statistics Development Team](mailto:[email protected]).

---

@@ -588,14 +588,14 @@ See our [Git](../learning-development/git.html#storing-secure-variables) page fo
---

DfE Shiny applications are published via the DfE Analytical Services [shinyapps.io](https://www.shinyapps.io/) account. You need to alert the statistics development team to any new dashboard publication as early in development as possible, and keep us updated on the expected publication date. Update the stats development team on any subsequent data or major functional updates to the dashboard at least a week prior to re-publishing with the update. Deploying to shinyapps.io requires the DfE platform codes to be entered into the repository secrets area of your app; this needs to be done by the stats development team.
Authorisation of a publication should be requested from the relevant G6 or DD and the stats development team (with the former's authorisation e-mail being forwarded on to the [Statistics Development Team](mailto:[email protected])).

If you are publishing a dashboard using already published data, then all of your code and data should be on GitHub. You may have decided to password-protect the dashboard URL, in which case you should make the [Statistics Development Team](mailto:[email protected]) aware of your publication date so that they can remove the password protection at 9:30 on publication day, making the dashboard visible to the public.

If you are publishing a new dashboard for the first time that uses unpublished data, then you should have followed the [guidance on using dummy data](#dummy-data). This means that the unpublished data should not be added to GitHub until the day of publication. You should follow steps 1-9 in the section below on the day before and the day of publication.

<div class="alert alert-dismissible alert-danger">
Be sure to read the guidance carefully: **do not** commit or push unpublished data to a GitHub repo before the day of the publication of the data. If you think you may have done this by accident, contact the [Statistics Development Team](mailto:[email protected]) immediately with the full details of what has been uploaded to GitHub and when.
</div>

---
