
[REVIEW]: BoARIO: A Python package implementing the ARIO indirect economic cost model #6547

Closed
editorialbot opened this issue Mar 27, 2024 · 64 comments
Assignees
Labels
accepted · published (Papers published in JOSS) · Python · recommend-accept (Papers recommended for acceptance in JOSS) · review · TeX · Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences

Comments

@editorialbot
Collaborator

editorialbot commented Mar 27, 2024

Submitting author: @spjuhel (Samuel Juhel)
Repository: https://github.com/spjuhel/BoARIO
Branch with paper.md (empty if default branch): main
Version: v0.5.10
Editor: @crvernon
Reviewers: @mwt, @potterzot
Archive: 10.5281/zenodo.11580697

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/71386aa01a292ecff8bafe273b077701"><img src="https://joss.theoj.org/papers/71386aa01a292ecff8bafe273b077701/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/71386aa01a292ecff8bafe273b077701/status.svg)](https://joss.theoj.org/papers/71386aa01a292ecff8bafe273b077701)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@mwt & @potterzot, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @crvernon know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @mwt

📝 Checklist for @potterzot

@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/jiec.12715 is OK
- 10.1021/es300171x is OK
- 10.2139/ssrn.4101276 is OK
- 10.1093/reep/rez004 is OK
- 10.5334/jors.251 is OK
- 10.1029/2020ef001616 is OK
- 10.5281/zenodo.8383171 is OK
- 10.1038/s41562-020-0896-8 is OK
- 10.1007/s10584-010-9979-2 is OK
- 10.1016/j.jedc.2011.10.001 is OK
- 10.1007/s10584-010-9978-3 is OK
- 10.1111/j.1539-6924.2008.01046.x is OK
- 10.1111/risa.12090 is OK
- 10.1007/s12665-011-1078-9 is OK
- 10.1007/s11069-013-0788-6 is OK
- 10.1111/risa.12300 is OK
- 10.1029/2018ef000839 is OK
- 10.1038/s41893-020-00646-7 is OK
- 10.1080/19475705.2018.1489312 is OK
- 10.31223/x5qd6b is OK
- 10.1038/s41558-018-0173-2 is OK
- 10.1088/1748-9326/ab3306 is OK
- 10.1093/bioinformatics/bts480 is OK
- 10.2139/ssrn.3285818 is OK

MISSING DOIs

- No DOI given, and none found for title: OECD Inter-Country Input-Output Database
- No DOI given, and none found for title: NumPy

INVALID DOIs

- None

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.90  T=0.14 s (869.9 files/s, 280925.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
HTML                            30           5288             94          18663
Python                          19            866           1263           4383
SVG                              7              0              0           1718
CSS                             11            237             86           1183
JavaScript                      12            161            246           1026
TeX                              4             51              4            675
reStructuredText                21            609            646            657
Markdown                         5             48              0            196
YAML                             5             11             20             99
TOML                             2             11              2             88
JSON                             1              0              0             36
DOS Batch                        1              8              1             26
make                             1              5              7             16
-------------------------------------------------------------------------------
SUM:                           119           7295           2369          28766
-------------------------------------------------------------------------------

Commit count by author:

   391	Samuel Juhel
     7	sjuhel
     2	Alessio Ciullo

@editorialbot
Collaborator Author

Paper file info:

📄 Wordcount for paper.md is 1111

✅ The paper includes a Statement of need section

@editorialbot
Collaborator Author

License info:

🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)

@mwt

mwt commented Mar 27, 2024

Review checklist for @mwt

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/spjuhel/BoARIO?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@spjuhel) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@potterzot

potterzot commented Mar 27, 2024

Review checklist for @potterzot

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/spjuhel/BoARIO?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@spjuhel) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@crvernon

👋 @spjuhel, @mwt, and @potterzot - This is the review thread for the paper. All of our communications will happen here from now on.

Please read the "Reviewer instructions & questions" in the first comment above.

Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention #6547 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4–6 weeks, but please make a start well ahead of this: JOSS reviews are by their nature iterative, and any early feedback you can provide to the author will be very helpful in meeting this schedule.

@crvernon

crvernon commented Apr 4, 2024

👋 @spjuhel, @mwt, and @potterzot - Just checking in to see how the review is going. Let me know if you have any questions!

@mwt

mwt commented Apr 8, 2024

@spjuhel Hi, I am going through this right now and have identified some issues.

  1. I do not think that the paper explains the state of the field. That is, it does not outline what other software, if any, exists to solve this problem. Otherwise, I think the paper is good.

  2. The documentation is incomplete. For example, EventKapitalRebuild.from_scalar_regions_sectors(), which is used in the example, is undocumented outside of the automatically generated api reference. It seems to be similar to EventKapitalRebuild.from_series(), but it is also not a drop-in replacement.

  3. Some documentation appears to be wrong or confusing. For example, the description of impact_sectoral_distrib says

    A vector of equal size to the list of sectors affected, stating the share of the impact each industry should receive. Defaults to None.

    but the input given in the example is the string "gdp", which is not a share, and a list of actual shares like [0.5, 0.5] results in a KeyError.

For the second point, I think it would be ideal if the quickstart guide contained all of the information needed to understand and execute the quickstart example. This means we should get a rundown of the events API in the quickstart guide, with just enough information to do the example. The rest of the docs can go into more detail about the options and alternative ways to specify events.

Smaller points:

  • In the quickstart example, it is not necessary to import pandas for the plots because it is already loaded inside your package.
  • I believe that the sector key manufactoring is supposed to be manufacturing, but I may be wrong.
  • Automatically generated docs are not formatted correctly for arguments. It would be good if the function arguments were nicely formatted instead of running together. This "args" section is also not being detected as the same as the "parameters" section, and so the inputs are enumerated twice. There are multiple ways to do this, but the following format has always worked for me:
    Parameters
    ----------
    n : int
        Sample size.
    k : int
        Number of moments.
    alpha : float
        Significance level.
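
For context, here is how that snippet might sit inside a complete numpydoc-style docstring, which Sphinx (with the napoleon extension) parses into a single Parameters section; the function itself is a made-up placeholder, not part of BoARIO:

```python
def estimate(n, k, alpha):
    """Toy statistic from a sample (illustrative placeholder only).

    Parameters
    ----------
    n : int
        Sample size.
    k : int
        Number of moments.
    alpha : float
        Significance level.

    Returns
    -------
    float
        A toy statistic computed from the inputs.
    """
    return n * k * alpha
```

Using a single `Parameters` section like this (rather than a separate "Args" block) avoids the duplicated-inputs problem in the rendered docs.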
    

@spjuhel

spjuhel commented Apr 15, 2024

@mwt Hi, thank you a lot for your remarks,

Indeed, the documentation suffered from several problems.

I did some redesigning, and it should be much better now (both the API reference and the user guide). There was no huge change to the user guide, just some restructuring. The rework on the API focused on the event module to bring more clarity, and I also switched to a less cluttered, more tree-oriented documentation layout.

In parallel, I should have addressed all your specific points regarding the documentation. See below for more details if you want.

I will push these as a minor patch on the main branch today or tomorrow, alongside an addition in the paper.md regarding the state of the field. Am I correct in thinking this should go into the Statement of Need section?


  2. The documentation is incomplete. For example, EventKapitalRebuild.from_scalar_regions_sectors(), which is used in the example, is undocumented outside of the automatically generated api reference. It seems to be similar to EventKapitalRebuild.from_series(), but it is also not a drop-in replacement.

I think this was due to the function being defined only on the main Event class. The documentation now shows in the subclasses as well, and I have added a lot of information on each subclass's specificities, notably around the differences in instantiation. Hopefully this is clearer now.

  3. Some documentation appears to be wrong or confusing. For example, the description of impact_sectoral_distrib says

A vector of equal size to the list of sectors affected, stating the share of the impact each industry should receive. Defaults to None.

but the input given in the example is the string "gdp", which is not a share, and a list of actual shares like [0.5, 0.5] results in a KeyError.

I fixed the documentation. I did not get a KeyError on my side; it is possible this was fixed at some point, but I'm unsure... Could you share a minimal example and the version used?

  • In the quickstart example, it is not necessary to import pandas for the plots because it is already loaded inside your package.

Yes indeed! I removed this.

  • I believe that the sector key manufactoring is supposed to be manufacturing, but I may be wrong.

Sadly this is a typo from the pymrio package, I will probably suggest a PR on their repo at some point.

  • Automatically generated docs are not formatted correctly for arguments. It would be good if the function arguments were nicely formatted instead of running together. This "args" section is also not being detected as the same as the "parameters" section, and so the inputs are enumerated twice.

Yes, I think this came from an automated docstring generator I tested at some point; I did not realize it produced a different format. This should be fixed in the minor patch.

@spjuhel

spjuhel commented Apr 16, 2024

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

👋 @spjuhel, @mwt, and @potterzot - Just checking in to see how the review is going. Could you provide a brief update to your status here in this thread?

@mwt

mwt commented Apr 26, 2024

@spjuhel, this is a big improvement. I really like the new documentation.

I was running version 0.5.7 and am now on 0.5.9. I no longer get a KeyError, but the function still fails to converge when I use numerical shares instead of "gdp". Perhaps this was fixed in a version that isn't yet on PyPI? Here is my example. For me, there are NaNs for every step after step 1.

# import pymrio for the test MRIO
import pymrio

# import the different classes
from boario.simulation import Simulation  # Simulation wraps the model
from boario.extended_models import ARIOPsiModel  # The core of the model

from boario.event import EventKapitalRebuild  # A class defining a shock on capital

# Load the IOSystem from pymrio
mrio = pymrio.load_test().calc_all()

# Instantiate the model and the simulation
model = ARIOPsiModel(mrio)
sim = Simulation(model, n_temporal_units_to_sim=730)

# Instantiate an event.
ev = EventKapitalRebuild.from_scalar_regions_sectors(
  impact=500000,
  regions=["reg1"],
  sectors=["manufactoring", "mining"],
  impact_sectoral_distrib = [0.5, 0.5],
  rebuilding_sectors={"construction": 0.55,"manufactoring": 0.45},
  rebuilding_factor=1.0,
  rebuild_tau=90,
)

# Add the event to the simulation
sim.add_event(ev)

# Launch the simulation
sim.loop(progress=False)

# You should be able to generate a dataframe of
# the production with the following line
df = sim.production_realised
# Normalize production to its initial level
df = df / df.loc[0]

df.loc[:, ("reg1", slice(None))].plot()

I think you addressed everything else. The "See Also" links in the function documentation that lead to tutorials is a nice touch.

I ran your tests. When I use test_events.py, I get the following error:

test_events.py::test_EventKapitalRebuild_incorrect_impact[empty_np]
 boario/event.py:497: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
    if impact <= 0:

I think you should raise an error in this case? That would cause the test to fail. Do you know why the impact has size zero in this test case?
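
For what it's worth, a guard along these lines might catch the array case before the ambiguous comparison; this is only a sketch (`validate_scalar_impact` is an invented name, not part of boario's API):

```python
import numpy as np

def validate_scalar_impact(impact):
    # Invented helper sketching the early check suggested above;
    # not boario's actual API.
    if isinstance(impact, np.ndarray):
        # An ndarray (including an empty one) is not a scalar impact:
        # fail loudly instead of letting `impact <= 0` trigger the
        # ambiguous-truth-value DeprecationWarning.
        raise TypeError(
            f"expected a scalar impact, got an array of size {impact.size}"
        )
    if impact <= 0:
        raise ValueError(f"impact must be positive, got {impact}")
    return float(impact)
```

With such a guard, the empty_np case would raise a clear TypeError at the entry point rather than a warning deep inside the comparison.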

@mwt

mwt commented Apr 26, 2024

@crvernon said:

Just checking in to see how the review is going. Could you provide a brief update to your status here in this thread?

I'm debugging. My current thought is that the functions were designed to work with a particular set of inputs and are very well-tested with those inputs. I'm trying to understand if there are other inputs and edge cases where they fail.

@spjuhel

spjuhel commented Apr 29, 2024

@mwt Thanks for noticing these issues,

For the example, I realize that the shock is just too great on the mining sector with this setup, which makes the model "crash"; it should at least raise a proper warning, though. And a functioning example would be more appropriate.

I think you should raise an error in this case? That would cause the test to fail.
Do you know why the impact has size zero in this test case?

I realize the test/code is not very well designed. I don't have a clear method for instantiating from a NumPy array, and in this test case (empty_np, where the input is an empty NumPy array) the input goes to from_scalar_regions_sectors(), which assumes the impact is a scalar, hence the warning. The test does raise an error afterwards, as it should, but that could indeed be made more proper and clearer.

I'm trying to understand if there are other inputs and edge cases where it fails.

That would be so helpful. I did look out for these, but testing is not my forte and there's a lot of room for improvement here.


I actually plan (long term) to highly simplify the event module, such that:

  • event instantiation will be done via event.from_x(impact, event_type, ...) (module functions instead of class methods), which will call the correct class instantiation method.
  • all the class attributes that check the validity will be removed from the module and checks will be done in the simulation context instead. At the moment, you cannot instantiate events without first instantiating a model and a simulation, which sets those class attributes. Looking back, this was a terrible idea...

But these require redesigning technical parts of all modules, so I'm not sure whether they should be considered in this review?
(Any remarks/suggestions on this are most welcome!!)
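
If it helps the discussion, the planned module-level dispatch could look something like the following sketch; all names here (the registry, the event-kind strings, the stand-in classes) are illustrative assumptions, not boario's current or promised API:

```python
# Stand-in event classes; in boario these would be the real Event subclasses.
class EventKapitalRebuild:
    def __init__(self, impact, **kwargs):
        self.impact = impact
        self.params = kwargs

class EventKapitalRecover:
    def __init__(self, impact, **kwargs):
        self.impact = impact
        self.params = kwargs

# Registry mapping an event kind to its class. Validity checks would move
# to the Simulation context, so events can be built without a model first.
EVENT_CLASSES = {
    "kapital_rebuild": EventKapitalRebuild,
    "kapital_recover": EventKapitalRecover,
}

def from_scalar(impact, event_type, **kwargs):
    """Module-level factory: pick the right event class for `event_type`."""
    try:
        cls = EVENT_CLASSES[event_type]
    except KeyError:
        raise ValueError(f"unknown event type: {event_type!r}") from None
    return cls(impact, **kwargs)

# Usage under these assumptions:
ev = from_scalar(500_000, "kapital_rebuild", regions=["reg1"])
```

One nicety of the registry approach is that an unknown event type fails with a single clear ValueError, instead of the user having to know which class method to call.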

@potterzot

👋 @spjuhel, @mwt, and @potterzot - Just checking in to see how the review is going. Could you provide a brief update to your status here in this thread?

Hello, I haven't gotten far beyond cloning and installing the package but should have time this Friday (May 3).

@crvernon

@potterzot just following up to see when you will have time to start your review? Thanks!

@potterzot

@crvernon my apologies, I will have it posted by Tuesday evening.

@crvernon

Thanks @potterzot!

@potterzot

I was able to install and reproduce the example models as well as successfully run tests. No issues there. I do have several suggestions on the documentation and for the paper. I've also made a PR with some small grammar fixes. I've checked off all of the items in the review checklist, but I think the package would benefit from addressing some of the comments below before publication. Most of these are very minor.

This is beyond the scope of the current paper, but given the goal of the project stated in both the README and the paper, it would be useful to have more in-depth case studies that demonstrate both how to conduct various real-world analyses of impacts and how to modify the base assumptions and data used in the model. I think this would substantially help with the adoption of this software for impact modeling by other researchers and impact modelers. A few well-designed examples would highlight both how users should think about measuring the impacts of different kinds of shocks and how to modify the model parameters to reflect changes. In addition, an example of how to take an IO table and create a pymrio object from it would allow a user who doesn't have access to a prebuilt IO model, but does have access to their country's make and use tables, to build their own underlying model. I know I've wanted something like this to exist for quite some time. In the US there are some efforts to provide an open-source IO model to "democratize" the use of impact modeling in policy advocacy (for example, see Tapestry).

There are some integrations and events that are referenced in the documentation and paper but that aren't currently working. For example, one of the three possible impact events (EventArbitraryProd) is currently not available because of a critical bug. The paper implies a fully working model with extensive documentation and case studies, but there aren't case studies or multiple types of shocks described or linked in either the documentation or the paper. The paper also mentions integration with various other modeling platforms and workflows, and phrasing like "shock on demand, shock on production, shock on both, shock involving reconstruction or not, etc." suggests shock possibilities that are not described in the documentation. Mostly this could be addressed by toning down the promise of boario a bit. I make a few specific suggestions in the comments below.

I hope the following comments are helpful. Some of the comments are very minor. Probably they do not all need to be addressed, but I think they all would improve the overall sense of "completeness" of the software and the documentation.

Documentation comments

  • README: The Hallegatte 2008 citation is not linked as the other citations are.
  • README: It might be nice for the new user who is not familiar with these specific IO models to link to EXIOBASE3 and EORA26.
  • README: As someone familiar with several IO models in the United States, but not with pymrio or the models mentioned in the README, would it be possible to add some more information about what kind of object a pymrio object is? I've made a small edit in the linked PR. There is also great documentation (example) that describes the model, but it is not linked in the README. I think adding a link to this kind of documentation would help a new user who is familiar with IO models generally, but not with this project specifically, understand some of the capabilities of this model.
  • Quickstart: "Some attributes of the test MRIOT are not computed. Calling calc_all() ensures all required tables are present in the IOSystem object." It would be nice to know a little more specifically what calc_all() is doing here. If the MRIOT is incomplete, is calc_all() using multiple imputation to fill missing values in the IO table? I understand that I can look at the function's API reference, but describing what it is doing in the quickstart would also be helpful.
  • Model Parameters: There are some differences in how citations are formatted in the different documents. In the README, it's "(Hallegatte 2013)". In the tutorial documentation, it's "[Hal13]". Then in the model description it is back to "Hallegatte 2014" and does not link to a reference. I would make these all follow the same citation style and link them all to the actual reference if possible.

Paper comments

  • Summary: "The impacts of economic shocks (caused by natural or technological disasters for instance) often extend far beyond the cost of their local, direct consequences, as the economic perturbations they cause propagate along supply chains." This seems limiting, since perturbations propagate along both supply and demand vectors, and have consequences in environmental and social well-being dimensions.
  • Statement of Need: "there were used for" should be "they were used for"
  • Statement of Need: "accompanied by the extensive online documentation (where a more in depth description is available)" I think there is the potential for this to be true, but currently I think this is a little overstated. I would remove "extensive" for the time being until case studies and explanations of why and how different impact estimates may be conducted have been added.
  • Statement of Need: "Other notable ongoing projects, are" I would either expand on the last two of these or remove them. Are there working papers or code repositories available for these projects? As stated, they are ambiguous.
  • Status: "Although its current version is fully operational, further improvements, notably the implementation of additional economic mechanisms or variations of existing ones are already planned." Given that one of the event impacts is not operational I would restate this or remove the part preceding "further improvements ..."
  • Status: "Integration tests can be run using pytest" should have a period at the end.

@spjuhel

spjuhel commented May 24, 2024

Thanks a lot for all these precious comments,

it would be useful to have more in-depth case studies that demonstrate both how to conduct various real-world analyses of impacts and how to modify the base assumptions and data used in the model. I think this would substantially help with the adoption of this software for impact modeling by other researchers and impact modelers. A few well-designed examples would highlight both how users should think about measuring the impacts of different kinds of shocks and how to modify the model parameters to reflect changes.

I totally agree. I intend to extend the documentation with more detailed examples based on my papers, but these are still in the reviewing process, or even unfinished. Another thing I would like to do is to present examples reproducing previous studies / well-known disasters from the indirect impact literature (such as Hurricane Katrina or the 2011 floods in Thailand). This, however, requires a substantial amount of work, which I'm not able to take on at the moment.

> In addition, an example of how to take an IO table and create a pymrio object from it would allow a user who doesn't have access to a prebuilt IO model but does have access to their country's make and use tables to build their own underlying model.

I agree this would be a nice addition, but I feel this is something that belongs in the pymrio documentation (which already covers it to some extent).
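To make concrete what such an example would cover, here is a toy sketch (not pymrio's actual API, and all numbers are hypothetical) of the basic Leontief accounting that any IO table boils down to: a transactions matrix Z and total outputs x give the technical coefficients A, and total output satisfies x = (I - A)^-1 f for final demand f.

```python
# Toy illustration (NOT pymrio's API): Leontief accounting for a
# hypothetical 2-sector economy described by an IO table.

def technical_coefficients(Z, x):
    """A[i][j] = inter-industry flow Z[i][j] divided by sector j's total output."""
    n = len(x)
    return [[Z[i][j] / x[j] for j in range(n)] for i in range(n)]

def leontief_output(A, f):
    """Solve x = (I - A)^-1 f for a 2-sector economy via the closed-form inverse."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    return [(d * f[0] - b * f[1]) / det,
            (-c * f[0] + a * f[1]) / det]

# Hypothetical transactions matrix Z and total outputs x.
Z = [[50.0, 40.0], [30.0, 20.0]]
x = [200.0, 100.0]
# Final demand is whatever output is not consumed by other sectors.
f = [x[i] - sum(Z[i]) for i in range(2)]   # [110.0, 50.0]

A = technical_coefficients(Z, x)
print(leontief_output(A, f))               # recovers x (up to float rounding)
```

Feeding the original final demand back through the Leontief inverse recovers the original total outputs, which is a useful consistency check before perturbing f to study a demand shock.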

> There are some integrations and events that are referenced in the documentation and paper but that aren't currently working. For example, one of the three possible impact events (EventArbitraryProd) is currently not available because of a critical bug. The paper implies a fully working model that has extensive documentation and case studies, but there aren't case studies or multiple types of shocks described or linked in the documentation or the paper. The paper also claims integration with various other modeling platforms and workflows, and language such as "shock on demand, shock on production, shock on both, shock involving reconstruction or not, etc." suggests shock possibilities that are not described in the documentation. Mostly this could be addressed by toning down the promise of BoARIO a bit. I make a few specific suggestions in the comments below.

Yes, I might have been too ambitious when writing some parts of the paper. I discovered only during this review that EventArbitraryProd was not working properly. I am working on a new major version which will include a fix for this, but I also took this opportunity to completely rework the way events are handled (as mentioned before). I have rephrased the corresponding part to make it more consistent with the current implementation.

I just pushed these changes to the develop branch, but I still want to look into what @mwt raised and recheck that everything works properly in the examples before merging to main and publishing another release.
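For readers unfamiliar with what these shock types entail, the following is a minimal toy sketch, not BoARIO's actual implementation, of the capital-destruction-and-reconstruction dynamic that ARIO-type models simulate: a shock destroys a share of productive capital, production capacity drops proportionally, and capacity recovers as reconstruction progressively closes the damage gap. The parameter names are illustrative only.

```python
# Toy sketch of ARIO-style recovery dynamics -- NOT BoARIO's actual code.
# A shock destroys a share of productive capital; each time step, a fixed
# fraction of the remaining damage is rebuilt, and production capacity
# tracks the surviving capital share.

def simulate_recovery(damage_share, rebuild_rate, steps):
    """Return the production-capacity path (1.0 = pre-shock level)."""
    remaining_damage = damage_share
    path = []
    for _ in range(steps):
        capacity = 1.0 - remaining_damage         # capacity ∝ surviving capital
        path.append(capacity)
        remaining_damage *= (1.0 - rebuild_rate)  # geometric reconstruction
    return path

path = simulate_recovery(damage_share=0.3, rebuild_rate=0.05, steps=200)
print(round(path[0], 3), round(path[-1], 3))  # prints: 0.7 1.0
```

A demand-side shock would instead perturb final demand while leaving capacity intact; combining both, or adding an explicit reconstruction demand term, gives the richer event types the paper alludes to.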

@crvernon

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/jiec.12715 is OK
- 10.1021/es300171x is OK
- 10.2139/ssrn.4101276 is OK
- 10.1093/reep/rez004 is OK
- 10.5334/jors.251 is OK
- 10.1029/2020ef001616 is OK
- 10.5281/zenodo.8383171 is OK
- 10.1038/s41562-020-0896-8 is OK
- 10.1007/s10584-010-9979-2 is OK
- 10.1016/j.jedc.2011.10.001 is OK
- 10.1007/s10584-010-9978-3 is OK
- 10.1111/j.1539-6924.2008.01046.x is OK
- 10.1111/risa.12090 is OK
- 10.1007/s12665-011-1078-9 is OK
- 10.1007/s11069-013-0788-6 is OK
- 10.1111/risa.12300 is OK
- 10.1029/2018ef000839 is OK
- 10.1038/s41893-020-00646-7 is OK
- 10.1080/19475705.2018.1489312 is OK
- 10.31223/x5qd6b is OK
- 10.1038/s41558-018-0173-2 is OK
- 10.1088/1748-9326/ab3306 is OK
- 10.1093/bioinformatics/bts480 is OK
- 10.2139/ssrn.3285818 is OK
- 10.1080/09535314.2016.1232701 is OK
- 10.1038/s41893-020-00649-4 is OK
- 10.1016/j.jedc.2017.08.001 is OK
- 10.1038/s41893-020-0523-8 is OK
- 10.1007/s10584-018-2293-0 is OK

MISSING DOIs

- No DOI given, and none found for title: OECD Inter-Country Input-Output Database
- No DOI given, and none found for title: NumPy

INVALID DOIs

- None

@crvernon

crvernon commented Jun 10, 2024

Thanks @spjuhel. Please correct the following errors in the paper:

  • LINE 92: "french" should be "French"
  • LINE 108: something is not correct with this citation (e.g., nil)
  • LINE 112: "covid-19" should be "COVID-19", capitalization can be maintained by placing curly brackets around the letters or words you want to keep capitalized.
  • LINE 115: "katrina" should be "Katrina"
  • LINE 122: "copenhagen" should be "Copenhagen"
  • LINE 149: something is not correct with this citation (e.g., nil)
  • LINE 153: "mumbai" should be "Mumbai"
  • LINE 173: something is not correct with this citation (e.g., nil)
  • LINE 175: "bohai sea" should be "Bohai Sea" and "china" should be "China"
  • LINE 179: "california" should be "California"
  • LINE 182: "wenchuan" should be "Wenchuan"

Please make sure all of the references are formatted correctly and no "nil" values exists. Thank you!

@spjuhel

spjuhel commented Jun 10, 2024

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@spjuhel

spjuhel commented Jun 11, 2024

I forgot to mention: this latest version should include the corrections related to your comments.

(And thanks a lot for your detailed reviewing effort)

@crvernon

👋 @spjuhel - we are almost there! Next is just setting up the archive for your new release.

We want to make sure the archival has the correct metadata that JOSS requires. This includes a title that matches the paper title and a correct author list.

So here is what we have left to do:

  • Conduct a GitHub release of the current reviewed version of the software and archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository). Please ensure that the software archive uses the same license as the license you have posted on GitHub.

  • Check the archival deposit (e.g., in Zenodo) to ensure it has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.

  • Please respond with the DOI of the archived version here

I can then move forward with accepting the submission.

@spjuhel

spjuhel commented Jun 11, 2024

I think this should be good: https://doi.org/10.5281/zenodo.11580697

@crvernon

@editorialbot set v0.5.10 as version

@editorialbot
Collaborator Author

Done! version is now v0.5.10

@crvernon

@editorialbot set 10.5281/zenodo.11580697 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.11580697

@crvernon

crvernon commented Jun 11, 2024

🔍 checking out the following:

  • reviewer checklists are completed or addressed
  • version set
  • archive set
  • archive names (including order) and title in archive matches those specified in the paper
  • archive uses the same license as the repo and is OSI approved as open source
  • archive DOI and version match or redirect to those set by editor in review thread
  • paper is error free - grammar and typos
  • paper is error free - test links in the paper and bib
  • paper is error free - refs preserve capitalization where necessary
  • paper is error free - no invalid refs without justification

@crvernon

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/jiec.12715 is OK
- 10.1021/es300171x is OK
- 10.2139/ssrn.4101276 is OK
- 10.1093/reep/rez004 is OK
- 10.5334/jors.251 is OK
- 10.1029/2020ef001616 is OK
- 10.5281/zenodo.8383171 is OK
- 10.1038/s41562-020-0896-8 is OK
- 10.1007/s10584-010-9979-2 is OK
- 10.1016/j.jedc.2011.10.001 is OK
- 10.1007/s10584-010-9978-3 is OK
- 10.1111/j.1539-6924.2008.01046.x is OK
- 10.1111/risa.12090 is OK
- 10.1007/s12665-011-1078-9 is OK
- 10.1007/s11069-013-0788-6 is OK
- 10.1111/risa.12300 is OK
- 10.1029/2018ef000839 is OK
- 10.1038/s41893-020-00646-7 is OK
- 10.1080/19475705.2018.1489312 is OK
- 10.31223/x5qd6b is OK
- 10.1038/s41558-018-0173-2 is OK
- 10.1088/1748-9326/ab3306 is OK
- 10.1093/bioinformatics/bts480 is OK
- 10.2139/ssrn.3285818 is OK
- 10.1080/09535314.2016.1232701 is OK
- 10.1038/s41893-020-00649-4 is OK
- 10.1016/j.jedc.2017.08.001 is OK
- 10.1038/s41893-020-0523-8 is OK
- 10.1007/s10584-018-2293-0 is OK

MISSING DOIs

- No DOI given, and none found for title: OECD Inter-Country Input-Output Database
- No DOI given, and none found for title: NumPy

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5488, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label on Jun 11, 2024
@crvernon

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a "Cite this repository" menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Juhel
  given-names: Samuel
  orcid: "https://orcid.org/0000-0001-8801-3890"
doi: 10.5281/zenodo.11580697
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Juhel
    given-names: Samuel
    orcid: "https://orcid.org/0000-0001-8801-3890"
  date-published: 2024-06-11
  doi: 10.21105/joss.06547
  issn: 2475-9066
  issue: 98
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6547
  title: "BoARIO: A Python package implementing the ARIO indirect
    economic cost model"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06547"
  volume: 9
title: "BoARIO: A Python package implementing the ARIO indirect economic
  cost model"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.06547 joss-papers#5489
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.06547
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published labels on Jun 11, 2024
@crvernon

🥳 Congratulations on your new publication @spjuhel! Many thanks to @mwt and @potterzot for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06547/status.svg)](https://doi.org/10.21105/joss.06547)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.06547">
  <img src="https://joss.theoj.org/papers/10.21105/joss.06547/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.06547/status.svg
   :target: https://doi.org/10.21105/joss.06547

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
