Contributing

If you'd like to contribute to PyBOP, please have a look at the guidelines below.

Developer installation

To install PyBOP for development, including the plotting dependencies, use the [all] and [dev] extras as demonstrated below:

For zsh:

pip install -e '.[all,dev]'

For bash:

pip install -e .[all,dev]

Pre-commit checks

Before you commit any code, please perform the following checks using Nox:

Installing and using pre-commit

PyBOP uses a set of pre-commit hooks and the pre-commit bot to format and prettify the codebase. The hooks can be installed locally using:

nox -s pre-commit

Alternatively, without nox:

pip install pre-commit
pre-commit install

This will run the checks every time a commit is created locally. The checks only run on the files modified by that commit, but they can be triggered for all files using:

pre-commit run --all-files

If you would like to skip the failing checks and push the code for further discussion, use the --no-verify option with git commit.
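For example, to commit work-in-progress without running the hooks:

git commit -m "WIP: raise failing check for discussion" --no-verify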

Workflow

We use Git and GitHub to coordinate our work. When making any kind of update, we try to follow the procedure below.

A. Before you begin

  1. Create an issue where new proposals can be discussed before any coding is done.
  2. Create a branch of this repo (ideally on your own fork), where all changes will be made.
  3. Download the source code onto your local system, by cloning the repository (or your fork of the repository).
  4. Install PyBOP with the developer options.
  5. Test that your installation works: run nox -s unit or pytest --unit -v.

You now have everything you need to start making changes!

B. Writing your code

  1. PyBOP is developed in Python, and makes heavy use of NumPy (see also NumPy for MATLAB users and Python for R users).
  2. Make sure to follow our coding style guidelines.
  3. Commit your changes to your branch with useful, descriptive commit messages: remember that these are publicly visible and should still make sense a few months from now. While developing, you can keep using the GitHub issue you're working on as a place for discussion. Refer to your commits when discussing specific lines of code by referencing the SHA-hash in the comment. An example of this looks like: the commit 3e5c1e6 solved the issue...
  4. If you want to add a dependency on another library, or re-use code you found somewhere else, have a look at these guidelines.

C. Merging your changes with PyBOP

  1. Test your code!
  2. If you added a major new feature, perhaps it should be showcased in an example notebook.
  3. If you've added new functionality, please add additional tests to ensure ample code coverage in PyBOP.
  4. When you feel your code is finished, or at least warrants serious discussion, create a pull request (PR) on PyBOP's GitHub page.
  5. Once a PR has been created, it will be reviewed by any member of the community. Changes might be suggested, which you can make by simply adding new commits to the branch. When everything's finished, someone with the right GitHub permissions will merge your changes into the PyBOP main repository.

Finally, if you really, really, really love developing PyBOP, have a look at the current project infrastructure.

Coding style guidelines

PyBOP follows the PEP8 recommendations for coding style. These are very common guidelines, and community tools have been developed to check how well projects implement them.

Ruff

We use ruff to lint and ensure adherence to Python PEP standards. To manually trigger ruff, navigate to the PyBOP directory in a console and type

python -m pip install pre-commit
pre-commit run ruff

ruff is configured inside the file pyproject.toml, allowing us to ignore some errors. If you think a rule should be added or removed, please submit an issue.

When you commit your changes, they will be checked against ruff automatically (see Pre-commit checks). If you are having trouble getting your commit to pass the linting, you can skip linting for single lines (this should only be done as a last resort) by adding a line comment of # noqa: $ruff_rule, where $ruff_rule is replaced with the rule in question. The rule identifiers can be found in the ruff configuration in pyproject.toml or in the failed pre-commit output. It is also possible to skip linting altogether by committing with the --no-verify command-line flag. Please point out any lint skipping in the pull request for reviewers.
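For example, to silence ruff's E741 rule ("ambiguous variable name") on a single line (an illustrative rule choice, not a PyBOP recommendation):

l = 10  # noqa: E741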

Naming

Naming is hard. In general, we aim for descriptive class, method, and argument names. Avoid abbreviations when possible without making names overly long, so mean is better than mu, but a class name like MyClass is fine.

Class names are CamelCase, and start with an upper case letter, for example MyOtherClass. Method and variable names are lower-case, and use underscores for word separation, for example, x or iteration_count.
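For example (an illustrative sketch, not real PyBOP names):

class ParameterSampler:
    def sample_values(self, iteration_count):
        # class name: CamelCase; method and variable names: lower-case with underscores
        return [0.0] * iteration_count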

Dependencies and reusing code

While it's a bad idea for developers to "reinvent the wheel", it's important for users to get a reasonably sized download and an easy install. In addition, external libraries can sometimes cease to be supported, and when they contain bugs it might take a while before fixes become available as automatic downloads to PyBOP users. For these reasons, all dependencies in PyBOP should be thought about carefully and discussed on GitHub.

Direct inclusion of code from other packages is possible, as long as their license permits it and is compatible with ours, but again should be considered carefully and discussed in the group. Snippets from blogs and Stack Overflow can often be included, but must include attribution to the original by commenting with a link in the source code.

Separating dependencies

On the other hand, we do want to compare several tools, generate documentation, and speed up development. For this reason, the dependency structure is split into four parts:

  1. Core PyBOP: A minimal set, including things like NumPy, SciPy, etc. All infrastructure should run against this set of dependencies, as well as any numerical methods we implement ourselves.
  2. Extras: Other inference packages and their dependencies. Methods we don't want to implement ourselves, but do want to provide an interface to can have their dependencies added here.
  3. Documentation generating code: Everything you need to generate and work on the docs. This is managed by the [docs] set of extras.
  4. Development code: Everything you need to do PyBOP development (so all of the above packages, plus ruff and other testing tools). This is managed by the [dev] set of extras.

Only the core PyBOP dependencies are installed by default. The others have to be specified explicitly when running the installation command, for example as shown below.
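For example (the [all] and [dev] extras appear in the developer installation above, and [docs] in point 3; quote the brackets under zsh as noted earlier):

pip install -e .              # core PyBOP only
pip install -e .[docs]        # core plus documentation tools
pip install -e .[all,dev]     # everything needed for development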

Plotly

We use Plotly in PyBOP, but with two caveats:

First, Plotly should only be used in plotting methods, and these should never be called by other PyBOP methods. So users who don't like Plotly will not be forced to use it in any way. Use in notebooks is OK and encouraged.

Second, Plotly should never be imported at the module level, but always inside methods. For example:

def plot_great_things(self, x, y, z):
    go = pybop.PlotlyManager().go  # deferred import of Plotly
    ...

This allows people to (1) use PyBOP without ever importing Plotly and (2) configure Plotly's settings in their scripts, which must be done before e.g. graph_objects is first imported.
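For example, a user script can configure Plotly before PyBOP ever touches it (a minimal sketch using Plotly's standard plotly.io settings; the renderer choice is illustrative):

import plotly.io as pio

# Configure Plotly before any PyBOP plotting method triggers the deferred import
pio.renderers.default = "browser"

import pybop
# ... pybop plotting calls made after this point pick up the configured renderer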

Building documentation

We use Sphinx to build our documentation. A Nox session has been created to reduce the overhead when building the documentation locally. To run this session, type

nox -s docs

This will build the docs using sphinx-autobuild and render them in your browser. Likewise, to test the docs build, the following nox session is available:

nox -s doctests

Testing

All code requires testing. We use the pytest package for our tests. (These tests typically just check that the code runs without error, and so, are more debugging than testing in a strict sense. Nevertheless, they are very useful to have!)

If you have nox installed, to run unit tests, type

nox -s unit

For individual tests, use:

nox -s tests -- tests/unit/test_costs.py::TestCosts::test_costs

which will run the specified test. Alternatively, you can run all tests within a file by removing the trailing ::test_costs from the above command.

Alternatively, to run tests standalone with pytest, use:

pytest --unit -v

To run individual test files with pytest, you can use

pytest tests/unit/path/to/test.py --unit -v

And for individual tests,

pytest tests/unit/path/to/test.py::TestClass::test_name --unit -v

where --unit is a flag to run only unit tests and -v is a flag to display verbose output. Furthermore, to run all the standard tests, type

nox -s tests

Additionally, to run the standard and docs tests, type

nox -s quick

Writing tests

Every new feature should have its own test. To create one, have a look at the tests directory and see if there's a test for a similar method. Copy-pasting is a good way to start.

Next, add some simple (and speedy!) tests of your main features. If these run without exceptions that's a good start! Next, check the output of your methods using any of these functions.
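A minimal sketch of what such a test might look like (compute_mean is a hypothetical stand-in for your feature, and the unit marker is assumed to match the --unit flag; check the existing tests for the markers and fixtures PyBOP actually uses):

import numpy as np
import pytest


def compute_mean(values):
    # Stand-in for the feature under test
    return np.sum(values) / len(values)


class TestMean:
    @pytest.mark.unit
    def test_mean_of_constant_signal(self):
        np.testing.assert_allclose(compute_mean(np.ones(10)), 1.0)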

Debugging

Often, the code you write won't pass the tests straight away, at which stage it will become necessary to debug. The key to successful debugging is to isolate the problem by finding the smallest possible example that causes the bug. In practice, there are a few tricks to help you do this, which we give below. Once you've isolated the issue, it's a good idea to add a unit test that replicates this issue, so that you can easily check whether it's been fixed, and make sure that it's easily picked up if it crops up again. This also means that, if you can't fix the bug yourself, it will be much easier to ask for help (by opening a bug-report issue).

  1. Run individual test scripts instead of the whole test suite:

    pytest tests/unit/path/to/test --unit -v

    You can also run an individual test from a particular script, e.g.

    pytest tests/unit/path/to/test.py::TestClass::test_name --unit -v

    where --unit is a flag to run only unit tests and -v is a flag to display verbose output.

  2. Set breakpoints, either in your IDE or using the Python debugging module. To use the latter, add the following lines where you want to set the breakpoint:

    import ipdb
    
    ipdb.set_trace()

    This will start the Python interactive debugger. If you want to be able to use magic commands from ipython, such as %timeit, then set

    from IPython import embed
    
    embed()
    import ipdb
    
    ipdb.set_trace()

    at the breakpoint instead. Figuring out where to start the debugger is the real challenge. Some good ways to set debugging breakpoints are:

    1. Try-except blocks. Suppose the line do_something_complicated() is raising a ValueError. Then you can put a try-except block around that line as:

      try:
          do_something_complicated()
      except ValueError:
          import ipdb
      
          ipdb.set_trace()

      This will start the debugger at the point where the ValueError was raised, and allow you to investigate further. Sometimes, it is more informative to put the try-except block further up the call stack than exactly where the error is raised.

    2. Warnings. If functions are raising warnings instead of errors, it can be hard to pinpoint where this is coming from. Here, you can use the warnings module to convert warnings to errors:

      import warnings
      
      warnings.simplefilter("error")

      Then you can use a try-except block, as in point 1 above, but with, for example, RuntimeWarning instead of ValueError.
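      Putting these together (a minimal sketch; do_something_complicated stands in for your own code, as in point 1):

      import warnings
      
      warnings.simplefilter("error")
      
      try:
          do_something_complicated()
      except RuntimeWarning:
          import ipdb
      
          ipdb.set_trace()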

  3. To isolate whether a bug is in a model, its Jacobian or its simplified version, you can set the use_jacobian and/or use_simplify attributes of the model to False (they are both True by default for most models).

  4. If a model isn't giving the answer you expect, you can try comparing it to other models. For example, you can investigate parameter limits in which two models should give the same answer by setting some parameters to be small or zero. The StandardOutputComparison class can be used to compare some standard outputs from battery models.

  5. To get more information about what is going on under the hood, and hence understand what is causing the bug, you can set the logging level to DEBUG by adding the following line to your test or script:

    pybop.set_logging_level("DEBUG")

Profiling

Sometimes, a bit of code will take much longer than you expect to run. In this case, you can set

from IPython import embed

embed()
import ipdb

ipdb.set_trace()

as above, and then use some of the profiling tools. In order of increasing detail:

  1. Simple timer. In ipython, the command

    %time command_to_time()
    

    tells you how long the line command_to_time() takes. You can use %timeit instead to run the command several times and obtain more accurate timings.

  2. Simple profiler. Using %prun instead of %time will give a brief profiling report.

  3. Detailed profiler. You can install the detailed profiler snakeviz through pip:

    pip install snakeviz

    and then, in ipython, run

    %load_ext snakeviz
    %snakeviz command_to_time()
    

    This will open a window in your browser with detailed profiling information.

Infrastructure

Installation via pip

Installation of PyBOP and its dependencies is handled via pip through the setuptools build-backend.

Configuration files:

pyproject.toml

Continuous Integration using GitHub Actions

Each change pushed to the PyBOP GitHub repository will trigger the test and benchmark suites to be run, using GitHub Actions.

Tests are run for different operating systems, and for all Python versions officially supported by PyBOP. If you opened a Pull Request, feedback is directly available on the corresponding page. If all tests pass, a green tick will be displayed next to the corresponding test run. If one or more test(s) fail, a red cross will be displayed instead.

Similarly, the benchmark suite is automatically run for the most recently pushed commit. Benchmark results are compared to the results available for the latest commit on the develop branch. Should any significant performance regression be found, a red cross will be displayed next to the benchmark run.

In all cases, more details can be obtained by clicking on a specific run.

Configuration files for the various GitHub Actions workflows can be found in .github/workflows.

Codecov

Code coverage (how much of our code is seen by the (Linux) unit tests) is tested using Codecov; a report is visible at https://codecov.io/gh/pybop-team/PyBOP.

GitHub

GitHub does some magic with particular filenames. In particular:

  1. README.md is rendered as the front page of the repository.
  2. LICENSE is recognised by GitHub as the project's license and linked to automatically.
  3. This file, CONTRIBUTING.md, is recognised as the contribution guidelines and is linked to automatically when new issues or pull requests are created.

Acknowledgements

This CONTRIBUTING.md file, along with large sections of the code infrastructure, was copied from the excellent Pints and PyBaMM repositories.