ci.rst

Continuous integration

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

By integrating regularly, you can detect errors quickly, and locate them more easily.

—ThoughtWorks

honeybadgermpc currently uses Travis CI to perform various checks whenever new code is to be merged into a shared branch of the shared repository initc3/HoneyBadgerMPC. The file .travis.yml at the root of the project instructs Travis CI on what to do whenever a build is triggered.

.travis.yml

Whenever a build is triggered, three checks are currently performed:

  1. tests,
  2. code quality via flake8, and
  3. documentation generation.

Each of these checks corresponds to a row in the build matrix:

matrix:
  include:
    - env: BUILD=tests
    - env: BUILD=flake8
    - env: BUILD=docs

Depending on the value of the BUILD variable, the various steps (e.g. install, script) of the build lifecycle may differ.
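As a hypothetical sketch of how this dispatching can look (the actual .travis.yml delegates to scripts under .ci/, and the script names here are assumptions), each lifecycle step simply hands off to a script that branches on $BUILD:

```yaml
# Hypothetical sketch -- script names under .ci/ are assumptions.
install: .ci/travis-install.sh          # installs the requirements selected by $BUILD
script: .ci/travis-script.sh            # runs the tests, flake8, or docs check per $BUILD
after_success: .ci/travis-after-success.sh  # e.g. uploads the coverage report
```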

Using Python 3.7 on Travis CI

In order to use Python 3.7 the following workaround is used in .travis.yml:

os: linux
dist: xenial
language: python
python: 3.7
sudo: true

See the currently open issue on this matter: travis-ci/travis-ci#9815

Using Docker on Travis CI

In order to use Docker, the following settings are needed in .travis.yml:

sudo: true

services:
  - docker

See :ref:`docker-in-travis` below for more information on how we use docker and docker-compose on Travis CI to run the tests for honeybadgermpc.

Shell scripts under .ci/

In order to simplify the .travis.yml file, shell scripts are invoked for the install, script, and after_success steps. These scripts are located under the .ci directory and should be edited as needed, but with care, since it is important that the results of the checks remain reliable.
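As a minimal sketch of how such a script could branch on the BUILD variable (the function name and the exact commands are assumptions, not the repository's actual scripts):

```shell
#!/bin/bash
# Hypothetical sketch of dispatching on $BUILD inside a .ci/ script;
# the actual scripts in the repository may differ.
build_command() {
  # Print the command that the CI step would run for a given BUILD value.
  case "$1" in
    tests)  echo "docker-compose -f .travis.compose.yml run tests" ;;
    docs)   echo "docker-compose -f .travis.compose.yml run docs" ;;
    flake8) echo "flake8 ." ;;
    *)      echo "unknown BUILD value: $1" >&2; return 1 ;;
  esac
}
```

A step script would then execute `$(build_command "$BUILD")`, so a typo in BUILD fails the job loudly instead of silently passing.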

.travis.compose.yml

For the docs and tests build jobs (i.e.: BUILD=docs and BUILD=tests matrix rows), docker-compose is used. The Dockerfile used is located under the .ci/ directory whereas the docker-compose file is under the root of the project and is named .travis.compose.yml. Both files are similar to the ones used for development. One key difference is that only the docs or tests requirements are installed, depending on the value of the BUILD environment variable.
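A hypothetical sketch of what such a compose file could look like (the service name, paths, and build-arg plumbing here are assumptions, not the project's actual file):

```yaml
# Hypothetical sketch of a .travis.compose.yml -- names and paths are assumptions.
version: '3'
services:
  honeybadgermpc:
    build:
      context: .
      dockerfile: .ci/Dockerfile
      args:
        BUILD: ${BUILD}   # install only the docs or tests requirements
```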

Note

Some work could perhaps be done to limit the duplication across the two Dockerfiles, for instance by using a base Dockerfile, but this may also complicate things, so for now some duplication is tolerated.

Code coverage

Code coverage is used to check which code is executed when the tests are run. Making sure that code is exercised by the tests helps detect errors.

In the tests build job on Travis CI a code coverage report is generated at the end of the script step, with the --cov-report=xml option:

# .ci/travis-install.sh
$BASE_CMD pytest -v --cov --cov-report=term-missing --cov-report=xml

If the test run was successful the report is uploaded to codecov in the after_success step:

# .travis.yml
after_success: .ci/travis-after-success.sh

Important

It is important to note that the coverage measurement happens in a Docker container, whereas the report upload happens outside the container. There are different ways to handle this situation, and the current approach is a variation of what is outlined in Codecov Outside Docker.
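The general shape of that approach can be sketched as follows; the container name, the path inside the container, and the exact commands are assumptions for illustration, not the project's actual scripts:

```shell
# Hypothetical sketch of the "Codecov Outside Docker" pattern -- names are assumptions.
# 1. Run the tests (and coverage measurement) inside a named container.
docker-compose -f .travis.compose.yml run --name hbmpc-tests tests
# 2. Copy the XML coverage report out of the stopped container to the host.
docker cp hbmpc-tests:/usr/src/HoneyBadgerMPC/coverage.xml .
# 3. Upload from the host, where the codecov bash uploader runs.
bash <(curl -s https://codecov.io/bash)
```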

Configuration

Configuring codecov is done via the .codecov.yml file in the project root. Consult the codecov documentation for information on how to work with the .codecov.yml configuration file. The most relevant sections are About the Codecov yaml and Coverage Configuration.
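As a hedged sketch of what such a configuration can contain (the threshold values here are illustrative assumptions, not the project's actual settings):

```yaml
# Hypothetical sketch of a .codecov.yml -- values are assumptions, not the project's.
coverage:
  status:
    project:
      default:
        target: auto     # compare against the base commit's coverage
        threshold: 1%    # tolerate a drop of at most 1% before failing the check
```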

GitHub integration

A pull request may fail the code coverage check, and if so, the pull request will be marked as failing on GitHub. The GitHub integration may require a team bot to be set up in order to be fully operational. See issue initc3#66 for more details.

Recommended readings