
Document and assess last year's Travis and Deploy strategies. #51

Open
bhgrant8 opened this issue Apr 28, 2018 · 19 comments

Comments

@bhgrant8
Member

I think it may be valuable, moving forward, to take some time and document what we know about how each project was integrated with Travis.

Things to look at:

  • what Travis was used for
  • what env variables were set, how were they set, and what did they do
  • settings in the UI
  • .travis.yml
  • testing - how did this work?
  • where does Travis stop and CloudFormation/AWS take over?
  • ???? What else

Somehow we got every project moved through the chain, so we should be able to point to some learnings.

I plan to take some time over the weekend to look into this.

@MikeTheCanuck
Contributor

Helluva good thought Brian, I’ve been meaning to do something similar, so I’ll contribute here instead.

@bhgrant8
Member Author

First Observation: The .travis.yml Files

Looking over the .travis.yml files from the last year, all projects seemed to follow a basic pattern. I'll copy the Team Budget example here:

sudo: required
services:
  - docker
install:
  - pip install --upgrade --user awscli
before_script:
  - ./budget_proj/bin/getconfig.sh
script:
  - './budget_proj/bin/test-proj.sh -t'
after_success:
  - ./budget_proj/bin/docker-push.sh

Breaking this down:

sudo: required - runs the build on Travis's sudo-enabled (VM-based) infrastructure, so commands can be run with sudo; this is needed to use the Docker service

services:
  - docker
  • tells Travis to start the Docker service in the build environment
install:
  - pip install --upgrade --user awscli

Two things here. First, install is a step in Travis's build lifecycle: it installs any dependencies we need at the OS level of the build environment, i.e. things related to AWS/deploy rather than anything needed within the Docker container or the Django project.
Second, the pip install command installs the AWS CLI, which is later used to push the built container to AWS ECR (Elastic Container Registry).

before_script:
  - ./budget_proj/bin/getconfig.sh
  • before_script is the next step in the build lifecycle. If this script exits with a non-zero code, it errors the build immediately. So any config that needs to happen, and needs to succeed, goes into this step.
  • The getconfig.sh script was a pattern used last year to pull database and other Python variables into the Docker container; we are most likely moving away from this pattern.
script:
  - './budget_proj/bin/test-proj.sh -t'

The script step is the bulk of Travis's work. It should include any project build and test tasks. If it exits non-zero, the build is marked as failed, but the lifecycle continues on through to the after_failure step.

The test-proj.sh script builds the containers then runs the test-entrypoint.sh script which runs the tests

after_success:
  - ./budget_proj/bin/docker-push.sh

Provided the script step exits with a successful code, the after_success command is run.
The docker-push.sh script essentially verifies that the current build is a push to the master branch rather than a pull request build. Only in that case does it run the ecs-deploy script, which ships the successfully built and tested containers off to AWS services using the awscli client that was installed earlier.
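For reference, the guard inside docker-push.sh (quoted more fully later in this thread) boils down to roughly the following; this is a condensed sketch, not the verbatim script:

# Only non-PR builds of the master branch tag, push and deploy
if [ -z "$TRAVIS_PULL_REQUEST" ] || [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  if [ "$TRAVIS_BRANCH" == "master" ]; then
    echo "Push build of master - pushing image to ECR and deploying to ECS"
    # docker push + ecs-deploy.sh steps (shown later in this thread) run here
  fi
fi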

So we seem to see 3 main tasks in this:

  1. build the API container we want to deploy
  2. confirm the built container passes any specified tests
  3. provided this is a push to the correct branch, deploy the built container off to AWS

Observations:

Overall this seems like a fairly basic pattern to continue using, barring specific changes such as removing the getconfig complexity. There may be some opportunities to use more of the build lifecycle steps to our advantage - for example, possible alerting on after_failure? The "deploy" step is intriguing, but I believe it only works if you are using a supported deploy provider, and I'm not sure we fit any of those.
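One hypothetical illustration of the after_failure idea (nothing like this exists in the 2017 repos; the script name and the BUILD_ALERT_WEBHOOK env var are made up for this sketch):

#!/bin/bash
# Hypothetical bin/notify-failure.sh, called from an after_failure step in .travis.yml.
# Posts a short message to a webhook URL configured as BUILD_ALERT_WEBHOOK in Travis settings.
curl -s -X POST "$BUILD_ALERT_WEBHOOK" \
  -H 'Content-Type: application/json' \
  -d "{\"text\": \"Build $TRAVIS_BUILD_NUMBER failed for $TRAVIS_REPO_SLUG ($TRAVIS_BRANCH)\"}"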

Last year's examples:

@znmeb
Contributor

znmeb commented Apr 28, 2018

where / when does the "docker-compose" happen, if it does?

@bhgrant8
Member Author

bhgrant8 commented Apr 28, 2018

Build and Testing Scripts

docker-compose operations were run as part of the build-and-test script invoked in the "script" step of the Travis build lifecycle, so their result feeds into either "after_success" or "after_failure".

One example again is the "team-budget" script. (Because Budget situated the API within a sub-directory of the repo, many of the example scripts include $PROJ_SETTINGS_DIR in directory paths. This would not be used in the exemplar, where the API is at the root of the repo.)

https://github.com/hackoregon/team-budget/blob/master/budget_proj/bin/test-proj.sh

# Run all configured unit tests inside the Docker container
while getopts ":lt" opt; do
    case "$opt" in
        l)
          docker-compose -f $PROJ_SETTINGS_DIR/local-docker-compose.yml build
          docker-compose -f $PROJ_SETTINGS_DIR/local-docker-compose.yml run \
          --entrypoint /code/bin/test-entrypoint.sh $DOCKER_IMAGE
          ;;
        t)
          docker-compose -f $PROJ_SETTINGS_DIR/travis-docker-compose.yml build
          docker-compose -f $PROJ_SETTINGS_DIR/travis-docker-compose.yml run \
          --entrypoint /code/bin/test-entrypoint.sh $DOCKER_IMAGE
          ;;
        *)
          usage
          ;;
    esac
done

So we see two flags:

  • -l - the local build and test; uses the local docker-compose file and image
  • -t - the Travis build and test; uses the Travis docker-compose file and image

While building the images pulls in different compose files, we use the same test-entrypoint.sh in each environment (commented-out lines removed for clarity):

#!/bin/bash
export PATH=$PATH:~/.local/bin

python manage.py test --no-input --keepdb

I am not completely sure why we needed to update the PATH?

In terms of the script:

python manage.py test --no-input --keepdb

we see the basic manage.py test being run. The --no-input flag prevents the script from prompting the user for input, allowing it to run unattended.

Most important is the --keepdb flag, meaning that the database the tests run against persists from one test run to the next. Emergency Response followed this pattern as well, running all tests read-only against the production database; I still have to look at Budget to see if it does the same (future post).

Observations

We are using the same script to accomplish two tasks: building a container, then testing it. An entrypoint script overrides the default entrypoint that docker-compose up would run in the containers. We may need to use a --noinput flag to make sure the script does not stop and wait for user input. Connecting to a persistent database is a path some projects used; when doing so, no migrations were run, to prevent any changes to the db.

Other Examples

So this is one area where there is some differentiation worth looking into:

  • Transportation - This project does not use the "keepdb" option, so it looks like it was spinning up a test database on each run

  • Homeless - Mostly the same as Transportation

  • Housing - Housing used py.test instead of the built-in testing suite. I'm not too familiar with py.test, but it might have some specific value?

@znmeb
Contributor

znmeb commented Apr 28, 2018

Ah - so the actual work is done in shell scripts, not in .travis.yml.

@bhgrant8
Member Author

Yeah, in our setup I think once you get past very simple commands, doing so makes things a bit easier.

@bhgrant8
Member Author

Testing Database Connections

Continuing to work through the testing setup: before we get to the tests themselves, let's look at the datastores teams connected to for testing, and how.

Emergency Response

Starting here as I know the most.

When I came into the program to start building the API, we had a fairly developed database already live on AWS. I was given read-only creds to the prod AWS database, and after hacking around some options, I ended up configuring my tests to run against the production database, since the tests were not creating or deleting any data.

This strategy involved:

if 'test' in sys.argv or 'test_coverage' in sys.argv:
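    # Only when running tests: point Django's test database at the existing 'fire' DB
    # (with --keepdb and read-only creds, it is reused rather than created/destroyed).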
    DATABASES = {
        'default': {
            'ENGINE': project_config.AWS['ENGINE'],
            'NAME': project_config.AWS['NAME'],
            'HOST': project_config.AWS['HOST'],
            'PORT': 5432,
            'USER': project_config.AWS['USER'],
            'PASSWORD': project_config.AWS['PASSWORD'],
            'TEST': {
                    'NAME': 'fire',
                },
        }
    }

Team Budget

I tried to step through the repo and could not find any specific test database config. My assumption is that the deployed database included a test version as well, which was then persisted. Whether or not that is correct, it seems like a good pattern: don't test directly against prod dbs, but do use the same read-only creds. Questions: is this correct, or am I missing something? How would we create and then deploy the test version of the db - prior to the S3 upload, or could this replication be part of the devops process?

Team Housing

Since Housing used py.test, they supplied a pytest.ini which pointed to the test settings:


from .settings import *

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'

So we see an in-memory SQLite db being used in the test environment. Not an uncommon practice, but it doesn't really test the actual production database connection and services.

Team Homeless

It appears that they are using Django's fixtures to provide some test data, but are not making a connection to an actual backend datastore. Similar to Housing, this is a common pattern, but if we want to verify a functioning database connection, it does not accomplish that. If we are looking to test only the Python code, it is an acceptable option.
https://github.com/hackoregon/teamHomelessness/blob/master/homelessAPI/homelessApp/tests.py#L6

Team Transportation

Guess you don't need a testing backend if you don't actually write any tests?

@znmeb
Contributor

znmeb commented Apr 28, 2018

We didn't have any tests for Transportation ... the best guess as to what the final app looked like was the local development environment running on an Ubuntu 16.04.x LTS laptop. ;-)

https://github.com/hackoregon/transportation-backend/tree/master/ubuntu-local-deploy

@MikeTheCanuck
Contributor

In Budget’s case, we knew from the start that we would never write to the database, so it never occurred to me that testing against the production database would be a risk. (It's only a risk if someone commits, and someone else merges, Django code that writes to the DB - but that risk grows the greater the distance from those tribal assumptions.)

Not sure what the best strategy is here - duplicating the databases in production is a huge waste of memory for 99% of the time, but I agree that testing against a local sqlite3 doesn’t catch one of our biggest dependencies.

In theory we could use separate creds (test creds = read-only), but if anyone plans to write to their DB then we’re hosed.

In a monied organisation we’d just have a separate test/QA infrastructure, but I am loathe to spend that kind of money on behalf of an org that just recently asked for tax-deductible individual donations.

@bhgrant8
Member Author

I agree that in a tradeoff between budget and a "pristine" QA environment, our budget is the priority. I mostly wanted to make this decision explicit and documented.

@MikeTheCanuck
Contributor

MikeTheCanuck commented Apr 29, 2018

Environment Variable usage

In 2017 API projects, the following env vars were configured in each Travis repo:

  • AWS_ACCESS_KEY_ID
  • AWS_DEFAULT_REGION
  • AWS_SECRET_ACCESS_KEY
  • CONFIG_BUCKET
  • DEPLOY_TARGET
  • DJANGO_SETTINGS_MODULE
  • DOCKER_IMAGE
  • DOCKER_REPO
  • ECS_CLUSTER
  • ECS_SERVICE_NAME
  • PROJ_SETTINGS_DIR

Examination of configured Travis env vars
In the analysis below, nearly all findings are based on the team-budget repo. Variations between projects should be accounted for too, but rather than wait until I had the extra hours to review those, I'm posting this for others to build upon.

  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
    • Goal: requiring authentication to push or pull from AWS ECR ensures two things:
      • We reduce the odds someone from the Internet could fill our Container Registry with unauthorized Docker images
      • We increase our comfort with publishing Docker images that contain secrets (e.g. database passwords) that we don’t want available to the public
    • Passed through as environment in travis-docker-compose.yml and local-docker-compose.yml
    • Used explicitly in ecs-deploy.sh as inherited environment variables - this is a script we cloned from (an AWS exemplar?) that is called by docker-push.sh to pull a Docker image from AWS ECR (Elastic Container Registry) and deploy it to the appropriate AWS ECS (EC2 Container Service) service
    • upload the built Docker image to our Docker registry in AWS
    • I believe these creds are also implicitly passed in when running aws ecr get-login --region $AWS_DEFAULT_REGION in the docker-push.sh script - which is used to push a copy of the built Docker image from Travis to AWS ECR - but I’m only 80% sure that aws ecr get-login uses the AWS creds to get a one-time login token that’s used by docker push to authenticate to AWS ECR to push a new Docker image
  • AWS_DEFAULT_REGION
    • Used explicitly in ecs-deploy.sh as an inherited environment variable - this is a script used in docker-push.sh to deploy a Docker image from our AWS ECR registry to the appropriate AWS ECS service
  • CONFIG_BUCKET
    • Goal: enable HackOregon API projects to store per-project secrets in a protected cloud location (i.e. S3 buckets) so that if any one project’s secrets were compromised, recovery would be as simple as updating the S3 copy of the project_config.py file, then re-building and re-deploying that project’s container
    • Passed through as environment in travis-docker-compose.yml and local-docker-compose.yml
    • Used in getconfig.sh to download a copy of project_config.py (which contains secrets configured as environment variables) into the Travis build environment (and which - the project_config.py - becomes embedded in the built Docker image that gets deployed to ECS via ECR)
  • DEPLOY_TARGET
    • Goal 1: to enable HackOregon API projects to use different project_config.py secrets files for an “integration” (or “staging”) infrastructure and a “production” infrastructure
    • Goal 2: to enable HackOregon API projects to build different Docker images for use in “integration”/“staging” vs “production” infrastructure (presumably because the Docker image for each target infrastructure would have different secrets, especially for different database deployments)
    • Passed through as environment in travis-docker-compose.yml and local-docker-compose.yml
    • Used in getconfig.sh to download a copy of project_config.py (which contains secrets configured as environment variables) into the Travis build environment (and which - the project_config.py - becomes embedded in the built Docker image that gets deployed to ECS via ECR)
    • Used in docker-push.sh as the “repo” name (in the DOCKER_REPO domain) to accomplish two things:
      • Upload an “integration”-oriented set of Docker images to AWS ECR
      • Deploy an “integration”-focused Docker image to AWS ECS’ “integration” infrastructure
  • DJANGO_SETTINGS_MODULE
    • Used in manage.py and wsgi.py to distinguish between runtime settings needed for non-Docker usage vs those needed for Docker usage; defaults to the dev.py settings
      • dev.py settings include SECRET_KEY, DEBUG, DATABASES {ENGINE, NAME, HOST, PORT, USER, PASSWORD} and settings for the debug_toolbar
    • Hard-coded in Dockerfile to enforce use of the production.py settings
      • production.py settings include SECRET_KEY, ALLOWED_HOSTS (and its companion EC2_PRIVATE_IP), DATABASES {ENGINE, NAME, HOST, PORT, USER, PASSWORD}
    • The settings distinguished via this env var are in addition to what is included via from .. import project_config
      • Question: do we still (in 2018 projects) need to have different settings for “integration” vs “production” and “non-Docker” vs “Docker”?
  • PROJ_SETTINGS_DIR
    • Goal: accommodate those Django projects that wished to keep their entire Django application in a subfolder of the GitHub repo, rather than store the application at the root of the repo
      • Note: this was one of the most problematic decisions of the 2017 project season, exacerbated by the fact that such projects called the application directory the same as the inner application directory (e.g. [repo]/budget_proj/budget_proj), making it extra-challenging to debug runtime context of script commands from outside and inside the Docker container
    • Passed through as environment in travis-docker-compose.yml and local-docker-compose.yml
    • Used in the following scripts to determine where the /bin directory (containing other scripts) was located: docker-push.sh
    • Used in the following scripts to determine where the *-docker-compose.yml files were located: build-proj.sh, test-proj.sh, start-proj.sh
    • Used in the following scripts to determine where to write the downloaded copy of project_config.py: getconfig.sh
    • Hard-coded in env.sh to … … … (?)
  • DOCKER_IMAGE
    • Goal: specify a name for the container image file, different for each project
    • Used in the following scripts to determine which image to push and deploy: docker-push.sh
    • Used in the following scripts to tell Docker which service to run the specified entrypoint commands in: test-proj.sh
      • Note: it is necessary to specify a service because docker-compose enables you to run multiple services in multiple containers from one command, and the Docker engine has no foolproof method of determining in which of the multiple services to launch the entrypoint
    • Used in travis-docker-compose.yml to tag the image, so that the later docker push command can find an image with the expected tag - otherwise docker push will return "An image does not exist locally with the tag: 845828040396.dkr.ecr.us-west-2.amazonaws.com/production/transportation-systems-service"
    • Hard-coded in env.sh to … … … (?)
  • DOCKER_REPO
    • Goal: enable HackOregon to swap or duplicate their Docker registry at any time
    • Used in the following scripts to specify the destination registry to which to upload the new Docker image, and the source from which to deploy the latest image to AWS ECS: docker-push.sh
    • Used in travis-docker-compose.yml to tag the image, so that the later docker push command can find an image with the expected tag - otherwise docker push will return "An image does not exist locally with the tag: 845828040396.dkr.ecr.us-west-2.amazonaws.com/production/transportation-systems-service"
  • ECS_CLUSTER
    • Goal: to enable HackOregon to push Docker image updates to one of multiple ECS infrastructures (e.g. if we had deployed both an “integration” cluster and a “production” cluster)
    • Used in the following scripts to specify the destination ECS cluster to which we deploy fresh Docker images: docker-push.sh
  • ECS_SERVICE_NAME
    • Goal: enable HackOregon to update a variety of Docker images to their respective ECS service target
    • Used in the following scripts to specify the specific ECS “service” (collection of ECS “tasks”) to which to push the latest Docker image: docker-push.sh

Implicit Travis env vars

Implicit Docker env vars

  • PYTHONUNBUFFERED - this is hard-coded immediately [don’t remember why]

Env vars unique to projects

  • AWS_LOAD_BALANCER, CIVIC_PDX_ORG_HOST, CIVIC_PDX_COM_HOST - used in emergency-response-backend, declared in settings.py - used to extend ALLOWED_HOSTS
  • DEBUG - used in emergency-response-backend, declared in travis-docker-compose.yml as "False"
  • DOCKER_WEB_IMAGE - used in teamhomelessness, declared in docker-compose.yml - used for ???
  • NEWRELIC_LICENSE_KEY - used in emergency-response-backend, declared in travis-docker-compose.yml
  • CORS_ORIGIN_WHITELIST - used in transportation-backend, declared in settings.py
  • CORS_URLS_REGEX - used in housing-backend, declared in settings.py - seems to be used to intentionally minimize the CORS permissions?
  • CRON_CLASSES - used in housing-backend, declared in settings.py
  • DOCKER_USERNAME, DOCKER_PASSWORD - used in transportation-backend, declared in Travis Settings

Hard-coded environment variables

  • CONFIG_FILE - getconfig.sh defines this for use in downloading a copy of the project_config.py file (which contains secrets configured as environment variables) into the Travis build environment (and which become embedded in the built Docker image that gets deployed to ECS via ECR)
  • PROJ_SETTINGS_DIR, DOCKER_IMAGE: for local development purposes in the team-budget repo, we added an env.sh script to make it easier on developers to set these to appropriate values
  • PROJ_SETTINGS_DIR: for local development purposes (at least in the team-budget repo), we hard-coded this env var in the -l case
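For reference, a minimal sketch of what team-budget's env.sh presumably contains; the values here are inferred from the paths (budget_proj) and image name (budget-service) that appear elsewhere in this thread, not copied from the repo:

# env.sh (sketch) - convenience defaults for local development
export PROJ_SETTINGS_DIR=budget_proj
export DOCKER_IMAGE=budget-service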

QUESTION (maybe just for myself): when passed through docker-compose.yml as env vars, are the passed-in env vars implicitly used by anything else other than the /bin/ scripts?

@MikeTheCanuck
Contributor

Proposal for Travis env vars

  • remove CONFIG_BUCKET - if we can figure out the steps necessary to configure all our env vars in both Travis and ECS (and teach developers how to set these up for themselves locally), then we no longer need the S3 bucket storage
  • leave DEPLOY_TARGET - we do not need this for this season, but since we already know where all this needs to be used, it would be a shame to have to replicate this pattern in a future season (or for those who fork Hack Oregon's work) by removing it now.
    • NOTE: we will need to standardize on language around "development", "staging"/"integration" and "production"
  • consolidate DJANGO_SETTINGS_MODULE usage with DEPLOY_TARGET - we don't need two env vars accomplishing the same thing
  • remove PROJ_SETTINGS_DIR - enforce all Django projects to use a flat folder hierarchy by setting the example in the exemplar and in those API repos that are already configured

All other common env vars used in last year's Travis settings (excepting the DOCKER_USERNAME and DOCKER_PASSWORD used in transportation-backend) are still valid and useful.

@MikeTheCanuck
Contributor

MikeTheCanuck commented Apr 29, 2018

Travis configuration for builds

There are a number of basic settings in Travis that we use, in conjunction with communicated (tribal?) expectations, to enable Hack Oregon to get consistent builds and deploys:

  • "Build only if .travis.yml is present" - ON for all projects except emergency-response-backend and housing-backend
  • "Limit concurrent jobs" - OFF for all projects
  • "Build pushed branches" - ON for all projects
  • "Build pushed pull requests" - ON for all projects
  • "Auto cancel branch builds" - ON for all projects
  • "Auto cancel pull request builds" - ON for all projects
  • Cron jobs - we have not configured any cron jobs

These settings only work because we have configured the docker-push.sh script to do the following:

# Tag, Push and Deploy only if it's not a pull request
if [ -z "$TRAVIS_PULL_REQUEST" ] || [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  # Push only if we're testing the master branch
   if [ "$TRAVIS_BRANCH" == "master" ]; then

This sets up a pattern of the following:

  • Travis will build and test any commit to any branch of the repo
  • Travis will build, test and (if build and testing are successful) deploy any commit to the master branch of the repo
  • TBD: remember how this behaves with PRs from a local branch vs PRs from a fork of the repo

@MikeTheCanuck
Contributor

MikeTheCanuck commented Apr 29, 2018

How Travis hands off to AWS

This is due to the "magic" of the docker-push.sh script, e.g. team-budget's:

    export PATH=$PATH:$HOME/.local/bin
    echo Getting the ECR login...
    eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
    echo Running docker push command... # Troubleshooting
    docker push "$DOCKER_REPO"/"$DEPLOY_TARGET"/"$DOCKER_IMAGE":latest
    echo Running ecs-deploy.sh script...
    ./$PROJ_SETTINGS_DIR/bin/ecs-deploy.sh  \
     -n "$ECS_SERVICE_NAME" \
     -c "$ECS_CLUSTER"   \
     -i "$DOCKER_REPO"/"$DEPLOY_TARGET"/"$DOCKER_IMAGE":latest \
     --timeout 300

There are four key actions here:

  1. export PATH...
  2. eval $(aws ecr get-login)...
  3. docker push...
  4. ecs-deploy.sh...

Breaking this down...

export PATH=$PATH:$HOME/.local/bin

IIRC, this is here to ensure that the aws CLI (installed via .travis.yml) is on the $PATH
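Expanding on that: pip install --upgrade --user awscli (from .travis.yml) installs the aws binary under $HOME/.local/bin, which is not on the default PATH, hence the export. A quick way to confirm, assuming the same --user install:

export PATH=$PATH:$HOME/.local/bin
which aws    # expected to resolve to something like /home/travis/.local/bin/aws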

eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
  • The aws ecr get-login command is used to obtain a time-limited authentication token to access ECR (Elastic Container Registry).
  • It implicitly inherits the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY credentials that are configured as env vars in the Travis repo's Settings, and uses those creds to authenticate to AWS and obtain a docker login command in return.
  • The use of eval immediately runs that docker login command, without printing the time-limited password to the Travis log.
  • Once docker login runs, subsequent commands such as docker push and ecs-deploy.sh are authenticated to the default ECR registry (i.e. the registry corresponding to the AWS user & AWS_DEFAULT_REGION).
  • NOTE: I have long since forgotten why ECR access requires use of tokens rather than just inheriting authentication context from the AWS* env vars.
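For reference, aws ecr get-login (since deprecated in newer AWS CLI releases) prints a docker login command roughly like the following, which eval then executes (token elided here):

docker login -u AWS -p <temporary-token> -e none https://845828040396.dkr.ecr.us-west-2.amazonaws.com

That generated -e flag is also what produces the "Flag --email has been deprecated" line in the Travis output further down in this comment.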
docker push "$DOCKER_REPO"/"$DEPLOY_TARGET"/"$DOCKER_IMAGE":latest

This pushes the image that was just built in the Travis environment (by build-proj.sh) up to the AWS ECR registry. IIUC, this pushes the $DOCKER_IMAGE to the $DOCKER_REPO server, into the $DEPLOY_TARGET repository, and applies the "latest" tag.

What mystifies me (despite great articles like this) is whether there's an implicit docker tag command run elsewhere in our stack that pre-tags the image before we push it.
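My best guess, based on the DOCKER_IMAGE / DOCKER_REPO notes above (both are used in travis-docker-compose.yml to tag the image): the tag comes from docker-compose build itself, since the compose file names the image with those variables. A rough standalone equivalent - just a sketch, assuming the compose file's image key is ${DOCKER_REPO}/${DEPLOY_TARGET}/${DOCKER_IMAGE}:latest - would be:

# What the compose build effectively does for tagging purposes
docker build -t "$DOCKER_REPO"/"$DEPLOY_TARGET"/"$DOCKER_IMAGE":latest .
# ...after which docker push can find a local image with the expected tag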

./$PROJ_SETTINGS_DIR/bin/ecs-deploy.sh  \
     -n "$ECS_SERVICE_NAME" \
     -c "$ECS_CLUSTER"   \
     -i "$DOCKER_REPO"/"$DEPLOY_TARGET"/"$DOCKER_IMAGE":latest \
     --timeout 300

This final script is a third-party script that enables Travis to tell AWS to pull a copy of the $DEPLOY_TARGET/$DOCKER_IMAGE:latest from $DOCKER_REPO and deploy it to the $ECS_SERVICE_NAME in $ECS_CLUSTER.

For example, for the team-budget project from 2017, this will tell AWS to pull integration/budget-service:latest from 845828040396.dkr.ecr.us-west-2.amazonaws.com and deploy it to hacko-integration-BudgetService-16MVULLFXXIDZ-Service-1BKKDDHBU8RU4 on the hacko-integration cluster.

Travis output

When everything is successful, the Travis build log will display something like the following at the end of the log:

$ ./budget_proj/bin/docker-push.sh
Getting the ECR login...
Flag --email has been deprecated, will be removed in 1.13.
Login Succeeded
Running docker push command...
The push refers to a repository [845828040396.dkr.ecr.us-west-2.amazonaws.com/integration/budget-service]
Running ecs-deploy.sh script...
Using image name: 845828040396.dkr.ecr.us-west-2.amazonaws.com/integration/budget-service:latest
Current task definition: arn:aws:ecs:us-west-2:845828040396:task-definition/budget-service:121
New task definition: arn:aws:ecs:us-west-2:845828040396:task-definition/budget-service:122
Service updated successfully, new task definition running.

@MikeTheCanuck
Contributor

.travis.yml configuration

The configuration-in-common for all of last year's API projects' .travis.yml is this:

sudo: required
services:
  - docker
install:
  - pip install --upgrade --user awscli
before_script:
  - ./bin/getconfig.sh
script:
  - './bin/test-proj.sh -t'
after_success:
  - ./bin/docker-push.sh

(That is, except the emergency-response-backend, which somehow skipped the before_script call to getconfig.sh.)

Two of the projects went much further and embedded a bunch of extra, undocumented setup work (that hopefully we can avoid in this year's projects) in the Travis setup:

  • housing-backend added exclusions for a couple of long-lived branches, and implemented an installation of a specific version of docker-compose (which I suspect is no longer necessary, if we rely on Travis' current containerized build environment)
  • transportation-backend includes a now-commented-out section doing a similar installation of docker-compose, and a commented-out call to build-test-proj.sh (probably because there were no tests to run in the Transportation-backend project)

@bhgrant8
Member Author

bhgrant8 commented Apr 29, 2018 via email

@nam20485
Member

Comparing what you gave for team-budget's .travis.yml to what we have currently in the exemplar, I can see three differences:

  1. The script command-line arguments are slightly different but seem to be semantically similar. They use -t and -l while our scripts use -p and -d. I believe our -p corresponds to their -t.

  2. Our repo separates what they have in test-proj.sh into two script files, build.sh and test.sh.

  3. Our repo does not contain two of the scripts: docker-push.sh (yet) and getconfig.sh (probably never will).

To achieve the same level of Travis behavior as, e.g., team-budget's backend, we could implement the following changes in the exemplar repo:

  1. Change the script: section to call bin/build.sh -p and bin/test.sh -p
  2. Remove reference to getconfig.sh from before_script: stanza
  3. Create and implement docker-push.sh

Leaving us with a .travis.yml that looks like:

sudo: required

services:
  - docker

install:
  - pip install --upgrade --user awscli

script:
  - ./bin/build.sh -p
  - ./bin/test.sh -p

after_success:
  - ./bin/docker-push.sh
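For comparison with team-budget's test-proj.sh quoted earlier in this thread, here is a hedged sketch of what the exemplar's split test.sh might look like. The -p/-d flag meanings (production vs. development), the development compose file name, and the entrypoint path are assumptions; production-docker-compose.yml and DOCKER_SERVICE come from the exemplar notes further down in this thread:

#!/bin/bash
# bin/test.sh (sketch) - run the test entrypoint against the selected compose profile
while getopts ":pd" opt; do
    case "$opt" in
        p)
          docker-compose -f production-docker-compose.yml run \
            --entrypoint ./bin/test-entrypoint.sh "$DOCKER_SERVICE"
          ;;
        d)
          docker-compose -f development-docker-compose.yml run \
            --entrypoint ./bin/test-entrypoint.sh "$DOCKER_SERVICE"
          ;;
        *)
          echo "usage: $(basename "$0") [-p|-d]" >&2
          exit 1
          ;;
    esac
done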

@nam20485
Member

Testing on the disaster-resilience-backend repo: Travis builds running with the config outlined in the post above seem to create the api_production Docker image successfully, but one key problem is that the .env file is not there, so the PRODUCTION_ environment variables are not set, resulting in the following messages:

...
$ ./bin/build.sh -p
WARNING: The PRODUCTION_POSTGRES_USER variable is not set. Defaulting to a blank string.
WARNING: The PRODUCTION_POSTGRES_NAME variable is not set. Defaulting to a blank string.
WARNING: The PRODUCTION_POSTGRES_HOST variable is not set. Defaulting to a blank string.
WARNING: The PRODUCTION_POSTGRES_PORT variable is not set. Defaulting to a blank string.
WARNING: The PRODUCTION_POSTGRES_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The PRODUCTION_DJANGO_SECRET_KEY variable is not set. Defaulting to a blank string.
Building api_production
...
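One possible way to address this - purely an illustrative sketch, assuming the PRODUCTION_* values are configured as env vars in the Travis repo settings - is a before_script helper that writes the expected .env file:

# Hypothetical bin/write-env.sh - materialize .env from Travis-configured env vars
cat > .env <<EOF
PRODUCTION_POSTGRES_USER=$PRODUCTION_POSTGRES_USER
PRODUCTION_POSTGRES_NAME=$PRODUCTION_POSTGRES_NAME
PRODUCTION_POSTGRES_HOST=$PRODUCTION_POSTGRES_HOST
PRODUCTION_POSTGRES_PORT=$PRODUCTION_POSTGRES_PORT
PRODUCTION_POSTGRES_PASSWORD=$PRODUCTION_POSTGRES_PASSWORD
PRODUCTION_DJANGO_SECRET_KEY=$PRODUCTION_DJANGO_SECRET_KEY
EOF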

@MikeTheCanuck
Contributor

New env vars in play this year

  • PROJECT_NAME: used in production-docker-entrypoint.sh to define the WSGI
  • DOCKER_SERVICE: used in test.sh and .travis.yml to define the Docker Service named in production-docker-compose.yml that hosts the API
