Document and assess last year's Travis and Deploy strategies. #51
Helluva good thought, Brian. I've been meaning to do something similar, so I'll contribute here instead. |
First Observation: The .travis.yml Files
Looking over last year's .travis.yml files:
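The heart of each file is the script / after_success pair (the configuration-in-common is quoted in full later in this thread; this fragment is just the portion relevant here):

script:
- './bin/test-proj.sh -t'
after_success:
- ./bin/docker-push.sh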
Breaking this down:
So, two things here. First:
The script command is the bulk of the work of Travis. It should include any project build and test tasks. If it exits as non-zero, it will be marked as a failed build but will continue to run through to the after_failure step. The test-proj.sh script builds the containers, then runs the test-entrypoint.sh script, which runs the tests.
Second: provided the script command ends with a successful exit code, the after_success step runs. So we seem to see 3 main tasks in this: build the containers, run the tests in them, and (on success) push the image out for deploy.
Observations: Overall this seems like a fairly basic pattern to continue to use, unless we see a specific case against it, such as removing the getconfig complexities. There may be some opportunities to use more of the build lifecycle steps to our advantage; for example, possible alerting on after_failure? The deploy step is intriguing; however, I believe it only works if you are using a supported deploy provider, which I am not sure we fit into. Last year's examples:
|
Where / when does the "docker-compose" happen, if it does? |
Build and Testing Scripts
One example again is the team-budget script. (Because Budget situated the API within a sub-directory of the repo, many of the example scripts include $PROJ_SETTINGS_DIR in directory paths; this would not be used in exemplar, where the API is at the root of the repo): https://github.com/hackoregon/team-budget/blob/master/budget_proj/bin/test-proj.sh
So we see two flags:
While building the images pulls different compose files, we do use the same
I am not completely sure why we needed to update the PATH. In terms of the script:
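(A hedged sketch of the general shape, not the actual team-budget script; the -t flag appears in the .travis.yml above, while the -b flag and the api service name are guesses:)

#!/bin/bash
# Illustrative sketch of the test-proj.sh pattern: one script that
# both builds the containers and runs the tests.
set -e

while getopts "bt" opt; do
  case $opt in
    b) BUILD=true ;;                  # build the images only
    t) BUILD=true; RUN_TESTS=true ;;  # build, then run the tests
  esac
done

if [ "$BUILD" = true ]; then
  docker-compose build
fi

if [ "$RUN_TESTS" = true ]; then
  # Override the containers' default entrypoint so docker-compose
  # runs the test suite instead of starting the API server.
  docker-compose run --entrypoint ./bin/test-entrypoint.sh api
fi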
We see the basic structure here. Most important is the
Observations
We are using the same script to accomplish two tasks: building a container, then testing it. There is an entrypoint script to override the default entrypoint that is run in the containers for the docker-compose up. We may need to use a
Other Examples
So this is one area where there is some differentiation worth looking into:
|
Ah, so the actual work is done in shell scripts, not in the .travis.yml itself. |
Yeah, in our setup, I think once you get past very simple commands, doing so makes things a bit easier. |
Testing Database Connections
So, continuing to work through the testing setup: before we get to the tests themselves being run, let's look at the datastores that teams are connecting to for testing, and how.
Emergency Response
Starting here, as it's what I know best. When I came into the program to start building the API, we had a fairly developed database already live on AWS. I was given read-only creds to the prod AWS database, and after hacking around some options, I ended up configuring my tests to run against the production database, since the suite was not creating or deleting any data. This strategy involved:
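(In rough outline, presumably something like the following; every name in this sketch is illustrative, not the actual Emergency Response config:)

#!/bin/bash
# Hypothetical sketch: read-only prod creds supplied as env vars
# (e.g. via the Travis repo settings), consumed by the test run.
export DATABASE_HOST=prod-db.example.us-west-2.rds.amazonaws.com  # illustrative
export DATABASE_USER=readonly_user            # read-only credentials
export DATABASE_PASSWORD="$PROD_RO_PASSWORD"  # set in Travis settings, not in the repo
# The tests point at the live database but never write to it:
./bin/test-proj.sh -t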
Team Budget
I tried to step through the repo and could not find any specific test database config. As such, my assumption is that the deployed database included a test version as well, which was then persisted. Whether this is correct or not, it seems it would be a good pattern to not test directly on prod DBs but still use the same read-only creds. Questions: is this correct, or am I missing something? How would we create, then deploy, the test version of the DB? Prior to the S3 upload, or could this replication be part of the devops process?
Team Housing
With Housing using py.test, they supplied a pytest.ini which pointed to the test settings:
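(With py.test plus pytest-django, the usual shape of that file is something like the following; the settings module path here is a guess:)

[pytest]
DJANGO_SETTINGS_MODULE = housingapi.settings.test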
So we see Housing using a SQLite DB in the test environment. Not an uncommon practice, but it doesn't really test the actual production database connection and services.
Team Homeless
It appears that they are using Django's fixtures to provide some test data, but are not making a connection to an actual backend datastore. Similar to Housing, a common pattern, but if we want to verify a functioning database connection, this does not actually accomplish that. If we are looking to test only the Python code, it is an acceptable option.
Team Transportation
Guess you don't need a testing backend if you don't actually write any tests? |
We didn't have any tests for Transportation ... the best guess as to what the final app looked like was the local development environment running on an Ubuntu 16.04.x LTS laptop. ;-) https://github.com/hackoregon/transportation-backend/tree/master/ubuntu-local-deploy |
In Budget's case, we knew from the start that we would never write to the database, so it never occurred to me that testing against the production database would be a risk. (It's only a risk if someone commits, and someone else merges, Django code that writes to the DB, but it certainly becomes more of a risk the greater the distance from those tribal assumptions.) Not sure what the best strategy is here: duplicating the databases in production is a huge waste of memory 99% of the time, but I agree that testing against a local sqlite3 doesn't catch one of our biggest dependencies. In theory we could use separate creds (test creds = read-only), but if anyone plans to write to their DB then we're hosed. In a monied organisation we'd just have a separate test/QA infrastructure, but I am loath to spend that kind of money on behalf of an org that just recently asked for tax-deductible individual donations. |
I agree; in a tradeoff between budget and a "pristine" QA environment, our budget is the priority. Mostly I wanted to make this decision explicit and documented. |
Environment Variable usage In 2017 API projects, the following env vars were configured in each Travis repo:
Examination of configured Travis env vars
Implicit Travis env vars
Implicit Docker env vars
Env vars unique to projects
Hard-coded environment variables
QUESTION (maybe just for myself): when passed through docker-compose.yml as env vars, are the passed-in env vars implicitly used by anything other than the /bin/ scripts?
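(For reference, the pass-through mechanics as I understand them: a bare name under environment: is copied from the shell that invokes docker-compose into the container, where it is visible to every process, not just the /bin/ scripts. A minimal illustrative fragment; the api service name is hypothetical:)

version: '2'
services:
  api:
    build: .
    environment:
      # Bare names are passed through from the invoking shell
      # (e.g. the Travis build) into the container's environment:
      - DEPLOY_TARGET
      - DOCKER_IMAGE
|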
Proposal for Travis env vars
All other common env vars used in last year's Travis settings (excepting the DOCKER_USERNAME and DOCKER_PASSWORD used in docker-push.sh) |
Travis configuration for builds
There are a number of basic settings in Travis that we use, in conjunction with communicated (tribal?) expectations, to enable Hack Oregon to get consistent builds and deploys:
These settings only work because we have configured the
This sets up a pattern of the following:
|
How Travis hands off to AWS
This is due to the "magic" of the docker-push.sh script:
There are four key actions here:
Breaking this down...
IIRC, this is here to ensure that the aws binary installed during the install step (pip install --upgrade --user awscli installs into the user's home directory) is actually found on the PATH.
This pushes the image that was just built in the Travis environment (by test-proj.sh). What mystifies me (despite great articles like this) is whether there's an implicit docker login happening somewhere.
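(For concreteness, a hedged sketch of what such a push-and-deploy script plausibly looks like; the env var names are the ones discussed in this thread, the script path is assumed, and the ecs-deploy flags follow that third-party script's documented usage:)

#!/bin/bash
# Hypothetical sketch of the after_success handoff.
set -e

# Authenticate so the push below is allowed (the "implicit login" question above):
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"

# Push the image built earlier in this Travis run:
docker push "$DEPLOY_TARGET/$DOCKER_IMAGE:latest"

# Third-party ecs-deploy script: tells AWS to pull the new image and
# redeploy the service.
./bin/ecs-deploy.sh -c "$ECS_CLUSTER" -n "$ECS_SERVICE_NAME" \
  -i "$DEPLOY_TARGET/$DOCKER_IMAGE:latest"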
This final script is a third-party script that enables Travis to tell AWS to pull a copy of the $DEPLOY_TARGET/$DOCKER_IMAGE:latest from $DOCKER_REPO and deploy it to the $ECS_SERVICE_NAME in $ECS_CLUSTER. When everything is successful, the Travis build log will display something like the following at the end of the log:
|
.travis.yml configuration
The configuration-in-common for all of last year's API projects' .travis.yml is this:
sudo: required
services:
- docker
install:
- pip install --upgrade --user awscli
before_script:
- ./bin/getconfig.sh
script:
- './bin/test-proj.sh -t'
after_success:
- ./bin/docker-push.sh
(That is, except the emergency-response-backend, which somehow skipped the before_script step to get-config.sh.)
Two of the projects went much further and embedded a bunch of extra, undocumented setup work (that hopefully we can avoid in this year's projects) in the Travis setup:
- housing-backend (https://github.com/hackoregon/housing-backend/blob/master/.travis.yml) added exclusions for a couple of long-lived branches, and implemented an installation of a specific version of docker-compose (which I suspect is no longer necessary, if we rely on Travis' current containerized build environment)
- transportation-backend (https://github.com/hackoregon/transportation-backend/blob/master/.travis.yml) includes a now-commented-out section doing a similar installation of docker-compose, and a commented-out call to build-test-proj.sh (probably because there were no tests to run in the Transportation-backend project)
|
I had the getconfig embedded into the other shell scripts on emergency
response
|
Comparing what you gave for team budget's .travis.yml to what we have currently in the exemplar, I can see three differences:
To achieve the same level of Travis behavior as e.g. team budget's backend, we could implement the following changes in the exemplar repo:
Leaving us with a .travis.yml that looks like:
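Presumably it converges on the configuration-in-common quoted earlier, i.e. something like:

sudo: required
services:
- docker
install:
- pip install --upgrade --user awscli
before_script:
- ./bin/getconfig.sh
script:
- './bin/test-proj.sh -t'
after_success:
- ./bin/docker-push.sh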
|
Testing on the disaster-resilience-backend repo, Travis builds running with the config outlined in the post above seem to create the api_production docker image successfully, but one key problem is that the .env file is not there, so the
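(If we keep last year's pattern, the .env presumably needs to be fetched before the build; a sketch of what a getconfig.sh-style step could look like, assuming, and it is only an assumption, that the config lives in S3:)

#!/bin/bash
# Hypothetical sketch only: fetch the project's .env before building.
# Bucket and key names are illustrative; this thread doesn't spell out
# where the real .env comes from.
set -e
aws s3 cp "s3://hacko-project-config/$PROJECT_NAME/.env" ./.env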
|
New env vars in play this year
|
I think it may be valuable, moving forward, to take some time and document what we know about how each project was integrated with Travis.
Things to look at:
Somehow we got every project moved through the chain, so we should be able to point to some learnings.
I plan to take some time over the weekend to look into this.