diff --git a/Caddyfile b/Caddyfile
index b018cbf..e78d078 100644
--- a/Caddyfile
+++ b/Caddyfile
@@ -2,14 +2,8 @@
 	admin off
 }
 
-:5000 {
-	# We answer all requests with the contents of this file:
-	# https://internetarchive.github.io/nomad/ci.yml
-	rewrite * /nomad/ci.yml
-
-	reverse_proxy {
-		to https://internetarchive.github.io
-		# like CLI `--change-host-header`:
-		header_up Host {upstream_hostport}
-	}
+:8888 {
+	# We answer all requests with this CI/CD yaml file from this repo
+	file_server
+	rewrite * /.gitlab-ci.yml
 }
diff --git a/Dockerfile b/Dockerfile
index bbc7f76..6f19747 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,3 +1,22 @@
-FROM caddy:alpine
+FROM denoland/deno:alpine
 
-COPY Caddyfile /etc/caddy/
+# add `nomad`
+RUN mkdir -m777 /usr/local/sbin && \
+    cd /usr/local/sbin && \
+    wget -qO nomad.zip https://releases.hashicorp.com/nomad/1.7.6/nomad_1.7.6_linux_amd64.zip && \
+    unzip nomad.zip && \
+    rm nomad.zip && \
+    chmod 777 nomad && \
+    # podman for build.sh
+    apk add bash zsh jq podman caddy && \
+    # using podman not docker
+    ln -s /usr/bin/podman /usr/bin/docker
+
+WORKDIR /app
+COPY .gitlab-ci.yml Caddyfile .
+
+COPY build.sh deploy.sh /
+
+USER deno
+
+CMD ["/usr/bin/caddy", "run"]
diff --git a/README.md b/README.md
index f87c51b..1cb9f70 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,545 @@
-deploy that responds with gitlab CI/CD pipeline instructions
+Code, setup, and information to:
+- setup automatic deployment to Nomad clusters from GitLab's standard CI/CD pipelines
+- interact with, monitor, and customize deployments
+
+
+[[_TOC_]]
+
+
+# Overview
+Deployment leverages a simple `.gitlab-ci.yml` using GitLab runners & CI/CD ([build] and [test]);
+then switches to a custom [deploy] phase to deploy docker containers into `nomad`.
+
+This also contains a demo "hi world" webapp.
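The [test] example later in this README runs inside the image that the [build] phase pushed, referencing it via GitLab's predefined CI/CD variables. A rough sketch of how that image tag is composed (the registry/project/sha values here are hypothetical placeholders):

```shell
# hypothetical values for GitLab's predefined CI/CD variables
CI_REGISTRY_IMAGE=registry.example.com/internetarchive/myapp
CI_COMMIT_REF_SLUG=main
CI_COMMIT_SHA=abc123
# the [test] stage's `image:` is composed from them like so:
IMAGE="${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA}"
echo "$IMAGE"
# -> registry.example.com/internetarchive/myapp/main:abc123
```

In a real pipeline GitLab supplies those variables automatically, so each branch gets its own image tag per commit.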
+ + +Uses: +- [nomad](https://www.nomadproject.io) **deployment** (management, scheduling) +- [consul](https://www.consul.io) **networking** (service discovery, healthchecking, secrets storage) +- [caddy](https://caddyserver.com/) **routing** (load balancing, automatic https) + +![Architecture](img/overview2.drawio.svg) + + +## Want to deploy to nomad? 🚀 +- verify project's [Settings] [CI/CD] [Variables] has either Group or Project level settings for: + - `NOMAD_TOKEN` `MY-TOKEN` + - `NOMAD_ADDR` `https://MY-HOSTNAME` or `BASE_DOMAIN` `example.com` + - (archive.org admins will often have set this already for you at the group-level) +- simply make your project have this simple `.gitlab-ci.yml` in top-level dir: +```yaml +include: + - remote: 'https://gitlab.com/internetarchive/nomad/-/raw/master/.gitlab-ci.yml' +``` +- if you want a [test] phase, you can add this to the `.gitlab-ci.yml` file above: +```yaml +test: + stage: test + image: ${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA} + script: + - cd /app # or wherever in your image + - npm test # or whatever your test scripts/steps are +``` +- [optional] you can _instead_ copy [the included file](.gitlab-ci.yml) and customize/extend it. +- [optional] you can copy this [project.nomad](project.nomad) file into your repo top level and customize/extend it if desired +- _... but there's a good chance you won't need to_ 😎 + +_**Note:** For urls like https://archive.org/services/project -- watch out for routes defined in your app with trailing slashes – they may redirect to project.dev.archive.org. 
More information [here](https://git.archive.org/services/pyhi/-/blob/main/README.md#notes)._
+
+### Customizing
+There are various options that can be used in conjunction with the `project.nomad` and `.gitlab-ci.yml` files. The supported keys:
+```text
+NOMAD_VAR_CHECK_PATH
+NOMAD_VAR_CHECK_PROTOCOL
+NOMAD_VAR_CHECK_TIMEOUT
+NOMAD_VAR_CONSUL_PATH
+NOMAD_VAR_COUNT
+NOMAD_VAR_COUNT_CANARIES
+NOMAD_VAR_CPU
+NOMAD_VAR_FORCE_PULL
+NOMAD_VAR_HEALTH_TIMEOUT
+NOMAD_VAR_HOSTNAMES
+NOMAD_VAR_IS_BATCH
+NOMAD_VAR_MEMORY
+NOMAD_VAR_MULTI_CONTAINER
+NOMAD_VAR_NAMESPACE
+NOMAD_VAR_NETWORK_MODE
+NOMAD_VAR_NO_DEPLOY
+NOMAD_VAR_PERSISTENT_VOLUME
+NOMAD_VAR_PORTS
+NOMAD_VAR_SERVERLESS
+NOMAD_VAR_VOLUMES
+```
+- See the top of [project.nomad](project.nomad)
+- Our customizations are always prefixed with `NOMAD_VAR_`.
+- You can simply insert them, with values, in your project's `.gitlab-ci.yml` file before including _our_ `.gitlab-ci.yml` like above.
+- Examples 👇
+#### Don't actually deploy containers to nomad
+Perhaps your project just wants to leverage the CI (Continuous Integration) for [build] and/or [test] steps - but not CD (Continuous Deployment). An example might be a back-end container that runs elsewhere and doesn't have a web listener.
+```yaml
+variables:
+  NOMAD_VAR_NO_DEPLOY: 'true'
+```
+
+#### Custom default RAM expectations from (default) 300 MB to 1 GB
+This value is the _expected_ value for your container's average running needs/usage, helpful for `nomad` scheduling purposes. It is a "soft limit"; we use *ten times* this amount as the "hard limit". If your allocated container exceeds the hard limit, the container may be restarted by `nomad` if there is memory pressure on the Virtual Machine the container is running on.
+```yaml
+variables:
+  NOMAD_VAR_MEMORY: 1000
+```
+#### Custom default CPU expectations from (default) 100 MHz to 1 GHz
+This value is the _expected_ value for your container's average running needs/usage, helpful for `nomad` scheduling purposes. 
It is a "soft limit". If your allocated container exceeds your specified limit, the container _may_ be restarted by `nomad` if there is CPU pressure on the Virtual Machine the container is running on. (So far, CPU-based restarts seem very rare in practice, since most VMs tend to "fill up" from aggregate container RAM requirements first 😊)
+```yaml
+variables:
+  NOMAD_VAR_CPU: 1000
+```
+#### Custom healthcheck, change from (default) HTTP to TCP:
+This can be useful if your webapp serves using websockets, doesn't respond to http, or typically takes too long to respond (or can't respond) with a `200 OK` status. (Think of it like switching to just a `ping` on the main port your webapp listens on).
+```yaml
+variables:
+  NOMAD_VAR_CHECK_PROTOCOL: 'tcp'
+```
+#### Custom healthcheck, change path from (default) `/` to `/healthcheck`:
+```yaml
+variables:
+  NOMAD_VAR_CHECK_PATH: '/healthcheck'
+```
+#### Custom healthcheck run time, change from (default) `2s` (2 seconds) to `1m` (one minute)
+If your healthcheck may take a while to run & succeed, you can increase the amount of time the `consul` healthcheck allows your HTTP request to run.
+```yaml
+variables:
+  NOMAD_VAR_CHECK_TIMEOUT: '1m'
+```
+#### Custom time to start healthchecking after container re/start from (default) `20s` (20 seconds) to `3m` (3 minutes)
+If your container takes a while, after startup, to settle before healthchecking can work reliably, you can extend the wait time for the first healthcheck to run.
+```yaml
+variables:
+  NOMAD_VAR_HEALTH_TIMEOUT: '3m'
+```
+#### Custom running container count from (default) 1 to 3
+You can run more than one container for more request processing capacity and more reliable uptime (in the event of one or more Virtual Machines hosting containers having issues).
+
+For archive.org users, we suggest instead to switch your production deploy to our alternate production cluster. 
+
+Keep in mind, you will have 2+ containers running simultaneously (_usually_, but not always, on different VMs). So if your webapp uses any shared resources, like backends not in containers, or "persistent volumes", you will need to think about concurrency, potentially multiple writers, etc. 😊
+```yaml
+variables:
+  NOMAD_VAR_COUNT: 3
+```
+#### Custom make NFS `/home/` available in running containers, readonly
+Allow your containers to see NFS `/home/` home directories, readonly.
+```yaml
+variables:
+  NOMAD_VAR_VOLUMES: '["/home:/home:ro"]'
+```
+#### Custom make NFS `/home/` available in running containers, read/write
+Allow your containers to see NFS `/home/` home directories, readable and writable. Please be highly aware of operational security in your container when using this (eg: switch your `USER` in your `Dockerfile` to another non-`root` user; use "prepared statements" with any database interactions; use [Content Security Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) in all your pages to eliminate [XSS](https://developer.mozilla.org/en-US/docs/Glossary/Cross-site_scripting) attacks, etc.)
+```yaml
+variables:
+  NOMAD_VAR_VOLUMES: '["/home:/home:rw"]'
+```
+#### Custom hostname for your `main` branch deploy
+Your deploy will get a nice semantic hostname by default, based upon a "[slugged](https://en.wikipedia.org/wiki/Clean_URL#Slug)" formula like: https://[GITLAB_GROUP]-[GITLAB_PROJECT_OR_REPO_NAME]-[BRANCH_NAME]. However, you can override this if needed. This custom hostname will only pertain to a branch named `main` (or `master` [sic])
+```yaml
+variables:
+  NOMAD_VAR_HOSTNAMES: '["www.example.com"]'
+```
+#### Custom hostnameS for your `main` branch deploy
+Similar to the prior example, but you can have your main deployment respond to multiple hostnames if desired. 
+```yaml
+variables:
+  NOMAD_VAR_HOSTNAMES: '["www.example.com", "store.example.com"]'
+```
+
+#### Multiple containers in same job spec
+If you want to run multiple containers in the same job and group, set this to true. For example, you might want to run a PostgreSQL 3rd party container from bitnami, and have the main/http front-end container talk to it. Being in the same group will ensure all containers run on the same VM, which makes communication between them extremely easy. You simply need to inspect environment variables.
+
+You can see a minimal example of two containers with a "front end" talking to a "backend" here:
+https://gitlab.com/internetarchive/nomad-multiple-tasks
+
+See also a [postgres DB setup example](#postgres-db).
+```yaml
+variables:
+  NOMAD_VAR_MULTI_CONTAINER: 'true'
+```
+
+#### Force `docker pull` before container starts
+If your deployment's job spec doesn't change between pipelines for some reason, you can set this to ensure `docker pull` always happens before your container starts up. A good example where you might see this is a periodic/batch/cron process that fires up a pipeline without any repository commit. Depending on your workflow and `Dockerfile` from there, if you see "stale" versions of containers, use this customization.
+```yaml
+variables:
+  NOMAD_VAR_FORCE_PULL: 'true'
+```
+
+#### Turn off [deploy canaries](https://learn.hashicorp.com/tutorials/nomad/job-blue-green-and-canary-deployments)
+When a new deploy is happening, live traffic continues to the old deploy about to be replaced, while the new deploy fires off in the background and `nomad` begins healthchecking it. Only once it seems healthy is traffic cut over to the new container and the old container removed. (If unhealthy, the new container is removed.) That can mean *two* deploys run simultaneously. Depending on your setup and constraints, you might not want this and can disable canaries with this snippet below. 
(Keep in mind your deploy will temporarily 404 during re-deploy *without* using blue/green deploys w/ canaries).
+```yaml
+variables:
+  NOMAD_VAR_COUNT_CANARIES: 0
+```
+
+#### Change your deploy to a cron-like batch/periodic
+If your deployment is something you want to run periodically, instead of continuously, you can use this variable to switch to a nomad `type="batch"`
+```yaml
+variables:
+  NOMAD_VAR_IS_BATCH: 'true'
+```
+Combine your `NOMAD_VAR_IS_BATCH` override with a small `job.nomad` file in your repo to setup your cron behaviour.
+
+Example `job.nomad` file contents, to run the deploy every hour at 15m past the hour:
+```ini
+type = "batch"
+periodic {
+  cron = "15 * * * * *"
+  prohibit_overlap = false  # must be false because of the kv env vars task
+}
+```
+
+#### Custom deploy networking
+If your admin allows it, there might be some useful reasons to use VM host networking for your deploy. A good example is "relaying" UDP *broadcast* messages in/out of a container. Please see Tracey if interested, archive folks. :)
+```yaml
+variables:
+  NOMAD_VAR_NETWORK_MODE: 'host'
+```
+
+#### Custom namespacing
+A job can be limited to a specific 'namespace' for purposes of ACL 'gating'.
+In the example below, a cluster admin could create a custom `NOMAD_TOKEN` that only allows the
+bearer to access jobs that are part of the namespace `team-titan`.
+```yaml
+variables:
+  NOMAD_VAR_NAMESPACE: 'team-titan'
+```
+
+
+
+#### More customizations
+There are even more, less common, ways to customize your deploys.
+
+With other variables, like `NOMAD_VAR_PORTS`, you can use dynamic port allocation, setup daemons that use raw TCP, and more.
+
+Please see the top area of [project.nomad](project.nomad) for "Persistent Volumes" (think a "disk" that survives container restarts), additional open ports into your webapp, and more.
+
+See also [this section](#optional-add-ons-to-your-project) below. 
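All of the knobs above share one mechanism: `nomad` reads environment variables prefixed `NOMAD_VAR_` as values for the HCL2 `variable` blocks declared at the top of [project.nomad](project.nomad). A rough illustration of the naming convention (pure shell, no cluster needed; the values are just examples):

```shell
# each CI/CD variable like NOMAD_VAR_MEMORY becomes the job-spec variable MEMORY
export NOMAD_VAR_MEMORY=1000
export NOMAD_VAR_COUNT=3

# strip the prefix to see the variable names nomad will populate in project.nomad
env | grep '^NOMAD_VAR_' | sed 's/^NOMAD_VAR_//' | sort
```

With the `nomad` CLI on your laptop (and `NOMAD_ADDR`/`NOMAD_TOKEN` set), the same variables feed a local dry run, eg: `nomad job plan project.nomad`.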
+ +### Deploying to production nomad cluster (archive.org only) +Our production cluster has 3 VMs and will deploy your repo to a running container on each VM, using `haproxy` load balancer to balance requests. + +This should ensure much higher availability and handle more requests. + +Keep in mind if your deployment uses a "persistent volume" or talks to other backend services, they'll be getting traffic and access from multiple containers simultaneously. + +Setting up your repo to deploy to production is easy! + +- add a CI/CD Secret `NOMAD_TOKEN_PROD` with the nomad cluster value (ask tracey or robK) + - make it: protected, masked, hidden +![Production CI/CD Secret](img/prod.jpg) +- Make a new branch named `production` (presumably from your repo's latest `main` or `master` branch) + - It should now deploy your project to a different `NOMAD_ADDR` url + - Your default hostname domain will change from `.dev.archive.org` to `.prod.archive.org` +- [GitLab only] - [Protect the `production` branch](https://docs.gitlab.com/ee/user/project/protected_branches.html) + - suggest using same settings as your `main` or `master` (or default) branch +![Protect a branch](img/protect.jpg) + + +### Deploying to staging nomad cluster (archive.org only) +Our staging cluster will deploy your repo to a running container on one of its VMs. + +Setting up your repo to deploy to staging is easy! 
+ +- add a CI/CD Secret `NOMAD_TOKEN_STAGING` with the nomad cluster value (ask tracey or robK) + - make it: protected, masked, hidden (similar to `production` section above) +- Make a new branch named `staging` (presumably from your repo's latest `main` or `master` branch) + - It should now deploy your project to a different `NOMAD_ADDR` url + - Your default hostname domain will change from `.dev.archive.org` to `.staging.archive.org` +- [GitLab only] - [Protect the `staging` branch](https://docs.gitlab.com/ee/user/project/protected_branches.html) + - suggest using same settings as your `main` or `master` (or default) branch, changing `production` to `staging` here: +![Protect a branch](img/protect.jpg) + + +### Deploying to ext nomad cluster (archive.org only) +Our "ext" cluster will deploy your repo to a running container on one of its VMs. + +Setting up your repo to deploy to ext is easy! + +- add a CI/CD Secret `NOMAD_TOKEN_EXT` with the nomad cluster value (ask tracey or robK) + - make it: protected, masked, hidden (similar to `production` section above) +- Make a new branch named `ext` (presumably from your repo's latest `main` or `master` branch) + - It should now deploy your project to a different `NOMAD_ADDR` url + - Your default hostname domain will change from `.dev.archive.org` to `.ext.archive.org` +- [GitLab only] - [Protect the `ext` branch](https://docs.gitlab.com/ee/user/project/protected_branches.html) + - suggest using same settings as your `main` or `master` (or default) branch, changing `production` to `ext` here: +![Protect a branch](img/protect.jpg) + + +## Laptop access +- create `$HOME/.config/nomad` and/or get it from an admin who setup your Nomad cluster + - @see top of [aliases](aliases) + - `brew install nomad` + - `source $HOME/.config/nomad` + - better yet: + - `git clone https://gitlab.com/internetarchive/nomad` + - adjust next line depending on where you checked out the above repo + - add this to your `$HOME/.bash_profile` or 
`$HOME/.zshrc` etc. + - `FI=$HOME/nomad/aliases && [ -e $FI ] && source $FI` + - then `nomad status` should work nicely + - @see [aliases](aliases) for lots of handy aliases.. +- you can then also use your browser to visit [$NOMAD_ADDR/ui/jobs](https://MY-HOSTNAME:4646/ui/jobs) + - and enter your `$NOMAD_TOKEN` in the ACL requirement + + +# Setup a Nomad Cluster +- we use HinD: https://github.com/internetarchive/hind + - you can customize the install with various environment variables + +Other alternatives: +- have DNS domain you can point to a VM? + - nomad/consul with $5/mo VM (or on-prem) + - [[1/2] Setup GitLab, Nomad, Consul & Fabio](https://tracey.archive.org/devops/2021-03-31) + - [[2/2] Add GitLab Runner & Setup full CI/CD pipelines](https://tracey.archive.org/devops/2021-04-07) +- have DNS domain and want on-prem GitLab? + - nomad/consul/gitlab/runners with $20/mo VM (or on-prem) + - [[1/2] Setup GitLab, Nomad, Consul & Fabio](https://tracey.archive.org/devops/2021-03-31) + - [[2/2] Add GitLab Runner & Setup full CI/CD pipelines](https://tracey.archive.org/devops/2021-04-07) +- no DNS - run on mac/linux laptop? + - [[1/3] setup GitLab & GitLab Runner on your Mac](https://tracey.archive.org/devops/2021-02-17) + - [[2/3] setup Nomad & Consul on your Mac](https://tracey.archive.org/devops/2021-02-24) + - [[3/3] connect: GitLab, GitLab Runner, Nomad & Consul](https://tracey.archive.org/devops/2021-03-10) + + +# Monitoring GUI urls (via ssh tunnelling above) +![Cluster Overview](https://tracey.archive.org/images/nomad-ui4.jpg) +- nomad really nice overview (see `Topology` link ☝) + - https://[NOMAD-HOST]:4646 (eg: `$NOMAD_ADDR`) + - then enter your `$NOMAD_TOKEN` +- @see [aliases](aliases) `nom-tunnel` + - http://localhost:8500 # consul + + +# Inspect, poke around +```bash +nomad node status +nomad node status -allocs +nomad server members + + +nomad job run example.nomad +nomad job status +nomad job status example + +nomad job deployments -t '{{(index . 
0).ID}}' www-nomad +nomad job history -json www-nomad + +nomad alloc logs -stderr -f $(nomad job status www-nomad |egrep -m1 '\srun\s' |cut -f1 -d' ') + + +# get CPU / RAM stats and allocations +nomad node status -self + +nomad node status # OR pick a node's 1st column, then +nomad node status 01effcb8 + +# get list of all services, urls, and more, per nomad +wget -qO- --header "X-Nomad-Token: $NOMAD_TOKEN" $NOMAD_ADDR/v1/jobs |jq . +wget -qO- --header "X-Nomad-Token: $NOMAD_TOKEN" $NOMAD_ADDR/v1/job/JOB-NAME |jq . + + +# get list of all services and urls, per consul +consul catalog services -tags +wget -qO- 'http://127.0.0.1:8500/v1/catalog/services' |jq . +``` + +# Optional add-ons to your project + +## Secrets +In your project/repo Settings, set CI/CD environment variables starting with `NOMAD_SECRET_`, marked `Masked` but _not_ `Protected`, eg: +![Secrets](img/secrets.jpg) +and they will show up in your running container as environment variables, named with the lead `NOMAD_SECRET_` removed. Thus, you can get `DATABASE_URL` (etc.) set in your running container - but not have it anywhere else in your docker image and not printed/shown during CI/CD pipeline phase logging. + + +## Persistent Volumes +Persistent Volumes (PV) are like mounted disks that get setup before your container starts and _mount_ in as a filesystem into your running container. They are the only things that survive a running deployment update (eg: a new CI/CD pipeline), container restart, or system move to another cluster VM - hence _Persistent_. + +You can use PV to store files and data - especially nice for databases or otherwise (eg: retain `/var/lib/postgresql` through restarts, etc.) + +Here's how you'd update your project's `.gitlab-ci.yml` file, +by adding these lines (suggest near top of your file): +```yaml +variables: + NOMAD_VAR_PERSISTENT_VOLUME: '/pv' +``` +Then the dir `/pv/` will show up (blank to start with) in your running container. 
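A toy illustration of that persistence property, using a local scratch dir as a stand-in for the mounted volume (hypothetical paths; inside a real deploy you would simply write under `/pv`):

```shell
# stand-in for the persistent volume mount (in a real container this is /pv)
PV_DIR=$(mktemp -d)

# "pipeline 1" deploys and writes some state
echo 'state from deploy 1' >> "$PV_DIR/deploy-log.txt"

# "pipeline 2" re-deploys; the container is brand new but the volume is not,
# so the earlier state is still there and the new line appends after it
echo 'state from deploy 2' >> "$PV_DIR/deploy-log.txt"

cat "$PV_DIR/deploy-log.txt"
# -> state from deploy 1
#    state from deploy 2
```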
+
+If you'd like to have the mounted dir show up somewhere besides `/pv` in your container,
+you can set it up like:
+```yaml
+variables:
+  NOMAD_VAR_PERSISTENT_VOLUME: '/var/lib/postgresql'
+```
+
+Please verify added/updated files persist through two repo CI/CD pipelines before adding important data and files. Your DevOps team will try to ensure the VM that holds the data is backed up - but that does not happen by default without some extra setup. Your DevOps team must ensure each VM in the cluster has (the same) shared `/pv/` directory. We presently use NFS for this (after some data corruption issues with glusterFS and rook/ceph).
+
+
+## Postgres DB
+We have a [postgresql example](https://git.archive.org/www/dwebcamp2019), visible to archive.org folks. But the gist, aside from a CI/CD Variable/Secret `POSTGRESQL_PASSWORD`, is below.
+
+_Keep in mind: if you set up something like a database in a container using a Persistent Volume (like below), you can get multiple containers each trying to write to your database backing store filesystem: one for production; one temporarily for the production re-deploy "canary"; and a similar 1 or 2 for every deployed branch (which is probably not what you want). 
So you might want to look into `NOMAD_VAR_COUNT` and `NOMAD_VAR_COUNT_CANARIES` in that case._ + +It's recommended to run the DB container during the prestart hook as a "sidecar" service (this will cause it to finish starting before any other group tasks initialize, avoiding service start failures due to unavailable DB, see [nomad task dependencies](https://www.hashicorp.com/blog/hashicorp-nomad-task-dependencies) for more info) + +`.gitlab-ci.yml`: +```yaml +variables: + NOMAD_VAR_MULTI_CONTAINER: 'true' + NOMAD_VAR_PORTS: '{ 5000 = "http", 5432 = "db" }' + NOMAD_VAR_PERSISTENT_VOLUME: '/bitnami/postgresql' + NOMAD_VAR_CHECK_PROTOCOL: 'tcp' + # avoid 2+ containers running where both try to write to database + NOMAD_VAR_COUNT: 1 + NOMAD_VAR_COUNT_CANARIES: 0 + +include: + - remote: 'https://gitlab.com/internetarchive/nomad/-/raw/master/.gitlab-ci.yml' +``` +`vars.nomad`: +```ini +# used in @see group.nomad +variable "POSTGRESQL_PASSWORD" { + type = string + default = "" +} +``` +`group.nomad`: +```ini +task "db" { + driver = "docker" + lifecycle { + sidecar = true + hook = "prestart" + } + config { + image = "docker.io/bitnami/postgresql:11.7.0-debian-10-r9" + ports = ["db"] + volumes = ["/pv/${var.CI_PROJECT_PATH_SLUG}:/bitnami/postgresql"] + } + template { + data = <| .env && python ... +``` + +--- + +## Two `group`s, within same `job`, wanting to talk to each other +Normally, we strongly suggest all `task`s be together in the same `group`. +That will ensure all task containers are run on the same VM, and all tasks will get automatically managed and setup `env` vars, eg: +```ini +NOMAD_ADDR_backend=211.204.226.244:27344 +NOMAD_ADDR_http=211.204.226.244:23945 +``` + +However, if for some reason you want to split your tasks into 2+ `group { .. 
}` stanzas, +here is how you can get the containers to talk to each other (using `consul` and templating): +- https://github.com/hashicorp/nomad/issues/5455#issuecomment-482490116 +You'd end up putting your 2nd `group` in a file named `job.nomad` in the top of your repo. + +--- + +# GitHub repo integrations +## GitHub Actions +- We use GitHub Actions to create [build], [test], and [deploy] CI/CD pipelines. +- There is a lot of great information and links to example repos here: https://github.com/internetarchive/cicd#readme + +## GitHub Customizing +- You can use the same `NOMAD_VAR_` options above to tailor your deploy in the [#Customizing](#Customizing) section above. [Documentation and examples here](https://github.com/internetarchive/cicd#readme). + +## GitHub Secrets +- You can add GitHub secrets to your repo from the GitHub GUI ([documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository)). You then need to get those secrets to pass through to the [deploy] phase, using the `NOMAD_SECRETS` setting in the GitHub Actions workflow yaml file. +- Note that you may want to test with repository or organizational level secrets before proceeding to setup environment secrets ( [documentation around creating secrets for an environment](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) ) +- Here is an example GH repo that passes 2 GH secrets into the [deploy] phase. 
Each secret will wind up as environment variable that your servers can read, or your `RUN`/`CMD` entrypoint can read: + - https://github.com/traceypooh/staticman/blob/main/.github/workflows/cicd.yml + - [entrypoint setup](https://github.com/traceypooh/staticman/blob/main/Dockerfile) + - [entrypoint script](https://github.com/traceypooh/staticman/blob/main/entrypoint.sh) + +--- + +# Helpful links +- https://youtube.com/watch?v=3K1bSGN7zGA 'HashiConf Digital June 2020 - Full Opening Keynote' +- https://www.nomadproject.io/docs/install/production/deployment-guide/ +- https://learn.hashicorp.com/nomad/managing-jobs/configuring-tasks +- https://www.burgundywall.com/post/continuous-deployment-gitlab-and-nomad +- https://weekly-geekly.github.io/articles/453322/index.html +- https://www.haproxy.com/blog/haproxy-and-consul-with-dns-for-service-discovery/ +- https://www.youtube.com/watch?v=gf43TcWjBrE Kelsey Hightower, HashiConf 2016 +- https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-caddy-vs-nginx/#results +- https://github.com/hashicorp/consul-template/issues/200#issuecomment-76596830 + +## Pick your container stack / testimonials +- https://www.hashicorp.com/blog/hashicorp-joins-the-cncf/ +- https://www.nomadproject.io/intro/who-uses-nomad/ + - + http://jet.com/walmart +- https://medium.com/velotio-perspectives/how-much-do-you-really-know-about-simplified-cloud-deployments-b74d33637e07 +- https://blog.cloudflare.com/how-we-use-hashicorp-nomad/ +- https://www.hashicorp.com/resources/ncbi-legacy-migration-hybrid-cloud-consul-nomad/ +- https://thenewstack.io/fargate-grows-faster-than-kubernetes-among-aws-customers/ +- https://github.com/rishidot/Decision-Makers-Guide/blob/master/Decision%20Makers%20Guide%20-%20Nomad%20Vs%20Kubernetes%20-%20Oct%202019.pdf +- https://medium.com/@trevor00/building-container-platforms-part-one-introduction-4ee2338eb11 + + + +# Multi-node architecture +![Architecture](img/architecture.drawio.svg) + + +# Requirements for archive.org 
CI/CD +- docker exec ✅ + - pop into deployed container and poke around - similar to `ssh` + - @see [aliases](aliases) `nom-ssh` +- docker cp ✅ + - hot-copy edited file into _running_ deploy (avoid full pipeline to see changes) + - @see [aliases](aliases) `nom-cp` + - hook in VSCode + [sync-rsync](https://marketplace.visualstudio.com/items?itemName=vscode-ext.sync-rsync) + package to 'copy (into container) on save' +- secrets ✅ +- load balancers ✅ +- 2+ instances HPA ✅ +- PV ✅ +- http/2 ✅ +- auto http => https ✅ +- web sockets ✅ +- auto-embed HSTS in https headers, similar to kubernetes ✅ + - eg: `Strict-Transport-Security: max-age=15724800; includeSubdomains` + + +# Constraints +In the past, we've made it so certain jobs are "constrained" to run on specifc 1+ cluster VM. + +Here's how you can do it: +You can manually add this to 1+ VM `/etc/nomad/nomad.hcl` file: +```ini +client { + meta { + "kind" = "tcp-vm" + } +} +``` + +You can add this as a new file named `job.nomad` in the top of a project/repo: +```ini +constraint { + attribute = "${meta.kind}" + operator = "set_contains" + value = "tcp-vm" +} +``` + +Then deploys for this repo will *only* deploy to your specific VMs. diff --git a/aliases b/aliases new file mode 100644 index 0000000..a81920a --- /dev/null +++ b/aliases @@ -0,0 +1,312 @@ +#!/bin/bash + + +# look for NOMAD_ADDR and NOMAD_TOKEN +[ -e $HOME/.config/nomad ] && source $HOME/.config/nomad + + +# If not running interactively, don't setup autocomplete +if [ ! 
-z "$PS1" ]; then + # nomad/consul autocompletes + if [ "$ZSH_VERSION" = "" ]; then + which nomad >/dev/null && complete -C $(which nomad) nomad + which consul >/dev/null && complete -C $(which consul) consul + else + # https://apple.stackexchange.com/questions/296477/ + ( which compdef 2>&1 |fgrep -q ' not found' ) && autoload -Uz compinit && compinit + + which nomad >/dev/null && autoload -U +X bashcompinit && bashcompinit + which nomad >/dev/null && complete -o nospace -C $(which nomad) nomad + which consul >/dev/null && complete -o nospace -C $(which consul) consul + fi +fi + + +function nom-app() { + # finds the webapp related to given job/CWD and opens it in browser + [ $# -eq 1 ] && JOB=$1 + [ $# -ne 1 ] && JOB=$(nom-job-from-cwd) + + _nom-url + + URL=$(echo "$URL" |head -1) + + [ "$URL" = "" ] && echo "URL not found - is service running? try:\n nomad status $JOB" && return + open "$URL" +} + + +function nom-ssh() { + # simple way to pop in (ssh-like) to a given job + # Usage: [job name, eg: x-thumb] -OR- no args will use CWD to determine job + [ $# -ge 1 ] && JOB=$1 + [ $# -lt 1 ] && JOB=$(nom-job-from-cwd) + [ $# -ge 2 ] && TASK=$2 # for rarer TaskGroup case where 2+ Tasks spec-ed in same Job + [ $# -lt 2 ] && TASK=http + + ALLOC=$(nomad job status $JOB |egrep -m1 '\srun\s' |cut -f1 -d' ') + echo "nomad alloc exec -i -t -task $TASK $ALLOC" + + if [ $# -ge 3 ]; then + shift + shift + nomad alloc exec -i -t -task $TASK $ALLOC "$@" + else + nomad alloc exec -i -t -task $TASK $ALLOC \ + sh -c '([ -e /bin/zsh ] && zsh) || ([ -e /bin/bash ] && bash) || ([ -e /bin/sh ] && sh)' + fi +} + + +function nom-sshn() { + # simple way to pop in (ssh-like) to a given job with 2+ allocations/containers + local N=${1:?"Usage: [container/allocation number, starting with 1]"} + + local ALLOC=$(nomad job status $JOB |egrep '\srun\s' |head -n $N |tail -1 |cut -f1 -d' ') + echo "nomad alloc exec -i -t $ALLOC" + + nomad alloc exec -i -t $ALLOC \ + sh -c '([ -e /bin/zsh ] && zsh) 
|| ([ -e /bin/bash ] && bash) || ([ -e /bin/sh ] && sh)'
+}
+
+
+function nom-cp() {
+  # copies a laptop local file into running deploy (avoids full pipeline just to see changes)
+
+  # first, see if this is vscode sync-rsync
+  local VSCODE=
+  [ "$#" -ge 4 ] && ( echo "$@" |fgrep -q .vscode ) && VSCODE=1
+
+  if [ $VSCODE ]; then
+    # fish out file name from what VSCode 'sync-rsync' package sends us -- should be 2nd to last arg
+    local FILE=$(echo "$@" |rev |tr -s ' ' |cut -f2 -d' ' |rev)
+    # switch dirs to make aliases work
+    local DIR=$(dirname "$FILE")
+    cd "$DIR"
+    local BRANCH=$(git rev-parse --abbrev-ref HEAD)
+    local JOB=$(nom-job-from-cwd)
+    local ALLOC=$(nom-job-to-alloc)
+    local TASK=http
+    cd -
+
+  else
+    local FILE=${1:?"Usage: [src file, locally qualified while 'cd'-ed inside a repo]"}
+    local BRANCH=$(git rev-parse --abbrev-ref HEAD)
+    local JOB=$(nom-job-from-cwd)
+    local ALLOC=$(nom-job-to-alloc)
+    [ $# -ge 2 ] && TASK=$2  # for rarer TaskGroup case where 2+ Tasks spec-ed in same Job
+    [ $# -lt 2 ] && TASK=http
+  fi
+
+  # now split the FILE name into two pieces -- 'the root of the git tree' and 'the rest'
+  local DIR=$(dirname "$FILE")
+  local TOP=$(git -C "$DIR" rev-parse --show-toplevel)
+  local REST=$(echo "$FILE" | perl -pe "s|^$TOP||; s|^/+||;")
+
+
+  for var in FILE DIR TOP REST BRANCH JOB ALLOC; do
+    echo $var="${(P)var}"   # NOTE: ${(P)var} is zsh indirect expansion
+  done
+  echo
+
+  if [ $VSCODE ]; then
+    local MAIN=
+    [ "$BRANCH" = "main" ] && MAIN=true
+    [ "$BRANCH" = "master" ] && MAIN=true
+
+    local RSYNC=
+    [ $MAIN ] && RSYNC=true
+    [ ! $MAIN ] && [ "$RSYNC_BRANCHES" ] && RSYNC=true
+
+    [ $RSYNC ] && ( set -x; rsync "$@" )
+
+    # this is a special exception project where we DONT want to ALSO copy file to nomad deploy
+    [ $MAIN ] && [ "$JOB" = "ia-petabox" ] && [ ! $NOM_CP_PETABOX_MAIN ] && exit 0
+  fi
+
+
+  if [ "$JOB" = "" -o "$ALLOC" = "" ]; then
+    # no relevant job & alloc found - nothing to do
+    echo 'has this branch run a full pipeline and deployed a Review App yet?' 
+    return
+  fi
+
+  # HinD updated nomad clusters w/ latest nomad seem to *not* get the stdin close properly
+  # (and thus hang).  So timeout/kill after 2s :( tracey 2024/3 )
+  set +e
+  cat "$FILE" | ( set -x; set +e; nomad alloc exec -i -task $TASK "$ALLOC" sh -c "timeout 2 cat >| '$REST'" )
+  echo SUCCESS
+}
+
+
+function nom-logs() {
+  # simple way to view logs for a given job
+  [ $# -eq 1 ] && JOB=$1
+  [ $# -ne 1 ] && JOB=$(nom-job-from-cwd)
+  # NOTE: the 2nd $JOB is useful for when a job has 2+ tasks (eg: `kv` or DB/redis, etc.)
+  nomad alloc logs -f -job $JOB http
+}
+
+
+function nom-logs-err() {
+  # simple way to view stderr logs for a given job
+  [ $# -eq 1 ] && JOB=$1
+  [ $# -ne 1 ] && JOB=$(nom-job-from-cwd)
+  nomad alloc logs -stderr -f -job $JOB http
+}
+
+
+function nom-status() {
+  # prints detailed status for a repo's service and deployment
+  [ $# -eq 1 ] && JOB=$1
+  [ $# -ne 1 ] && JOB=$(nom-job-from-cwd)
+
+  line
+  echo "nomad status $JOB"
+  line
+  nomad status $JOB | grep --color=always -iE 'unhealthy|healthy|$'
+  line
+  echo 'nomad alloc status -stats $(nom-job-to-alloc '$JOB')'
+  line
+  nomad alloc status -stats $(nom-job-to-alloc $JOB) | grep --color=always -iE 'unhealthy|healthy|Job Version.*|Node Name.*|$'
+  line
+}
+
+
+function nom-urls() {
+  # Lists all current urls for the services deployed to current nomad cluster (eg: webapps)
+  # Ideally, this is a faster single-shot call.  But to avoid requiring either `consul` addr
+  # and ACL token _in addition_ to `nomad` - we'll just use `nomad` directly instead.
+  #   consul catalog services -tags
+  for JOB in $(curl -sH "X-Nomad-Token: ${NOMAD_TOKEN?}" ${NOMAD_ADDR?}/v1/jobs \
+    | jq -r '.[] | select(.Type=="service") | "\(.Name)"')
+  do
+    _nom-url
+    echo $URL
+  done |sort
+}
+
+
+function _nom-url() {
+  # logically private helper function
+  URL=$(curl -sH "X-Nomad-Token: ${NOMAD_TOKEN?}" ${NOMAD_ADDR?}/v1/job/$JOB \
+    | jq -r '.TaskGroups[0].Services[0].Tags' \
+    | fgrep . \
|fgrep -v redirect=308 |tr -d '", ' |perl -pe 's/:443//; s=^urlprefix\-=https://=;'
+  )
+}
+
+
+function nom-resubmit() {
+  # Retrieves current job spec from nomad cluster and resubmits it to nomad.
+  # Useful for when a job has exceeded a setup timeout, is (nonideally) marked 'dead', etc.
+  [ $# -eq 1 ] && JOB=$1
+  [ $# -ne 1 ] && JOB=$(nom-job-from-cwd)
+
+  nomad inspect ${JOB?} |tee .$JOB
+
+  # in case we are trying to _move_ an active/OK deploy
+  nomad stop ${JOB?}
+  sleep 5
+
+  curl -XPOST -H "Content-Type: application/json" -H "X-Nomad-Token: $NOMAD_TOKEN" -d @.${JOB?} \
+    $NOMAD_ADDR/v1/jobs
+
+  rm -f .$JOB
+}
+
+
+function d() {
+  # show docker running containers and local images
+  [ "$#" = "0" ] && clear -x
+
+  local SUDO=
+  [ $(uname) = "Linux" ] && local SUDO=sudo
+  # default to docker; fall back to podman when docker is not installed
+  local docker=docker
+  [ ! -e /usr/bin/docker ] && docker=podman
+
+  $SUDO $docker ps -a --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}\t{{.State}}" | $SUDO cat >| $HOME/.dockps
+  chmod 666 $HOME/.dockps
+  for i in STATE running restarting created paused removing exited dead; do
+    cat $HOME/.dockps |egrep "$i$" |perl -pe 's/'$i'$//'
+  done
+  rm -f $HOME/.dockps
+
+  line
+  $SUDO $docker images
+}
+
+function nom() {
+  # quick way to get an overview of a nomad server when ssh-ed into it
+  d
+  line
+  nomad server members
+  line
+  nomad status
+  line
+}
+
+
+function nom-job-from-cwd() {
+  # print the nomad job name based on the current project
+  # parse out repo info, eg: 'ia-petabox' -- ensure clone-over-ssh or clone-over-https work
+  local GURL TMP GROUP_PROJECT PROJECT BRANCH SLUG JOB
+  GURL=$(git config --get remote.origin.url)
+  [[ "$GURL" =~ https:// ]] && TMP=$(echo "$GURL" |cut -f4- -d/)
+  [[ "$GURL" =~ https:// ]] || TMP=$(echo "$GURL" |rev |cut -f1 -d: |rev)
+  GROUP_PROJECT=$(echo "$TMP" |perl -pe 's/\.git//' |tr A-Z a-z |tr / -)
+
+  PROJECT=$(git rev-parse --absolute-git-dir |egrep --color -o '.*?.git' |rev |cut -f2 -d/ |rev)
+  BRANCH=$(git rev-parse --abbrev-ref HEAD)
+  SLUG=$(echo \
"$BRANCH" |tr '/_.' '-' |tr A-Z a-z)
+  JOB=$GROUP_PROJECT
+  [ "$SLUG" = "main" -o "$SLUG" = "master" -o "$SLUG" = "staging" -o "$SLUG" = "production" ] || JOB="${JOB}-${SLUG}"
+  echo $(echo "$JOB" |cut -b1-63)
+}
+
+
+
+function nom-image-from-cwd() {
+  # print the registry image based on the current project
+  # parse out repo info, eg: 'ia-petabox' -- ensure clone-over-ssh or clone-over-https work
+  local GURL GROUP_PROJECT BRANCH SLUG JOB
+  GURL=$(git config --get remote.origin.url)
+  [[ "$GURL" =~ https:// ]] && GROUP_PROJECT=$(echo "$GURL" |cut -f4- -d/)
+  [[ "$GURL" =~ https:// ]] || GROUP_PROJECT=$(echo "$GURL" |rev |cut -f1 -d: |rev)
+
+  BRANCH=$(git rev-parse --abbrev-ref HEAD)
+  SLUG=$(echo "$BRANCH" |tr '/_.' '-' |tr A-Z a-z)
+  echo $(echo "registry.archive.org/$GROUP_PROJECT/$SLUG" |cut -b1-63)
+}
+
+
+
+
+function nom-job-to-alloc() {
+  # prints alloc of a given job (when in high-availability and 2+ allocations, picks the first one listed)
+  # Usage: [job name, eg: x-thumb] -OR- no args will use CWD to determine job
+  [ $# -eq 1 ] && JOB=$1
+  [ $# -ne 1 ] && JOB=$(nom-job-from-cwd)
+  nomad job status $JOB |egrep -m1 '\srun\s' |cut -f1 -d' '
+}
+
+
+function line () {
+  # horizontal line break
+  perl -e 'print "_"x100; print "\n\n";'
+}
+
+
+function nom-tunnel() {
+  # Sets up an ssh tunnel in the background to be able to talk to nomad cluster's consul.
+ [ "$NOMAD_ADDR" = "" ] && echo "Please set NOMAD_ADDR environment variable first" && return + local HOST=$(echo "$NOMAD_ADDR" | sed 's/:4646\/*$//' |sed 's/^https*:\/\///') + ssh -fNA -L 8500:localhost:8500 $HOST +} + + +function web-logs-tail() { + # admin script that can more easily "tail -f" the caddy (JSON) web logs + ( + set -x + tail -f /var/log/caddy/access.log | jq -r '"\(.request.host)\(.request.uri)\t\t\(.request.headers."User-Agent")"' + ) +} diff --git a/build.sh b/build.sh new file mode 100755 index 0000000..5cc1bcc --- /dev/null +++ b/build.sh @@ -0,0 +1,94 @@ +#!/bin/bash -e + +# Build stage script for Auto-DevOps + +# FROM: registry.gitlab.com/internetarchive/auto-build-image/main +# which was +# FROM registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v1.14.0 +# +# then pulled the unused heroku/buildpack stuff/clutter + +# Wondering how to do podman-in-podman? Of course we are. Here's a minimal example: +# +# SOCK=$(sudo podman info |grep -F podman.sock |rev |cut -f1 -d ' ' |rev) +# podman run --rm --privileged --net=host --cgroupns=host -v $SOCK:$SOCK registry.gitlab.com/internetarchive/nomad/master zsh -c 'podman --remote ps -a' + +set -o pipefail + +filter_docker_warning() { + grep -E -v "^WARNING! Your password will be stored unencrypted in |^Configure a credential helper to remove this warning. 
See|^https://docs.docker.com/engine/reference/commandline/login/#credentials-store" || true +} + +docker_login_filtered() { + # $1 - username, $2 - password, $3 - registry + # this filters the stderr of the `podman --remote login`, without merging stdout and stderr together + { echo "$2" | podman --remote login -u "$1" --password-stdin "$3" 2>&1 1>&3 | filter_docker_warning 1>&2; } 3>&1 +} + +gl_write_auto_build_variables_file() { + echo "CI_APPLICATION_TAG=$CI_APPLICATION_TAG@$(podman --remote image inspect --format='{{ index (split (index .RepoDigests 0) "@") 1 }}' "$image_tagged")" > gl-auto-build-variables.env +} + + +if [[ -z "$CI_COMMIT_TAG" ]]; then + export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG} + export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA} +else + export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE} + export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG} +fi + +DOCKER_BUILDKIT=1 +image_tagged="$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" +image_latest="$CI_APPLICATION_REPOSITORY:latest" + +if [[ -n "$CI_REGISTRY" && -n "$CI_REGISTRY_USER" ]]; then + echo "Logging in to GitLab Container Registry with CI credentials..." 
+ docker_login_filtered "$CI_REGISTRY_USER" "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY" +fi + + +# xxx seccomp for IA git repos +# o/w opening seccomp profile failed: open /etc/containers/seccomp.json: no such file or directory +build_args=( + --cache-from "$CI_APPLICATION_REPOSITORY" + $AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS + --security-opt seccomp=unconfined + --tag "$image_tagged" +) + +if [ "$NOMAD_VAR_SERVERLESS" = "" ]; then + build_args+=(--tag "$image_latest") +fi + +if [[ -n "${DOCKERFILE_PATH}" ]]; then + build_args+=(-f "$DOCKERFILE_PATH") +fi + +if [[ -n "$AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES" ]]; then + build_secret_file_path=/tmp/auto-devops-build-secrets + "$(dirname "$0")"/export-build-secrets > "$build_secret_file_path" # xxx /build/export-build-secrets + build_args+=( + --secret "id=auto-devops-build-secrets,src=$build_secret_file_path" + ) +fi + + +( + set -x + podman --remote buildx build "${build_args[@]}" --progress=plain . 2>&1 +) + +( + set -x + podman --remote push "$image_tagged" +) +if [ "$NOMAD_VAR_SERVERLESS" = "" ]; then + ( + set -x + podman --remote push "$image_latest" + ) +fi + + +gl_write_auto_build_variables_file diff --git a/build.yml b/build.yml new file mode 100644 index 0000000..66e2624 --- /dev/null +++ b/build.yml @@ -0,0 +1,23 @@ +# Tracey 3/2024: +# This was adapted & simplified from: +# https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Build.gitlab-ci.yml + +build: + stage: build + # If need to rebuild this image while runners are down, `cd` to this directory, then, as root: + # podman login registry.gitlab.com + # podman build --net=host --tag registry.gitlab.com/internetarchive/nomad/master . 
&& sudo podman push registry.gitlab.com/internetarchive/nomad/master + image: registry.gitlab.com/internetarchive/nomad/master + variables: + DOCKER_HOST: 'unix:///run/podman/podman.sock' + DOCKER_TLS_CERTDIR: '' + DOCKER_BUILDKIT: 1 + script: + - /build.sh + artifacts: + reports: + dotenv: gl-auto-build-variables.env + rules: + - if: '$BUILD_DISABLED' + when: never + - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH' diff --git a/deploy.sh b/deploy.sh new file mode 100755 index 0000000..c430a3d --- /dev/null +++ b/deploy.sh @@ -0,0 +1,346 @@ +#!/bin/bash -e + +function verbose() { + if [ "$NOMAD_VAR_VERBOSE" ]; then + echo "$@"; + fi +} + + +function main() { + if [ "$NOMAD_TOKEN" = test ]; then + # during testing, set any var that isn't set, to an empty string, when the var gets used later + NOMAD_VAR_NO_DEPLOY=${NOMAD_VAR_NO_DEPLOY:-""} + GITHUB_ACTIONS=${GITHUB_ACTIONS:-""} + NOMAD_VAR_HOSTNAMES=${NOMAD_VAR_HOSTNAMES:-""} + CI_REGISTRY_READ_TOKEN=${CI_REGISTRY_READ_TOKEN:-""} + NOMAD_VAR_COUNT=${NOMAD_VAR_COUNT:-""} + NOMAD_SECRETS=${NOMAD_SECRETS:-""} + NOMAD_ADDR=${NOMAD_ADDR:-""} + NOMAD_TOKEN_PROD=${NOMAD_TOKEN_PROD:-""} + NOMAD_TOKEN_STAGING=${NOMAD_TOKEN_STAGING:-""} + NOMAD_TOKEN_EXT=${NOMAD_TOKEN_EXT:-""} + PRIVATE_REPO=${PRIVATE_REPO:-""} + fi + + + # IF someone set this programmatically in their project yml `before_script:` tag, etc., exit + if [ "$NOMAD_VAR_NO_DEPLOY" ]; then exit 0; fi + + if [ "$GITHUB_ACTIONS" ]; then github-setup; fi + + ############################### NOMAD VARS SETUP ############################## + + # auto-convert from pre-2022 var name + if [ "$BASE_DOMAIN" = "" ]; then + BASE_DOMAIN="$KUBE_INGRESS_BASE_DOMAIN" + fi + + MAIN_OR_PROD_OR_STAGING_OR_EXT= + MAIN_OR_PROD_OR_STAGING_OR_EXT_SLUG= + PRODUCTION= + STAGING= + EXT= + if [ "$CI_COMMIT_REF_SLUG" = "main" -o "$CI_COMMIT_REF_SLUG" = "master" ]; then + MAIN_OR_PROD_OR_STAGING_OR_EXT=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT_SLUG=1 + elif [ "$CI_COMMIT_REF_SLUG" = "production" ]; then + 
PRODUCTION=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT_SLUG=1 + elif [ "$BASE_DOMAIN" = "prod.archive.org" ]; then + # NOTE: this is _very_ unusual -- but it's where a repo can elect to have + # another branch name (not `production`) deploy to production cluster via (typically) various + # gitlab CI/CD variables pegged to that branch name. + PRODUCTION=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT=1 + elif [ "$CI_COMMIT_REF_SLUG" = "staging" ]; then + STAGING=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT_SLUG=1 + elif [ "$CI_COMMIT_REF_SLUG" = "ext" ]; then + EXT=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT=1 + MAIN_OR_PROD_OR_STAGING_OR_EXT_SLUG=1 + fi + + + # some archive.org specific production/staging/ext deployment detection & var updates first + if [[ "$BASE_DOMAIN" == *.archive.org ]]; then + if [ $PRODUCTION ]; then + export BASE_DOMAIN=prod.archive.org + if [[ "$CI_PROJECT_PATH_SLUG" == internetarchive-emularity-* ]]; then + export BASE_DOMAIN=ux-b.archive.org + fi + elif [ $STAGING ]; then + export BASE_DOMAIN=staging.archive.org + elif [ $EXT ]; then + export BASE_DOMAIN=ext.archive.org + fi + + if [ $PRODUCTION ]; then + if [ "$NOMAD_TOKEN_PROD" != "" ]; then + export NOMAD_TOKEN="$NOMAD_TOKEN_PROD" + echo using nomad production token + fi + if [ "$NOMAD_VAR_COUNT" = "" ]; then + export NOMAD_VAR_COUNT=3 + fi + elif [ $STAGING ]; then + if [ "$NOMAD_TOKEN_STAGING" != "" ]; then + export NOMAD_TOKEN="$NOMAD_TOKEN_STAGING" + echo using nomad staging token + fi + elif [ $EXT ]; then + if [ "$NOMAD_TOKEN_EXT" != "" ]; then + export NOMAD_TOKEN="$NOMAD_TOKEN_EXT" + echo using nomad ext token + fi + fi + fi + + export BASE_DOMAIN + + + # Make a nice "slug" that is like [GROUP]-[PROJECT]-[BRANCH], each component also "slugged", + # where "-main", "-master", "-production", "-staging", "-ext" are omitted. + # Respect DNS 63 max chars limit. + export BRANCH_PART="" + if [ ! 
$MAIN_OR_PROD_OR_STAGING_OR_EXT_SLUG ]; then + export BRANCH_PART="-${CI_COMMIT_REF_SLUG}" + fi + export NOMAD_VAR_SLUG=$(echo "${CI_PROJECT_PATH_SLUG}${BRANCH_PART}" |cut -b1-63) + # make nice (semantic) hostname, based on the slug, eg: + # services-timemachine.x.archive.org + # ia-petabox-webdev-3939-fix-things.x.archive.org + # however, if repo has list of 1+ custom hostnames it wants to use instead for main/master branch + # review app, then use them and log during [deploy] phase the first hostname in the list + export HOSTNAME="${NOMAD_VAR_SLUG}.${BASE_DOMAIN}" + # NOTE: YAML or CI/CD Variable `NOMAD_VAR_HOSTNAMES` is *IGNORED* -- and automatic $HOSTNAME above + # is used for branches not main/master/production/staging/ext + + # make even nicer names for archive.org processing cluster deploys + if [ "$BASE_DOMAIN" = "work.archive.org" ]; then + export HOSTNAME="${CI_PROJECT_NAME}${BRANCH_PART}.${BASE_DOMAIN}" + fi + + if [ "$NOMAD_ADDR" = "" ]; then + export NOMAD_ADDR=https://$BASE_DOMAIN + if [ "$BASE_DOMAIN" = archive.org ]; then + # an archive.org specific adjustment + export NOMAD_ADDR=https://dev.archive.org + fi + fi + + if [ "$NOMAD_VAR_HOSTNAMES" != "" -a "$BASE_DOMAIN" != "" ]; then + # Now auto-append .$BASE_DOMAIN to any hostname that isn't a fully qualified domain name + export NOMAD_VAR_HOSTNAMES=$(deno eval 'const fqdns = JSON.parse(Deno.env.get("NOMAD_VAR_HOSTNAMES")).map((e) => e.includes(".") ? 
e : e.concat(".").concat(Deno.env.get("BASE_DOMAIN"))); console.log(fqdns)') + fi + + if [ "$MAIN_OR_PROD_OR_STAGING_OR_EXT" -a "$NOMAD_VAR_HOSTNAMES" != "" ]; then + export HOSTNAME=$(echo "$NOMAD_VAR_HOSTNAMES" |cut -f1 -d, |tr -d '[]" ' |tr -d "'") + else + NOMAD_VAR_HOSTNAMES= + + if [ "$PRODUCTION" -o "$STAGING" -o "$EXT" ]; then + export HOSTNAME="${CI_PROJECT_NAME}.$BASE_DOMAIN" + fi + fi + + + if [ "$NOMAD_VAR_HOSTNAMES" = "" ]; then + export NOMAD_VAR_HOSTNAMES='["'$HOSTNAME'"]' + fi + + + if [[ "$NOMAD_ADDR" == *crawl*.archive.org:* ]]; then # nixxx + export NOMAD_VAR_CONSUL_PATH='/usr/local/bin/consul' + fi + + + if [ "$CI_REGISTRY_READ_TOKEN" = "0" ]; then unset CI_REGISTRY_READ_TOKEN; fi + + ############################### NOMAD VARS SETUP ############################## + + + + if [ "$ARG1" = "stop" ]; then + nomad stop $NOMAD_VAR_SLUG + exit 0 + fi + + + + echo using nomad cluster $NOMAD_ADDR + echo deploying to https://$HOSTNAME + + # You can have your own/custom `project.nomad` in the top of your repo - or we'll just use + # this fully parameterized nice generic 'house style' project. + # + # Create project.hcl - including optional insertions that a repo might elect to inject + REPODIR="$(pwd)" + cd /tmp + if [ -e "$REPODIR/project.nomad" ]; then + cp "$REPODIR/project.nomad" project.nomad + else + rm -f project.nomad + wget -q https://gitlab.com/internetarchive/nomad/-/raw/master/project.nomad + fi + + verbose "Replacing variables internal to project.nomad." 
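The three marker-splice blocks that follow all use the same `grep -F -B10000` / `grep -F -A10000` trick: emit everything up to and including the marker line, then the optional repo-local insert file, then the marker line and everything after it. A minimal stand-alone sketch of the technique (all filenames here are hypothetical, not from the repo):

```shell
# Toy demonstration of the marker-splice deploy.sh uses for vars.nomad /
# job.nomad / group.nomad.  All filenames below are illustrative.
cd "$(mktemp -d)"

printf 'line1\n# INSERTS-HERE\nline2\n' > template.txt
printf 'injected\n' > insert.txt

(
  grep -F -B10000 'INSERTS-HERE' template.txt   # marker line plus everything above it
  cat insert.txt 2>/dev/null || echo            # a no-op when the insert file is absent
  grep -F -A10000 'INSERTS-HERE' template.txt   # marker line plus everything below it
) > spliced.txt

cat spliced.txt
```

Note the marker line ends up duplicated in the spliced output; in deploy.sh that is harmless because the `*--INSERTS-HERE` markers live inside HCL comments.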
+
+  (
+    grep -F -B10000 VARS.NOMAD--INSERTS-HERE project.nomad
+    # if this filename doesn't exist in repo, this line is a no-op
+    cat "$REPODIR/vars.nomad" 2>/dev/null || echo
+    grep -F -A10000 VARS.NOMAD--INSERTS-HERE project.nomad
+  ) >| tmp.nomad
+  cp tmp.nomad project.nomad
+  (
+    grep -F -B10000 JOB.NOMAD--INSERTS-HERE project.nomad
+    # if this filename doesn't exist in repo, this line is a no-op
+    cat "$REPODIR/job.nomad" 2>/dev/null || echo
+    grep -F -A10000 JOB.NOMAD--INSERTS-HERE project.nomad
+  ) >| tmp.nomad
+  cp tmp.nomad project.nomad
+  (
+    grep -F -B10000 GROUP.NOMAD--INSERTS-HERE project.nomad
+    # if this filename doesn't exist in repo, this line is a no-op
+    cat "$REPODIR/group.nomad" 2>/dev/null || echo
+    grep -F -A10000 GROUP.NOMAD--INSERTS-HERE project.nomad
+  ) >| tmp.nomad
+  cp tmp.nomad project.nomad
+
+  verbose "project.nomad -> project.hcl"
+
+  cp project.nomad project.hcl
+
+  verbose "NOMAD_VAR_SLUG variable substitution"
+  # Do the one current substitution nomad v1.0.3 can't do now (apparently a bug)
+  sed -ix "s/NOMAD_VAR_SLUG/$NOMAD_VAR_SLUG/" project.hcl
+
+  case "$NOMAD_ADDR" in
+    https://work.archive.org|https://hind.archive.org|https://dev.archive.org|https://ext.archive.org)
+      # HinD cluster(s) use `podman` driver instead of `docker`
+      sed -ix 's/driver\s*=\s*"docker"/driver="podman"/' project.hcl  # xxx
+      sed -ix 's/memory_hard_limit/# memory_hard_limit/' project.hcl  # xxx
+      ;;
+  esac
+
+  verbose "Handling NOMAD_SECRETS."
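The NOMAD_SECRETS handling that follows converts env vars named `NOMAD_SECRET_*` into an HCL-style map written to `env.env` via a deno one-liner. As a stand-alone illustration only (not the deploy.sh implementation), the same transform can be sketched in plain bash; the variable names below are made up, and the loop assumes values contain no whitespace or quotes:

```shell
# Hypothetical secrets, standing in for CI/CD variables named NOMAD_SECRET_*
export NOMAD_SECRET_API_KEY=abc123
export NOMAD_SECRET_DB_PASS=hunter2

# Build {"KEY"="VAL",...} with the NOMAD_SECRET_ prefix stripped -- the same
# shape the deno one-liner (plus its ':' => '=' sed) writes to env.env.
# Assumes simple values: word-splitting of $(env | ...) breaks on whitespace.
NOMAD_SECRETS='{'
for kv in $(env | grep -E '^NOMAD_SECRET_' | sort); do
  k=${kv%%=*}
  v=${kv#*=}
  NOMAD_SECRETS="$NOMAD_SECRETS\"${k#NOMAD_SECRET_}\"=\"$v\","
done
NOMAD_SECRETS="${NOMAD_SECRETS%,}}"

echo "NOMAD_SECRETS=$NOMAD_SECRETS"
```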
+ if [ "$NOMAD_SECRETS" = "" ]; then + # Set NOMAD_SECRETS to JSON encoded key/val hashmap of env vars starting w/ "NOMAD_SECRET_" + # (w/ NOMAD_SECRET_ prefix omitted), then convert to HCL style hashmap string (chars ":" => "=") + echo '{}' >| env.env + ( env | grep -qE ^NOMAD_SECRET_ ) && ( + echo NOMAD_SECRETS=$(deno eval 'console.log(JSON.stringify(Object.fromEntries(Object.entries(Deno.env.toObject()).filter(([k, v]) => k.startsWith("NOMAD_SECRET_")).map(([k ,v]) => [k.replace(/^NOMAD_SECRET_/,""), v]))))' | sed 's/":"/"="/g') >| env.env + ) + else + # this alternate clause allows GitHub Actions to send in repo secrets to us, as a single secret + # variable, as our JSON-like hashmap of keys (secret/env var names) and values + cat >| env.env << EOF +NOMAD_SECRETS=$NOMAD_SECRETS +EOF + fi + + verbose "copy current env vars starting with "CI_" to "NOMAD_VAR_CI_" variants & inject them into shell" + deno eval 'Object.entries(Deno.env.toObject()).map(([k, v]) => console.log("export NOMAD_VAR_"+k+"="+JSON.stringify(v)))' | grep -E '^export NOMAD_VAR_CI_' >| ci.env + source ci.env + rm ci.env + + if [ "$NOMAD_TOKEN" = test ]; then + nomad run -output -var-file=env.env project.hcl >| project.json + exit 0 + fi + + set -x + nomad validate -var-file=env.env project.hcl + nomad plan -var-file=env.env project.hcl 2>&1 |sed 's/\(password[^ \t]*[ \t]*\).*/\1 ... /' |tee plan.log || echo + export INDEX=$(grep -E -o -- '-check-index [0-9]+' plan.log |tr -dc 0-9) + + # some clusters sometimes fail to fetch deployment :( -- so let's retry 5x + for RETRIES in $(seq 1 5); do + set -o pipefail + nomad run -var-file=env.env -check-index $INDEX project.hcl 2>&1 |tee check.log + if [ "$?" 
= "0" ]; then + if grep -E 'Status[ ]*=[ ]*failed' check.log; then + # for example, unhealthy 5x, unable to roll back, ends up failing + exit 1 + fi + + # This particular fail case output doesnt seem to exit non-zero -- so we have to check for it + # ==> 2023-03-29T17:21:15Z: Error fetching deployment + if ! grep -F 'Error fetching deployment' check.log; then + echo deployed to https://$HOSTNAME + return + fi + fi + + echo retrying.. + sleep 10 + continue + done + exit 1 +} + + +function github-setup() { + # Converts from GitHub env vars to GitLab-like env vars + + # You must add these as Secrets to your repository: + # NOMAD_TOKEN + # NOMAD_TOKEN_PROD (optional) + # NOMAD_TOKEN_STAGING (optional) + # NOMAD_TOKEN_EXT (optional) + + # You may override the defaults via passed-in args from your repository: + # BASE_DOMAIN + # NOMAD_ADDR + # https://github.com/internetarchive/cicd + + + # Example of the (limited) GitHub ENV vars that are avail to us: + # GITHUB_REPOSITORY=internetarchive/dyno + + # (registry host) + export CI_REGISTRY=ghcr.io + + local GITHUB_REPOSITORY_LC=$(echo "${GITHUB_REPOSITORY?}" |tr A-Z a-z) + + # eg: ghcr.io/internetarchive/dyno:main (registry image) + export CI_GITHUB_IMAGE="${CI_REGISTRY?}/${GITHUB_REPOSITORY_LC?}:${GITHUB_REF_NAME?}" + # since the registry image :part uses a _branch name_ and not a commit id (like gitlab), + # we can end up with a stale deploy if we happen to redeploy to the same VM. so force a pull. + export NOMAD_VAR_FORCE_PULL=true + + # eg: dyno (project name) + export CI_PROJECT_NAME=$(basename "${GITHUB_REPOSITORY_LC?}") + + # eg: main (branchname) xxxd slugme + export CI_COMMIT_REF_SLUG="${GITHUB_REF_NAME?}" + + # eg: internetarchive-dyno xxxd better slugification + export CI_PROJECT_PATH_SLUG=$(echo "${GITHUB_REPOSITORY_LC?}" |tr '/.' 
- |cut -b1-63 | sed 's/[^a-z0-9\-]//g') + + if [ "$PRIVATE_REPO" = "false" ]; then + # turn off `docker login`` before pulling registry image, since it seems like the TOKEN expires + # and makes re-deployment due to containers changing hosts not work.. sometimes? always? + unset CI_REGISTRY_READ_TOKEN + fi + + + # unset any blank vars that come in from GH actions + for i in $(env | grep -E '^NOMAD_VAR_[A-Z0-9_]+=$' |cut -f1 -d=); do + unset $i + done + + # see if we should do nothing + if [ "$NOMAD_VAR_NO_DEPLOY" ]; then exit 0; fi + if [ "${NOMAD_TOKEN}${NOMAD_TOKEN_PROD}${NOMAD_TOKEN_STAGING}${NOMAD_TOKEN_EXT}" = "" ]; then exit 0; fi +} + + +ARG1= +if [ $# -gt 0 ]; then ARG1=$1; fi + +main diff --git a/hello-world.hcl b/hello-world.hcl new file mode 100644 index 0000000..661f948 --- /dev/null +++ b/hello-world.hcl @@ -0,0 +1,57 @@ +# Minimal basic project using only GitLab CI/CD std. variables +# Run like: nomad run hello-world.hcl + +# Variables used below and their defaults if not set externally +variables { + # These all pass through from GitLab [build] phase. + # Some defaults filled in w/ example repo "bai" in group "internetarchive" + # (but all 7 get replaced during normal GitLab CI/CD from CI/CD variables). + CI_REGISTRY = "registry.gitlab.com" # registry hostname + CI_REGISTRY_IMAGE = "registry.gitlab.com/internetarchive/bai" # registry image location + CI_COMMIT_REF_SLUG = "main" # branch name, slugged + CI_COMMIT_SHA = "latest" # repo's commit for current pipline + CI_PROJECT_PATH_SLUG = "internetarchive-bai" # repo and group it is part of, slugged + CI_REGISTRY_USER = "" # set for each pipeline and .. + CI_REGISTRY_PASSWORD = "" # .. allows pull from private registry + + # Switch this, locally edit your /etc/hosts, or otherwise. 
as is, webapp will appear at: + # https://internetarchive-bai-main.x.archive.org/ + BASE_DOMAIN = "x.archive.org" +} + +job "hello-world" { + datacenters = ["dc1"] + group "group" { + network { + port "http" { + to = 5000 + } + } + service { + tags = ["https://${var.CI_PROJECT_PATH_SLUG}-${var.CI_COMMIT_REF_SLUG}.${var.BASE_DOMAIN}"] + port = "http" + check { + type = "http" + port = "http" + path = "/" + interval = "10s" + timeout = "2s" + } + } + task "web" { + driver = "docker" + + config { + image = "${var.CI_REGISTRY_IMAGE}/${var.CI_COMMIT_REF_SLUG}:${var.CI_COMMIT_SHA}" + + ports = [ "http" ] + + auth { + server_address = "${var.CI_REGISTRY}" + username = "${var.CI_REGISTRY_USER}" + password = "${var.CI_REGISTRY_PASSWORD}" + } + } + } + } +} diff --git a/img/architecture.drawio.svg b/img/architecture.drawio.svg new file mode 100644 index 0000000..51fdfda --- /dev/null +++ b/img/architecture.drawio.svg @@ -0,0 +1,452 @@ + + + + + + + + + +
+ [drawio SVG markup lost in extraction. Recoverable labels: "Hashistack" spanning "virtual machine 1" and "virtual machine 2", each running a "Nomad daemon", a "Consul daemon" (svc discovery), and a "docker daemon" hosting webapp containers (webapp1, webapp2); a "Fabio loadbalancer (has https certs)" takes browser "httpS to either webapp1 webapp2" traffic and routes "http to webapp1" / "http to webapp2"; a "gitlab CI/CD pipeline or admin" connects over "web".]
\ No newline at end of file diff --git a/img/overview.drawio.svg b/img/overview.drawio.svg new file mode 100644 index 0000000..92c3d84 --- /dev/null +++ b/img/overview.drawio.svg @@ -0,0 +1,180 @@ + + + + + + + + + +
+ [drawio SVG markup lost in extraction. Recoverable labels: "Hashistack" containing a "loadbalancer" routing "http to webapp" to a "webapp" (with "http daemon" and "DB"); a "browser" reaches the loadbalancer via "httpS to webapp".]
\ No newline at end of file diff --git a/img/overview2.drawio.svg b/img/overview2.drawio.svg new file mode 100644 index 0000000..5de17a6 --- /dev/null +++ b/img/overview2.drawio.svg @@ -0,0 +1,299 @@ + + + + + + + + + +
+ [drawio SVG markup lost in extraction. Recoverable labels: "Hashistack" containing a "Fabio loadbalancer" routing "http to webapp" to a "webapp TaskGroup" (with "http daemon" and "DB"); a "browser" reaches it via "httpS to webapp"; "gitlab CI/CD or admin" talks to the "Nomad daemon", which coordinates with the "docker daemon", "Vault daemon", and "Consul daemon".]
\ No newline at end of file
diff --git a/img/prod.jpg b/img/prod.jpg
new file mode 100644
index 0000000..55c7cca
Binary files /dev/null and b/img/prod.jpg differ
diff --git a/img/protect.jpg b/img/protect.jpg
new file mode 100644
index 0000000..5fc325c
Binary files /dev/null and b/img/protect.jpg differ
diff --git a/img/secrets.jpg b/img/secrets.jpg
new file mode 100644
index 0000000..44caab5
Binary files /dev/null and b/img/secrets.jpg differ
diff --git a/logo.jpg b/logo.jpg
new file mode 100644
index 0000000..d8a5351
Binary files /dev/null and b/logo.jpg differ
diff --git a/project.nomad b/project.nomad
new file mode 100644
index 0000000..2ff454b
--- /dev/null
+++ b/project.nomad
@@ -0,0 +1,459 @@
+# Variables used below and their defaults if not set externally
+variables {
+  # These all pass through from GitLab [build] phase.
+  # Some defaults filled in w/ example repo "bai" in group "internetarchive"
+  # (but all 7 get replaced during normal GitLab CI/CD from CI/CD variables).
+  CI_REGISTRY = "registry.gitlab.com"                    # registry hostname
+  CI_REGISTRY_IMAGE = "registry.gitlab.com/internetarchive/bai"  # registry image location
+  CI_COMMIT_REF_SLUG = "master"                          # branch name, slugged
+  CI_COMMIT_SHA = "latest"                               # repo's commit for current pipeline
+  CI_PROJECT_PATH_SLUG = "internetarchive-bai"           # repo and group it is part of, slugged
+
+  # NOTE: if repo is public, you can ignore these next 3 registry related vars
+  CI_REGISTRY_USER = ""      # set for each pipeline and ..
+  CI_REGISTRY_PASSWORD = ""  # .. allows pull from private registry
+  # optional CI/CD registry read token which allows rerun of deploy phase anytime later
+  CI_REGISTRY_READ_TOKEN = ""  # preferred name
+
+
+  # This autogenerates from https://gitlab.com/internetarchive/nomad/-/blob/master/.gitlab-ci.yml
+  # & normally has "-$CI_COMMIT_REF_SLUG" appended, but is omitted for "main" or "master" branches.
+  # You should not change this.
+  SLUG = "internetarchive-bai"
+
+
+  # The remaining vars can be optionally set/overridden in a repo via CI/CD variables in repo's
+  # settings or repo's `.gitlab-ci.yml` file.
+  # Each CI/CD var name should be prefixed with 'NOMAD_VAR_'.
+
+  # default 300 MB
+  MEMORY = 300
+  # default 100 MHz
+  CPU = 100
+
+  # A repo can set this to "tcp" - can help for debugging 1st deploy
+  CHECK_PROTOCOL = "http"
+  # What path healthcheck should use and require a 200 status answer for success
+  CHECK_PATH = "/"
+  # Allow individual, periodic healthchecks this much time to answer with 200 status
+  CHECK_TIMEOUT = "2s"
+  # Don't start first healthcheck until container up at least this long (adjust for slow startups)
+  HEALTH_TIMEOUT = "20s"
+
+  # How many running containers should you deploy?
+  # https://learn.hashicorp.com/tutorials/nomad/job-rolling-update
+  COUNT = 1
+
+  COUNT_CANARIES = 1
+
+  NETWORK_MODE = "bridge"
+
+  NAMESPACE = "default"
+
+  # only used for github repos
+  CI_GITHUB_IMAGE = ""
+
+  CONSUL_PATH = "/usr/bin/consul"
+
+  FORCE_PULL = false
+
+  # For jobs with 2+ containers (and tasks) (so we can setup ports properly)
+  MULTI_CONTAINER = false
+
+  # Persistent Volume - set to a (fully qualified) dest dir inside your container, if you need a PV.
+  # We suggest "/pv".
+  PERSISTENT_VOLUME = ""
+
+  /* You can override this for type="batch" and "cron-like" jobs (they rerun periodically & exit).
+     Combine this var override, with a small `job.nomad` in your repo to setup a cron,
+     with contents in the file like this, to run every hour at 15m past the hour:
+       type = "batch"
+       periodic {
+         cron = "15 * * * * *"
+         prohibit_overlap = false  # must be false cause of kv env vars task
+       }
+  */
+  IS_BATCH = false
+
+  # There are more variables immediately after this - but they are "lists" or "maps" and need
+  # special definitions to not have defaults or overrides be treated as strings.
+} + +variable "PORTS" { + # You must have at least one key/value pair, with a single value of 'http'. + # Each value is a string that refers to your port later in the project jobspec. + # + # Note: use -1 for your port to tell nomad & docker to *dynamically* assign you a random high port + # then your repo can read the environment variable: NOMAD_PORT_http upon startup to know + # what your main daemon HTTP listener should listen on. + # + # Note: if your port *only* talks TCP directly (or some variant of it, like IRC) and *not* HTTP, + # then make your port number (key) *negative AND less than -1*. + # Don't worry -- we'll use the abs() of it; + # negative numbers makes them easily identifiable and partition-able below ;-) + # + # Note: if you want an extra port to only use HTTP and not HTTPS, add 10000 to your desired + # port number (so for 18989, the public url will be http://... mapped internally to :8989 ). + # + # Examples: + # NOMAD_VAR_PORTS='{ 5000 = "http" }' + # NOMAD_VAR_PORTS='{ -1 = "http" }' + # NOMAD_VAR_PORTS='{ 5000 = "http", 666 = "cool-ness" }' + # NOMAD_VAR_PORTS='{ 8888 = "http", 8012 = "backend", 7777 = "extra-service" }' + # NOMAD_VAR_PORTS='{ 5000 = "http", -7777 = "irc" }' + # NOMAD_VAR_PORTS='{ 5000 = "http", 18989 = "db" }' + type = map(string) + default = { 5000 = "http" } +} + +variable "HOSTNAMES" { + # This autogenerates from https://gitlab.com/internetarchive/nomad/-/blob/master/.gitlab-ci.yml + # but you can override to 1 or more custom hostnames if desired, eg: + # NOMAD_VAR_HOSTNAMES='["www.example.com", "site.example.com"]' + type = list(string) + default = ["group-project-branch-slug.example.com"] +} + +variable "VOLUMES" { + # Pass in a list of [host VM => container] direct pass through of volumes, eg: + # NOMAD_VAR_VOLUMES='["/usr/games:/usr/games:ro"]' + type = list(string) + default = [] +} + +variable "NOMAD_SECRETS" { + # this is automatically populated with NOMAD_SECRET_ env vars by @see .gitlab-ci.yml + type = 
map(string)
+ default = {}
+}
+
+
+locals {
+ # Ignore all this. really :)
+
+ # Copy hashmap, but remove map key/val for the main/default port (defaults to 5000).
+ # Then split hashmap in two: one for HTTP port mappings; one for TCP (only; rare) port mappings.
+ ports_main = {for k, v in var.PORTS: k => v if v == "http"}
+ ports_extra_tmp = {for k, v in var.PORTS: k => v if v != "http"}
+ ports_extra_tmp2 = {for k, v in local.ports_extra_tmp: k => v if k > -2}
+ ports_extra_https = {for k, v in local.ports_extra_tmp2: k => v if k < 10000}
+ ports_extra_http = {for k, v in local.ports_extra_tmp: abs(k - 10000) => v if k > 10000}
+ ports_extra_tcp = {for k, v in local.ports_extra_tmp: abs(k) => v if k < -1}
+ # 1st docker container configures all ports *unless* MULTI_CONTAINER is true, then just the main port
+ ports_docker = values(var.MULTI_CONTAINER ? local.ports_main : var.PORTS)
+
+ # Now create a hashmap of *all* ports to be used, but abs() any portnumber key < -1
+ ports_all = merge(local.ports_main, local.ports_extra_https, local.ports_extra_http, local.ports_extra_tcp, {})
+
+ # Use CI_GITHUB_IMAGE if set, otherwise use GitLab vars interpolated string
+ docker_image = var.CI_GITHUB_IMAGE != "" ? var.CI_GITHUB_IMAGE : "${var.CI_REGISTRY_IMAGE}/${var.CI_COMMIT_REF_SLUG}:${var.CI_COMMIT_SHA}"
+ # "
+
+ # GitLab docker login user/pass credentials time out rather quickly. If an admin set the CI_REGISTRY_READ_TOKEN key
+ # in the group/repo [Settings] [CI/CD] [Variables] - then use a token-based alternative to deploy.
+ # Effectively, use the CI_REGISTRY_READ_TOKEN variant if set; else use the CI_REGISTRY_* pair.
+ docker_user = var.CI_REGISTRY_READ_TOKEN != "" ? "deploy-token" : var.CI_REGISTRY_USER
+ docker_pass = [for s in [var.CI_REGISTRY_READ_TOKEN, var.CI_REGISTRY_PASSWORD] : s if s != ""]
+ # Make [true] (array of length 1) if all docker password vars are ""
+ docker_no_login = length(local.docker_pass) > 0 ? 
[] : [true] + + + # If job is using secrets and CI/CD Variables named like "NOMAD_SECRET_*" then set this + # string to a KEY=VAL line per CI/CD variable. If job is not using secrets, set to "". + kv = join("\n", [for k, v in var.NOMAD_SECRETS : join("", concat([k, "='", v, "'"]))]) + + volumes = concat( + var.VOLUMES, + var.PERSISTENT_VOLUME == "" ? [] : ["/pv/${var.CI_PROJECT_PATH_SLUG}:${var.PERSISTENT_VOLUME}"], + ) + + auto_promote = var.COUNT_CANARIES > 0 ? true : false + + # make boolean-like array that can logically omit 2 `dynamic` blocks below for type=batch + service_type = var.IS_BATCH ? [] : ["service"] + + # split the 1st hostname into non-domain and domain parts + host0parts = split(".", var.HOSTNAMES[0]) + host0 = local.host0parts[0] + host0domain = join(".", slice(local.host0parts, 1, length(local.host0parts))) + + legacy = var.CI_PROJECT_PATH_SLUG == "www-dweb-ipfs" ? true : (var.CI_PROJECT_PATH_SLUG == "www-dweb-webtorrent" ? true : false) # xxx + + legacy2 = local.host0domain == "staging.archive.org" || local.host0domain == "prod.archive.org" || var.HOSTNAMES[0] == "polyfill.archive.org" || var.HOSTNAMES[0] == "esm.archive.org" || var.HOSTNAMES[0] == "purl.archive.org" || var.HOSTNAMES[0] == "popcorn.archive.org" # xxx + + tags = local.legacy2 ? 
merge( + {for portnum, portname in local.ports_extra_https: portname => [ + # If the main deploy hostname is `card.example.com`, and a 2nd port is named `backend`, + # then make its hostname be `card-backend.example.com` + "urlprefix-${local.host0}-${portname}.${local.host0domain}" + ]}, + {for portnum, portname in local.ports_extra_http: portname => [ + "urlprefix-${local.host0}-${portname}.${local.host0domain} proto=http" + ]}, + {for portnum, portname in local.ports_extra_tcp: portname => [ + "urlprefix-:${portnum} proto=tcp" + ]}, + ) : merge( + {for portnum, portname in local.ports_extra_https: portname => [ + # If the main deploy hostname is `card.example.com`, and a 2nd port is named `backend`, + # then make its hostname be `card-backend.example.com` + local.legacy ? "https://${var.HOSTNAMES[0]}:${portnum}" : "https://${local.host0}-${portname}.${local.host0domain}" // xxx + ]}, + {for portnum, portname in local.ports_extra_http: portname => [ + "http://${local.host0}-${portname}.${local.host0domain}" + ]}, + {for portnum, portname in local.ports_extra_tcp: portname => []}, + ) +} + + +# VARS.NOMAD--INSERTS-HERE + + +# NOTE: for main or master branch: NOMAD_VAR_SLUG === CI_PROJECT_PATH_SLUG +job "NOMAD_VAR_SLUG" { + datacenters = ["dc1"] + namespace = "${var.NAMESPACE}" + + dynamic "update" { + for_each = local.service_type + content { + # https://learn.hashicorp.com/tutorials/nomad/job-rolling-update + max_parallel = 1 + # https://learn.hashicorp.com/tutorials/nomad/job-blue-green-and-canary-deployments + canary = var.COUNT_CANARIES + auto_promote = local.auto_promote + min_healthy_time = "30s" + healthy_deadline = "10m" + progress_deadline = "11m" + auto_revert = true + } + } + + dynamic "group" { + for_each = [ "${var.SLUG}" ] + labels = ["${group.value}"] + content { + count = var.COUNT + + restart { + attempts = 3 + delay = "15s" + interval = "30m" + mode = "fail" + } + network { + dynamic "port" { + # port.key == portnumber + # port.value == portname + 
for_each = local.ports_all + labels = [ "${port.value}" ] + content { + to = port.key + } + } + } + + + # The "service" stanza instructs Nomad to register this task as a service + # in the service discovery engine, which is currently Consul. This will + # make the service addressable after Nomad has placed it on a host and + # port. + # + # For more information and examples on the "service" stanza, please see + # the online documentation at: + # + # https://www.nomadproject.io/docs/job-specification/service.html + # + service { + name = "${var.SLUG}" + task = "http" + + tags = [for HOST in var.HOSTNAMES: local.legacy2 ? "urlprefix-${HOST}" : "https://${HOST}"] + + canary_tags = [for HOST in var.HOSTNAMES: "https://canary-${HOST}"] + + port = "http" + check { + name = "alive" + type = "${var.CHECK_PROTOCOL}" + path = "${var.CHECK_PATH}" + port = "http" + interval = "10s" + timeout = "${var.CHECK_TIMEOUT}" + check_restart { + limit = 3 # auto-restart task when healthcheck fails 3x in a row + + # give container (eg: having issues) custom time amount to stay up for debugging before + # 1st health check (eg: "3600s" value would be 1hr) + grace = "${var.HEALTH_TIMEOUT}" + } + } + } + + dynamic "service" { + for_each = merge(local.ports_extra_https, local.ports_extra_http, local.ports_extra_tcp) + content { + # service.key == portnumber + # service.value == portname + name = "${var.SLUG}--${service.value}" + task = var.MULTI_CONTAINER ? service.value : "http" + # NOTE: Empty tags list if MULTI_CONTAINER (private internal ports like DB) + tags = var.MULTI_CONTAINER ? 
[] : local.tags[service.value] + + port = "${service.value}" + check { + name = "alive" + type = "tcp" + path = "${var.CHECK_PATH}" + port = "${service.value}" + interval = "10s" + timeout = "${var.CHECK_TIMEOUT}" + } + check_restart { + grace = "${var.HEALTH_TIMEOUT}" + } + } + } + + task "http" { + driver = "docker" + + # UGH - have to copy/paste this next block twice -- first for no docker login needed; + # second for docker login needed (job spec will assemble in just one). + # This is because we can't put dynamic content *inside* the 'config { .. }' stanza. + dynamic "config" { + for_each = local.docker_no_login + content { + image = "${local.docker_image}" + image_pull_timeout = "20m" + network_mode = "${var.NETWORK_MODE}" + ports = local.ports_docker + volumes = local.volumes + force_pull = var.FORCE_PULL + memory_hard_limit = "${var.MEMORY * 10}" # NOTE: not podman driver compatible + } + } + dynamic "config" { + for_each = slice(local.docker_pass, 0, min(1, length(local.docker_pass))) + content { + image = "${local.docker_image}" + image_pull_timeout = "20m" + network_mode = "${var.NETWORK_MODE}" + ports = local.ports_docker + volumes = local.volumes + force_pull = var.FORCE_PULL + memory_hard_limit = "${var.MEMORY * 10}" # NOTE: not podman driver compatible + + auth { + # server_address = "${var.CI_REGISTRY}" + username = local.docker_user + password = "${config.value}" + } + } + } + + resources { + # The MEMORY var now becomes a **soft limit** + # We will 10x that for a **hard limit** + cpu = "${var.CPU}" + memory = "${var.MEMORY}" + memory_max = "${var.MEMORY * 10}" + } + + + dynamic "template" { + # Secrets get stored in consul kv store, with the key [SLUG], when your project has set a + # CI/CD variable like NOMAD_SECRET_[SOMETHING]. + # Setup the nomad job to dynamically pull secrets just before the container starts - + # and insert them into the running container as environment variables. 
+ for_each = slice(keys(var.NOMAD_SECRETS), 0, min(1, length(keys(var.NOMAD_SECRETS))))
+ content {
+ change_mode = "noop"
+ destination = "secrets/kv.env"
+ env = true
+ data = "{{ key \"${var.SLUG}\" }}"
+ }
+ }
+
+ template {
+ # Pass in useful hostname(s), repo & branch info to container's runtime as env vars
+ change_mode = "noop"
+ destination = "secrets/ci.env"
+ env = true
+ data = <&1 | tee $LOG
+ set -e
+ while [ $# -gt 0 ]; do
+ EXPECT=$1
+ shift
+ grep "$EXPECT" $LOG
+ done
+}
+
+function tags() {
+ STR=$(jq -cr '[..|objects|.Tags//empty]' /tmp/project.json)
+ if [ "$STR" != "$1" ]; then
+ set +x
+ echo "services tags: $STR not expected: $1"
+ exit 1
+ fi
+}
+
+function ctags() {
+ STR=$(jq -cr '[..|objects|.CanaryTags//empty]' /tmp/project.json)
+ if [ "$STR" != "$1" ]; then
+ set +x
+ echo "services canary tags: $STR not expected: $1"
+ exit 1
+ fi
+}
+
+function slug() {
+ STR=$(jq -cr '.Job.ID' /tmp/project.json)
+ if [ "$STR" != "$1" ]; then
+ set +x
+ echo "slug/job name: $STR not expected: $1"
+ exit 1
+ fi
+}
+
+function prodtest() {
+ CI_PROJECT_NAME=$(echo "$CI_PROJECT_PATH_SLUG" |cut -f2- -d-)
+ BASE_DOMAIN=${BASE_DOMAIN:-"prod.archive.org"} # default to prod.archive.org unless caller set it
+ NOMAD_TOKEN_PROD=test
+ expects "deploying to https://$CI_HOSTNAME"
+}
+
+# test various deploy scenarios (verify expected hostname and cluster get used)
+# NOTE: the CI_* vars are normally auto-populated by CI/CD GL (gitlab) yaml setup
+# NOTE: the GITHUB_* vars are normally auto-populated in CI/CD GH Actions by GH (github)
+(
+ banner GL to dev
+ BASE_DOMAIN=dev.archive.org
+ CI_PROJECT_NAME=av
+ CI_COMMIT_REF_SLUG=main
+ CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME
+ expects 'nomad cluster https://dev.archive.org' \
+ 'deploying to https://www-av.dev.archive.org'
+ tags '[["https://www-av.dev.archive.org"]]'
+ ctags '[["https://canary-www-av.dev.archive.org"]]'
+ slug www-av
+)
+(
+ banner GL to dev, custom hostname
+ BASE_DOMAIN=dev.archive.org
+ 
CI_PROJECT_NAME=av
+ CI_COMMIT_REF_SLUG=main
+ CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME
+ NOMAD_VAR_HOSTNAMES='["av"]'
+ expects 'nomad cluster https://dev.archive.org' \
+ 'deploying to https://av.dev.archive.org'
+ tags '[["https://av.dev.archive.org"]]'
+ ctags '[["https://canary-av.dev.archive.org"]]'
+ slug www-av
+)
+(
+ banner GL to prod, via alt/unusual branch name, custom hostname
+ BASE_DOMAIN=prod.archive.org
+ CI_PROJECT_NAME=av
+ CI_COMMIT_REF_SLUG=avinfo
+ CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME
+ NOMAD_VAR_HOSTNAMES='["avinfo"]'
+ NOMAD_TOKEN_PROD=test
+ expects 'nomad cluster https://prod.archive.org' \
+ 'deploying to https://avinfo.prod.archive.org' \
+ 'using nomad production token'
+ tags '[["urlprefix-avinfo.prod.archive.org"]]'
+ ctags '[["https://canary-avinfo.prod.archive.org"]]'
+ slug www-av-avinfo
+)
+(
+ banner GL to prod, via alt/unusual branch name, custom hostname
+ BASE_DOMAIN=prod.archive.org
+ CI_PROJECT_NAME=plausible
+ CI_COMMIT_REF_SLUG=plausible-ait
+ CI_PROJECT_PATH_SLUG=services-$CI_PROJECT_NAME
+ NOMAD_VAR_HOSTNAMES='["plausible-ait"]'
+ NOMAD_TOKEN_PROD=test
+ expects 'nomad cluster https://prod.archive.org' \
+ 'deploying to https://plausible-ait.prod.archive.org' \
+ 'using nomad production token'
+)
+(
+ banner GL to dev, w/ 2+ custom hostnames
+ BASE_DOMAIN=dev.archive.org
+ CI_PROJECT_NAME=av
+ CI_COMMIT_REF_SLUG=main
+ CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME
+ NOMAD_VAR_HOSTNAMES='["av1", "av2.dweb.me", "poohbot.com"]'
+ expects 'nomad cluster https://dev.archive.org' \
+ 'deploying to https://av1.dev.archive.org'
+ # NOTE: subtle -- with multiple names on a single-port deploy, we expect a list of 3 hostnames
+ # applying to *one* service
+ tags '[["https://av1.dev.archive.org","https://av2.dweb.me","https://poohbot.com"]]'
+ ctags '[["https://canary-av1.dev.archive.org","https://canary-av2.dweb.me","https://canary-poohbot.com"]]'
+)
+(
+ banner GL to dev, branch, so 
custom hostname ignored + BASE_DOMAIN=dev.archive.org + CI_PROJECT_NAME=av + CI_COMMIT_REF_SLUG=tofu + CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME + NOMAD_VAR_HOSTNAMES='["av"]' + expects 'nomad cluster https://dev.archive.org' \ + 'deploying to https://www-av-tofu.dev.archive.org' + slug www-av-tofu +) +( + banner GL to prod + BASE_DOMAIN=dev.archive.org + CI_PROJECT_NAME=plausible + CI_COMMIT_REF_SLUG=production + CI_PROJECT_PATH_SLUG=services-$CI_PROJECT_NAME + NOMAD_TOKEN_PROD=test + expects 'nomad cluster https://prod.archive.org' \ + 'deploying to https://plausible.prod.archive.org' \ + 'using nomad production token' + tags '[["urlprefix-plausible.prod.archive.org"]]' + ctags '[["https://canary-plausible.prod.archive.org"]]' +) +( + banner GL to ext + BASE_DOMAIN=dev.archive.org + CI_PROJECT_NAME=av + CI_COMMIT_REF_SLUG=ext + CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME + NOMAD_TOKEN_EXT=test + expects 'nomad cluster https://ext.archive.org' \ + 'deploying to https://av.ext.archive.org' \ + 'using nomad ext token' + tags '[["https://av.ext.archive.org"]]' + ctags '[["https://canary-av.ext.archive.org"]]' +) +( + banner GL to prod, custom hostname + BASE_DOMAIN=dev.archive.org + CI_PROJECT_NAME=plausible + CI_COMMIT_REF_SLUG=production + CI_PROJECT_PATH_SLUG=services-$CI_PROJECT_NAME + NOMAD_VAR_HOSTNAMES='["plausible-ait.prod.archive.org"]' + NOMAD_TOKEN_PROD=test + expects 'nomad cluster https://prod.archive.org' \ + 'deploying to https://plausible-ait.prod.archive.org' \ + 'using nomad production token' +) +( + banner GH to dev + GITHUB_ACTIONS=1 + GITHUB_REPOSITORY=internetarchive/emularity-engine + GITHUB_REF_NAME=tofu + BASE_DOMAIN=dev.archive.org + expects 'nomad cluster https://dev.archive.org' \ + 'deploying to https://internetarchive-emularity-engine-tofu.dev.archive.org' +) +( + banner GH to staging + GITHUB_ACTIONS=1 + GITHUB_REPOSITORY=internetarchive/emularity-engine + GITHUB_REF_NAME=staging + BASE_DOMAIN=dev.archive.org + NOMAD_TOKEN_PROD=test + 
expects 'nomad cluster https://staging.archive.org' \
+ 'deploying to https://emularity-engine.staging.archive.org'
+)
+(
+ banner GH to production
+ GITHUB_ACTIONS=1
+ GITHUB_REPOSITORY=internetarchive/emularity-engine
+ GITHUB_REF_NAME=production
+ BASE_DOMAIN=dev.archive.org
+ NOMAD_TOKEN_PROD=test
+ expects 'nomad cluster https://ux-b.archive.org' \
+ 'deploying to https://emularity-engine.ux-b.archive.org' \
+ 'using nomad production token'
+)
+(
+ banner "GL repo using 'main' branch to be like 'production'"
+ BASE_DOMAIN=prod.archive.org
+ CI_PROJECT_NAME=offshoot
+ CI_COMMIT_REF_SLUG=main
+ CI_PROJECT_PATH_SLUG=www-$CI_PROJECT_NAME
+ NOMAD_TOKEN_PROD=test
+ NOMAD_VAR_HOSTNAMES='["offshoot"]'
+ expects 'nomad cluster https://prod.archive.org' \
+ 'deploying to https://offshoot.prod.archive.org'
+ slug www-offshoot
+)
+(
+ banner GL repo using one HTTP-only port and 2+ ports/names, to dev
+ BASE_DOMAIN=dev.archive.org
+ CI_PROJECT_NAME=lcp
+ CI_COMMIT_REF_SLUG=main
+ CI_PROJECT_PATH_SLUG=services-$CI_PROJECT_NAME
+ NOMAD_VAR_PORTS='{ 9999 = "http" , 18989 = "lcp", 8990 = "lsd" }'
+ expects 'nomad cluster https://dev.archive.org' \
+ 'deploying to https://services-lcp.dev.archive.org'
+ # NOTE: subtle -- with multiple ports (thus one service per port), we expect 3 services
+ # each with its own hostname
+ tags '[["https://services-lcp.dev.archive.org"],["http://services-lcp-lcp.dev.archive.org"],["https://services-lcp-lsd.dev.archive.org"]]'
+ ctags '[["https://canary-services-lcp.dev.archive.org"]]'
+)
+(
+ banner GL repo using one HTTP-only port and 2+ ports/names, to prod
+ BASE_DOMAIN=dev.archive.org
+ CI_PROJECT_NAME=lcp
+ CI_COMMIT_REF_SLUG=production
+ CI_PROJECT_PATH_SLUG=services-$CI_PROJECT_NAME
+ NOMAD_VAR_PORTS='{ 9999 = "http" , 18989 = "lcp", 8990 = "lsd" }'
+ NOMAD_TOKEN_PROD=test
+ expects 'nomad cluster https://prod.archive.org' \
+ 'deploying to https://lcp.prod.archive.org' \
+ 'using nomad production token'
+ # NOTE: subtle -- with 
multiple ports (thus one service per port), we expect 3 services
+ # each with its own hostname
+ tags '[["urlprefix-lcp.prod.archive.org"],["urlprefix-lcp-lcp.prod.archive.org proto=http"],["urlprefix-lcp-lsd.prod.archive.org"]]'
+ ctags '[["https://canary-lcp.prod.archive.org"]]'
+)
+(
+ banner GL repo using one TCP-only port and 2+ ports/names
+ BASE_DOMAIN=dev.archive.org
+ CI_PROJECT_NAME=scribe-c2
+ CI_COMMIT_REF_SLUG=main
+ CI_PROJECT_PATH_SLUG=services-$CI_PROJECT_NAME
+ NOMAD_VAR_PORTS='{ 9999 = "http" , -7777 = "tcp", 8889 = "reg" }'
+ expects 'nomad cluster https://dev.archive.org' \
+ 'deploying to https://services-scribe-c2.dev.archive.org'
+ # NOTE: subtle -- with multiple ports (thus one service per port), we'd normally expect 3 services
+ # each with its own hostname -- but one is TCP so the middle Service gets an *empty* list of tags.
+ tags '[["https://services-scribe-c2.dev.archive.org"],[],["https://services-scribe-c2-reg.dev.archive.org"]]'
+ ctags '[["https://canary-services-scribe-c2.dev.archive.org"]]'
+)
+
+
+# a bunch of quick, simple production deploy tests validating hostnames
+(
+ CI_PROJECT_PATH_SLUG=services-article-exchange
+ CI_COMMIT_REF_SLUG=production
+ CI_HOSTNAME=article-exchange.prod.archive.org
+ prodtest
+)
+(
+ CI_PROJECT_PATH_SLUG=services-atlas
+ CI_COMMIT_REF_SLUG=production
+ CI_HOSTNAME=atlas.prod.archive.org
+ prodtest
+)
+(
+ CI_PROJECT_PATH_SLUG=services-bwhogs
+ CI_COMMIT_REF_SLUG=production
+ CI_HOSTNAME=bwhogs.prod.archive.org
+ prodtest
+)
+(
+ CI_PROJECT_PATH_SLUG=services-ids-logic
+ CI_COMMIT_REF_SLUG=production
+ CI_HOSTNAME=ids-logic.prod.archive.org
+ prodtest
+)
+(
+ CI_PROJECT_PATH_SLUG=services-lcp
+ CI_COMMIT_REF_SLUG=production
+ CI_HOSTNAME=lcp.prod.archive.org
+ prodtest
+)
+(
+ CI_PROJECT_PATH_SLUG=services-microfilmmonitor
+ CI_COMMIT_REF_SLUG=production
+ CI_HOSTNAME=microfilmmonitor.prod.archive.org
+ prodtest
+)
+(
+ CI_PROJECT_PATH_SLUG=services-oclc-ill
+ 
CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=oclc-ill.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=services-odyssey + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=odyssey.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=services-opds + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=opds.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=services-plausible + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=plausible.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=services-rapid-slackbot + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=rapid-slackbot.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=services-scribe-serial-helper + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=scribe-serial-helper.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=www-av + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=av.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=www-bookserver + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=bookserver.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=www-iiif + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=iiif.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=www-nginx + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=nginx.prod.archive.org + prodtest +) +( + CI_PROJECT_PATH_SLUG=www-rendertron + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=rendertron.prod.archive.org + prodtest +) + + +# a bunch of quick, _custom HOSTNAMES_, production deploy tests validating hostnames +( + NOMAD_VAR_HOSTNAMES='["popcorn.archive.org"]' + CI_PROJECT_PATH_SLUG=www-popcorn + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=popcorn.archive.org + prodtest +) +( + NOMAD_VAR_HOSTNAMES='["polyfill.archive.org"]' + CI_PROJECT_PATH_SLUG=www-polyfill-io-production + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=polyfill.archive.org + prodtest +) +( + NOMAD_VAR_HOSTNAMES='["purl.archive.org"]' + CI_PROJECT_PATH_SLUG=www-purl + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=purl.archive.org + prodtest +) +( + 
NOMAD_VAR_HOSTNAMES='["esm.archive.org"]' + CI_PROJECT_PATH_SLUG=www-esm + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=esm.archive.org + prodtest +) +( + NOMAD_VAR_HOSTNAMES='["cantaloupe.prod.archive.org"]' + CI_PROJECT_PATH_SLUG=services-ia-iiif-cantaloupe-experiment + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=cantaloupe.prod.archive.org + prodtest +) +( + NOMAD_VAR_HOSTNAMES='["plausible-ait.prod.archive.org"]' + CI_PROJECT_PATH_SLUG=services-plausible + CI_COMMIT_REF_SLUG=production-ait + CI_HOSTNAME=plausible-ait.prod.archive.org + prodtest +) +( + NOMAD_VAR_HOSTNAMES='["parse_dates"]' + CI_PROJECT_PATH_SLUG=services-parse-dates + CI_COMMIT_REF_SLUG=production + CI_HOSTNAME=parse_dates.prod.archive.org + prodtest +) + +banner SUCCESS diff --git a/vsync b/vsync new file mode 100755 index 0000000..cafd07c --- /dev/null +++ b/vsync @@ -0,0 +1,10 @@ +#!/bin/zsh -e + +# Shell script version of `nom-cp` alias +# Typically used with `sync-rsync` extension to VSCode. + +mydir=${0:a:h} + +source $mydir/aliases + +nom-cp "$@"
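As a minimal sketch of the override mechanism the test cases above exercise (assuming the `NOMAD_VAR_PORTS` semantics documented in the `project.nomad` comments in this diff), the port map is just an HCL map carried in a shell environment variable:

```shell
# Sketch only, using the conventions from project.nomad's PORTS comments:
#   8888  -> main HTTP port
#   -7777 -> TCP-only port (the jobspec takes abs() of negative keys < -1)
#   18989 -> HTTP-only public URL, mapped internally to 8989 (key - 10000)
export NOMAD_VAR_PORTS='{ 8888 = "http", -7777 = "irc", 18989 = "db" }'
echo "$NOMAD_VAR_PORTS"
```

A repo would typically set such a variable in [Settings] [CI/CD] [Variables] rather than in a shell, and the jobspec's locals partition the map into HTTPS, HTTP-only, and TCP services as the test expectations above show.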