
fix(na): fix up eks install steps per feedback #1611

Merged · Oct 14, 2024 · 3 commits
Changes from 2 commits
7 changes: 4 additions & 3 deletions content/en/contribute/code/core/deploy-on-eks.md
@@ -216,7 +216,7 @@ And then follow these steps:

```shell
kubectl config use-context arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
./troubleshooting/get-volume-binding <DEPLOYMENT> <COUCH-DB-NAME> | jq '.subPath'
```

> **Member:** When I first went through this tutorial, I was confused about where I needed to be at this point. I had only read the cloning section of the document, which didn't mention that I needed to be in a specific folder to run certain commands. We only mention `cht-deploy` in passing earlier, saying that we need it to deploy. So I think it would be helpful to mention here that the `troubleshooting` commands are run from the `cht-deploy` script's location.

Which shows the path like this:
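The collapsed lines are not shown in this diff view, but based on the example subPath given later in the values list, the `jq` output would look something like:

```
"storage/medic-core/couchdb/data"
```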
@@ -233,13 +233,14 @@ And then follow these steps:
* `secret` - this should match the version you cloned from
* `user` - use the `medic` user
* `uuid` - this should match the version you cloned from
* `couchdb_node_storage_size` - use the same size as the volume you just cloned
* `account-id` - this should always be `720541322708`
* `host` - this should be your username followed by `.dev.medicmobile.org`. For example `mrjones.dev.medicmobile.org`
* `hosted_zone_id` - this should always be `Z3304WUAJTCM7P`
* `preExistingDataAvailable` - set this to `true`
* `dataPathOnDiskForCouchDB` - use the subPath you got in the step above. For example `storage/medic-core/couchdb/data`
* `preExistingEBSVolumeID-1` - set this to the ID from step 2. For example `vol-f9dsa0f9sad09f0dsa`
* `preExistingEBSVolumeSize` - use the same size as the volume you just cloned
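Taken together, the list above maps onto the deployment's values file. A minimal, hypothetical sketch (the real `values.yaml` consumed by `cht-deploy` may nest or name these keys differently; the sizes and IDs below are example placeholders):

```yaml
# Hypothetical flat sketch — key names taken from the list above; real layout may differ.
secret: "<secret-from-cloned-instance>"
user: "medic"
uuid: "<uuid-from-cloned-instance>"
couchdb_node_storage_size: "100Gi"        # example: same size as the cloned volume
account-id: "720541322708"
host: "mrjones.dev.medicmobile.org"
hosted_zone_id: "Z3304WUAJTCM7P"
preExistingDataAvailable: true
dataPathOnDiskForCouchDB: "storage/medic-core/couchdb/data"
preExistingEBSVolumeID-1: "vol-f9dsa0f9sad09f0dsa"
preExistingEBSVolumeSize: "100Gi"         # example: same size as the cloned volume
```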

7. Deploy this to development per the [steps above](#starting-and-stopping-aka-deleting). NB - **Be sure to call `kubectl config use-context arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks` before you call** `./cht-deploy`! Always create test instances on the dev cluster.
> **Member:** Can we also include a section about how to delete the volume and snapshot? I feel like cleanup is important, and tracing dangling snapshots will be really difficult in the future.
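Until such a section exists, cleanup might look something like the following AWS CLI sketch. The volume and snapshot IDs are placeholders (the volume ID reuses the example from the list above; the snapshot ID is invented), so verify both against the console before deleting anything:

```shell
# Hypothetical cleanup sketch — IDs are placeholders, double-check before running.
# Once the test deployment is torn down, delete the cloned EBS volume:
aws ec2 delete-volume --volume-id vol-f9dsa0f9sad09f0dsa

# Then delete the snapshot the volume was cloned from:
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
```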

