fix(na): fix up eks install steps per feedback #1611

Merged
3 commits merged on Oct 14, 2024
20 changes: 16 additions & 4 deletions content/en/contribute/code/core/deploy-on-eks.md
@@ -212,11 +212,11 @@ And then follow these steps:
}
]
```
5. Switch to the production cluster and then find the `subPath` of the deployment you made the snapshot from. The `COUCH-DB-NAME` is usually `cht-couchdb`, but it can sometimes be `cht-couchdb-1` (check `./troubleshooting/list-deployments <your-namespace>` if you still don't know). Including the `use-context`, the two calls are below. Note that the `troubleshooting` directory is in the [CHT Core repo](https://github.com/medic/cht-core/tree/master/scripts/deploy/troubleshooting):

```shell
kubectl config use-context arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
./troubleshooting/get-volume-binding <DEPLOYMENT> <COUCH-DB-NAME> | jq '.subPath'
```

> **Review comment (Member):** When I first went through this tutorial, I was confused about where I needed to be at this point. I had only read the cloning section from the document, which didn't mention that I needed to be in a specific folder to run certain commands. We only mention `cht-deploy` in passing earlier, saying that we need it to deploy. So I think it would be helpful to mention here that the `troubleshooting` commands are run from the `cht-deploy` script location.
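
If you have not already cloned CHT Core, a minimal sketch of getting into the right directory before running the `troubleshooting` helpers (the clone location is up to you; per the repo link above, the helpers live under `scripts/deploy`, alongside `cht-deploy`):

```shell
# Hypothetical setup: clone CHT Core and change into the deploy scripts directory,
# which holds cht-deploy and the troubleshooting/ helpers used throughout these steps.
git clone https://github.com/medic/cht-core.git
cd cht-core/scripts/deploy
```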

The `get-volume-binding` call above shows the path like this:
@@ -233,19 +233,31 @@ And then follow these steps:
* `secret` - this should match the version you cloned from
* `user` - use `medic` user
* `uuid` - this should match the version you cloned from
* `couchdb_node_storage_size` - use the same size as the volume you just cloned
* `account-id` - this should always be `720541322708`
* `host` - this should be your username followed by `dev.medicmobile.org`. For example `mrjones.dev.medicmobile.org`
* `hosted_zone_id` - this should always be `Z3304WUAJTCM7P`
* `preExistingDataAvailable` - set this to be `true`
* `preExistingEBSVolumeID-1` - set this to be the ID from step 2. For example `vol-f9dsa0f9sad09f0dsa`
* `preExistingEBSVolumeSize` - use the same size as the volume you just cloned
* `dataPathOnDiskForCouchDB` - use the subPath you got in the step above. For example `storage/medic-core/couchdb/data`
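
If you are unsure what size to enter for `couchdb_node_storage_size` and `preExistingEBSVolumeSize`, you can read it straight off the cloned volume; a quick sketch, reusing the example volume ID from the steps above:

```shell
# Reports the size of the cloned volume in GiB (substitute your own volume ID).
# Use the same size for couchdb_node_storage_size and preExistingEBSVolumeSize.
aws ec2 describe-volumes --region eu-west-2 \
  --volume-ids vol-f9dsa0f9sad09f0dsa \
  --query 'Volumes[0].Size'
```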

7. Deploy this to development per the [steps above](#starting-and-stopping-aka-deleting). NB - **Be sure to call `kubectl config use-context arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks` before you call** `./cht-deploy`! Always create test instances on the dev cluster.
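
A rough sketch of what that looks like end to end, assuming you are in `cht-core/scripts/deploy`, that `cht-deploy` accepts your values file via `-f` (check its usage if yours differs), and a purely illustrative values file path:

```shell
# Point kubectl at the development cluster first -- never the production one.
kubectl config use-context arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
# Deploy using the values file prepared in the previous step (path is an example).
./cht-deploy -f ~/mrjones-dev-values.yaml
```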
> **Review comment (Member):** Can we also include a section about how to delete the volume and snapshot? I feel like cleanup is important, and tracing dangling snapshots will be really difficult in the future.

8. Log in using the `user` and `password` set above, which should match the production instance.
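
To sanity-check from the command line before opening a browser, a call like the one below should return the CHT version info (example hostname from above; substitute your own host and password):

```shell
# Quick check that the instance is up and the credentials work.
curl -s -u medic:<password> https://mrjones.dev.medicmobile.org/api/info
```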

9. When you're done with this deployment, you can delete it with helm:

```shell
helm delete USERNAME-dev --namespace USERNAME-dev
```
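
Before moving on to the next step, it can be worth confirming the release and its pods are really gone; a quick check using the same placeholder namespace:

```shell
# Both of these should come back empty once the helm release is removed.
helm list --namespace USERNAME-dev
kubectl get pods --namespace USERNAME-dev
```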

10. Now that no resources are using the volume, you should delete it. If you created a snapshot above, delete that as well. Be sure to replace `vol-f9dsa0f9sad09f0dsa` and `snap-432490821280432092` with your actual IDs. **Do not delete snapshots you did not create**:

```shell
aws ec2 delete-volume --region eu-west-2 --volume-id vol-f9dsa0f9sad09f0dsa
aws ec2 delete-snapshot --region eu-west-2 --snapshot-id snap-432490821280432092
```
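
To confirm the cleanup worked, you can query for the volume afterwards; once it is gone, AWS returns an `InvalidVolume.NotFound` error for that ID (same example ID as above):

```shell
# Expected to fail with InvalidVolume.NotFound after a successful delete.
aws ec2 describe-volumes --region eu-west-2 --volume-ids vol-f9dsa0f9sad09f0dsa
```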

## References and Debugging
